LLAMATOR – Red Team Framework for Testing LLM Security
不安全
2025-09-16 18:00:07
收藏
LLAMATOR 是一个用于评估大型语言模型系统安全性的 Python 框架,支持可重复攻击活动和多角色测试(攻击者、目标、裁判),并提供预设攻击库、接口适配器和标准化报告功能。
侵权请联系站方: [email protected]
目录
最新
- 免费无限量的 GLM-5、Qwen3.5-398B 模型,AtomGit 限时免费
- The Good, the Bad and the Ugly in Cybersecurity – Week 10
- From the endpoint to the prompt: a unified data security vision in Cloudflare One
- 软银寻求400亿美元贷款用于投资OpenAI
- IDORs Explained: How One Number Can Hack an Entire Company
- IDORs Explained: How One Number Can Hack an Entire Company
- How I Passed eCPPT within 3 months Without Losing My Mind
- 亚马逊押注医疗保健AI为患者和医生提供工具