How Microsoft defends against indirect prompt injection attacks
MSRC
2025-07-29 15:00:00
收藏
Summary The growing adoption of large language models (LLMs) in enterprise workflows has introduced a new class of adversarial techniques: indirect prompt injection. Indirect prompt injection can be used against systems that leverage large language models (LLMs) to process untrusted data. Fundamentally, the risk is that an attacker could provide specially crafted data that the LLM misinterprets as instructions.
目录
最新
- Why XSS still matters: MSRC’s perspective on a 25-year-old threat
- BlueHat Asia 2025: Closing soon: Submit your papers by September 14, 2025
- postMessaged and Compromised
- Zero Day Quest: Join the largest hacking event with up to $5 million in total bounty awards
- .NET Bounty Program now offers up to $40,000 in awards
- .NET Bounty Program now offers up to $40,000 in awards
- How Microsoft defends against indirect prompt injection attacks
- Customer guidance for SharePoint vulnerability CVE-2025-53770