Top 10 (2025)
The 10 most critical LLM application risks.
Prompt Injection
Manipulating an LLM through crafted inputs can cause unintended actions, including direct (jailbreak) and indirect (via data sources) injections.
Sensitive Information Disclosure
LLMs can inadvertently reveal confidential data, PII, or proprietary algorithms through their outputs.
Supply Chain
LLM supply chains are vulnerable through compromised training data, models, deployment platforms, and third-party plugins.
Data and Model Poisoning
Pre-training, fine-tuning, or embedding data is manipulated to introduce vulnerabilities, backdoors, or biases.
Improper Output Handling
Insufficient validation, sanitization, or handling of LLM outputs before they are passed downstream — leads to XSS, SSRF, RCE.
Excessive Agency
LLM-based systems granted excessive functionality, permissions, or autonomy can take damaging actions.
System Prompt Leakage
System prompts containing sensitive information (credentials, business logic) can be extracted by attackers.
Vector and Embedding Weaknesses
Weaknesses in how vectors and embeddings are generated, stored, or retrieved in RAG systems can be exploited to inject malicious content or extract data.
Misinformation
LLMs can produce false or misleading information that appears credible — hallucinations and over-reliance on outputs.
Unbounded Consumption
Allowing unrestricted inferences leads to denial of wallet (DoW), denial of service, or model theft via excessive queries.