// Level 3 · Controls

Top 10 (2025)

The 10 most critical LLM application risks.

LLM01:2025 · Critical

Prompt Injection

Crafted inputs can manipulate an LLM into unintended behavior; attacks may be direct (jailbreaks) or indirect (injected through external data sources).

LLM02:2025 · Critical

Sensitive Information Disclosure

LLMs can inadvertently reveal confidential data, PII, or proprietary algorithms through their outputs.
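One common mitigation is redacting known sensitive patterns from model output before it reaches the user. A minimal sketch, assuming simple regex-based detection of emails and US SSNs (real deployments typically use dedicated PII-detection tooling):

```python
import re

# Illustrative patterns only; production systems need broader PII coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(output: str) -> str:
    """Replace detected PII in model output with placeholder tokens."""
    output = EMAIL.sub("[EMAIL]", output)
    return SSN.sub("[SSN]", output)
```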

LLM03:2025 · High

Supply Chain

LLM supply chains are vulnerable through compromised training data, models, deployment platforms, and third-party plugins.

LLM04:2025 · High

Data and Model Poisoning

Pre-training, fine-tuning, or embedding data is manipulated to introduce vulnerabilities, backdoors, or biases.

LLM05:2025 · High

Improper Output Handling

Insufficient validation, sanitization, or handling of LLM outputs before they are passed downstream, which can lead to XSS, SSRF, or RCE.
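The core rule is to treat model output like any other untrusted user input. For the XSS case, a minimal sketch using Python's standard library (the function name is illustrative):

```python
import html

def render_llm_output(raw: str) -> str:
    # Escape model output before embedding it in an HTML page, exactly as
    # you would for user-submitted content. Prevents reflected XSS from
    # model-generated markup.
    return html.escape(raw)
```

The same principle applies downstream of HTML: parameterize SQL built from model output, validate URLs before fetching them, and never pass output to `eval` or a shell.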

LLM06:2025 · High

Excessive Agency

LLM-based systems granted excessive functionality, permissions, or autonomy can take damaging actions.
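A standard mitigation is an explicit allow-list: the model can only invoke tools that are registered, and destructive tools are simply never registered. A minimal sketch with hypothetical tool names:

```python
# Hypothetical read-only tool; write/delete tools are deliberately
# excluded from the registry so the model cannot call them at all.
def search_docs(query: str) -> str:
    return f"results for {query}"

TOOL_REGISTRY = {"search_docs": search_docs}

def dispatch(tool_name: str, **kwargs):
    """Execute a model-requested tool call only if it is on the allow-list."""
    if tool_name not in TOOL_REGISTRY:
        raise PermissionError(f"tool {tool_name!r} is not permitted")
    return TOOL_REGISTRY[tool_name](**kwargs)
```

Limiting *which* tools exist is usually stronger than trying to constrain *how* the model uses a powerful tool.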

LLM07:2025 · Medium

System Prompt Leakage

System prompts containing sensitive information (credentials, business logic) can be extracted by attackers.
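Since system prompts should be assumed extractable, one practical control is auditing them for secrets before deployment. A minimal sketch with illustrative patterns (the `sk-` prefix check is an assumption about one common API-key format):

```python
import re

# Illustrative secret indicators; extend for your credential formats.
SECRET_PATTERNS = [r"(?i)api[_-]?key", r"(?i)password", r"sk-[A-Za-z0-9]{10,}"]

def audit_system_prompt(prompt: str) -> list[str]:
    """Return the patterns that matched, so a CI check can fail on leaks."""
    return [p for p in SECRET_PATTERNS if re.search(p, prompt)]
```

The deeper fix is architectural: credentials and business-critical logic belong in the application layer, not in the prompt.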

LLM08:2025 · High

Vector and Embedding Weaknesses

Weaknesses in how vectors and embeddings are generated, stored, or retrieved in RAG systems can be exploited to inject malicious content or extract data.

LLM09:2025 · Medium

Misinformation

LLMs can produce false or misleading information that appears credible, a risk compounded when users over-rely on hallucinated outputs.

LLM10:2025 · Medium

Unbounded Consumption

Allowing unrestricted inference requests leads to denial of wallet (DoW), denial of service, or model theft via excessive queries.