OWASP’s Top 10 Risks for LLMs

Created: Tuesday, November 26, 2024 - 13:05
Categories:
Cybersecurity, Security Preparedness

As artificial intelligence (AI) tools continue to proliferate across nearly all sectors and organizations, the risks associated with their use will continue to multiply as well. OWASP – the Open Worldwide Application Security Project – recently updated its list of the top dangers facing large language models (LLMs). The “OWASP Top 10 for LLM Applications 2025” explores the latest risks, vulnerabilities, and mitigations for developing and securing generative AI and LLMs across the development, deployment, and management lifecycle. This updated resource arrives shortly after DHS released its Roles and Responsibilities Framework for AI in Critical Infrastructure.

OWASP dives into each risk, providing extensive discussion, mitigations, and security recommendations. If your utility is using or considering using AI, WaterISAC highly encourages members to treat the OWASP Top 10 as the primary set of risks to plan mitigations against. The Top 10 includes:

  1. Prompt injection
  2. Sensitive information disclosure
  3. Supply chain
  4. Data and model poisoning
  5. Improper output handling
  6. Excessive agency
  7. System prompt leakage
  8. Vector and embedding weaknesses
  9. Misinformation
  10. Unbounded consumption

For more information, visit OWASP.