Artificial Intelligence is revolutionizing industries, but it also introduces new threats, vulnerabilities, and security challenges. This page provides insights into the evolving AI Threat Landscape, covering risks such as adversarial attacks, model poisoning, and deepfake fraud. Explore known AI Vulnerabilities—from system misalignment to privacy leaks—and real-world AI Incidents, including security breaches, biased decision-making, and automated system failures. Stay informed with the latest research, case studies, and security tools to strengthen AI defenses and mitigate emerging risks.
"Explore the MITRE ATLAS™ matrix, a comprehensive framework tailored for AI Red Team Testing. This matrix categorizes and outlines a range of adversarial tactics and techniques specifically designed to evaluate and enhance the security of AI systems. It serves as an invaluable resource for understanding potential threats and developing robust defenses against them. Click the link to dive deeper into how the MITRE ATLAS™ matrix can guide and improve your AI security strategies."
As Large Language Models (LLMs) and Generative AI continue to reshape industries, they also introduce significant security risks. The OWASP LLM Top-10 provides a comprehensive list of the most critical vulnerabilities affecting LLM-based applications, including prompt injections, training data poisoning, model extraction, and AI supply chain risks. This section highlights these top threats and outlines mitigation strategies to enhance the security, reliability, and ethical deployment of AI systems. Explore the OWASP LLM Top-10 framework to better understand the risks and safeguard your LLM and GenAI applications against adversarial attacks and unintended consequences.
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
The AI Incidents Monitor (AIM) is an initiative by the OECD.AI expert group on AI incidents, supported by the Patrick J. McGovern Foundation. AIM is designed to track real-world AI incidents and hazards in real time, providing critical data to inform AI incident reporting frameworks and AI policy discussions.
We use cookies to analyze website traffic and optimize your website experience. By accepting our use of cookies, your data will be aggregated with all other user data.