Entry Date: August 26, 2025

AI Risk Initiative

Principal Investigators Neil Thompson, Peter Slattery, Alexander Saeri

Co-investigator Albert Scerbo

Project Website https://airisk.mit.edu/

Project Start Date August 2025


As AI capabilities rapidly advance, we face critical information gaps in effective AI risk management:
(*) What are the risks from AI, which are most important, and what are the critical gaps in response?
(*) What are the mitigations for AI risks, and which are the highest priority to implement?
(*) Which AI risks and mitigations are relevant to which actors and sectors?
(*) Which mitigations are being implemented, and which are neglected?
(*) How is the above changing over time?

The MIT AI Risk Initiative was created to provide credible, timely, and decision-relevant answers to these
questions. The Initiative has two parts: the AI Risk Repository and the AI Risk Index.

The project's work includes:
(*) Building a comprehensive repository of AI risks (toxicity, discrimination, security, privacy, compliance, misuse, etc.).
(*) Mapping risks to real-world incidents (in partnership with the AI Incident Database).
(*) Identifying the most vulnerable actors (developers, deployers, end users) and sectors (including finance and insurance).
(*) Tracking how 200+ organizations (labs, enterprises, government) are responding (or failing to respond) and the mitigations they employ.
(*) Developing a repository of prioritized mitigations (the preliminary database has been well received).
(*) Constructing a framework for global protocols to detect, track, and mitigate AI risks (with the Emerging Technology Observatory).

Participation benefits are significant:
(*) Risk reduction & cost avoidance – A single avoided incident could more than justify the investment.
(*) Benchmarking – CommBank is benchmarking internal processes against sector peers.
(*) Policy influence – Findings aggregate governance approaches and gaps, valuable as global AI regulation emerges.
(*) Reputation ("halo effect") – Visible leadership in responsible AI with MIT carries meaningful brand benefits.

We will systematically (1) update the AI Risk Repository, (2) engage experts to identify and prioritize AI risks and mitigations, (3) document mitigations, best practices, and gaps in how key actors are responding, and (4) synthesize findings into an accessible interactive tool.