We are now the AI Security Institute

Recent work

Investigating models for misalignment

Insights from our alignment evaluations of Claude Opus 4.1, Sonnet 4.5, and a pre‑release snapshot of Opus 4.5.

UKAISI at NeurIPS 2025

An overview of the research we’ll be presenting at this year’s NeurIPS conference.

Mapping the limitations of current AI systems

Takeaways from expert interviews on barriers to AI capable of automating most cognitive labour.

Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

Technical research

Monitoring the fast-moving landscape of AI development

Evaluating the risks AI poses to national security and public safety

Advancing solutions like safeguards, alignment, and control

Global impact

Working with AI developers to ensure responsible development

Informing policymakers about current and emerging risks from AI

Collaborating and sharing findings with allies

Join us to shape the trajectory of AI

Our mission is ambitious and urgent, and delivering it requires top talent. We have built a unique structure within government that lets us operate like a startup. We have over 100 technical staff, including senior alumni of OpenAI, Google DeepMind and the University of Oxford, and we are scaling quickly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations, and an incredibly talented, close-knit and driven team.