We are now the AI Security Institute
Recent work

Do chatbots inform or misinform voters?

What we learned from a large-scale empirical study of AI use for political information-seeking.

How we’re working with frontier AI developers to improve model security

Insights into our ongoing voluntary collaborations with Anthropic and OpenAI.

From bugs to bypasses: adapting vulnerability disclosure for AI safeguards

Exploring how far cyber security approaches can help mitigate risks in generative AI systems, in collaboration with the National Cyber Security Centre (NCSC).

Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

Technical research

Monitoring the fast-moving landscape of AI development

Evaluating the risks AI poses to national security and public safety

Advancing solutions like safeguards, alignment, and control

Global impact

Working with AI developers to ensure responsible development

Informing policymakers about current and emerging risks from AI

Collaborating and sharing findings with allies

Join us to shape the trajectory of AI

Our mission is ambitious and urgent, and we need top talent to deliver it. We have built a unique structure within government that lets us operate like a startup. We have over 100 technical staff, including senior alumni of OpenAI, Google DeepMind and the University of Oxford, and we are scaling quickly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations, and an incredibly talented, close-knit and driven team.