About

Our mission is to equip governments with a scientific understanding of the risks posed by advanced AI.

The AI Security Institute is a research organisation within the UK Government’s Department for Science, Innovation and Technology.

Organisation

We are working to:

  • Test advanced AI systems and inform policymakers about their risks;
  • Foster collaboration across companies, governments, and the wider research community to mitigate risks and advance publicly beneficial research; and
  • Strengthen AI development practices and policy globally.

To deliver on these ambitious goals, we designed the Institute as a startup within government, combining the authority of government with the expertise and agility of the private sector.

We have recruited top talent from across the public and private sectors  

  • Our Chair Ian Hogarth brings experience as a tech investor and entrepreneur.  
  • Our Director Oliver Ilott previously led the Prime Minister’s domestic private office.
  • Our advisory board comprises national security and machine learning leaders, such as Yoshua Bengio.  
  • Our Chief Technology Officer Jade Leung previously led the Governance team at OpenAI.  
  • Our Research Directors Geoffrey Irving, Professor Yarin Gal, and Professor Chris Summerfield have led teams at OpenAI, Google DeepMind and the University of Oxford.  
  • We already have more than 50 technical staff, and we are scaling rapidly.

We are backing our team with the resources they need to move fast

  • £100m in initial funding for the organisation
  • Privileged access to top AI models from leading companies
  • Priority access to over £1.5 billion of compute in the UK’s AI Research Resource and exascale supercomputing programme 
  • Over 20 partnerships with top research organisations

Research

A core driver of our work is the belief that governments need to understand advanced AI to inform policy decisions and keep the public safe and secure.

Because of this, we have focused on building in-house capacity to evaluate the capabilities of advanced AI systems and to develop and test risk mitigations.

We aim to conduct rigorous and scientifically informed assessments of advanced AI systems before and after they are launched.  

We are currently building and running evaluations for: 

  • Misuse: How much models could assist with dual-use cyber, chemical and biological attacks
  • Safeguards: How effective safety and security features of advanced AI systems are against attempts to circumvent them
  • Autonomy: How well models could conduct AI research and development, autonomously make copies of themselves, interact with and manipulate humans, and evade human intervention
  • Criminal Misuse: How AI systems could support criminal activity
  • Human Influence: How AI systems could influence humans and reduce individual autonomy
  • Societal Resilience: How society can be made more resilient to AI risks

By evaluating these risks now, we can help governments assess their significance and get ahead of them.

We have open-sourced our testing framework Inspect so the research community can use and build upon our work.   

Setting the global standard

The UK has driven the global conversation on AI governance, and our work has already shaped how companies and other governments navigate the technology. Highlights to date include:

AI Safety Summits
We contributed to the first AI Safety Summit, hosted by the UK at Bletchley Park, and the follow-up summit hosted by the Republic of Korea in Seoul. These summits brought together world leaders, top AI companies and civil society to make unprecedented commitments to mitigate risks.
US AI Safety Institute partnership
We partnered with the US AI Safety Institute to jointly test advanced AI models, share research insights and model access, and enable expert talent transfers.
International AI Safety Report
We commissioned Yoshua Bengio to chair the International Scientific Report on the Safety of Advanced AI, an evidence-based report on the state of the science of advanced AI safety.

We are excited by the amount our team has been able to accomplish — and we are just getting started.