The AI Safety Institute is a research organisation within the UK Government’s Department for Science, Innovation and Technology.
We are working to:
To deliver on these ambitious goals, we designed the Institute as a startup within government, combining the authority of government with the expertise and agility of the private sector.
A core driver of our work is the belief that governments need to understand advanced AI to inform policy decisions and to enable public accountability.
Because of this, we have focused on building in-house capabilities to test the safety of advanced AI systems, such as large language model assistants. We aim to conduct rigorous, trustworthy assessments of these systems before and after they are launched.
By evaluating the risks posed by these systems now, we can help governments assess their significance and get ahead of them.
We have open-sourced our testing framework Inspect so the research community can use and build upon our work.
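For illustration, the sketch below shows roughly what a minimal evaluation built on Inspect (the inspect_ai Python package) can look like. The dataset, task name, and model name are placeholders chosen for this example, and exact APIs may vary between Inspect versions.

```python
# A minimal sketch of an Inspect evaluation (assumes `pip install inspect-ai`).
# The single-sample dataset, task name, and model name are illustrative only.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def toy_arithmetic():
    # A one-sample dataset: the model is asked a question and its
    # output is scored against the target answer.
    return Task(
        dataset=[
            Sample(
                input="What is 2 + 2? Answer with just the number.",
                target="4",
            )
        ],
        solver=generate(),  # generate a completion from the model under test
        scorer=match(),     # score by matching the target in the output
    )

# Run the evaluation against a model of your choice (placeholder name).
eval(toy_arithmetic(), model="openai/gpt-4o-mini")
```

Real evaluations replace the toy dataset with substantive test sets and compose richer solvers and scorers, but they follow this same task structure.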
Beyond evaluations, we are also pursuing research to mitigate risks and make AI more publicly beneficial, including work to make AI systems fundamentally safer and to increase societal resilience to advanced AI.
The UK has driven the global conversation on AI governance. We have already shaped how companies and other governments are navigating the technology. To date, we have: