The AI Safety Institute is the first state-backed organisation dedicated to advancing the safety of advanced AI.
We are conducting research and building infrastructure to test the safety of advanced AI and to measure its impacts on people and society. We are also working with the wider research community, AI developers and other governments to influence how AI is developed and to shape global policymaking on this issue.
We are launching a bounty for novel evaluations and agent scaffolds to help assess dangerous capabilities in frontier AI systems.
We examine the evolving role of third-party evaluators in assessing AI safety and explore how to design robust, impactful testing frameworks.
We are calling on researchers from academia, industry, and civil society to apply for up to £200,000 of funding.
Monitoring the fast-moving landscape of AI development
Evaluating the risks AI poses to national security and public welfare
Advancing the field of systemic safety to improve societal resilience
Working with AI developers to ensure responsible development
Informing policymakers about current and emerging risks from AI
Promoting global coordination on AI governance
Our mission is ambitious and urgent, and we need top talent to deliver it. We have built a unique structure within government so we can operate like a startup. We have recruited over 30 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.