Our mission is to equip governments with an empirical understanding of advanced AI. We have recruited over 30 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.
20 Systemic Safety Grant Awardees Announced
We outline our approach to technical solutions for misuse and loss of control.
The AI Safety Institute reflects on its first year
We’re open-sourcing dozens of LLM evaluations to advance safety research in the field
We are launching a bounty for novel evaluations and agent scaffolds to help assess dangerous capabilities in frontier AI systems.
We look into the evolving role of third-party evaluators in assessing AI safety, and explore how to design robust, impactful testing frameworks.
Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.
Our Chief Scientist, Geoffrey Irving, on why he joined the UK AI Safety Institute and why he thinks other technical folk should too
AISI is bringing together AI companies and researchers for an invite-only conference to accelerate the design and implementation of frontier AI safety frameworks. This post shares the call for submissions that we sent to conference attendees.
We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.
Since February, we have released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office and a partnership with the Canadian AI Safety Institute, grown our technical team to more than 30 researchers, and appointed Jade Leung as our Chief Technology Officer.
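For readers curious what an evaluation built on Inspect looks like, here is a minimal sketch modelled on the framework's hello-world example. It assumes the open-source inspect_ai package; the task, dataset, and model name below are illustrative, and exact module paths and parameter names may vary between releases.

```python
# Minimal Inspect-style evaluation sketch (illustrative; assumes the
# open-source inspect_ai package; names may differ between releases).
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate


@task
def capital_cities():
    # A tiny dataset: each Sample pairs a prompt with its expected answer.
    return Task(
        dataset=[
            Sample(input="What is the capital of France?", target="Paris"),
            Sample(input="What is the capital of Japan?", target="Tokyo"),
        ],
        solver=generate(),  # ask the model for a completion
        scorer=match(),     # score by matching the target string
    )


if __name__ == "__main__":
    # Run the task against a chosen model (model name is illustrative).
    eval(capital_cities(), model="openai/gpt-4o-mini")
```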
This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The work is chaired by Yoshua Bengio and coordinated by AISI.
The UK and US AI Safety Institutes signed a landmark agreement to jointly test advanced AI models, share research insights, share model access and enable expert talent transfers.
The UK AI Safety Institute and France’s Inria (The National Institute for Research in Digital Science and Technology) are partnering to advance AI safety research.
This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations and some plans for our future work.
Since October, we have recruited leaders from DeepMind and Oxford, onboarded 23 new researchers, published the principles behind the International Scientific Report on Advanced AI Safety, and begun pre-deployment testing of advanced AI systems.
Since September, we have recruited leaders from OpenAI and Humane Intelligence, tripled the capacity of our research team, announced 6 new research partnerships, and helped establish the UK’s fastest supercomputer.
In our first 11 weeks, we have assembled an advisory board of national security and ML leaders, including Yoshua Bengio, recruited top professors from Cambridge and Oxford, and announced 4 research partnerships.