We are now the AI Security Institute

AISI is a startup in government with world-leading talent and an urgent mission.

Our mission is to equip governments with an empirical understanding of advanced AI. We have recruited over 30 technical staff, including senior alumni from OpenAI, Google DeepMind and the University of Oxford, and we are scaling rapidly. Our staff are supported by substantial funding and computing resources, priority access to top models, partnerships with leading research organisations and an incredibly talented, close-knit and driven team.


Strengthening AI resilience

Organisation

April 3, 2025

20 Systemic Safety Grant Awardees Announced

How we’re addressing the gap between AI capabilities and mitigations

Organisation

March 11, 2025

We outline our approach to technical solutions for misuse and loss of control.

Our First Year

Organisation

November 13, 2024

The AI Safety Institute reflects on its first year.

Announcing Inspect Evals

Organisation

November 13, 2024

We’re open-sourcing dozens of LLM evaluations to advance safety research in the field.

Bounty programme for novel evaluations and agent scaffolding

Organisation

November 5, 2024

We are launching a bounty for novel evaluations and agent scaffolds to help assess dangerous capabilities in frontier AI systems.

Early lessons from evaluating frontier AI systems

Organisation

October 24, 2024

We look into the evolving role of third-party evaluators in assessing AI safety, and explore how to design robust, impactful testing frameworks.

Advancing the field of systemic AI safety: grants open

Organisation

October 15, 2024

Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.

Why I joined AISI by Geoffrey Irving

Organisation

October 3, 2024

Our Chief Scientist, Geoffrey Irving, explains why he joined the UK AI Safety Institute and why he thinks other technical folk should too.

Conference on frontier AI safety frameworks

Organisation

September 19, 2024

AISI is bringing together AI companies and researchers for an invite-only conference to accelerate the design and implementation of frontier AI safety frameworks. This post shares the call for submissions that we sent to conference attendees.

Announcing our San Francisco office

Organisation

May 20, 2024

We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.

Fourth progress report

Organisation

May 20, 2024

Since February, we released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office, announced a partnership with the Canadian AI Safety Institute, grew our technical team to >30 researchers and appointed Jade Leung as our Chief Technology Officer.

International Scientific Report on the Safety of Advanced AI: Interim Report

Organisation

May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Announcing the UK and US AISI partnership

Organisation

April 2, 2024

The UK and US AI Safety Institutes signed a landmark agreement to jointly test advanced AI models, share research insights, share model access and enable expert talent transfers.

Announcing the UK and France AI Research Institutes’ collaboration

Organisation

February 29, 2024

The UK AI Safety Institute and France’s Inria (The National Institute for Research in Digital Science and Technology) are partnering to advance AI safety research.

Our approach to evaluations

Organisation

February 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations and some plans for our future work.

Third progress report

Organisation

February 5, 2024

Since October, we have recruited leaders from DeepMind and Oxford, onboarded 23 new researchers, published the principles behind the International Scientific Report on Advanced AI Safety, and begun pre-deployment testing of advanced AI systems.

Second progress report

Organisation

October 30, 2023

Since September, we have recruited leaders from OpenAI and Humane Intelligence, tripled the capacity of our research team, announced six new research partnerships, and helped establish the UK’s fastest supercomputer.

First Progress Report

Organisation

September 7, 2023

In our first 11 weeks, we have recruited an advisory board of national security and ML leaders, including Yoshua Bengio, attracted top professors from Cambridge and Oxford, and announced four research partnerships.