We are now the AI Security Institute
Blog

Updates on AISI's work

We openly share many of our tools, findings, and organisational updates so everyone can benefit.


Blogs:

Mapping the limitations of current AI systems

Blog

Oct 23, 2025

Takeaways from expert interviews on barriers to AI capable of automating most cognitive labour.

Introducing ControlArena: A library for running AI control experiments

Blog

Oct 22, 2025

Our dedicated library to make AI control experiments easy, consistent, and repeatable.

Transcript analysis for AI agent evaluations

Blog

Oct 10, 2025

Why we use transcript analysis for our agent evaluations, and results from an early case study.

Examining backdoor data poisoning at scale

Blog

Oct 9, 2025

Our work with Anthropic and the Alan Turing Institute suggests that data poisoning attacks may be easier than previously believed.

Do chatbots inform or misinform voters?

Blog

Sep 30, 2025

What we learned from a large-scale empirical study of AI use for political information-seeking.

How we’re working with frontier AI developers to improve model security

Blog

Sep 13, 2025

Insights into our ongoing voluntary collaborations with Anthropic and OpenAI.

From bugs to bypasses: adapting vulnerability disclosure for AI safeguards

Blog

Sep 2, 2025

Exploring how far cyber security approaches can help mitigate risks in generative AI systems, in collaboration with the National Cyber Security Centre (NCSC).

Managing risks from increasingly capable open-weight AI systems

Blog

Aug 29, 2025

Current methods and open problems in open-weight model risk management.

The Inspect Sandboxing Toolkit: Scalable and secure AI agent evaluations

Blog

Aug 7, 2025

A comprehensive toolkit for safely evaluating AI agents.

Navigating the uncharted: Building societal resilience to frontier AI

Blog

Jul 24, 2025

We outline our approach to studying and addressing AI risks in real-world applications.

International joint testing exercise: Agentic testing

Blog

Jul 17, 2025

Advancing methodologies for agentic evaluations across domains, including leakage of sensitive information, fraud, and cybersecurity threats.

A structured protocol for elicitation experiments

Blog

Jul 16, 2025

Calibrating AI risk assessment through rigorous elicitation practices.

Why we're working on white box control

Blog

Jul 10, 2025

An introduction to white box control, and an update on our research so far.

LLM judges on trial: A new statistical framework to assess autograders

Blog

Jul 9, 2025

Our new framework can assess the reliability of LLM evaluators, while simultaneously answering a primary research question.

How will AI enable the crimes of the future?

Blog

Jul 3, 2025

How we're working to track and mitigate criminal misuse of AI.

Making Safeguard Evaluations Actionable

Blog

May 29, 2025

An example safety case for safeguards against misuse.

HiBayES: Improving LLM Evaluation with Hierarchical Bayesian Modelling

Blog

May 12, 2025

HiBayES: a flexible, robust statistical modelling framework that accounts for the nuances and hierarchical structure of advanced evaluations.

Research Agenda

Blog

May 6, 2025

We outline our research priorities, our approach to developing technical solutions to the most pressing AI concerns, and the key risks that must be addressed as AI capabilities advance.

RepliBench: measuring autonomous replication capabilities in AI systems

Blog

Apr 22, 2025

A comprehensive benchmark to detect emerging replication abilities in AI systems and provide a quantifiable understanding of potential risks.

How to evaluate control measures for AI agents?

Blog

Apr 11, 2025

Our new paper outlines how AI control methods can mitigate misalignment risks as the capabilities of AI systems increase.

Strengthening AI Resilience

Blog

Apr 3, 2025

20 Systemic Safety Grant awardees announced.

How we’re addressing the gap between AI capabilities and mitigations

Blog

Mar 11, 2025

We outline our approach to technical solutions for misuse and loss of control.

How can safety cases be used to help with frontier AI safety?

Blog

Feb 10, 2025

Our new papers show how safety cases can help AI developers turn plans in their safety frameworks into action.

Principles for Safeguard Evaluation

Blog

Feb 4, 2025

Our new paper proposes core principles for evaluating misuse safeguards.

Pre-Deployment Evaluation of OpenAI’s o1 Model

Blog

Dec 18, 2024

The UK Artificial Intelligence Safety Institute and the U.S. Artificial Intelligence Safety Institute conducted a joint pre-deployment evaluation of OpenAI's o1 model.

Long-Form Tasks

Blog

Dec 3, 2024

A methodology for evaluating scientific assistants.

Pre-Deployment Evaluation of Anthropic’s Upgraded Claude 3.5 Sonnet

Blog

Nov 19, 2024

The UK Artificial Intelligence Safety Institute and the U.S. Artificial Intelligence Safety Institute conducted a joint pre-deployment evaluation of Anthropic’s latest model.

Safety case template for ‘inability’ arguments

Blog

Nov 14, 2024

How to write part of a safety case showing a system does not have offensive cyber capabilities.

Our First Year

Blog

Nov 13, 2024

The AI Safety Institute reflects on its first year.

Announcing Inspect Evals

Blog

Nov 13, 2024

We’re open-sourcing dozens of LLM evaluations to advance safety research in the field.

Bounty programme for novel evaluations and agent scaffolding

Blog

Nov 5, 2024

We are launching a bounty for novel evaluations and agent scaffolds to help assess dangerous capabilities in frontier AI systems.

Early lessons from evaluating frontier AI systems

Blog

Oct 24, 2024

We look into the evolving role of third-party evaluators in assessing AI safety, and explore how to design robust, impactful testing frameworks.

Advancing the field of systemic AI safety: grants open

Blog

Oct 15, 2024

Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.

Why I joined AISI by Geoffrey Irving

Blog

Oct 3, 2024

Our Chief Scientist, Geoffrey Irving, on why he joined the UK AI Safety Institute and why he thinks other technical folk should too.

Should AI systems behave like people?

Blog

Sep 25, 2024

We studied whether people want AI to be more human-like.

Early Insights from Developing Question-Answer Evaluations for Frontier AI

Blog

Sep 23, 2024

A common technique for quickly assessing AI capabilities is prompting models to answer hundreds of questions, then automatically scoring the answers. We share insights from months of using this method.

Conference on frontier AI safety frameworks

Blog

Sep 19, 2024

AISI is bringing together AI companies and researchers for an invite-only conference to accelerate the design and implementation of frontier AI safety frameworks. This post shares the call for submissions that we sent to conference attendees.

Cross-post: "Interviewing AI researchers on automation of AI R&D" by Epoch AI

Blog

Aug 27, 2024

AISI funded Epoch AI to explore AI researchers’ differing predictions on the automation of AI research and development and their suggestions for how to evaluate relevant capabilities.

Safety cases at AISI

Blog

Aug 23, 2024

As a complement to our empirical evaluations of frontier AI models, AISI is planning a series of collaborations and research projects sketching safety cases for more advanced models than exist today, focusing on risks from loss of control and autonomy. By a safety case, we mean a structured argument that an AI system is safe within a particular training or deployment context.

Our approach to evaluations

Blog

Feb 9, 2024

This post offers an overview of why we are doing this work, what we are testing for, how we select models, our recent demonstrations, and some plans for our future work.

Announcing our San Francisco office

Blog

May 20, 2024

We are opening an office in San Francisco! This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute and engage even more with the wider AI research community.

International Scientific Report on the Safety of Advanced AI: Interim Report

Blog

May 17, 2024

This is an up-to-date, evidence-based report on the science of advanced AI safety. It highlights findings about AI progress, risks, and areas of disagreement in the field. The report is chaired by Yoshua Bengio and coordinated by AISI.

Fourth progress report

Blog

May 20, 2024

Since February, we released our first technical blog post, published the International Scientific Report on the Safety of Advanced AI, open-sourced our testing platform Inspect, announced our San Francisco office, announced a partnership with the Canadian AI Safety Institute, grew our technical team to >30 researchers and appointed Jade Leung as our Chief Technology Officer.

First progress report

Blog

Sep 7, 2023

In our first 11 weeks, we recruited an advisory board of national security and ML leaders, including Yoshua Bengio, recruited top professors from Cambridge and Oxford, and announced four research partnerships.

First AI Safety Summit

Blog

Nov 2, 2023

At the first AI Safety Summit at Bletchley Park, world leaders and top companies agreed on the significance of advanced AI risks and the importance of testing.

Second progress report

Blog

Oct 30, 2023

Since September, we have recruited leaders from OpenAI and Humane Intelligence, tripled the capacity of our research team, announced 6 new research partnerships, and helped establish the UK’s fastest supercomputer.

Third progress report

Blog

Feb 5, 2024

Since October, we have recruited leaders from DeepMind and Oxford, onboarded 23 new researchers, published the principles behind the International Scientific Report on Advanced AI Safety, and begun pre-deployment testing of advanced AI systems.

Announcing the UK and France AI Research Institutes’ collaboration

Blog

Feb 29, 2024

The UK AI Safety Institute and France’s Inria (The National Institute for Research in Digital Science and Technology) are partnering to advance AI safety research.

Announcing the UK and US AISI partnership

Blog

Apr 2, 2024

The UK and US AI Safety Institutes signed a landmark agreement to jointly test advanced AI models, share research insights, share model access and enable expert talent transfers.

Open-sourcing our testing framework Inspect

Blog

Apr 21, 2024

We open-sourced our framework for large language model evaluation, which provides facilities for prompt engineering, tool usage, multi-turn dialogue, and model-graded evaluations.

Advanced AI evaluations at AISI: May update

Blog

May 20, 2024

We tested leading AI models for cyber, chemical, biological, and agent capabilities and safeguards effectiveness. Our first technical blog post shares a snapshot of our methods and results.