Careers

We are an empowered team of technical experts and operators on a mission to advance research and infrastructure for AI governance.

If our work sounds interesting, we encourage you to apply. All applications are assessed on a rolling basis, so it is best to apply as soon as possible.

Mission

Advanced AI systems offer transformative opportunities to boost economic growth, human creativity, and public education, but they also pose significant risks. AISI was launched at the Bletchley Park AI Safety Summit in 2023 because taking responsible action on this extraordinary technology requires technical expertise about AI to be available at the heart of government. Originally called the UK's Frontier AI Taskforce, we have evolved into a leading research institution within the UK's Department for Science, Innovation and Technology.

"AISI has built a wonderful group of technical, policy, and civil service experts focused on making AI and AGI go well, and I'm finding it extremely motivating to do technical work in this interdisciplinary context."

Geoffrey Irving

Chief Scientist

"AISI is situated at the centre of the action where true impact can be made. I'm excited about the opportunities unfolding in front of us at such a rapid pace."

Professor Yarin Gal

Research Director

Working at AISI

  • We have built a unique structure within the government to ensure technical staff have the resources to work quickly and solve complex problems.  
  • Your work will have an unusually high impact. There is a lot of important work to do at AISI, and when AISI does something, the world watches.  
  • You will be part of a close-knit, driven team and work with field leaders like Geoffrey Irving, Professor Yarin Gal and Professor Chris Summerfield.
  • You will work in the heart of the UK government, which has made AI a top priority.
  • We enable hybrid work and offer a range of competitive benefits.

What we look for

  • We value collaboration, innovation and diversity. We need these qualities to drive forward the nascent science of understanding AI and mitigating AI’s risks.  
  • Our work is often fast-paced and has global impacts. We are looking for the talent, ambition and responsibility to deliver in this environment.  
  • Many of our staff previously worked at top industry and academic labs. For most technical roles, it is helpful to have substantial machine learning experience, especially large language model experience. That said, some of our most valuable hires will have more niche domain expertise, such as in cybersecurity.  
  • We encourage you to apply even if you are not in or from the UK. We may be able to explore other options, such as seconding you in from either your current employer or a third-party organisation.

Resources:

  • £100m in initial funding for the organisation
  • Privileged access to top AI models from leading companies
  • Priority access to over £1.5 billion of compute in the UK’s AI Research Resource and exascale supercomputing programme
  • Over 20 partnerships with top research organisations
  • Opportunities to collaborate with AI leaders around the world

Open roles

Our typical interview process includes submitting a CV and short written statement, skills assessments such as a technical interview and a take-home coding test, and 2-4 interviews, including a conversation with a senior member of our team. We tailor this process as needed for each role and candidate.

Please note that if you're applying to our technical roles, this privacy policy applies.

Events, Marketing, and Sourcing Lead

£58,040 - £64,995

Scaling the AI Safety Institute is a critical project and a once-in-a-generation moment. You'll take the lead on developing our events programme, market engagement, and headhunting strategy. You'll also work closely with our hiring managers to generate and land dozens of candidates for our senior London and San Francisco roles.

Research Engineer – General Application

Evaluations

£65,000 - £135,000

This application is for candidates who do not have a preference for a specific team; we prefer that you apply to the team-specific RE roles below. Design and build evaluations to assess the capabilities and safety of advanced AI systems. Candidates should have relevant experience in machine learning.

Research Scientist - General Application

£65,000 - £135,000

This application is for candidates who do not have a preference for a specific team; we prefer that you apply to the team-specific RS roles below. Lead research projects to improve our ability to assess the capabilities and safety of advanced AI systems. Candidates should have relevant experience in machine learning.

Research Engineer – Cyber Misuse Team

Evaluations

£65,000 - £135,000

Design experiments and build evaluations to assess the cyber offensive capabilities of advanced AI systems. Candidates should have relevant experience in machine learning and cybersecurity.

Research Scientist/Research Engineer/Security Researcher – Safeguards, Controls, & Mitigations

Evaluations

£65,000 - £145,000

Drive projects to understand advanced AI systems' vulnerability to misuse. Candidates should bring experience in ML research, ML engineering, or security (e.g. red-teaming in other domains).

Research Scientist – Cyber Misuse Team

Evaluations

£65,000 - £135,000

Lead research projects to improve our ability to assess the cyber offensive capabilities of advanced AI systems. Candidates should have relevant experience in machine learning and cybersecurity.

Research Engineer - Autonomous Systems Team

Evaluations

£65,000 - £135,000

Build large-scale experiments to empirically evaluate risks such as uncontrolled self-improvement, autonomous replication, manipulation and deception. Collaborate with others to push forward the state of the science on model evaluations.

AISI Residency - Autonomous Systems

Evaluations

£65,000

As an AISI resident, you'll be mentored by a multi-disciplinary team including scientists, engineers and domain experts on autonomy risks. You will work in a team of other scholars to build evaluations.

Research Scientist – Autonomous Systems Team

Evaluations

£65,000 - £135,000

Research risks such as uncontrolled self-improvement, autonomous replication, manipulation and deception. Improve the science of model evaluations through approaches such as scaling laws for dangerous capabilities.

Interpretability Researcher – Autonomous Systems Team

£65,000 - £135,000

As an interpretability research scientist or engineer, you'll lead early work to push forward the science on detecting scheming and white-box evaluations.

Research Scientist - Safety Cases Team

Safety Cases

£65,000 - £135,000

Drive research to develop our understanding of how safety cases could be developed for advanced AI. You'll work closely with Geoffrey Irving to build out safety cases as a new pillar of AISI's work.

Evaluations Technical Program Manager and Strategy Lead

£65,000 - £135,000

You will be a part of the Testing Team, which is responsible for our overall testing strategy, and the end-to-end preparation and delivery of individual testing exercises. You will collaborate closely with researchers and engineers from our evaluations workstreams, as well as policy and delivery teams. Your role will be broad and cross-cutting, involving project management, strategy, and scientific and policy communication.

Crime and Social Destabilisation Workstream Lead – Societal Impacts Team

Societal Impacts

£65,000 - £135,000

As workstream lead, you will build and lead a new team to evaluate and mitigate some of the pressing societal-level risks that frontier AI systems may exacerbate, including radicalisation, misinformation, fraud, and social engineering.

Psychological and Social Risks Workstream Lead – Societal Impacts Team

Societal Impacts

£65,000 - £135,000

As workstream lead for this new team, you will build and lead a multidisciplinary team to evaluate and mitigate the behavioural and psychological risks that emerge from AI systems. Your team's work will address how human interaction with advanced AI can impact users, with a focus on identifying and preventing negative outcomes.

Head of Information Security

Strategy & Operations

£110,000 - £125,000

As Head of Information Security at the AI Safety Institute (AISI), you will lead on building a cyber-resilient AISI. This will include efforts to harden our systems and protect our people, information and technologies. You think big picture about organisational risk based on mission objectives and a calibrated understanding of existing and potential attacks. You want to combine meaningful security with creative solutions rather than being limited to the compliance playbook.

Research Scientist - Science of Evaluations

Evaluations

£85,000 - £145,000

AISI’s Science of Evaluations team will conduct applied and foundational research focused on two areas at the core of our mission: (i) measuring existing frontier AI system capabilities and (ii) predicting the capabilities of a system before running an evaluation.

Research Engineer - Chem-Bio

Evaluations

£65,000 - £115,000

As a Research Engineer in the LLM evaluations team of the Chem/Bio workstream, you will develop and run evaluations that measure the ability of LLMs to provide detailed end-to-end instructions and troubleshooting advice for biological/chemical tasks and/or automate key steps of the scientific R&D pipeline.

Research Scientist/Research Engineer - Societal Impacts

Societal Impacts

£65,000 - £135,000

Successful candidates will work with other researchers to design and run studies that answer important questions about the effect AI will have on society. For example, can AI effectively change people's political and social views? Research Scientists/Engineers have scope to use a range of research methodologies and drive the strategy of the team.

Research Scientist - Post-Training

Evaluations

£65,000 - £145,000

As a member of this team, you will use cutting-edge machine learning techniques to improve model performance in our domains of interest. The work is split into two sub-teams: Agents and Fine-Tuning. Our Agents Team focuses on developing the LLM tools and scaffolding to create highly capable LLM-based agents, while our Fine-Tuning Team builds fine-tuning pipelines to improve models on our domains of interest.

Research Engineer - Post-training

Evaluations

£65,000 - £145,000

As a member of this team, you will use cutting-edge machine learning techniques to improve model performance in our domains of interest. The work is split into two sub-teams: Agents and Fine-Tuning. Our Agents Team focuses on developing the LLM tools and scaffolding to create highly capable LLM-based agents, while our Fine-Tuning Team builds fine-tuning pipelines to improve models on our domains of interest.

Full Stack Software Engineer - Platform Team

Platforms

£85,000 - £145,000

The AI Safety Institute (AISI) is looking for exceptionally motivated and talented Full-Stack Engineers to join our Platform Engineering team. Our platform is the core of our project to build and run safety evaluations for next-generation frontier AI systems. In this diverse role, you'll collaborate with our research teams and blend web development, UX engineering and data visualisation to provide inference channels, facilitate hosting our own models, and create expert interfaces for evals development.

Technical Lead, Biological and Chemical Models

£65,000 - £135,000

We are looking for an experienced Technical Lead who specialises in AI/ML applied to engineering biology and chemistry. You will be responsible for leading a workstream researching specialised biological and chemical models, with the goal of developing model evaluations, benchmarks, and technical safeguards for those models. This is a technical lead role with management responsibility, and you will be a source of nuanced technical insight.

Head of People - AI Safety Institute

Strategy & Operations

£67,250 - £79,440

This is a new role in a new team that will be responsible for owning and implementing the programmes and systems that will scale and improve the employee experience at AISI. We want to make AISI the best place to work in Government, and this role will be critical to achieving that mission.

We are excited by the amount our team has been able to accomplish — and we are just getting started.