
Systemic AI Safety Grants

This programme will fund researchers collaborating with the UK government to advance systemic approaches to AI safety.

Overview

The application window is closed.

To fully address AI risks, we must consider both the capabilities of AI models and their potential impact on people, society and the systems they interact with.

Systemic AI safety focuses on safeguarding the societal systems and critical infrastructure into which AI is being deployed, making our world more resilient to AI-related risks and enabling AI's benefits.

The AI Safety Institute (AISI), in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI), is excited to announce support for impactful research that takes a systemic approach to AI safety. We are offering a round of seed grants of £200,000 for 12 months, and plan to follow with more substantial awards in future rounds. Successful applicants will receive ongoing support, computing resources where needed, and access to a community of AI and sector-specific domain experts.

Programme details

Grants timeline

  • Applications open: 15th October 2024
  • Application deadline: 26th November 2024
  • Sift outcomes: by mid-December 2024
  • Expert assessment panel: early January 2025
  • Final award decision: by mid-January 2025
  • Granting period begins: 5th February 2025

What AISI Systemic Safety Grants are funding

We are seeking applications focused on a range of safety-related problems: this could involve monitoring and anticipating AI use and misuse in society, or the risks it poses in particular sectors. We want to see applications that could enhance understanding of how government could intervene where needed, with new infrastructure and technical innovations, to make society more resilient to AI-related risks.

We conceive of systemic AI safety as a very broad field of research and interventions. Below we introduce some examples of the kinds of research we are interested in. A longer list of example projects is available here.

  • A systems-informed approach to improving trust in authentic digital media, protecting against AI-generated misinformation, and improving democratic deliberation.
  • Targeted interventions that protect critical infrastructure, such as energy or healthcare systems, from AI-mediated cyberattacks.
  • Ideas about how to measure or mitigate the potentially destabilising effects of AI transformations of the labour market.
  • Ways to measure, model, and mitigate the secondary effects of AI systems that take autonomous actions on digital platforms.

We recognise that future risks from AI remain largely unknown. We are open to a range of plausible assumptions about how AI technologies will develop and be deployed in the next 2-5 years. We are excited about work that addresses both ongoing and anticipated risks, as long as it is credible and evidence-based.

Benefits of working with the AI Safety Institute

  • Access to technical experts in the field of AI safety, including researchers who have previously worked at OpenAI, Google DeepMind, and Cambridge University.
  • Access to compute infrastructure to help turn projects and applications into tangible, innovative solutions.
  • A supportive community across government and research organisations to promote system-wide interventions in AI safety.

In the future, we will build on the outputs of this first phase and make larger, longer-term investments in specific interventions that show promise for increasing systemic safety. Projects in the first phase will be prioritised according to their ability to help us make these second-phase decisions.

Application details

How to apply

The application window is now closed.

Please note that by applying you agree to allow us to make public your answer to question 9 of the proposal (“What problem does your idea solve / what risk does it address / what question does it answer?”). To maximise the visibility of problems in systemic AI safety, we reserve the right to publish all proposed answers to this question (anonymously) on our website. Our goal is to create a repository of relevant questions in systemic AI safety that future researchers might choose to address. Note that we will not publish your proposed solution, or link researcher identities to any published text.

What do we expect from successful applicants?

In addition to delivering their proposed projects, successful grantees will be expected to produce quarterly progress updates against financial and non-financial performance metrics, participate in regular progress meetings with AISI and UKRI, take part in workshops organised by AISI, and engage with the programme officer to increase the impact of their work. These expectations will be set out in the grant agreement terms and conditions.

Contact AISI

If you have any questions regarding the call, please email AISIgrants@dsit.gov.uk.


Frequently asked questions