
Advancing the field of systemic AI safety: grants open

Calling researchers from academia, industry, and civil society to apply for up to £200,000 of funding.

Introduction

At the AI Safety Institute (AISI), we work to understand and measure a wide spectrum of AI risks, which can then inform decision making by governments and policy makers.  

One key area of focus is systemic AI safety: an emerging area of research that aims to understand and mitigate the broader societal risks associated with AI deployment, beyond the capabilities of individual models. It is critical for us to advance this field, to map priority areas of research and to develop new methods and approaches. We want to be prepared for the possibility of continued rapid progress in AI R&D, and for significantly increased adoption of AI across various sectors in the next 2-5 years. Our Systemic AI Safety Grants programme, first announced at the AI Seoul Summit, is designed to expand the field and deepen our understanding of the topic.

Tackling these risks head-on will boost public confidence in the AI innovations that are increasingly being adopted across the economy, sparking long-term growth and keeping the UK at the heart of research into responsible and trustworthy AI development. Ensuring public confidence in AI is central to the government’s plans for seizing its potential, as the UK harnesses the technology to drive up productivity and deliver public services that are fit for the future. To ensure the UK can continue to harness the enormous opportunities of AI innovation, the government has also committed to introducing highly targeted legislation for the handful of companies developing the most powerful AI models, ensuring a proportionate approach to regulation rather than new blanket rules on its use.

What is systemic AI safety?

Systemic AI safety is a field that aims to understand and mitigate the broader societal risks associated with AI deployment, beyond the capabilities of individual models. To date, our work has largely focused on the safety of the models themselves: evaluations of dangerous capabilities and safeguards, studies of user interactions with models, and work on risk governance through protocols and safety cases. We are now expanding our focus to the systemic impact of frontier AI systems.

Systemic AI safety focuses on risks and mitigations in the context of AI deployment, both in specific sectors and across society. For example, we want to know what risks could emerge when frontier AI is integrated into education, healthcare and finance. Such research requires an understanding not only of the technical aspects of AI, but also of the human and social aspects of the sector-specific context [1,2,3], and a more holistic approach to system safety [4,5]. We also want to know how different AI models, potentially operating as agents, could interact with each other “in the wild” and what risks could emerge [6]. Systemic safety complements our other work: where dangerous capability evaluations highlight a model’s ability to support cyber-attacks, systemic safety evaluates the vulnerability of critical infrastructure to such attacks and explores ways to reduce it, including by leveraging frontier AI defensively [7].

A better understanding of systemic safety will help inform the priority interventions that governments and others could invest in to address critical risks before they become severe harms [8]. Interventions could come in many forms, including technical solutions, guidelines, monitoring and information sharing, and novel governance mechanisms.

Why are we launching this programme?

The Systemic AI Safety Grants programme aims to safeguard societal systems during the rapid advancement and adoption of AI technologies. It seeks to understand, anticipate, and mitigate potential risks.

We assume AI models will become increasingly capable, personalised, and interconnected, presenting both opportunities and risks. This programme promotes research to explore the systemic impacts of AI, develop robust safety measures beyond the models themselves, and implement interventions to enhance resilience.

Our goals in phase one are:  

  • To develop an initial understanding of risks from frontier AI deployment in key sectors.
  • To build a wider research community focused on these issues.
  • To identify promising mitigations that we can promote in future phases of the programme.

Programme eligibility

Researchers from the UK at all career stages are encouraged to apply, and we particularly value projects that bring together academic, industry, and civil society experts. You may be an entrepreneur pursuing an innovative solution to AI risks with a strong understanding of the landscape of existing solutions, or a researcher fascinated by AI adoption and its impacts with a broad network of practitioners. International partners are also welcome.

We encourage collaborations that span various specialisations and research domains, including AI-generated misinformation, critical infrastructure protection, labour market transformation, infrastructure for AI agents, and more. For more information on potential topics, see here. The programme is open to innovative, feasible, and actionable proposals that address both ongoing and anticipated AI risks.

Get involved

Phase 1 of the Systemic AI Safety Grants programme is now open. Help us shape our understanding of systemic AI safety by proposing an exciting project and encouraging your colleagues to apply as well! Learn more and apply here.

Acknowledgements

We want to acknowledge the contributions of UKRI, as well as those of the many external researchers who provided feedback and advice.