The deployment of AI across critical infrastructure and societal systems, such as financial markets, healthcare, and transportation, offers immense benefits. It also introduces risks that could cause significant disruption in these high-stakes domains.
Our Systemic AI Safety Grants Programme, first announced at the Seoul AI Summit, aims to increase societal resilience to AI-related risks so that AI's benefits can be fully realised. Today, we are proud to announce the 20 projects that have been awarded seed grants of up to £200,000 to carry out independent research focused on safeguarding the societal systems and critical infrastructure into which AI is being deployed.
The selection process
We received over 300 grant applications from universities, businesses, and non-profit organisations across the UK and internationally. Each application underwent a rigorous evaluation process, led by the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, both part of UK Research and Innovation (UKRI).
More than 150 external reviewers initially shortlisted 82 applications, which were then assessed by an expert panel chaired by Siân John MBE, EPSRC member and CTO of NCC Group. The panel evaluated proposals based on merit and potential impact, before the UK AI Security Institute made the final selection.
The selected portfolio of 20 projects covers a diverse range of focus areas, reflecting the breadth of challenges in AI safety, security and resilience:
- Making AI decisions safer—Exploring fundamental improvements to AI decision theory to ensure AI systems make safer and more aligned choices.
- Protecting workers in AI-driven industries—Studying the impact of automation on operator skills, developing AI risk training for union workers, and assessing AI adoption risks in local government decision-making.
- Combating misinformation & disinformation—Developing tools to help users assess content reliability and investigating how synthetic content can be exploited to amplify social unrest in times of crisis.
- Ensuring AI is safe and effective in high-stakes domains—Mapping and mitigating risks in healthcare, finance, law, education, transport, and high-hazard industries, so that regulators and practitioners can respond to AI-related failures.
- Securing AI infrastructure—Developing technical solutions for secure AI model sharing, assessing risks from AI agents interacting with operating systems and telecom networks, and ensuring confidentiality in private data exchanges.
- Protecting individual privacy—Developing sociotechnical solutions to prevent the geolocation of individuals by vision AI systems and to address the privacy concerns raised by AI embedded in wearable devices.
- Understanding how AI agents interact—Exploring the emergent behaviours of AI agents in game environments to understand how competition or collusion might emerge without designer intent.
Awardees
What’s next
Over the next 12 months, these projects will generate crucial insights to inform AI governance, risk mitigation, and systemic safeguards. The work will help shape practical interventions that ensure AI’s integration into critical infrastructure is safe, secure and beneficial.
We congratulate our awardees and thank the hundreds of applicants, the expert reviewers, and our distinguished external panel, who together made this programme possible. Their collective efforts have shaped a robust and diverse research portfolio that will help prepare society for the next frontier of AI development.