Strengthening AI Resilience

20 Systemic Safety Grant Awardees Announced

The deployment of AI across critical infrastructure and societal systems, such as financial markets, healthcare, and transportation, offers immense benefits. It also introduces risks that could cause significant disruption in these high-stakes domains.

Our Systemic AI Safety Grants Programme, first announced at the AI Seoul Summit, aims to increase societal resilience to AI-related risks so that AI's benefits can be fully realised. Today, we are proud to announce the 20 projects that have been awarded seed grants of up to £200,000 each to carry out independent research on safeguarding the societal systems and critical infrastructure into which AI is being deployed.

The selection process  

We received over 300 grant applications from universities, businesses, and non-profit organisations across the UK and internationally. Each application underwent a rigorous evaluation process, led by the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, both part of UK Research and Innovation (UKRI).

More than 150 external reviewers initially shortlisted 82 applications, which were then assessed by an expert panel chaired by Siân John MBE, EPSRC member and CTO of NCC Group. The panel evaluated proposals based on merit and potential impact, before the UK AI Security Institute made the final selection.  

The selected portfolio of 20 projects covers a diverse range of focus areas, reflecting the breadth of challenges in AI safety, security and resilience:

  • Making AI decisions safer—Exploring fundamental improvements to AI decision theory to ensure AI systems make safer and more aligned choices.
  • Protecting workers in AI-driven industries—Studying the impact of automation on operator skills, developing AI risk training for union workers, and assessing AI adoption risks in local government decision-making.
  • Combating misinformation & disinformation—Developing tools to help users assess content reliability and investigating how synthetic content can be exploited to amplify social unrest in times of crisis.
  • Ensuring AI is safe and effective in high-stakes domains—Mapping and mitigating risks in healthcare, finance, law, education, transport, and high-hazard industries, ensuring regulators and practitioners can respond to AI-related failures.
  • Securing AI infrastructure—Developing technical solutions for secure AI model sharing, assessing risks from AI agents interacting with operating systems and telecom networks, and ensuring confidentiality in private data exchanges.
  • Protecting individual privacy—Developing sociotechnical solutions to prevent geolocation of individuals through vision AI systems, and to address privacy concerns of AI embedded in wearable devices.  
  • Understanding how AI agents interact—Exploring the emergent behaviours of AI agents in game environments to understand how competition or collusion might emerge without designer intent (a brief illustrative sketch follows this list).
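
To make that last focus area concrete, here is a minimal, hypothetical sketch (not drawn from any awarded project) of the kind of experiment it describes: two simple Q-learning agents repeatedly play a prisoner's dilemma, and a competitive equilibrium emerges purely from payoff-driven updates, without either agent being designed to compete. The payoff values, the `Agent` class, and all parameters below are illustrative assumptions.

```python
# Illustrative sketch only: two stateless Q-learning agents in an iterated
# prisoner's dilemma. Neither agent is told to compete or collude; whatever
# pattern emerges comes purely from payoff-driven value updates.
import random

PAYOFFS = {  # (my_action, their_action) -> my reward; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}
ACTIONS = ["C", "D"]

class Agent:
    def __init__(self, lr=0.1, eps=0.1):
        self.q = {a: 0.0 for a in ACTIONS}  # running action-value estimates
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:       # occasional random exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=self.q.get)  # otherwise pick the best-valued action

    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

a, b = Agent(), Agent()
for _ in range(10_000):
    act_a, act_b = a.act(), b.act()
    a.learn(act_a, PAYOFFS[(act_a, act_b)])
    b.learn(act_b, PAYOFFS[(act_b, act_a)])

print("Agent A values:", a.q)  # defection typically dominates for both agents:
print("Agent B values:", b.q)  # an emergent competitive outcome no one designed
```

Because defection earns more whatever the opponent does, these stateless learners settle into mutual defection; richer state, memory, or communication is what can make collusive equilibria possible, and probing such dynamics is the aim of this research strand.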

Awardees

Project Leads | Organisation(s) | Project Title
Adel Bibi, Philip Torr and Adam Mahdi | University of Oxford | The Safety of Operating Systems AI Agents: Formulations, Evaluations, and Certification
Alexander Babuta, Ardi Janjeva and Sam Stockwell | The Alan Turing Institute | AI-enabled disinformation during security incidents: mitigating the risk of public disorder
Alexander Serb and Themis Prodromakis | University of Edinburgh | Artificial Intelligence Enabled Guide for Investment Strategies (AEGIS)
Antonio Valerio Miceli-Barone, Vaishak Belle and Shay B. Cohen | University of Edinburgh | Understanding and Improving the Behaviour of AI Agents in Competitive and Cooperative Games
Brian Sheil, Jennifer Schooling and Maya Indira Ganesh | University of Cambridge and Anglia Ruskin University | Guiding the boots on the ground: Advancing ethically informed socio-technical safety of AI systems in the public sector
Caitlin Bentley, Gordon Meadow and David Wavell | King's College London, Seabot Maritime and Frontier Robotics | Evolving Human-AI Competencies: Workforce Development for Building Systemically Safe Cyber-physical Systems
Carl Macrae | University of Nottingham | Building a learning infrastructure for systemic AI safety: developing processes for investigating and learning from systemic AI safety incidents
Farhana Ferdousi Liza, Shoaib Ahmed and Katherine Deane | University of East Anglia and University of Sussex Business School | BRA(AI)N: Building Resilience and Accountability in Artificial Intelligence Navigation
Gina Neff | Queen Mary University of London | Work for AI Safety (WAIS): Sociotechnical capabilities in the workplace to counter systemic risks
Haris Shuaib | Newtons Tree | CLIO: Clinical LLM Integration and Oversight
John Keers, Jun Liu and Niall McCarroll | Ulster University, Artificial Intelligence Research Centre and The Centre for Legal Technology | The use of Agentic AI in Judicial Decision-Making
Marios Kogias and Hamed Haddadi | Imperial College London | C3Infer: A Framework for Compartmentalized, Confidential, and Certified AI Inference
Martin Thomson and Richard Waine | Health and Safety Executive Science and Research Centre and Emlyn Square | Safe Artificial Intelligence for High Hazard Environments (SAFEHAZ)
Mark McGill, Richard Jones and Tanaya Guha | University of Glasgow and University of Edinburgh | WearAI: Examining the Societal Vulnerabilities Exposed by AI Embedded in Context-Aware Smart Wearables
Michael Bronstein, Reihaneh Rabbany and Shenyang Huang | University of Oxford, McGill University and Mila | Towards Trustworthy AI Agents for Information Veracity
Nick Hawes and Ruth Chang | University of Oxford | AI and Hard Choices: The Parity Model
Qian Lu, Vasile Palade and Huw Davis | Coventry University | Systemic Risk of Using AI-Generated Synthetic Data in Autonomous Vehicle Development
Shoaib Ehsan, Jack Stilgoe and Michael Milford | University of Southampton, University College London and Queensland University of Technology | PRIV-LOC: Assessing and Mitigating Privacy Risks of Vision-Language Models in Image-based Geolocation Systems
Syed A. R. Zaidi, Zeinab Nizami and Maryam Hafeez | University of Leeds | Strategic Decision-Making and Cooperation among AI Agents: Exploring Safety and Governance in Telecom
Yali Du | King's College London | Evaluating the Cooperative Behaviour of Systems of Generative Agents

What’s next

Over the next 12 months, these projects will generate crucial insights to inform AI governance, risk mitigation, and systemic safeguards. The work will help shape practical interventions that ensure AI’s integration into critical infrastructure is safe, secure and beneficial.  

Congratulations to our awardees, and our thanks to the hundreds of applicants, the expert reviewers, and our distinguished external panel who made this programme possible. Their collective efforts have shaped a robust and diverse research portfolio that will help prepare society for the next frontier of AI development.