AISI collaborates with leading researchers to foster cutting-edge solutions that address the most pressing challenges in AI security and safety.
Our Grants programmes are designed to maximise impact and adaptability in an era of rapidly evolving technology. We support projects that push the boundaries of AI research while ensuring security, safety and responsible development remain at the core of innovation.
Through targeted grant funding, we collaborate with leading researchers, institutions, and organisations to foster cutting-edge solutions that address the most pressing challenges in AI security and safety. Our approach ensures that resources are effectively allocated to generate high impact outcomes for the broader AI ecosystem.
If you are working on innovative solutions that align with our mission, we invite you to explore our research partnership funding opportunities. Together, we can shape a safer and more secure AI future.
You can see the types of research we are excited to fund in our Problem Books and Research Statements.
The Alignment Project is a multidisciplinary research agenda to prevent advanced AI systems from behaving dangerously—either intentionally or accidentally. Its work supports and funds theory and experimentation across 11 fields, aiming to develop robust alignment, oversight, and monitoring techniques.
We are interested in funding projects that develop mitigations for the safety and security risks posed by misaligned AI systems.
As AI technologies rapidly evolve, collaboration with the research community is essential to safely develop the next generation of AI tools, evaluations, and mitigations.
The Challenge Fund will award grants of up to £200,000 per project to address pressing, unresolved questions in AI safety and security. Researchers worldwide can access grants for innovative research in fields such as cyber-attacks and AI misuse.
The Fund will focus on supporting research that tackles four critical AI security challenges. As AI integrates into financial markets, healthcare, and energy grids, failures or misuse could cause systemic disruptions and security risks. AI systems are also increasingly targeted for manipulation, with bad actors attempting to bypass safeguards and exploit advanced capabilities. This funding will support research to strengthen protections and reduce these risks.
The Fund is designed to support a diverse range of projects that contribute to the broader field of safe and secure AI development. While we are particularly interested in research areas outlined here, we also welcome proposals that explore other innovative topics relevant to the safe and secure development of AI systems.
Please refer to the recorded session of the applicant webinar from March 11, 16:00–17:00 GMT for an overview of the Challenge Fund and a discussion of the questions raised during the session. For official responses, please consult the Clarification Questions document, which will include answers to both the questions addressed in the webinar and those that were not covered. We will update this document regularly to ensure it reflects the latest information.
Please refer to the recorded session of the applicant webinar from June 12, 16:00–17:00 GMT for further insights into the Challenge Fund. This follow-up session focused on the types of research we’re looking to fund, areas of interest for our research teams, and opportunities for collaboration. It also included a live Q&A segment with applicants.
The AISI Challenge Fund encourages researchers based at eligible* UK and international academic institutions and non-profit organisations to submit their solutions in response to the following Challenge Statement.
We are seeking applications that propose research that aligns with AISI's remit. We particularly welcome proposals that address AISI's priority areas.
Proposals should demonstrate clear pathways to impact and offer research that would not otherwise occur without this funding.
*More details on eligible applicants can be found in our Application Pack.
The Fund will award grants ranging from £50,000 to £200,000 per project, tailored to the scope of proposals.
As the first ever government-backed AI institute, AISI offers a unique opportunity to translate research on safe and secure AI development into real-world impact. By collaborating with us, you will:
The deployment of AI across critical infrastructure and societal systems, such as financial markets, healthcare, and energy grids, offers immense benefits. It also poses risks with the potential for significant disruption in high-stakes domains. Our Systemic AI Safety Grants Programme, first announced at the Seoul AI Summit, aims to increase societal resilience to AI-related risks so that the benefits of AI can be fully realised. Its key objectives included:
The grant application window is now closed, and we are pleased to announce that 20 projects have been selected and are currently in the delivery period. This marks an exciting milestone, as it is the first grant scheme that AISI has delivered.