We're looking for academics to collaborate on cutting-edge research to make AI safer.
AI safety is a fast-evolving field. At the AI Safety Institute (AISI), our work surfaces both opportunities and gaps in technical AI safety research. We want to raise awareness of these opportunities and help academics identify research directions with meaningful impact. Through our academic engagement programme, we aim to build a bridge between our technical staff and academics working on technical AI safety research, helping to advance the field.
Our programme is designed to foster collaboration with the wider academic community and to advance safe AI research beyond what we do in-house, with a focus on developing new AI safety mitigations and the next generation of evaluations.
The academic engagement programme consists of two key components:
As the first ever government-backed AI safety institute, AISI offers collaborators a unique opportunity to translate AI safety research into real-world impact.
By collaborating with us, you can:
We have opened a call for researchers and academics to express their interest in collaborating with AISI on priority research areas, focused on our safeguards, safety cases, and science of evaluations workstreams, which include agents and capability elicitation research.
Those interested in collaborating with us on these priority research areas can fill out the Expression of Interest form. You will be asked to include your CV, a 500-word research statement detailing your project proposal, and a list of relevant publications. The call will remain open on a rolling basis.
We are looking for new collaborators for research in our safeguards, safety cases, and science of evaluations (including agents and capability elicitation) workstreams.
Our programme aims to study open research problems in technical AI safety, for example understanding the limitations of certain safeguards, improving the efficacy of a mitigation, or developing novel evaluations beyond our existing work. The results of these projects will be shared in the public domain for others to build upon, whether through conference publications jointly with AISI researchers, open-source code releases, public datasets, or jointly organised competitions.
Collaborations can take different shapes, ranging from hands-on joint work to expert advice. Initial project definitions are created iteratively, with ideas refined jointly with AISI researchers until we converge on a topic of mutual interest. Projects usually last 4–6 months.
AISI organises workshops that bring together AISI’s technical and policy staff with external researchers, with the goal of pooling expertise and identifying promising research topics. Workshops help strengthen the community around technical topics in AI safety. We will publish workshop outcomes in the form of problem books, which articulate consensus on the important research questions for a given topic.
Our first problem book, on AI & Security, highlights some of the most pressing challenges in generative AI security and provides a roadmap for future research. It is the result of AISI’s first series of workshops, kindly hosted by AI Security Forum ‘24 at DEF CON 32, which brought together prominent voices in generative AI security to identify a priority set of open and pressing technical problems.
Participants’ discussions were focused on four key areas:
The workshops provided key insights into cutting-edge research problems, which informed the problem book shared above. The goal of the problem book is to guide generative AI security research toward the most pressing issues, and we invite researchers in academia, non-profits, and industry to use it as a starting point for future research. The research problems it identifies have also informed our list of AISI priority research areas for collaborations.
Collaborations are not restricted to any organisation type, but we expect most applications to come from academic institutions and external research groups. We will assess projects based on the applicant’s experience in the field and on fit with AISI’s needs.
The essential selection criteria we are looking for are: