
Academic Engagement

We're looking for academics to collaborate on cutting-edge research to make AI safer.

Overview

AI safety is a fast-evolving field. At the AI Safety Institute (AISI), our work surfaces both opportunities and gaps in technical AI safety research. We want to raise awareness of these opportunities and help academics identify research directions that will have meaningful impact. Through our academic engagement programme, we aim to build a bridge between our technical staff and academics working on technical AI safety research, helping to advance the field.

Our programme is designed to foster collaboration with the wider academic community and to push AI safety research forward beyond what we do in-house, with a focus on developing new AI safety mitigations and the next generation of evaluations.

The academic engagement programme consists of two key components:

  • Collaborative research projects:
    We partner with leading academics on cutting-edge AI safety research. Example collaborations to date include a project with Sergey Levine at UC Berkeley on reducing deception in foundation models, a project with Zico Kolter at Carnegie Mellon University on new unlearning techniques for AI safety problems, and a survey on unlearning for AI safety with Oxford. See more here.
  • Workshops and problem books:
    Workshops are an essential tool to strengthen the AI safety community and foster alignment on technical topics. We organise and participate in workshops that bring together researchers from different backgrounds to facilitate knowledge exchange. One output of these workshops is a problem book, which articulates consensus on important research questions in the given topic area. See more here.

Collaborations

As the first ever government-backed AI safety institute, we offer collaborators a unique opportunity to translate AI safety research into real-world impact.

By collaborating with us, you can:  

  • Work with AISI experts: Work alongside leading experts in AI safety.  
  • Make a meaningful impact: Your research could be applied to real-world model testing and support technical advice for AI policy and international AI governance.
  • Gain valuable experience: Contribute to the development of safe and beneficial AI systems.
  • Receive additional support: On a case-by-case basis we may be able to provide funding or compute to support the research collaboration.

We have opened a call for researchers and academics to express their interest in collaborating with AISI on priority research areas, focused on our safeguards, safety cases, and science of evaluations workstreams, which include agents and capability elicitation research.

Those interested in collaborating with us on these priority research areas can fill out the Expression of Interest form. You will be asked to include your CV, a 500-word research statement detailing your project proposal, and a list of relevant publications. The call is open on a rolling basis.

What we’re looking for

We are looking for new collaborators for research in our safeguards, safety cases, and science of evaluations (including agents and capability elicitation) workstreams.

Our programme aims to study open research problems in technical AI safety: for example, understanding the limitations of certain safeguards, improving the efficacy of a mitigation, or developing novel evaluations beyond our existing work. The results of these projects will be shared in the public domain for others to build upon, whether through conference publications written jointly with AISI researchers, open-source code releases, public datasets, or jointly organised competitions.

Collaborations can take different shapes, ranging from hands-on joint work to providing expert advice. Initial project definitions are created through an iterative process, refining ideas jointly with AISI researchers until we converge on a topic of mutual interest. Projects usually last 4-6 months.

Workshops & problem books

Workshops

AISI organises workshops that bring together our technical and policy staff with external researchers, with the goal of pooling expertise and identifying promising research topics. Workshops help to strengthen the community around technical topics in AI safety. We publish workshop outcomes in the form of problem books, which articulate consensus on important research questions on a given topic.

The AISI Research Problem Book on AI & Security

Our first problem book, on AI & Security, highlights some of the most pressing challenges in generative AI security and provides a roadmap for future research. It is the result of AISI's first series of workshops, hosted at DEF CON 32. The workshops, kindly hosted by AI Security Forum '24, brought together prominent voices in generative AI security to identify a priority set of open and pressing technical problems.

Participants' discussions focused on four key areas:

  1. Vulnerabilities in the generative AI software stack
  2. Integrating generative AI models into software applications
  3. Malicious use of open access generative AI models
  4. Malicious use of closed access generative AI models

The workshops provided key insights into cutting-edge research problems, which informed the problem book described above. The goal of the problem book is to guide generative AI security research toward the most pressing issues. We invite researchers in academia, non-profits, and industry to use it as a starting point for future research. The research problems it identifies have also informed our list of AISI priority research areas for collaborations.

Frequently asked questions