Project proposal: Scenario analysis group for AI safety strategy
As part of our incubation work, RP researchers previously looked into a number of potential interventions to reduce existential risks. In a new post, they share a proposal for one such project: a scenario analysis group for AI safety strategy.
RP’s AI Governance & Strategy team - June 2023 interim overview
Rethink Priorities’ AI Governance & Strategy team works to reduce catastrophic risks related to the development and deployment of AI systems. This post provides an overview of the team’s strategy as of June 2023.
Concrete projects for reducing existential risk
This post lists 20 projects that Rethink Priorities’ Existential Security Team thinks might be especially promising, including: improving information security and cybersecurity at top AI labs, coordination between AI labs, facilitating people’s transition from AI capabilities research to AI safety research, field building for AI policy, and finding market opportunities for biodefence-relevant technologies.