Publications

While our publications are all listed here, they are easier to browse on our research page.

AI Safety & Governance, Surveys and data analysis Rethink Priorities

Why some people disagree with the CAIS statement on AI

Previous research from Rethink Priorities found that a majority of the population agreed with the Center for AI Safety (CAIS) statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This research piece explores why 26% of the population disagreed with this statement.

AI Safety & Governance, Surveys and data analysis Rethink Priorities

US public perception of CAIS statement and the risk of extinction

On June 2-3, 2023, Rethink Priorities conducted an online poll of US adults to assess their views regarding a recent open statement from the Center for AI Safety (CAIS). The statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”


US public opinion of AI policy and risk

This nationally representative survey of US public opinions on AI aimed to replicate and extend other recent polls. The findings suggest that people are cautious about AI and favor federal regulation, though they perceive other risks (e.g., nuclear war) as more likely to cause human extinction.

Longtermism, AI Safety & Governance Ben Cottier

Conclusion and Bibliography for “Understanding the diffusion of large language models”

This is the ninth and final post in the “Understanding the diffusion of large language models” sequence, which presented findings from case studies on the diffusion of eight language models similar to GPT-3. This post concludes the sequence by highlighting key findings from the research, and provides a bibliography.

Longtermism, AI Safety & Governance Ben Cottier

Publication decisions for large language models, and their impacts

This is the sixth post in the “Understanding the diffusion of large language models” sequence. In this piece, the researcher provides an overview of the information and artifacts that have been published for the GPT-3-like models studied in this project, estimates some of the impacts of these publication decisions, assesses the rationales for these decisions, and makes predictions about how decisions and norms will change in the future.

Longtermism, AI Safety & Governance Ben Cottier

The replication and emulation of GPT-3

This is the fourth post in the “Understanding the diffusion of large language models” sequence. This piece explores what was required for various actors to produce a GPT-3-like model from scratch, and when various GPT-3-like models were developed. It includes a timeline of selected GPT-3-like models (and attempts at producing them) since GPT-3’s release, noting the significance of each.

Longtermism, AI Safety & Governance Ben Cottier

Background for “Understanding the diffusion of large language models”

This is the second post in the “Understanding the diffusion of large language models” sequence. This piece provides background, including definitions of relevant terms, the inputs to AI development, the relevance of AI diffusion, and other information to contextualize the remainder of the sequence.

Longtermism, AI Safety & Governance Ben Cottier

Understanding the diffusion of large language models: summary

How might transformative AI technology (or the means of producing it) spread among companies, states, institutions, and even individuals? What might the impact of that spread be, and how can we minimize the risks?

This is the first post in the “Understanding the diffusion of large language models” sequence, which introduces and summarizes the research project.
