Publications

While our publications are all listed here, they are easier to browse on our research page.

US public opinion of AI policy and risk

This nationally representative survey of U.S. public opinion on AI aimed to replicate and extend other recent polls. The findings suggest that people are cautious about AI and favor federal regulation, though they perceive other risks (e.g., nuclear war) as more likely to cause human extinction.

Read More
Longtermism, Biosecurity · Jam Kraprayoon

Does the US public support ultraviolet germicidal irradiation technology for reducing risks from pathogens?

Jam Kraprayoon’s research fellowship culminated in this report on the U.S. public’s attitudes toward ultraviolet germicidal irradiation technology for reducing risks from pathogens. Understanding the level of support for and awareness of these technologies, which framings of their benefits are most compelling, and what concerns exist should help in developing strategies for advocacy and expanded deployment.

Read More
Longtermism, Biosecurity · Jam Kraprayoon

Air Safety to Combat Global Catastrophic Biorisks

This report on air safety is a collaboration between 1Day Sooner and Rethink Priorities. The researchers explain how extending existing indoor air quality (IAQ) standards to include airborne pathogen levels could meaningfully reduce global catastrophic biorisk from pandemics. The report addresses bottlenecks and ways various actors could accelerate deployment and improve IAQ.

Read More
Longtermism, AI Safety & Governance · Ben Cottier

Conclusion and Bibliography for “Understanding the diffusion of large language models”

This is the ninth and final post in the “Understanding the diffusion of large language models” sequence, which presented key findings from case studies on the diffusion of eight language models that are similar to GPT-3. This post provides a conclusion, highlighting key findings from the research, along with a bibliography.

Read More
Longtermism, AI Safety & Governance · Ben Cottier

Publication decisions for large language models, and their impacts

This is the sixth post in the “Understanding the diffusion of large language models” sequence. In this piece, the researcher provides an overview of the information and artifacts that have been published for the GPT-3-like models studied in this project, estimates some of the impacts of these publication decisions, assesses the rationales for these decisions, and makes predictions about how decisions and norms will change in the future.

Read More
Longtermism, AI Safety & Governance · Ben Cottier

The replication and emulation of GPT-3

This is the fourth post in the “Understanding the diffusion of large language models” sequence. This piece explores what various actors needed in order to produce a GPT-3-like model from scratch, and when such models were developed. A timeline of selected GPT-3-like models (including attempts at producing them) traces their development and significance since GPT-3’s release.

Read More
Longtermism, AI Safety & Governance · Ben Cottier

Background for “Understanding the diffusion of large language models”

This is the second post in the “Understanding the diffusion of large language models” sequence. This piece provides background, including definitions of relevant terms, the inputs to AI development, the relevance of AI diffusion, and other information to contextualize the remainder of the sequence.

Read More
Longtermism, AI Safety & Governance · Ben Cottier

Understanding the diffusion of large language models: summary

How might transformative AI technology (or the means of producing it) spread among companies, states, institutions, and even individuals? What might the impact be, and how can we minimize the associated risks?

This is the first post in the “Understanding the diffusion of large language models” sequence, which introduces and summarizes the research project.

Read More
Longtermism, Nanotechnology · Ben Snodin

My thoughts on nanotechnology strategy research as an EA cause area

Advanced nanotechnology might arrive in the next couple of decades (my wild guess: there’s a 1–2% chance in the absence of transformative AI) and could have very positive or very negative implications for existential risk. There has been relatively little high-quality thinking on how to make the arrival of advanced nanotechnology go well, and I think there should be more work in this area (very tentatively, I suggest we want 2–3 people spending at least 50% of their time on this within three years).

Read More
Longtermism, Forecasting and Decision-making · Linchuan Zhang

Why short-range forecasting can be useful for longtermism

Most work on forecasting, including most EA work on forecasting, is on short-range forecasting (defined loosely here as timescales of ~1 week to ~3 years). Yet most of the motivation for why forecasting is valuable appeals to long-range forecasting, defined loosely as forecasting on timescales of 10 years or (much) longer. Here, I argue that advances in short-range forecasting (particularly in the quality of predictions, the number of hours invested, and the quality and decision-relevance of questions) can be robustly and significantly useful for existential risk reduction, even without directly improving our ability to forecast long-range outcomes, and without large step-change improvements to our current approaches to forecasting itself (as opposed to our pipelines for and ways of organizing forecasting efforts).

Read More
Longtermism, Forecasting and Decision-making · Linchuan Zhang

Potentially great ways forecasting can improve the longterm future

In addition to the EA Early Warning Forecasting Center I outlined in my other post, I think there are several ways forecasting may be very useful for longtermism, including: (1) forecasting as a way to amplify EA research; (2) prediction-evaluation setups as a way to improve EA grantmaking; (3) large-scale broad forecasting as an EA outreach intervention; (4) large-scale forecasting tournaments as a talent training and vetting pipeline; and (5) the dream: high-quality, calibrated, long-range forecasting (ideally also at scale and on-demand).

Read More

Issues with futarchy

This post collects possible issues with futarchy, a proposed form of governance based on prediction markets. (Possible benefits of futarchy are listed in the paper that introduced the idea and in my summary of it, among other places.) The post also lays out my main takeaways and a rough explanation of why I think futarchy should not be a focus for the EA community.

Read More