Summary
See companion post here.
Most work on forecasting, including most EA work on forecasting, is on short-range forecasting (defined loosely here as timescales of ~1 week – ~3 years). Yet most of the motivation for why forecasting is valuable appeals to long-range forecasting, defined loosely as forecasting on timescales greater than 10 years, or much longer.
Here, I argue that advances in short-range forecasting (particularly in quality of predictions, number of hours invested, and the quality and decision-relevance of questions) can be robustly and significantly useful for existential risk reduction, even without directly improving our ability to forecast long-range outcomes, and without large step-change improvements to our current approaches to forecasting itself (as opposed to our pipelines for and ways of organizing forecasting efforts).
To do this, I propose the hypothetical example of a futuristic EA Early Warning Forecasting Center. The main intent is that, in the lead-up to or early stages of potential major crises (particularly in bio and AI), EAs can potentially (a) have several weeks of lead time to divert our efforts to respond rapidly to such crises and (b) target those efforts effectively.
A hedge fund is a useful analogy here, in terms of the hypothetical set-up. In the same way that hedge funds are private entities that (among other activities) convert large amounts of both structured and unstructured data into accurate-ish prices, we can have a “forecasting center” that converts various forms of data into accurate predictions on EA-relevant questions. The forecasts are directly fed to key EA decision-makers.
This is the basic idea. There are several variants of this general idea that also seem promising (though also potentially net-negative). Of these, my top 3 are: a) combining the forecasting with a civilian intelligence (OSINT) arm to generate data about issues of unusually high EA importance, b) combining it with a crisis response unit to respond to pivotal moments[1] directly, and c) housing this forecasting center within a nation-state actor (perhaps the US government), to increase the chance that the forecasting center informs governmental rather than (just) EA rapid response decisions.
This post paints a specific vision for an organization that could be set up, and argues that we should aim towards it. Further work, from myself and others, would consider and prioritize a number of ways to aim towards this vision (including further research, starter projects or organizations to initiate, grants that are worth making, etc). Further work should also include a red-teaming of this vision, and other refinements.
Most of the points in this post will not seem original to people deeply familiar with the EA forecasting space. Nonetheless, I thought it may be helpful to bring much of it together in one place, as well as to include my current opinionated best guesses and judgements.
Early Warning Forecasting Center: What it is, and why it’d be cool
The “hedge fund” metaphor can convey a lot of related concepts, but the definition/cluster of ideas I most want to invoke is “a large private entity that (relatively) accurately converts much of the information in the world into clear, precise prices on the financial market.” Similarly, I want us to consider having an Early Warning Forecasting Center as analogous to a hedge fund: a large entity that converts all sorts of data into accurate, calibrated, by-default-private, predictions on short-range questions of EA interest.
What outputs would an Early Warning Forecasting Center provide?
I think the most obvious outputs are accurate, calibrated, constantly updated forecasts for important questions of (longtermist) EA interest. The most important use case is Crisis Warning and Guidance: giving us additional “warning time” and decision-relevant information for future pivotal moments, which I expect to be both quite important and useful for biosecurity and AI risk, as well as future longtermist priorities.
Earlier examples of work in this space include Epidemic Forecasting and Samotsvety’s nuclear risk forecasts, though they are of course about events substantially less pivotal than the future events I’m hypothesizing.
Biosecurity/Pandemic Response
A forecasting center can provide early warning about pandemics to select decision-makers in EA, or the EA community overall. (For why it might not be good to share forecasts with the entire EA community or the entire public by default, see “Why private forecasting?” below.)
The basic idea is that we would have many forecasting folks[2] within the forecasting center privately tracking the probability of an existentially threatening pandemic next year, or next month. Once the aggregated best-guess forecast of an existentially threatening pandemic[3] is above a predetermined threshold, the information would be relayed to trusted EA decision-makers (for example, Open Phil), who would then decide what to do with it. The forecasting center, in collaboration with other decision-makers, would continue to carefully monitor the situation and provide rapid, constantly updated forecasts to continue clarifying the situation.
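To make the alerting mechanism concrete, here is a minimal sketch in Python. The aggregation method (geometric mean of odds), the 1% threshold, and all names in the code are illustrative assumptions of mine, not details specified by this proposal:

```python
import math

def geo_mean_odds(probs):
    """Aggregate individual probabilities via the geometric mean of odds
    (one common aggregation method; an illustrative choice here)."""
    odds = [p / (1 - p) for p in probs]
    agg = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return agg / (1 + agg)

# Hypothetical predetermined threshold for relaying the forecast onward
ALERT_THRESHOLD = 0.01

def maybe_alert(forecasts, notify):
    """Relay the aggregate to trusted decision-makers once it crosses the threshold."""
    p = geo_mean_odds(forecasts)
    if p >= ALERT_THRESHOLD:
        notify(f"Aggregate P(existentially threatening pandemic within a year): {p:.2%}")
    return p

# Example: current probabilities from four hypothetical forecasters
maybe_alert([0.005, 0.01, 0.02, 0.05], print)  # aggregates to ~1.5%, so an alert is sent
```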
AI
The story is less clean for AI, but potentially more important. The basic idea is that right now we don’t know what the early signs/clearest warning indicators are for pivotal moments in AI (potentially including but not necessarily limited to the “point of no return”). But we can imagine being much less confused in the next 10 years, as researchers in AI alignment, AI governance, and AI deployment consider the question of warning indicators further. Once we have much more strategic clarity on what the warning indicators (or other signs of pivotal moments) are, the EA forecasting center could then forecast on those questions. Because we will have scaled up a forecasting organization concurrently with gaining the appropriate strategic clarity, we can deploy the forecasters on the important early-warning questions as soon as we have enough clarity to know which questions are good to forecast.
This will give us lead time on how to respond to major AI crises and opportunities, akin to how, as discussed above, a forecasting center would allow us to respond more quickly to future pandemics.
Why does this matter?
I think “lead time” on crises and other pivotal moments in the future can just be incredibly useful, assuming we don’t squander it. For example, in an alternate timeline where COVID-19 had a 100% fatality rate, 2-3 weeks of lead time could be dramatically useful in helping EAs prioritize decisions. Most biosecurity-focused EAs and many non-biosecurity-focused EAs would likely drop everything to work on stopping it, some of us might seclude ourselves, we might try to inform our friends in government, etc.
There’s a similar if murkier picture for AI. I have not thought much about other use cases, but I imagine there should be a reasonable analogous argument for great power conflict, nanotech risks and opportunities, etc., as well.
Why focus on short-range forecasting rather than long-range forecasting?
The short answer is that EAs should work on both!
The longer answer is that I do not know whether a) humanity in general has the forecasting ability to do well on important long-range forecasting questions of EA relevance, and b) whether the existing forecasting tools and methodologies are best equipped for this. So I want to advocate for something that I think would be tractable and really useful to have at scale, while sidestepping the question of the feasibility of long-range forecasting.
While this particular post couples a general argument (the tractability and usefulness of short-range questions for longtermism) and a specific design (hypothetical setup of the early warning forecasting center), the coupling is not strictly necessary:
- Things like the early warning forecasting center may be used for long-range questions as well, and
- I see the tractability and usefulness of the early warning forecasting center as a sufficient but not necessary condition for the usefulness of short-range forecasting for longtermism.
- My companion post lists a number of other potentially useful projects in the short-range forecasting ∩ longtermism space.
Why private forecasting? Why an early warning forecasting center rather than (e.g.) an EA prediction market?
The short answer is that I see my above proposal as the most exciting option, but I see the other options as probably great too!
The longer answer is that I have a number of overlapping reasons why I prefer the current proposed structure to other potentially great structures:
- I think markets are not the best way to organize small (<1000?) numbers of forecasters, or people in general.
  - I do think prediction markets are great, and liquid/large-scale ones are likely a large improvement over the status quo.
  - However, I also expect them to be a large waste of resources (especially forecasting time) compared to idealized setups.
    - I expect large-scale prediction markets to involve a lot of redundancy and waste.
    - There’s a reason that very few companies are internally structured like a market.
  - I think at sufficiently large scales, organizing people as a market rather than a firm would make more sense/would be the only viable way to prevent bureaucratic bloat, but I expect this to be true only at very large scales.
  - H/T Misha: there’s also some limited empirical evidence against the quality of small-scale prediction markets.
- Private forecasting allows for careful navigation of, and discretion around, information hazards.
  - This is especially relevant if we think the most important private forecasting may stumble across or intentionally evaluate “dangerous information”.
- Relatedly, private forecasting maintains option value and allows altruistic actors to maintain an “epistemic edge” on the questions that matter.
  - There’s always the option to release the less sensitive subset of your forecasts!
How should an early warning forecasting center be structured?
Here’s a specific vision of how a forecasting center may be operated. I expect reality to end up looking extremely different, but the specificity of a vision is useful as a) a target to aim at and b) a target to criticize and hammer out object-level disagreements with.
In a mature form, the forecasting center will employ 50-1000 people. Forecasters are incentivized by high pay, prestige, and the promise of working on some of the world’s most important problems. Forecasters in a forecasting center can be recruited/selected through one of several plausible recruiting paths:
- Select just like regular hedge funds: hire people who are generally intelligent, quantitative, good at math puzzles, etc.
- Based on forecasting track records: we can look at people who performed well in past forecasting tournaments, prediction markets, etc., and use those as recruiting pipelines (see the sketch after this list).
- Future forecasting-specific psychometrics: As we collect more data on forecasters, and do more research on what makes for good short-range forecasters, future forecasting researchers and forecasting orgs will increasingly be able to select on traits that we believe to be associated with short-range forecasting prowess.
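As one illustration of the track-record path, a recruiter could rank candidates by Brier score (mean squared error between stated probabilities and resolved outcomes; lower is better) across past questions. A minimal sketch, with hypothetical names and data:

```python
def brier_score(track_record):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in track_record) / len(track_record)

# Hypothetical track records: (probability given, resolved outcome)
candidates = {
    "alice": [(0.9, 1), (0.2, 0), (0.7, 1)],
    "bob":   [(0.5, 1), (0.5, 0), (0.5, 1)],
}

# Rank candidates from best (lowest) to worst Brier score
for name in sorted(candidates, key=lambda n: brier_score(candidates[n])):
    print(name, round(brier_score(candidates[name]), 3))
```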
Just as most people in a financial trading firm are not traders, most people in the forecasting center will not be judgmental forecasters. (Note that this would be different from existing forecasting setups.) Instead, a large number of people will do other tasks, including:
- Data engineers working on making novel data pipelines
- Data analysts/data scientists
- Professional news readers/followers of direct novel signals
- Project managers
- People working on operationalizing more precise questions/intermediate cruxes
- Comms with stakeholders
- Operational support
And other important specialized tasks that accelerate the work of a forecasting organization.
Just like a trading firm, in addition to people who specialize in tasks other than forecasting, a forecasting center will also have specialization by task type within its forecasting units. This will include:
- “Cold Teams” of forecasters who look at relatively long time ranges, cycling through all questions at a 3-12 month cadence,
  - In a pandemic example, this could include questions like P(100% fatality rate pandemic in the world next year), P(100% fatality rate pandemic from China), P(100% fatality rate pandemic of viral origin), P(100% fatality rate pandemic from Wuhan), etc. (see the coherence sketch after this list)
- “Signal Teams” of people who specialize in looking at signals (from data pipelines, news, other forms of looking at the world) and alerting others of what signals changed in specific probabilities.
  - In the pandemic example, this could be people noticing reports of unusually high rates of pneumonia in Wuhan, or noticing that lab researchers are falling sick
- “Hot Teams” of forecasters trained to look at the new signals and update the overall numbers based on new signals very quickly, trying to have a <24h turnaround time for updating based on novel signals
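One useful property of the cold-team questions in the pandemic example is that they are logically nested: a 100% fatality rate pandemic from Wuhan is also one from China, and one somewhere in the world, so the probabilities must be (weakly) increasing in that order. A minimal coherence check, with hypothetical names and numbers:

```python
# Hypothetical cold-team forecasts, ordered from most to least specific event.
# Each event implies the next, so the probabilities must not decrease.
nested_forecasts = [
    ("P(100% fatality rate pandemic from Wuhan next year)", 0.0001),
    ("P(100% fatality rate pandemic from China next year)", 0.0004),
    ("P(100% fatality rate pandemic in the world next year)", 0.0010),
]

def check_coherence(forecasts):
    """Flag any violation of P(specific event) <= P(more general event)."""
    for (q1, p1), (q2, p2) in zip(forecasts, forecasts[1:]):
        if p1 > p2:
            print(f"Incoherent: {q1} = {p1} exceeds {q2} = {p2}")

check_coherence(nested_forecasts)  # prints nothing: these numbers are coherent
```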
If something looks potentially serious, more effort within the org will be rapidly reallocated to carefully monitor the situation and forecast on it, trying to assess whether a warning signal is the “real deal” versus yet another false alarm.
As mentioned above, if the bottom-line conclusion of the org is that the net probability of an event is high enough to cross a predetermined threshold (or if judgmental factors swing the balance), the forecasting center will quickly alert actors in other parts of the EA response chain, whether that’s a trusted third party like Open Phil or a specialized crisis response unit (see below).
Variants
The above is the basic idea. There are several other variants of the same basic idea that I think are worth exploring, which I briefly outline below.
Variant 1: Combination with a civilian intelligence/OSINT arm
Civilian OSINT is intelligence analysis conducted by civilians on open-source data (see profile by eukaryote here). There’s already a fairly natural continuum from judgmental forecasting to civilian intelligence, but we can potentially combine a forecasting center with an explicit civilian OSINT arm if we decide this is the right move. Better intelligence analytics can potentially be really useful in informing our forecasts, and may also help with recruitment and coordination with certain individuals.
One clear downside of this approach is that “civilian intelligence” (both the name and the actual activity/approach) may be seen as legibly threatening to nation-state actors. This has some benefits (for example, if governments spy on us, they might be more likely to pay attention to what we say), but overall my current best guess is that nation-state attention we didn’t intend to attract is likely to be on balance bad.
If we do end up going the civilian intelligence/OSINT route, we would need to manage a) general optics, b) attention from nation-state actors, and c) how public we want to be with our results (my guess is: not very).
Variant 2: Combination with a crisis response unit
A common critique of EA actions during crises is that we already have some nontrivial foresight compared to the rest of the world, but we have mostly squandered the “what’s next” step of actually taking actions in accordance with this novel understanding (“Mastery over the nature of reality does not guarantee you mastery over the behavior of reality”). E.g., many of us were paying attention to covid and saying it was a big deal, and some of us bought food, hand sanitizer, and masks, but very few of us shorted the relevant stocks or did much to mitigate the crisis in its very early days (H/T Misha). Similarly, with the current Russia-Ukraine crisis, some EAs saw the war coming and made the relevant bets, but we again didn’t short the stocks, didn’t plan the evacuation of EAs in Ukraine until the war was ongoing, and didn’t (to my knowledge) plan anything Russia-wise.
So the appropriate structure for an elite forecasting center might be to pair it with an elite crisis response unit, possibly in a network/constellation model where most people are very part-time before urgent crises, so that the additional foresight is guaranteed to be acted on, rather than tossed to the rest of the movement (whether decision-makers or community members) to be acted on later.
Variant 3: Housed in a national government
Another potential variant is to house this forecasting center (with or without the intelligence arm, and with or without a crisis response arm) in a national government (most likely the US), instead of as an independent EA entity. Downsides are that:
- Tractability might not be very high
  - My impression is that many people have tried to get gov’ts more interested in forecasting (including the original IARPA grants by Matheny to Tetlock, among others), with limited success
  - I think there was more success in the UK gov’t?
- The organization being part of a government gives EAs less freedom to act and less control over a) what questions the forecasting center pays attention to and b) how we respond to such information
- EA might squander some of our epistemic edge relative to the rest of the world, especially on important big-picture and longtermist-relevant issues
- The relevant government may take unilateral actions that defect against the rest of the world.
On the other hand, there’s a very clear upside in getting clearer information and better epistemic institutions in national gov’ts, and in leveraging the weight of the (e.g.) US gov’t behind crisis response to important existential events.
My current best guess is that the downsides outweigh the upsides, but I believe this is worth further exploration and research.
Next Steps
The General Longtermist Team at Rethink Priorities is excited about a) forecasting and b) identifying the megaprojects of the future and pointing people at them, so the intersection of the two may well be a good fit for us.
Further work here (from our team and others) should elucidate and prioritize specific next steps worth taking, including recommending grants, directions for further research, useful skills to build, and (especially) identifying and incubating starter projects to move towards this vision.
There are a small number of other groups in this space that I’m excited about, including new projects, many of which I expect to have launch posts before the middle of this year.
We are also excited about further work by other people in this space and adjacent spaces, including from entrepreneurs, forecasters, forecasting researchers, and grantmakers interested in setting up something like one of the models I proposed above, or in being an early employee of one. There are also a number of other exciting ways to contribute. The one I’d like to highlight is for decision-makers and researchers to consider how expert forecasting can be most helpful for their own work.
Acknowledgements
Many of the ideas in this document have floated around the nascent EA forecasting community for some time, and are not original to me. Unfortunately, I have not tracked well which ideas are original, so maybe it’s safe to assume that none of them are. At any rate, I consider the primary value of the current work to be a distillation. Thanks to Peter Wildeford, Michael Aird, Ben Snodin, Max Räuker, Misha Yagudin, Eli Lifland, Renan Araujo, and Ashwin Acharya for feedback on earlier drafts. Most mistakes are my own.
This research is a project of Rethink Priorities. It was written by Linch Zhang. If you like our work, please consider subscribing to our newsletter. You can explore our completed public work here.
- ^
While most/all of the existing examples here are of warnings for catastrophic risks, I expect there may also be times when warnings can be useful for pivotal moments that are strongly positively valenced (cf. existential hope) or ambiguously valenced (e.g., strategically important moments in dual-use technology, or things that may affect the strategic equilibrium between powerful actors)
- ^
Including both conventional forecasters and lots of support staff: operational staff, data engineering support, professional data gatherers, and so forth.
- ^
Operationalization TBD; I’m currently thinking something in the range of “a disease with an R0 > 4, a 100% fatality rate, and no known cure has infected >1000 people”