Charles Dillon
Research Articles
An examination of Metaculus’ resolved AI predictions and their implications for AI timelines
Metaculus is a forecasting website which aggregates quantitative predictions of future events. One topic of interest on Metaculus is artificial intelligence. Here I look at what we might be able to learn from how the questions on this subject have resolved so far, in particular, how the predictions of the Metaculus community have performed. If they have done poorly, it would be valuable, both for making future predictions and for interpreting existing ones, to know whether there are any patterns that might reveal common mistakes in AI-related predictions.
Edit: I was contacted by the author of many unresolved AI questions on Metaculus, who tells me he intends to resolve up to 40 overdue questions on the AI subdomain…
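As a concrete illustration of the headline question (how well the community's predictions have performed on resolved questions), here is a minimal sketch of one standard accuracy measure, the Brier score. The records and method below are placeholders of mine, not real Metaculus data and not necessarily the scoring used in the post.

```python
# A minimal sketch (not the post's actual method) of scoring resolved binary
# questions: the Brier score of the final community probability against the
# 0/1 resolution. The records below are made-up examples, not real Metaculus data.
resolved_questions = [
    # (final community probability, resolution: 1 = yes, 0 = no)
    (0.80, 1),
    (0.25, 0),
    (0.60, 1),
    (0.10, 0),
]

def brier(prob: float, outcome: int) -> float:
    """Squared error between a probability forecast and a binary outcome."""
    return (prob - outcome) ** 2

scores = [brier(p, o) for p, o in resolved_questions]
mean_brier = sum(scores) / len(scores)

# 0.25 is the score of always guessing 50%; lower is better.
print(f"Mean Brier score over {len(scores)} questions: {mean_brier:.3f}")
```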
How does forecast quantity impact forecast quality on Metaculus?
Many people, myself included, worry about how much we can trust aggregate forecasts (e.g., the Metaculus community median) that are based on the predictions of only a small number of individuals. This consideration also came up in my recent post analysing predictions of future grants by Open Philanthropy, where having few predictors left me unsure of how much we could really trust the aggregate predictions. How justified is this worry? In other words, to what extent is the number of individual predictors on a question correlated with the accuracy of the aggregate forecast on that question? And to what extent does increasing the number of predictors itself cause the aggregate forecast to be more accurate?
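To make the correlational half of that question concrete, here is a rough sketch under assumptions: score each resolved question's aggregate forecast with the Brier score, then correlate those scores with the number of predictors. The question data are invented for illustration, and the post's actual analysis may well differ.

```python
# A rough sketch (assumptions, not the post's analysis): how does the number of
# predictors on a question relate to the accuracy of its aggregate forecast?
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical resolved binary questions:
# (number of predictors, final community probability, outcome)
questions = [(12, 0.7, 1), (85, 0.9, 1), (40, 0.3, 0), (150, 0.15, 0), (25, 0.6, 0)]

counts = [n for n, _, _ in questions]
briers = [(p - o) ** 2 for _, p, o in questions]

# A negative correlation would suggest more predictors go with lower (better)
# Brier scores, though this alone cannot separate correlation from causation.
print(f"corr(n_predictors, Brier) = {pearson(counts, briers):.2f}")
```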
An analysis of Metaculus predictions of future EA resources, 2025 and 2030
Six weeks ago I shared a Metaculus question series I had authored, focused mainly on predicting grants by Open Philanthropy in 2025 and 2030, along with some other questions on new large EA-aligned donors. This post contains a summary of the predictions on these questions so far.
Data on forecasting accuracy across different time horizons and levels of forecaster experience
Forecasting well is a valuable skill for many purposes and people, including for EA organisations aiming to identify which areas they should focus on and what the outcomes of various initiatives would be. There is a limited public record of people making scored forecasts over time horizons greater than ~1 year. Here I use data from PredictionBook and Metaculus to study the performance of predictions over different time horizons. I also look at how performance differs between users with different levels of forecasting practice.
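To illustrate the kind of analysis described, here is a minimal sketch that buckets resolved binary predictions by time horizon and reports a mean Brier score per bucket. The record format and numbers are hypothetical, not PredictionBook's or Metaculus' actual export schema.

```python
# A minimal sketch, under assumptions: compare forecast accuracy across time
# horizons by bucketing resolved binary predictions on the gap between forecast
# date and resolution date, then computing a mean Brier score per bucket.
from collections import defaultdict
from datetime import date

# (forecast date, resolution date, forecast probability, outcome) - illustrative records
records = [
    (date(2018, 1, 1), date(2018, 6, 1), 0.8, 1),
    (date(2017, 3, 1), date(2019, 9, 1), 0.4, 0),
    (date(2016, 5, 1), date(2020, 5, 1), 0.7, 0),
    (date(2019, 2, 1), date(2019, 5, 1), 0.2, 0),
]

buckets = defaultdict(list)
for made, resolved, prob, outcome in records:
    horizon_years = (resolved - made).days / 365.25
    buckets[int(horizon_years)].append((prob - outcome) ** 2)  # Brier contribution

for years in sorted(buckets):
    scores = buckets[years]
    mean = sum(scores) / len(scores)
    print(f"{years}-{years + 1} year horizon: mean Brier {mean:.3f} (n={len(scores)})")
```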