FIG. 11 The AGI horizon
The future is always
twenty years out.
Every notable figure in AI has a public guess for when artificial general intelligence arrives. They have been making those guesses for seventy years. In almost every decade, the typical prediction has landed about twenty years out. Said in 1965. Said in 1993. Said again last week.
Two hundred and fifty forecasts from one hundred and forty named figures. Press play and watch the predictions accumulate from 1950 forward. The twenty-year horizon emerges as a faint glowing crest along which most predictions terminate, regardless of when they were made.
§ I Seventy years of "almost ready"
Everyone agrees AGI is coming. They have for seventy years.
In 1965, Herbert Simon said machines would be capable of doing any work a man could do within twenty years. In 1970, Marvin Minsky told Life magazine the problem would be solved in three to eight years. In 1993, Vernor Vinge said the singularity would arrive within thirty years. In 1999, Ray Kurzweil drew a line at 2029. In 2025, Demis Hassabis said five to ten years. Decade after decade, the forecast horizon stays roughly a generation ahead of whoever is speaking.
The plot below is every public prediction we could find, organized so that the year a person spoke is on one axis and the year they pointed to is on the other. There is a faint crest across the chart where most of the dots terminate. That crest is twenty years from the speaker. It has been there the whole time.
§ II · FIG. 11.1 The Horizon
Each lift is one prediction. The bar starts at the year it was said and rises to the year it forecasted. Press play to watch them appear in order. Hover any lift for the quote.
§ III · FIG. 11.2 The 2022 compression
ChatGPT shipped in November 2022. In the eighteen months that followed, the median expert prediction for AGI moved closer by roughly twenty-five years. It is the largest single shift in expert opinion recorded in this dataset.
§ IV · FIG. 11.3 Where the four camps land
The same word, four very different distributions. The frontier lab average and the survey-of-academics average are nearly twenty years apart, on the same question, in the same year.
§ V · FIG. 11.4 The minds that changed
Six figures who have publicly revised their AGI timeline more than once. Nearly every revision moved the date earlier.
§ VI · FIG. 11.5 Pick a year
Drag the slider. See who agrees with that year, what they said, and what camp they belong to.
§ VII · FIG. 11.6 The jagged frontier
The 2023 ESPAI survey asked researchers when AI would reach human-level performance on specific tasks. The answer is a calendar, not a date. The cracks come first.
§ VIII · FIG. 11.7 The capability staircase
DeepMind's "Levels of AGI" framework (Morris et al., 2023). One word, six different definitions. Where we stand and where each rung is projected to land. Frontier-lab framework, optimist by design.
§ IX · FIG. 11.8 The sixty-nine-year gap
The same 2023 expert survey gave two answers to two slightly different questions. By when can AI do every task better than a human? Median answer: 2047. By when does AI actually do every task, replacing every job? Median answer: 2116. The gap between "possible" and "deployed" is bigger than the gap between today and "possible."
§ X Why so wrong, so often
The Armstrong-Sotala study put a name on the pattern. Across sixty years of dated AGI predictions, the central tendency has been fifteen to twenty-five years from the time of utterance, regardless of the year. Simon to 1985, Minsky to the late seventies, Vinge to 2023, Kurzweil to 2029, Hassabis to 2030. Twenty years is what "soon, but not yet" sounds like in the mouth of an expert.
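The horizon arithmetic behind that claim is simple to sketch. Below is a minimal Python illustration using only the five predictions named above, with year ranges collapsed to their midpoints as the dataset does; the figures come from the quotes in the text, not the full 250-row dataset.

```python
from statistics import median

# The five predictions named above: (speaker, year said, target year).
# Ranges collapse to midpoints, as in the dataset:
# Minsky's "three to eight years" from 1970 -> 1975.5,
# Hassabis's "five to ten years" from 2025 -> 2032.5.
predictions = [
    ("Simon",    1965, 1985.0),
    ("Minsky",   1970, 1975.5),
    ("Vinge",    1993, 2023.0),
    ("Kurzweil", 1999, 2029.0),
    ("Hassabis", 2025, 2032.5),
]

# Horizon length: how far ahead each speaker placed arrival.
horizons = [target - said for _, said, target in predictions]
print(sorted(horizons))   # [5.5, 7.5, 20.0, 30.0, 30.0]
print(median(horizons))   # 20.0 -- inside the 15-to-25-year band
```

Even in this tiny sample the median horizon sits at exactly twenty years; the dataset runs the same computation over all 250 rows.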
Three forces probably keep the horizon stable. Frontier-lab executives benefit from short timelines for fundraising. Safety researchers benefit from "soon enough to matter." Tenured academics benefit from "current paradigm will not work." The camps disagree on the year, but each camp has a structural reason to keep the year where it is.
And the definitions keep moving. AGI in 2015 meant something different than AGI in 2025. Jensen Huang's "achieved" definition is "an AI that can build a billion-dollar business." Sam Altman calls AGI "a very sloppy term" and now talks about "superintelligence" instead. As capabilities arrive, the goalposts slide forward to include the still-missing pieces.
So: are 2026's predictions the first to break the pattern, or are they the same diagonal in new clothes? The honest answer is that we will know in 2046.
§ XI · FIG. 11.9 Potential outcomes
Forecasts answer when. They don't answer what. The most-discussed AI futures are the surface ones, utopia and extinction, but the probability mass sits below the waterline, in scenarios where humans don't get conquered so much as made redundant.
Click a label to read the scenario. Press Esc or the × to close.
Tegmark, Life 3.0 (2017), Ch. 5; Kulveit et al., "Gradual Disempowerment," arXiv:2501.16946 (2025). Tegmark deliberately does not assign probabilities to these scenarios. They are structural possibilities, not forecasts.
§ XII Methodology & Colophon
Two hundred and fifty rows assembled across four research batches in April 2026. Sources include the BLS-style synthesis of public-record predictions, the 2023 AI Impacts ESPAI survey (n=2,778), Metaculus aggregate snapshots, frontier-lab CEO statements, podcasts, blogs, and primary press. Each row carries a verification level and a source URL where available.
For each prediction we record the year it was said, the year (or year range) it forecast, the speaker's role at the time, and the concept used (AGI, HLMI, ASI, TAI, "powerful AI"). When a person made multiple predictions across years, each is its own row. Definitions vary across speakers; the concept is preserved per row rather than normalized.
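As a concrete sketch of what one row carries, here is a hypothetical Python representation. The field names and verification labels are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PredictionRow:
    """One public AGI prediction. Field names are illustrative."""
    speaker: str
    role: str                     # role at the time of the statement
    year_said: int
    year_target: Tuple[int, int]  # (low, high); a single year is (y, y)
    concept: str                  # "AGI", "HLMI", "ASI", "TAI", "powerful AI"
    verification: str             # hypothetical labels, e.g. "primary" / "secondary"
    source_url: Optional[str] = None

    def midpoint(self) -> float:
        """Collapse a year range to a single-year midpoint."""
        low, high = self.year_target
        return (low + high) / 2

# Minsky's 1970 Life magazine prediction, "three to eight years":
row = PredictionRow("Marvin Minsky", "MIT professor", 1970,
                    (1973, 1978), "strong AI", "secondary")
print(row.midpoint())  # 1975.5
```

The midpoint collapse is the same flattening the limitations note below acknowledges: a single-year midpoint loses the width of the original range.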
AI Impacts ESPAI 2023 ↗
Metaculus weakly-general-AI ↗
AI 2027 (Kokotajlo et al.) ↗
Situational Awareness (Aschenbrenner) ↗
Sister lab: The Productivity Paradox Predictions Archive applies the same horizon methodology to macroeconomic productivity forecasts.
Many quotes are paraphrased from secondary press; rows are flagged with a verification level. Single-year midpoints flatten ranges. Survey aggregates appear as one row each despite representing thousands of underlying respondents. The sample is heavily anglophone and tilted toward public figures. The dataset is not exhaustive and is not meant to be.
FAQ
When will AGI arrive?
That's the question the lab tries to answer empirically by collecting every public AGI prediction it can find from 1950 to 2026. The aggregate of 250 predictions has averaged "about twenty years from now" in essentially every decade, with the median forecaster expecting AGI within their own remaining career. The pattern itself, stable across seventy years of AI history, is more informative than any individual prediction.
What counts as an "AGI prediction" in this dataset?
A public, named, dated forecast about when artificial general intelligence (or human-level AI, or strong AI) will arrive. Each entry has the predictor's name, role, the source (press article, conference talk, paper, or interview), the year said, and the year targeted. Implicit predictions and unattributed forecasts are excluded.
Who predicts AGI is closest?
Researchers at frontier labs (OpenAI, Anthropic, Google DeepMind, Meta AI) consistently forecast the nearest arrival dates, typically within 5 to 10 years of when they speak. Academic researchers and historians of technology predict longer horizons, often 30 to 50 years out. The gap between the two camps is narrower in 2024 to 2026 than in any prior period in the dataset.
Has anyone been right about AGI yet?
No. The earliest predictions had AGI arriving in the 1970s or 1980s. None of those came true; the field went through an "AI winter" instead. Predictions made between 1990 and 2010 mostly targeted 2020 to 2030, also unfulfilled. Whether 2026-era predictions targeting 2030 to 2040 will be the first accurate cohort is the open question.
Where does the data come from?
Press archives (New York Times, Financial Times, MIT Technology Review), conference proceedings (NeurIPS, ICML, ICLR), publication databases, and individual statements from interviews, podcast appearances, and blog posts. Each source link is exposed in the lab. Some entries are reconstructed retroactively for older dates and flagged accordingly in the data.