A Look Back at Sam Altman's AGI Timeline

Last week Sam Altman shot down claims of AGI after hype rippled across the AI community. While Sam's comments seem pessimistic compared to his usual optimistic, cryptic self, his current stance on when we will hit AGI is not radically different; it's just that the litmus test has changed. His recent post on X aimed to calm "out of control" hype across the platform, while still aligning with earlier comments made in 2024, where Sam predicted AGI would arrive in a "few thousand days".

At large, we feel the AGI claims were premature, but with how quickly AI is scaling across raw intelligence, markets, and adoption, it's hard to think otherwise. Adding to this, there is speculation that Sam is tempering expectations to steady the AI ship and grow profitability. Remember, Sam sits in a league of heavily influential figures within AI and leads OpenAI, which a) was valued at $157 billion as of October 2024 and b) is now an equity funder of the recently announced $500 billion Stargate project. Adding to the community hype would only inflate expectations, creating second- and third-order consequences across the AI ecosystem. If Sam reaffirms the hype and doesn't deliver, he steers the OpenAI ship closer to an iceberg, where the sharks of Anthropic, Google, and particularly DeepSeek are closing in.

AGI is monumental, yet because we have no true definition, we also have no way of recognising when it arrives. One remedy is to rely on a simple assumption: the more familiar we become with AI, the greater our clarity. Understanding how Sam's stance has evolved over time therefore gives us some indication of what AGI's arrival will look like, when it does finally come. Here's a timeline of Sam Altman's stance on AGI.

2015: Founding OpenAI with Caution at the Forefront

Altman described superhuman machine intelligence as potentially "the greatest threat to humanity's continued existence." With this sobering view, he co-founded OpenAI, embedding its mission to ensure AGI would benefit all of humanity.

2019: Gradual Realisations and Bold Predictions

Altman predicted that by 2025 "AGI will feel within reach to many people in the industry". While AGI is not here, much of the community does feel it is within reach. Altman suggested the transition would be far less dramatic than anticipated: blurry and gradual, not marked by a clear-cut breakthrough. This hinted at his belief in shorter timelines but a slower, more incremental evolution of AGI. His other predictions of net-gain nuclear fusion and gene editing curing a major disease remain yet to be seen, however.

2023: AGI = Scientific Problem, ASI = Engineering Problem

Sam framed AGI development as a scientific problem and superintelligence (ASI) as an engineering problem, distinguishing the complexity of building AGI from that of building superintelligence. He underscored the importance of AGI safety, advocating for regulation of frontier models while protecting smaller models and startups from excessive oversight. Sam identified three key elements for a positive AGI future: aligning superintelligence, fostering coordination among leading AI efforts, and establishing effective global regulation with democratic governance.

2024: The Road Becomes Clearer but the Destination Has Changed

Altman announced that OpenAI now had a clear roadmap to achieve AGI. Progress, he suggested, would be faster than many expected. He also acknowledged that AGI, while complex, might be less challenging to develop than initially thought. That said, Sam's definition of AGI was promoted to the one he previously gave for ASI: "AI systems that are generally smarter than humans". Does this mean a demotion for AGI, hence its closer proximity and Sam's confidence? Speaking at the New York Times DealBook Summit, Altman said, "My guess is we will hit AGI sooner than most people in the world think and it will matter much less", downplaying its immediate societal impact.

2025: Proof Is in the Money

Sam declared that OpenAI was confident in its ability to build AGI as traditionally envisioned. He projected that AI agents might enter the workforce as early as 2025. However, he tempered these statements, cautioning against runaway hype and clarifying that AGI was still a goal in progress, not a present reality. It's also worth noting here that Sam's definitions have since changed and aligned more closely with Microsoft's, where achieving AGI now means developing AI systems that can generate at least $100 billion in profits. Here we can see a shift in direction for Sam and OpenAI, with AGI moving from something rooted in technical capability and philosophical meaning to something rooted in financial profitability. Do you reckon OpenAI's shift from a non-profit to a for-profit entity is relevant? We do, and it makes sense considering OpenAI's financial struggles to service powerful AI.

Takeaway

For us, rather than trying to statically define AGI, it's better to see it as a fluid, continuous progression of machine intelligence: one that becomes ever broader, carries significant impacts across most industries, and reaches a capability similar to or beyond that of a human. Financial markers, while often labelled as greed, are significant given the costs of servicing powerful AI, so they can't be dismissed. Ask again next week and we might just have a new definition. Cheers, Sam!