Artificial General Intelligence — AGI — is the most consequential and most contested concept in technology today. The people building AI disagree sharply about what it means, when it will arrive, and what it will do to the world. Here is what they are actually saying, with full attribution.
There is no universally agreed definition of AGI. The term is used to describe an AI system that can perform any intellectual task that a human can perform — as opposed to current AI systems, which are highly capable at specific tasks but cannot generalize across domains the way humans can.
OpenAI defines AGI as "AI systems that are generally smarter than humans." DeepMind uses a more nuanced framework, defining AGI as a system that can "perform tasks with a level of competence that is at least as good as a skilled adult human across a wide range of non-physical tasks." These definitions are not identical, and the difference matters: OpenAI's definition implies a single threshold, while DeepMind's implies a spectrum.
"We may be approaching a moment where most scientific progress is driven by AI. I think AGI is coming, and I think it's coming soon. I think we could have AGI within a few years."
— Sam Altman, interview with Lex Fridman, March 2024
Altman has consistently argued that AGI is near and that its arrival will be largely positive, while acknowledging significant risks. He has described OpenAI's mission as ensuring that AGI benefits all of humanity — a framing that implies he believes AGI is achievable and that the primary question is how to manage its development.
"I used to think we were 30 to 50 years away from AGI. I now think we might be 20 years away, or less. And I think there's a real chance — maybe 10 to 20 percent — that AI development leads to human extinction within the next 30 years."
— Geoffrey Hinton, CBS 60 Minutes interview, October 2023
Hinton's position has shifted dramatically since he left Google in 2023 specifically so he could speak freely about AI risks. He is no fringe alarmist: he won the 2024 Nobel Prize in Physics for his foundational work on neural networks, the technology that makes modern AI possible. His concern is not that AI will be malicious, but that it will develop goals misaligned with human welfare and pursue them with capabilities that exceed human ability to intervene.
"Current AI systems, including the most advanced large language models, are not on a path to AGI. They are very good at pattern matching and text generation, but they have no understanding of the physical world, no persistent memory, no ability to reason in the way humans reason. We are nowhere near AGI."
— Yann LeCun, interview with The Verge, January 2024
LeCun is one of the most credentialed AI researchers in the world; he shared the 2018 Turing Award with Geoffrey Hinton and Yoshua Bengio. His skepticism about near-term AGI is not a dismissal of AI's capabilities. It is a technical argument that current approaches have fundamental limitations that make human-level general intelligence unlikely without significant architectural breakthroughs.
"I think we could be very close to AGI — potentially within this decade. But I want to be careful about what we mean by that. I think we'll have systems that are broadly as capable as humans across most cognitive tasks within the next few years. Whether that constitutes AGI depends on your definition."
— Demis Hassabis, interview with MIT Technology Review, November 2023
The economic implications of AGI are genuinely difficult to model, because AGI by definition represents a capability discontinuity: a system that can do things no previous technology could do. Some of the most careful analysis comes from economists who have studied earlier technological transitions.
Daron Acemoglu of MIT, who has studied the economic effects of automation extensively, argues that the economic benefits of AI are likely to be more uneven and slower to materialize than the most optimistic projections suggest. His 2024 paper "The Simple Macroeconomics of AI" estimated that even under favorable assumptions, AI would increase total factor productivity by only about 0.5 to 1 percentage point in total over the next decade, which works out to well under 0.1 percentage point per year. That is a real gain, but nowhere near the transformative acceleration that AGI optimists predict.
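To see why the per-year versus cumulative distinction matters, here is a back-of-the-envelope calculation. It is a minimal sketch: the Acemoglu figures are the rough range cited above, while the "optimist" growth rate is an illustrative assumption of ours, not a number from Acemoglu, Altman, or any other source quoted here.

```python
# Back-of-the-envelope comparison of cumulative productivity gains.
# Acemoglu's estimate is on the order of 0.5-1 percentage point of TFP
# in TOTAL over a decade (i.e., roughly 0.05-0.1% per year).
# The 2%-per-year "optimist" scenario is an illustrative assumption only.

def cumulative_gain(annual_rate: float, years: int) -> float:
    """Compound an annual productivity growth rate over a horizon."""
    return (1 + annual_rate) ** years - 1

YEARS = 10

acemoglu_low = cumulative_gain(0.0005, YEARS)   # ~0.5% total over a decade
acemoglu_high = cumulative_gain(0.001, YEARS)   # ~1.0% total over a decade
optimist = cumulative_gain(0.02, YEARS)         # ~21.9% total over a decade

print(f"Acemoglu-style range over {YEARS} years: "
      f"{acemoglu_low:.1%} to {acemoglu_high:.1%}")
print(f"Illustrative optimist scenario:          {optimist:.1%}")
```

The gap is more than an order of magnitude over a single decade, which is why misreading a cumulative figure as an annual one flips the conclusion of the whole comparison.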
Sam Altman's vision, expressed in his essay "Moore's Law for Everything," is more dramatic: he argues that AGI will compress decades of scientific progress into years, solve problems that have stumped humanity for centuries, and create abundance that makes current wealth look modest. His proposed solution to the distributional problem — a form of universal basic income funded by a tax on AI-generated capital — reflects his belief that the gains will be real but unevenly distributed without deliberate policy intervention.
The honest answer is that nobody knows when AGI will arrive or exactly what it will do to the economy. The range of credible expert opinion — from Yann LeCun's "we're nowhere near it" to Sam Altman's "within a few years" — is wide enough that any single prediction should be held loosely.
What the range of expert opinion does tell us: the probability that AI will have significant economic effects within the next decade is very high. The probability that those effects will be uniformly positive is low. The probability that preparation — developing AI fluency, building diverse income streams, maintaining financial resilience — will be valuable regardless of which scenario unfolds is very high.
Prepare for every scenario.
Get a 30-day plan that works regardless of when AGI arrives or what it does. Free, personalized to your situation.
SOURCES