VOICES

What the people building AI
actually say about it.

These are the primary sources — videos, interviews, and articles from the researchers, founders, and thinkers shaping AI. Every quote is their own. Every summary is ours. We present all views, not just the comfortable ones.

Editorial note: The summaries below represent our interpretation of each thinker's publicly stated views. Quotes are reproduced verbatim from cited sources. We do not endorse any particular prediction.

Geoffrey Hinton

Nobel Laureate in Physics (2024) · Former Google AI Researcher · "Godfather of AI"

Existential Risk

"There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control."

— Nobel Prize Banquet Speech, December 10, 2024 — nobelprize.org

OUR SUMMARY

Hinton is arguably the most credible voice warning about AI risk. He won the 2024 Nobel Prize in Physics for his foundational work on neural networks — the technology that makes modern AI possible. He left Google in 2023 specifically so he could speak freely about AI dangers. He estimates a 10–20% chance that AI leads to human extinction within 30 years. He also believes AI will be enormously beneficial if managed well. His concern is not that AI is evil — it is that we have no proven method for ensuring that superintelligent systems remain aligned with human interests.

Sam Altman

CEO, OpenAI

Optimistic — With Caveats

"This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly. Even more power will shift from labor to capital. If public policy doesn't adapt accordingly, most people will end up worse off than they are today."

— "Moore's Law for Everything," moores.samaltman.com, March 2021

OUR SUMMARY

Altman is the most powerful person in commercial AI. His view is genuinely nuanced: he believes AI will create extraordinary abundance, but he is explicit that this abundance will concentrate at the top unless policy intervenes. He has testified before the U.S. Senate about the need for AI regulation. He is not a pure optimist — he has said he believes there is some chance AI development ends badly. His position is that the potential upside is so large that it justifies the risk, provided the right safeguards are built.

Dario Amodei

CEO, Anthropic

Unemployment Warning

"AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10-20% in the next one to five years."

— Interview with Axios, May 28, 2025 — axios.com

OUR SUMMARY

Amodei co-founded Anthropic after leaving OpenAI over safety concerns. He is not a pessimist about AI's long-term potential — he has written extensively about AI's ability to compress decades of scientific progress into years. But he is unusually direct about the near-term economic disruption. His concern is specifically about entry-level white-collar work: the jobs that recent graduates take to build careers. He believes this disruption will happen faster than society or policy can adapt. His recommended response is not to avoid AI — it is to build skills in AI oversight and direction before the displacement wave arrives.

Ray Kurzweil

Futurist · Google AI Scientist · Author, "The Singularity Is Nearer"

Radical Optimism

"We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness. It is hard to imagine what that will be like."

— Interview with The Guardian, June 29, 2024 — theguardian.com

OUR SUMMARY

Kurzweil claims the most accurate long-range prediction record of any futurist: by his own assessment, of 147 predictions he made for 2009 in his 1999 book, 86% were correct or essentially correct. His core argument is that AI follows the same exponential growth curve as computing power — a curve that has held for 60+ years — and that this curve leads to a point around 2045 where AI surpasses human intelligence by orders of magnitude. He believes this will be overwhelmingly positive: disease eliminated, poverty ended, human lifespans extended dramatically. His critics argue that exponential growth curves eventually hit physical limits, and that the gap between narrow AI and general intelligence may be larger than extrapolation suggests.

Yuval Noah Harari

Historian · Author, "Sapiens," "Nexus"

Power & Democracy

"The biggest political question of our era is: who controls the data?"

— Official Facebook page, October 2024

OUR SUMMARY

Harari's concern is not primarily about jobs or existential risk — it is about power. His argument, developed in "Nexus" (2024), is that AI is the most powerful information-processing tool ever created, and that whoever controls it will have unprecedented ability to shape what billions of people believe, want, and do. He draws parallels to previous information revolutions — the printing press, the radio — and argues that each one initially empowered authoritarian movements before eventually being tamed by democratic institutions. His warning is that AI is moving faster than democratic institutions can adapt. He is not anti-technology; he is arguing for urgent, serious governance.

Bill Gates

Co-Founder, Microsoft · Philanthropist

Abundance — Disruption Ahead

"Within 10 years, AI will replace many doctors and teachers — humans won't be needed 'for most things.' But great medical advice and great tutoring will become free and commonplace."

— The Tonight Show Starring Jimmy Fallon (NBC), March 2025

OUR SUMMARY

Gates occupies an interesting position: he is optimistic about AI's long-term impact but unusually direct about the disruption it will cause to specific professions. His view is that AI will make high-quality services — medical advice, education, legal guidance — available to everyone, not just those who can afford professionals. He sees this as a net positive for humanity. The disruption to doctors, teachers, and lawyers is, in his framing, the price of democratizing access to expertise. He has also written about AI's potential to accelerate progress on global health challenges, including diseases that primarily affect the poor.

Elon Musk

CEO, Tesla & SpaceX · Founder, xAI

Existential Risk

"I always thought AI was going to be way smarter than humans and an existential risk, and that's turning out to be true."

— Joe Rogan Experience podcast, February 2025

OUR SUMMARY

Musk's position on AI is genuinely complicated by the fact that he is simultaneously warning about its dangers and building it (through xAI and Grok). He co-founded OpenAI, left over disagreements about its direction, and has sued OpenAI for allegedly abandoning its safety mission. His stated concern is that AI developed without sufficient safety constraints could become uncontrollable. He has estimated a 10–20% chance that AI development ends badly for humanity. His critics argue that his warnings are partly competitive positioning — that he wants to slow down competitors while his own AI company catches up. His supporters argue that his warnings are genuine and that his involvement in AI is an attempt to ensure it is developed safely.

Andrew Yang

Former U.S. Presidential Candidate · Founder, Forward Party

Unemployment Warning

"I believe that millions of white-collar workers are going to lose their jobs in the next 12 to 18 months due to AI. AI is now able to do the work of a very, very smart human in minutes or even seconds. This is going to displace marketers, coders, designers, lawyers, accountants, call center workers — you name it."

— Instagram video, March 2026 (600,000+ views) — reported by Basic Income Earth Network, March 23, 2026

OUR SUMMARY

Yang built his 2020 presidential campaign around the argument that automation would displace millions of American workers and that Universal Basic Income was the necessary policy response. He was widely dismissed at the time. His predictions have proven more accurate than most mainstream economists expected. He now argues that the pace of AI displacement is accelerating beyond even his original projections. His proposed solution — UBI funded by a value-added tax on AI-generated productivity — remains controversial, but his diagnosis of the problem has gained significant mainstream credibility.

WHAT TO DO WITH ALL OF THIS

You've heard the views.
Now get the preparation plan.

The free Shift Starter Guide covers the practical steps that make sense regardless of which prediction turns out to be correct — across your finances, career, business, and family.