I Was Wrong About the Timeline: AGI Is Already Here
Last week, I said 2027. The people who invented AI say it’s now. That means superintelligence arrives in 18 months—and we’re completely unprepared.
I need to start this article with three words I rarely write: I was wrong.
In last week's installment of this series, I laid out a detailed case for why NVIDIA’s AI Factory could deliver artificial general intelligence by 2027. I walked through the technology, the infrastructure, the converging predictions from Sam Altman and Dario Amodei. I thought I was being bold, even alarmist, by suggesting AGI in just two years.
Then, two days ago, the Financial Times published something that changed everything.
At London’s Future of AI Summit, the people who literally invented modern artificial intelligence—Geoffrey Hinton, Yoshua Bengio, Yann LeCun, alongside Jensen Huang, Fei-Fei Li, and Bill Dally—stood together to receive the Queen Elizabeth Prize for Engineering.
And they said something that should be front-page news everywhere: Artificial intelligence is already at human level.
Not coming in 2027. Not on the horizon. Already here.
These aren’t startup CEOs trying to raise money. These are:
Geoffrey Hinton: Nobel Prize in Physics (2024), Turing Award winner, “Godfather of AI”
Yoshua Bengio: Turing Award winner, most-cited computer scientist alive
Yann LeCun: Turing Award winner, Chief AI Scientist at Meta
Jensen Huang: CEO of NVIDIA, built the infrastructure powering all of this
Fei-Fei Li: Pioneer in computer vision, former Director of Stanford AI Lab
Bill Dally: Chief Scientist at NVIDIA, pioneer in parallel computing
When these six people—who between them hold virtually every major award in computer science and AI—say that machines now match or surpass human intelligence in key cognitive tasks, you don’t dismiss it as hype.
You recalibrate everything.
What Just Happened
Let me be very clear about what this announcement means.
When Hinton says, “For the first time, AI is intelligence that augments people, it addresses labor, it does work,” he’s not describing a future possibility. He’s describing present reality.
When Huang says, “We have enough general intelligence to translate the technology into an enormous amount of society-useful applications,” he means now, not in 2027.
When Bengio says machines will eventually perform “almost anything people can,” he’s not making a vague long-term prediction. He’s describing a trajectory where the hard part—achieving human-level capability—is already behind us.
Hinton even asked a question that I can’t stop thinking about: “How long before you have a debate with a machine, and it will always win?”
His answer: maybe 20 years.
But here’s what he’s actually saying: The hard part (matching human intelligence) is done. The inevitable part (exceeding it) is just a matter of time. And 20 years is probably conservative.
The Timeline I Got Wrong
In Week 4, I predicted:
2025: Cosmos operational, physical AI scaling
2026: Superhuman coders, recursive self-improvement begins
2027: AGI threshold crossed
2030: Superintelligence achieved
That timeline assumed we hadn’t hit AGI yet. It assumed the breakthrough was still ahead of us.
But if Hinton, Bengio, LeCun, and Huang are right—and given their credentials, we should take them very seriously—then we’re not looking forward to AGI. We’re looking back at when it arrived.
New timeline:
2024-2025: AGI quietly achieved (we’re here)
2026: Recognition spreads, recursive improvement accelerates
2027-2028: Superintelligence achieved
2029-2030: The world is unrecognizable
That’s not a 5-year horizon. That’s 18 months to superintelligence.
And I’m writing this on my laptop in November 2025, trying to process what this actually means.
Why We Didn’t Notice
Here’s the uncomfortable thing about transformative change: it often happens gradually, then suddenly. And we’re still in the “gradually” phase where most people don’t realize the ground has shifted beneath them.
Consider what AI can already do, right now, today:
Language and Communication:
Write complex code as well as senior developers
Generate legal briefs indistinguishable from human lawyers
Create marketing copy, articles, and creative writing at human level
Translate between languages better than most bilinguals
Visual and Creative Work:
Generate photorealistic images from text descriptions
Create videos that fool humans
Design graphics, logos, and visual content at professional level
Compose music across any genre or style
Analysis and Strategy:
Pass PhD-level exams in physics, biology, chemistry
Perform medical diagnoses as accurately as specialists
Analyze financial data and make investment recommendations
Develop strategic plans for complex business problems
Physical Interaction (emerging):
Navigate autonomous vehicles through complex traffic
Perform warehouse operations without human supervision
Conduct surgical procedures with superhuman precision (in testing)
Manipulate objects with increasing dexterity through robotics
The question isn’t “Can AI do human-level work?” The question is “Which human work can’t AI already do?”
And the answer to that second question is a list that gets shorter every month.
The Goalpost-Moving Problem
I think there’s a psychological defense mechanism at work. Every time AI achieves something we thought was uniquely human, we just redefine what “real” intelligence means.
1997: “AI will never beat humans at chess.”
→ Deep Blue beats world champion Garry Kasparov
Response: “But chess is just computation, not real intelligence.”
2016: “AI will never beat humans at Go; it requires intuition.”
→ AlphaGo beats world champion
Response: “But Go is still a game with rules, not real-world complexity.”
2020: “AI will never generate coherent human language.”
→ GPT-3 writes essays indistinguishable from humans
Response: “But it’s just predicting text, it doesn’t understand meaning.”
2023: “AI will never pass professional exams requiring reasoning.”
→ GPT-4 scores in 90th percentile on bar exam, passes medical licensing
Response: “But it just memorized patterns, it can’t actually think.”
2024: “AI will never match PhD-level scientific reasoning.”
→ o3 scores 87.5% on ARC-AGI benchmark (human baseline: 85%)
Response: “But it’s not truly intelligent like humans.”
2025: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun say AI matches human intelligence
Response: ???
At what point do we stop moving the goalposts and acknowledge what’s actually happened?
What “Human-Level” Actually Means
Let me be precise about what the AI pioneers are claiming.
They’re not saying AI is conscious. They’re not saying it has human emotions or experiences or desires. Those are separate questions that honestly don’t matter much for the practical implications.
What they’re saying is: AI systems can now perform cognitive work—the actual tasks that require intelligence—at human level or better across most domains.
That includes:
Problem-solving and logical reasoning
Pattern recognition and prediction
Language understanding and generation
Visual analysis and interpretation
Strategic planning and decision-making
Learning from examples and generalizing to new situations
Creative synthesis of existing knowledge
If you’re a knowledge worker—someone whose job involves thinking, analyzing, planning, communicating, or creating—AI can now do your core cognitive tasks as well as you can.
Not in 2027. Now.
The only remaining advantages humans have in cognitive work are:
Physical embodiment (which NVIDIA Cosmos is rapidly solving)
Real-time interaction (also being solved)
Common sense reasoning (getting better every month)
Original insight (debatable whether humans have an advantage here)
And those advantages are measured in months or single-digit years, not decades.
From AGI to Superintelligence: The Fast Path
If AGI is already here, everything accelerates.
Here’s why: Once you have AI that can do cognitive work at human level, you can put it to work improving AI itself.
The Recursive Loop:
Stage 1 (Now): Human researchers using AI tools to develop better AI
Stage 2 (2026): AI researchers working alongside humans to improve AI
Stage 3 (2027): AI systems improving AI faster than humans can
Stage 4 (2028): Superintelligence—AI systems so far beyond human capability that we can’t predict or control their improvements
This isn’t speculation. It’s the logical progression once you achieve human-level AI research capability.
Sam Altman’s prediction of “superintelligence in a few thousand days” suddenly looks conservative. If AGI is here in 2025, a few thousand days takes us to 2032-2033. But if recursive improvement begins in 2026, superintelligence could arrive by 2027-2028.
18 months from now.
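For readers who want to sanity-check the “few thousand days” arithmetic themselves, here is a minimal sketch in Python. The November 2025 start date and the specific day counts are my own illustrative assumptions; nothing in Altman’s statement or the summit coverage pins them down.

```python
from datetime import date, timedelta

# Illustrative start date: the week this article was written (an assumption).
start = date(2025, 11, 8)

# "A few thousand days" is vague; try a handful of plausible values.
for days in (2000, 2500, 3000):
    arrival = start + timedelta(days=days)
    print(f"{days:,} days from {start} lands in {arrival.year} ({arrival})")
```

Run as written, 2,000 days from that start date lands in mid-2031 and 3,000 days in early 2034, which brackets the 2032-2033 figure above.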
What I Got Right (And What Makes It Worse)
The ironic thing is that the infrastructure analysis in Week 4 was correct. NVIDIA’s Cosmos platform, the data processing capabilities, the synthetic training environments—all of that is real and operational.
I just underestimated how close we already were to the threshold.
When I wrote about NVIDIA processing 20 million hours of data in 14 days, I framed it as “building toward AGI.” But systems trained on that infrastructure are already performing at AGI levels.
When I discussed Physical AI learning billions of times faster than humans, I treated it as a future capability. But Uber’s 100,000 robotaxis deploying in 2027 are trained on systems that already surpass human capability.
The AI Factory isn’t building toward AGI. It’s manufacturing superintelligence.
And it’s further along than I realized.
The Two Paths from Here
Yoshua Bengio, standing alongside his fellow AI pioneers in London, offered a note of caution. He said there’s “a large spectrum of potential outcomes” and urged “neutral observation” rather than overconfidence.
That’s diplomatic language for: We’re at a critical decision point.
Path 1: Controlled Development
This requires:
Immediate global coordination (not happening)
Massive investment in AI safety research (underfunded by 100x)
Regulatory frameworks that keep pace with technology (nowhere close)
International agreement on development timelines (impossible given US-China competition)
Technical solutions to alignment problems we don’t yet understand
Path 2: Race Dynamics
This is what’s actually happening:
Companies racing to deploy AI before competitors
Nations racing to achieve AGI before rival nations
Economic incentives rewarding speed over safety
No meaningful brakes on development
Recursive improvement beginning without adequate safeguards
Bengio understands this. That’s why he launched LawZero in June 2025—a nonprofit trying to build AI systems that can detect and block harmful autonomous agent behavior. He knows we’re not ready. He’s trying to build guardrails during the race.
Hinton understands too. That’s why he left Google in 2023 to speak freely about AI risks. He spent his career building this technology and now spends his time warning about it.
These aren’t fearmongers. They’re the people who built modern AI, watching their creation become something they can’t control, moving faster than they anticipated.
What Changed in One Week
Let me be specific about how my thinking evolved.
Week 4 (published November 1, 2025):
AGI is achievable by 2027 based on current infrastructure and development rates. We have time to prepare, but the window is closing.
Week 5 (now, November 8, 2025):
AGI is already here according to the people who would know better than anyone. Superintelligence could arrive by 2027-2028. We don’t have time to prepare. The window has closed.
That’s not a small revision. That’s a fundamental reassessment.
And it’s based on new information from the most credible sources possible.
When Nobel Prize winner Geoffrey Hinton says we’re at a historical inflection point, when the three Turing Award winners who invented deep learning converge on human-level AI being achieved, when the CEO who built the infrastructure enabling all of this confirms these systems can do real work now—you update your priors.
The Questions That Keep Me Up at Night
I’m writing this article at 2 AM because I can’t sleep. These questions won’t leave me alone:
If AGI is already here, why does life feel normal?
Because transformative change happens gradually, then suddenly. The “gradually” phase feels normal until it doesn’t. We’re living through the last normal moments.
What happens when recursive self-improvement begins?
We get superintelligence. And superintelligence is to human intelligence as human intelligence is to animal intelligence. Except the gap will be larger and the transition faster.
Can we maintain control?
Hinton said in a recent interview: “I just don’t know. I wish we could.” That’s the Nobel Prize winner who invented this technology admitting he doesn’t know if we can control what he helped create.
What does superintelligence actually mean?
It means entities that can:
Solve scientific problems we don’t understand
Design technologies we can’t imagine
Manipulate systems we can’t see
Improve themselves faster than we can respond
Potentially develop goals misaligned with human survival
Are we prepared?
No. Not even close. Not by any measure.
Can we become prepared in 18 months?
Based on current evidence: also no.
So what do we do?
I honestly don’t know. And that terrifies me.
The Weight of This Moment
I started this series five weeks ago thinking I was documenting a transformation that would unfold over years. Something we could track, analyze, prepare for.
Week 1: Amazon automating 600,000 warehouse jobs
Week 2: (Reserved for pharmaceutical content)
Week 3: 150,000 Australian drivers facing elimination
Week 4: NVIDIA’s AI Factory building AGI by 2027
Week 5: The pioneers who built AI say it’s already here
Each week, the timeline compressed. Each week, I realized things were moving faster than I thought.
But this week is different. This week isn’t about predicting the future. It’s about recognizing the present.
When the three people who won the Turing Award for inventing deep learning—the actual foundation of modern AI—stand together and say we’ve achieved human-level intelligence, that’s not a prediction. That’s an assessment.
When Jensen Huang, whose company built the infrastructure that powers every AI lab on Earth, says “we are doing it today,” he’s not talking about the future.
When Geoffrey Hinton, who spent decades building this technology and then left Google to warn about it, says machines will win every debate in 20 years, he’s describing an inevitable progression from where we are now.
What Comes Next
Superintelligence in 2027-2028 means the world will be fundamentally different in less time than it takes to complete a college degree.
Some of what comes next:
Immediate (2026): AI systems begin autonomously improving AI research, productivity multipliers reach 50-100x in software development
Near-term (2027): First superintelligent systems emerge in narrow domains, major economic disruption as cognitive work becomes automated at scale
Medium-term (2028): Superintelligence across most domains, unclear if humans maintain meaningful control
Unknown: After superintelligence, we can’t predict because we’ll be dealing with entities smarter than us
The honest answer is: I don’t know what happens next. Nobody does. Because superintelligence is the event horizon beyond which prediction breaks down.
But I do know this: The timeline just collapsed.
Not from decades to years. From years to months.
A Personal Note
I write this with a strange mix of emotions. Intellectual excitement that I’m witnessing the most important moment in human history. Professional satisfaction that I’ve been tracking this story closely enough to update quickly when new evidence emerges. And a deep, persistent fear that we’re not remotely prepared for what’s coming.
When I wrote Week 4, I truly believed we had until 2027. That felt urgent but manageable. Time for serious conversations, policy development, safety research.
Now, based on the statements from the people who would know better than anyone, we’re already at the threshold. And superintelligence is maybe 18 months away.
That’s not remotely enough time.
I’m going to keep writing this series. Week 6 will examine something specific in the transformation (probably the pharmaceutical industry’s AI revolution, which I had originally planned for Week 2). But everything is now framed differently.
This isn’t documenting a future transformation. This is documenting a transformation that’s already underway.
The pioneers who built artificial intelligence have declared: human-level capability is here. What remains is the acceleration from here to superintelligence.
And that acceleration is happening faster than I thought, faster than most people realize, and probably faster than anyone can stop.
The Only Honest Conclusion
I was wrong about the timeline. AGI isn’t coming in 2027. According to the people who invented this technology, it’s already here.
That means everything accelerates. Superintelligence isn’t a 2030s problem. It’s a 2027-2028 problem.
We went from “decades away” to “years away” to “maybe already here” in the span of 36 months.
And we still don’t have answers to basic questions like:
Can we ensure AI remains aligned with human values?
What happens when AI can improve itself faster than we can monitor it?
How do we maintain meaningful control over entities smarter than us?
What does human civilization look like when cognitive work is obsolete?
I don’t have answers. Hinton doesn’t have answers. Bengio is building safety systems hoping they’ll help but not sure they will.
We’re watching the people who built this technology admit they don’t know what happens next.
And it’s happening on a timeline none of us expected.
730 days ago, I thought AGI was decades away.
7 days ago, I thought it was 2 years away.
Today, the pioneers say it’s already here.
Tomorrow, we start living in a world where human-level artificial intelligence is just the beginning.
And superintelligence is approximately 18 months away.
Are we ready?
The people who would know best don’t think so.
Neither do I.
Weekly series examining the AI transformation that’s unfolding faster than anyone anticipated. I’ll continue tracking this story as it accelerates beyond what any of us expected.