The Week That Proved Nobody Is Ready
The founder of DeepMind just told us that one year in AI equals a decade of change. In that same week, Anthropic’s safety chief walked out and $285 billion evaporated from global markets.
I’ve spent the last several months documenting the intelligence revolution as it unfolds—week by week, announcement by announcement, tracking how the gap between “this is coming” and “this is here” keeps collapsing.
But the week of February 3-11, 2026, wasn’t just another week of data points. It was a snapshot of a species encountering a speed of transformation it has no institutional framework to manage. And every signal—from Silicon Valley to Wall Street to Davos to Southeast Asia—pointed to the same conclusion.
Nobody is ready. Not governments. Not corporations. Not the people building the technology. Not even the person whose job it was to keep it safe.
Seven Days That Changed Everything
Let me walk you through the week, because the chronology matters.
February 3: Anthropic releases industry-specific plugins for Claude Cowork—its workplace automation tool designed for legal, finance, marketing, and data analysis workflows. What the company described as a product update triggered something unprecedented: a $285 billion market rout in a single day. Bloomberg reported that a Goldman Sachs basket of US software stocks fell 6%. Thomson Reuters crashed over 15%. LegalZoom plunged more than 15%. FactSet dropped 10%. India’s Nifty IT index—representing the $300 billion outsourcing industry—fell nearly 6%.
February 6: Anthropic releases Claude Opus 4.6, capable of coordinating entire teams of AI agents working in parallel. Financial data providers take another hit.
February 9: Mrinank Sharma—head of Anthropic’s Safeguards Research Team, the person literally responsible for making Claude safe—posts his resignation letter on X. Viewed over a million times. “The world is in peril,” he wrote. His final research project? Studying how AI assistants could distort our humanity itself. His next career move? Studying poetry.
February 11: Two things happen simultaneously, on opposite sides of the planet. In Davos, Demis Hassabis—Nobel laureate, founder of DeepMind, the person most credited with creating modern AI—tells Fortune’s editor-in-chief that “10 years almost happens every year” in AI. In Singapore, Minister Josephine Teo, speaking at a McKinsey event, describes AI adoption with a furniture-assembly metaphor: the “IKEA moment,” where enterprises learn to use AI tools.
Same week. Same technology. Two completely different understandings of what’s happening.
The Hassabis Timeline
I want to sit with the Hassabis quote because I think it’s the most important thing anyone in AI has said publicly this year.
“Every year is pretty pivotal in AI. And it feels like, at least for those working at the coalface, that 10 years almost happens every year.”
This is not a journalist editorializing. This is not a venture capitalist talking his book. This is the founder of DeepMind—the company that built AlphaGo, AlphaFold, and Gemini—telling us from the Davos stage that a single calendar year now contains a decade of progress.
Think about what that means for any planning framework. A government announces a 4-year AI talent strategy? That’s 40 years of AI progress. A company commits to a 2-year digital transformation? Twenty years of change will unfold before the project is complete. A university redesigns its curriculum for “the AI era”? By the time the first graduating class walks across the stage, the field has advanced by the equivalent of half a century.
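To make the mismatch concrete, here’s a toy calculation. It treats Hassabis’s “10 years happens every year” as a literal, constant multiplier, which is obviously a simplification; the planning horizons are illustrative examples, not figures from any study.

```python
# Toy model: how much "AI progress" elapses during a fixed planning horizon,
# taking Hassabis's "10 years happens every year" as a literal 10x multiplier.
# The ratio and the example horizons are illustrative assumptions.

HASSABIS_RATIO = 10  # assumed AI-years per calendar year

plans = {
    "4-year government talent strategy": 4,
    "2-year digital transformation": 2,
    "5-year university curriculum cycle": 5,
}

for name, calendar_years in plans.items():
    ai_years = calendar_years * HASSABIS_RATIO
    print(f"{name}: {calendar_years} calendar years ≈ {ai_years} years of AI progress")
```

The point of the sketch isn’t precision—it’s that any fixed-horizon plan silently multiplies its exposure by whatever the real ratio turns out to be.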
In that same Fortune interview, Hassabis said he expects AI systems to be building and delegating tasks to autonomous agents by the end of 2026. He predicted breakthrough moments in robotics within 18 months. And he described his vision of a “universal assistant” embedded across all devices—computer, phone, glasses, car—understanding your context seamlessly across every interaction.
This isn’t 2035 speculation. This is a Nobel laureate describing what his teams are building right now.
And here’s what’s easy to miss in the headline quotes: Hassabis is being measured. He placed full AGI at 5-10 years away, saying it needs one or two more breakthroughs—continual learning, better memory, long-term reasoning. But he described what’s already happening as the foundation for “a new golden era of discovery, a kind of new renaissance.” Personalized medicine. Solving the energy crisis. “Radical abundance.”
He also said something that should get more attention: “If we don’t disrupt ourselves, someone else will.” This is the CEO of Google DeepMind—a division he describes as the “engine room” and “nuclear power plant” powering one of the world’s largest companies—acknowledging that even Google feels existential pressure to move faster. If Google is racing against obsolescence, what does that mean for companies a fraction of its size?
When asked about AI hardware—smart glasses with embedded AI assistants—Hassabis said “maybe by summer” 2026. Not a prototype. A product. The universal assistant doesn’t wait for your planning cycle.
What Dario Amodei Already Told Us
Here’s what makes the week even more surreal. Anthropic’s own CEO has been saying the quiet part out loud for months.
In January 2026, Dario Amodei published a 20,000-word essay—”The Adolescence of Technology”—warning that AI would cause “unusually painful” disruption to jobs. He told Axios that AI could wipe out half of all entry-level white-collar jobs within five years and push unemployment to 10-20%. He said CEOs would “quietly stop hiring and start replacing humans with AI the moment it makes business sense.”
Then his company released the exact product that makes it make business sense. And investors did exactly what you’d expect—they repriced the future of every company whose business model depends on humans doing cognitive work.
Anthropic’s own Economic Index, released in January 2026, found that 49% of jobs can now use AI in at least a quarter of their tasks—up from 36% in early 2025. The company’s own research shows adoption spreading faster than any major technology in the past century.
And internal Anthropic employees can feel it. The Telegraph published results from an internal company survey where one staffer said: “It kind of feels like I’m coming to work every day to put myself out of a job.” Another confided: “In the long term, I think AI will end up doing everything and make me and many others irrelevant.”
Three days after that survey was published, the safety chief walked out.
The Safety Chief Who Left for Poetry
Now hold that timeline against what Sharma wrote in his resignation letter.
“Throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.”
This is the head of safety at the company that built its entire brand on being the “responsible” AI lab—founded by former OpenAI researchers who left specifically because they felt OpenAI was prioritizing products over safety. Anthropic’s whole reason for existing is supposed to be different. And the person most responsible for that difference is telling us: it’s not working.
He’s not accusing Anthropic of specific wrongdoing. He’s saying something worse: that the structural pressures of the AI race make it nearly impossible for any organization to live its values, no matter how sincere those values are.
Sharma isn’t the first safety researcher to leave with warnings. Jan Leike left OpenAI’s Superalignment team in 2024, saying the company was prioritizing “shinier products” over safety. But something has shifted. The earlier departures were about companies not doing enough safety work. Sharma’s departure suggests the gap between technical capacity and human wisdom has grown so large that incremental safety work may no longer be meaningful.
His solution? Poetry. And before you dismiss that—consider what it means when the person most qualified to solve the technical safety problem concludes that the answer isn’t technical.
“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
The Innovation Monopoly, Validated
I’ve been developing a framework I call the “innovation monopoly”—the mechanism by which foundation model companies absorb entire business categories. The Claude Cowork selloff is the most dramatic real-world validation of that thesis I’ve ever seen.
Think about what actually happened. Anthropic released plugins—essentially the ability for Claude to do legal contract review, financial analysis, compliance tracking, and customer relationship management. Not perfectly. Not yet replacing entire departments. But enough that investors instantly recognized the trajectory.
The market’s response was brutally rational. Thomson Reuters, Gartner, S&P Global, Moody’s, LegalZoom—companies whose entire value proposition is human analysts synthesizing information—saw billions evaporate. Not because Claude is currently better than their products. But because the trajectory points toward what I’ve been calling capability absorption: the mechanism by which foundation models observe what’s valuable, build it natively, and offer it at a fraction of the cost.
This is how innovation monopolies work. Traditional monopolies control a market by owning distribution or supply. Innovation monopolies control markets by absorbing the capability itself into the model. You don’t compete with an innovation monopoly by building a better product. You can’t. The foundation model is the product—and every product simultaneously.
What’s remarkable about the Cowork selloff is how precisely the market identified the targets. The companies that fell hardest weren’t random tech stocks. They were companies whose core offering is humans doing cognitive synthesis—exactly the capability that foundation models absorb most naturally. Contract review. Credit rating analysis. Market research. Legal document preparation. These aren’t adjacent to what Claude does. They’re inside the expanding frontier of what Claude is becoming.
Rest of World’s analysis of the impact on Indian IT was particularly striking. The $300 billion Indian outsourcing industry—companies like Infosys, TCS, Wipro—is built on billing for human hours spent on exactly the kind of repetitive knowledge work that Claude Cowork automates. The sell-off wasn’t panic. It was recognition that the business model of selling human cognitive labor by the hour has an expiration date.
As one Deutsche Bank analyst put it, the market has shifted from “every tech stock is a winner” to “a true winners and losers landscape.”
The startups were just the first domino. Established knowledge industries are next.
The Briefing Gap
Here’s what makes the McKinsey report from the same week so revealing—not as a critique of any particular country or institution, but as evidence of a universal condition.
McKinsey surveyed 330 companies across Southeast Asia. They found that over 60% had allocated 11-40% of their tech budgets to AI. The result? Nearly one in five reported zero discernible earnings impact. Over 60% said AI contributed less than 5% of operating profit. One executive joked they had “more AI pilots than pilots at Singapore Airlines.”
This isn’t a Southeast Asian problem. You’d find the exact same pattern in Frankfurt, London, São Paulo, and Tokyo. Companies everywhere are treating AI as a tool to be adopted—running pilots, training users, measuring adoption rates. And they’re getting minimal returns.
Why? Because they’re measuring the old paradigm. The companies running chatbot pilots and productivity tools are playing Phase 1 of a three-phase game.
Phase 1 is Enhancement: AI makes existing workers more productive. This is where most companies are. This is what the McKinsey report measures. And this is what delivers 5% operating profit impact—if you’re lucky.
Phase 2 is Oversight: Your role becomes managing AI-generated work. You supervise, approve, occasionally correct. Your value shifts from creation to quality control. I’m already seeing this in legal and compliance teams where junior analysts spend more time reviewing Claude’s output than generating their own.
Phase 3 is Obsolescence: Even oversight gets automated. AI approves AI work. The human becomes optional. This isn’t theoretical—Opus 4.6’s ability to coordinate teams of AI agents working in parallel is the infrastructure for Phase 3.
I’ve been calling this progression “agentrification”—the keystroke-by-keystroke automation of cognitive work, where the displaced actively participate in their own displacement. The parallel to gentrification is deliberate: in both cases, the people being displaced don’t see it happening because each individual step feels like improvement.
And the critical insight is that Phase 1 feels like the endpoint. It feels like you’ve “adopted AI.” You’ve run the pilots. You’ve trained the teams. You’re measuring the productivity gains. Everything your consulting firm told you to do. But it’s just the beginning—and the companies that will see massive P&L impact aren’t the ones training thousands of workers to use ChatGPT. They’re the ones that recognize the foundation model IS the worker.
The honest sequence goes like this: First AI augments your work, and you feel more productive. Then management notices that one person with AI produces what three people did before. Then they don’t hire the next two replacements. Then they restructure. Then the “augmented” worker does 3x the work for the same pay. Then AI improves again, and management realizes they don’t need the augmented worker either.
That’s not dystopian speculation. That’s what Amodei himself described. That’s what Anthropic’s own employees feel. That’s what the market priced in on February 3.
The P&L Math Nobody Discusses at Conferences
Let me spell out what a CFO actually sees when they look at this—because it’s rarely said at industry events.
A knowledge worker costs $80,000-$150,000 annually—salary, benefits, office space, management overhead, recruitment, training. An AI agent doing equivalent cognitive work costs a fraction of that, operates continuously, requires no leave, no performance reviews, no severance.
If an autonomous AI system can handle 60-70% of what a compliance team, a contract review team, a data analysis team, or a marketing analytics team does—you don’t need an “AI-enabled workforce.” You need fewer workers.
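A back-of-the-envelope version of that math, where every number—headcount, fully loaded cost, automation share, AI cost per seat—is an illustrative assumption rather than a figure from this article or any source:

```python
# Back-of-the-envelope cost comparison for a cognitive-work team.
# All figures are illustrative assumptions, not data from any source.

def team_cost(headcount: int, fully_loaded_cost: float) -> float:
    """Annual cost of an all-human team (salary, benefits, overhead combined)."""
    return headcount * fully_loaded_cost

def hybrid_cost(headcount: int, fully_loaded_cost: float,
                automated_share: float, ai_cost_per_seat: float) -> float:
    """Cost if an AI system absorbs `automated_share` of the team's work
    and the remaining work is done by a proportionally smaller human team."""
    remaining_humans = headcount * (1 - automated_share)
    return remaining_humans * fully_loaded_cost + headcount * ai_cost_per_seat

# Assumed numbers: a 10-person team, $120k fully loaded per head,
# AI handling 65% of the work at $6k/year per former seat.
before = team_cost(10, 120_000)                # $1,200,000
after = hybrid_cost(10, 120_000, 0.65, 6_000)  # $480,000
savings = before - after

print(f"before: ${before:,.0f}  after: ${after:,.0f}  savings: ${savings:,.0f}")
```

Change the assumptions however you like; the structure of the answer barely moves, which is exactly why a CFO doesn’t need a consultant’s report to run it.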
This isn’t a prediction. It’s arithmetic. It’s the math behind Amodei’s January warning, the math investors ran on February 3, and the math Anthropic’s own employees are running when they say they feel like they’re coming to work every day to put themselves out of a job.
The Global Pattern: Everyone Is Planning for Yesterday
This is where I need to be clear: this isn’t about any single country getting it wrong. The pattern is universal.
In Singapore, Minister Josephine Teo described AI adoption at a McKinsey event using a charming analogy—the “IKEA moment,” where enterprises learn that AI isn’t that hard to use. She talked about expanding from 60 AI Centres of Excellence to thousands. She described an evolved talent model that goes beyond creators, practitioners, and users to encompass talent “at every level, in every nook and cranny.”
These are thoughtful, well-informed positions. Singapore is arguably the most sophisticated small country in the world when it comes to technology strategy, and Minister Teo is clearly deeply engaged with the subject.
But her framework—like every institutional framework I’ve encountered globally—is built for Phase 1. The “IKEA moment” is a beautiful description of what it feels like when humans learn to use AI tools. What it doesn’t capture is what happens when the tools no longer need the humans.
And this isn’t a Singapore problem. The EU’s AI Act is regulating a paradigm that’s being superseded while the ink dries. The US executive orders on AI focus on safety-testing frameworks while the safety chief at the leading safety-focused lab walks out saying those frameworks aren’t enough. The UK’s AI strategy emphasizes “AI-ready” workforce development while the very work that workforce is being prepared for is absorbed into foundation models.
Every institution is planning in years. The technology is moving in Hassabis-years—where each one contains a decade.
Two Conversations That Aren’t Talking to Each Other
This is the core insight from the week. There are two conversations happening on planet Earth right now, and they’re not talking to each other.
Conversation One happens in foundation model labs, at Davos panels with Hassabis and Amodei, in the resignation letters of safety researchers. It sounds like: “10 years happens every year.” “The world is in peril.” “50% of entry-level jobs eliminated within five years.” “We’re on an exponential curve, straight up.”
Conversation Two happens in boardrooms, government ministries, and consulting engagements worldwide. It sounds like: “How do we adopt AI?” “How do we train our workforce?” “How do we build AI Centres of Excellence?” “What’s our 3-year digital transformation roadmap?”
Conversation One is describing a future that arrives before the preparation is complete. Every time.
Conversation Two is building preparation frameworks for a future that’s already here.
The gap between these conversations is where the disruption lives. Not in any single technology release or policy announcement—but in the accumulated mismatch between exponential capability growth and linear institutional response.
What Massive Transformation Actually Looks Like
I don’t think anyone is ready for what’s coming. Not because people are incompetent or uninformed—but because the speed is genuinely unprecedented. Hassabis just told us so. He’s at the coalface, working until 4am, and even he describes the pace as something his most experienced colleagues—people who’ve been in tech for 20, 30 years—call “the most intense environment they’ve ever seen, perhaps ever in the technology industry.”
This isn’t about any single country’s policy or any single company’s strategy being right or wrong. It’s about a global condition: every institution on earth—government, corporate, academic—is operating with frameworks designed for a world where change is measured in years. The technology now operates on a timeline where, in Hassabis’s words, a decade happens every twelve months.
What does massive transformation look like when nobody is ready?
It looks like $285 billion evaporating on a product announcement. It looks like a safety chief leaving to study poetry because technical fixes no longer feel adequate. It looks like the technology’s own creators warning about consequences they can’t prevent because competitive dynamics won’t allow it. It looks like 60% of companies reporting their AI investments haven’t moved the needle—measured against a paradigm that’s already being superseded. It looks like well-intentioned institutions everywhere—from Silicon Valley to Singapore to London to Brussels to São Paulo—planning for a future that’s already behind them.
And it looks like the people building these systems, the employees inside the labs, saying “I feel like I’m coming to work every day to put myself out of a job”—and then the market confirming their intuition by wiping out the value of the very industries those jobs serve.
The Question That Changes the Framework
Sharma asked it in his resignation letter, though most coverage focused on the poetry angle: “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world.”
Not our skills. Not our budgets. Not our talent pipelines. Not our AI Centres of Excellence. Our wisdom.
Every conversation I’ve had with senior leaders over the past year eventually arrives at this gap. They can see the technology accelerating. They can feel their planning frameworks straining. What they’re struggling with isn’t information—there’s more AI information available than anyone can process. It’s something deeper: the right framework for thinking about change at this speed.
The readiness question isn’t “have you adopted AI?” That’s Phase 1 thinking for a world that’s entering Phase 2.
The real question is: have you accepted that the future will arrive before your preparation is complete—and built the institutional capacity to adapt in real-time rather than plan in advance?
Traditional strategic planning assumes you can see the destination, chart a course, and execute. What Hassabis is describing—what this single week demonstrated—is a world where the destination moves faster than the planning cycle. Where the map becomes outdated before the expedition begins.
That’s not a planning failure. That’s a new condition of existence. And it requires a fundamentally different relationship with uncertainty—not as a problem to be solved through better forecasting, but as a permanent state to be navigated with wisdom, agility, and the intellectual honesty to admit when our frameworks are no longer adequate.
If Hassabis is right that 10 years happens every year, then by the time you finish reading this article, the world will already have moved on.
The question is whether we’re moving with it—or still assembling the furniture.
This is part of my ongoing series “Framing the Future of Superintelligence,” documenting the transition from AGI to superintelligence in real time. For the complete series and deeper analysis, follow my work on Substack.
What’s the widest gap you’ve seen between AI planning and AI reality in your organization? I’d genuinely like to know.



