The Three Speeds: Why the Diagnosis, the Warning, and the Response Are Running on Different Clocks
Three stories landed in the same two-week window. Together, they tell us everything we need to know about why we are going to struggle with what is coming.
By Dr. Elias Kairos Chen
February 25: Dario Amodei, CEO of Anthropic, described AI as a tsunami already visible on the horizon. Speaking on the WTF Is podcast with Indian investor Nikhil Kamath in Bangalore, he said it is surprising that society has not yet recognized what is about to happen. People keep explaining it away, he said, telling themselves it is just a trick of the light.
February 17: Federal Reserve Governor Michael Barr, speaking to the New York Association for Business Economics, laid out three scenarios for AI and the labor market. One of them: a “jobless boom” that leaves a significant portion of the population “essentially unemployable.” He urged policymakers to be clear-eyed about how painful these changes could be.
March 2: Singapore Minister for Digital Development Josephine Teo announced the National AI Impact Programme, training 100,000 workers to be “AI bilingual” by 2029 and equipping 10,000 enterprises with AI capabilities.
A tsunami warning. An institutional acknowledgment. A policy response.
Three institutions, three timeframes, three speeds.
And the gap between those speeds is where the damage will happen.
Speed One: AI capability (months)
Amodei did not use the tsunami metaphor casually. He used it precisely. His point was not that destruction is inevitable, but that the wave is visible and people are still debating whether it is real.
This is the same CEO who warned in May 2025 that AI could eliminate 50% of entry-level white-collar jobs within five years, causing unemployment to spike to 10-20%. In January 2026, he published a 20,000-word essay doubling down, calling AI disruption “unusually painful” and warning that AI systems smarter than Nobel laureates could arrive by 2027. He described a “country of geniuses in a datacenter” — 50 million entities, each more capable than any human expert, emerging within roughly a year. His language escalated from warning to unusually painful to tsunami in the space of nine months.
Each escalation tracks a real acceleration in capability. In February 2026 alone, Anthropic released Claude Opus 4.6 and OpenAI released GPT-5.3 Codex on the same day. Reviewers described these not as tools but as colleagues. Microsoft AI CEO Mustafa Suleyman warned that virtually all office tasks will be automated by AI agents within eighteen months, and separately published an essay warning that “seemingly conscious AI” is on the horizon. DeepSeek V4 is expected imminently, with performance reportedly exceeding both Claude and ChatGPT.
The pace of AI capability improvement is measured in months. Not years. Not decades. Months. And each new release does not just add features. It absorbs entire categories of professional work that were previously considered safe. The financial analyst who felt secure a year ago now watches AI produce investment memos indistinguishable from her own. The junior lawyer who assumed contract review required human judgment now sees models that spot clause conflicts faster and more consistently than any associate.
Here is what most coverage missed. The same week Amodei issued the tsunami warning, two things happened that revealed how little control even the builders have. Anthropic weakened its Responsible Scaling Policy, the internal commitment to halt training if safety could not be guaranteed, replacing hard tripwires with softer disclosure frameworks. Its chief science officer admitted they could not justify unilateral safety commitments while competitors blazed ahead. Separately, Defense Secretary Hegseth gave Amodei an ultimatum: allow unrestricted military use of Claude or lose a $200 million Pentagon contract and be blacklisted. Anthropic held firm on two red lines: no autonomous weapons and no mass domestic surveillance. Amodei said they “cannot in good conscience” comply. Trump ordered all federal agencies to stop using Anthropic. Hegseth designated the company a supply chain risk.
The company that positioned itself as the responsible adult in the room weakened its own internal safety commitments under competitive pressure, then got blacklisted by its own government for refusing to weaken its external ones. In his essay, Amodei wrote: “There is so much money to be made with AI — literally trillions of dollars per year. This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all.”
When the person building the tsunami tells you it is coming, weakens his own safety framework because he cannot afford to fall behind, and then gets blacklisted for maintaining the lines that remain — that is not marketing. That is a signal about how little anyone controls what is coming.
Speed Two: Institutional acknowledgment (quarters)
Fed Governor Barr’s speech on February 17 was extraordinary for what it represented, not just what it said.
The Federal Reserve is arguably the most conservative economic institution in the United States. Its language is deliberately measured. Every word in a formal speech to the New York Association for Business Economics is vetted, reviewed, and chosen with full awareness of how markets, media, and policymakers will interpret it. When a Fed governor used the phrase “essentially unemployable” in that context, it was not a slip. It was a deliberate signal.
Barr laid out three futures. The first: gradual adoption, where AI follows previous technology waves and workers retrain successfully. He noted that current research seems most consistent with this scenario, in which many workers successfully retrain and either keep their jobs or find new ones. The second: rapid displacement, where AI capabilities overwhelm the labor market, agentic AI systems replace professional and service roles, autonomous vehicles eliminate transportation jobs, and robotics hollows out manufacturing. This creates a jobless boom and a population that is essentially unemployable. The third: a middle path of strong productivity growth with managed disruption.
What made the speech remarkable was not that the Fed acknowledged AI might cause job losses. That is conventional wisdom now. What was remarkable was how specific the doomsday scenario was. Barr described AI-centric startups with radically new business models displacing firms unable to adapt, with layoffs soaring, leading to widespread unemployment in the short run and declines in labor force participation over time. He warned that society would need to rethink the social safety net to ensure gains are shared rather than concentrated among a small group of capital holders and AI superstars.
That language — capital holders, AI superstars, rethinking the social safety net — from a Federal Reserve governor would have been unthinkable eighteen months ago. This is the vocabulary of structural economic transformation, not cyclical adjustment. The Fed is no longer treating AI as a productivity story. It is treating it as a potential rupture in the relationship between labor and economic value.
But notice the timing. Amodei’s original “white-collar bloodbath” warning came in May 2025. The Fed’s formal acknowledgment came in February 2026, nine months later. That is the speed of institutional acknowledgment. By the time the most important economic institution in the world processes a warning from the technology sector, three generations of AI models have shipped, each more capable than the last.
Barr also revealed a quiet but important detail about the current economic landscape. As of February 2026, U.S. job creation had been near zero over the previous year, while inflation remained elevated at 3%, driven partly by tariffs. Goldman Sachs projected unemployment was holding steady only because nearly 800,000 immigrants had left the workforce in 2026. Barr described the current labor market as maintaining a “delicate balance” that is vulnerable to negative shocks.
The labor market is already fragile. And the AI wave has not fully arrived.
Given these conditions, Barr signaled that the Federal Reserve is unlikely to lower interest rates soon. If AI drives a productivity boom, it would increase demand for capital and investment, putting upward pressure on interest rates. In other words: even in the optimistic scenario, the economic adjustment is painful for ordinary workers. In the pessimistic scenario, it is catastrophic.
The institutional clock runs on quarters. The AI clock runs on months. The gap between them is where workers fall.
Speed Three: Policy response (years)
Which brings us to Singapore.
Singapore is, by most measures, the most AI-forward government on Earth. Prime Minister Lawrence Wong chairs the National AI Council personally. The country launched the world’s first Agentic AI Governance Framework at Davos in January 2026, providing guidance on deploying AI agents responsibly while maintaining human accountability. The 2026 Budget included 400% tax deductions for AI expenditures (capped at $50,000 per year), a Champions of AI program providing tailored enterprise transformation support, a merger of SkillsFuture and Workforce Singapore into a single agency for seamless skills-to-jobs support, and a redesigned SkillsFuture website making AI learning pathways clearer.
The centrepiece: 100,000 workers trained in AI fluency by 2029 under the National AI Impact Programme, with 10,000 enterprises equipped with AI capabilities over three years.
I want to be clear. Singapore is doing this better than almost anyone. Minister Teo’s bilingual framing — workers who speak both their professional domain and AI — is more sophisticated than anything I have seen from other governments. The decision to start with accountants and lawyers, developing programs in partnership with the Institute of Singapore Chartered Accountants, the Singapore Academy of Law, and the Singapore Corporate Counsel Association, shows strategic sequencing with industry buy-in. The parallel commitment to 10,000 enterprises ensures the demand side matches the supply side. The expansion of TechSkills Accelerator to non-tech occupations for the first time recognizes that AI fluency is not just a tech-sector issue.
This is what good governance looks like.
And it still may not be fast enough.
Here is the math. Training 100,000 workers by 2029, starting from an early-2026 launch, means roughly 33,000 trained per year. Singapore’s workforce is approximately 3.6 million. That is less than 1% of the workforce being AI-upskilled each year.
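The arithmetic above can be checked with a quick back-of-the-envelope calculation. The three-year window is an assumption based on the program's early-2026 launch and 2029 target; the workforce figure is the article's approximation.

```python
# Back-of-the-envelope check of the training-pipeline arithmetic.
TARGET_WORKERS = 100_000   # National AI Impact Programme target by 2029
PROGRAM_YEARS = 3          # assumed window: early 2026 through 2029
WORKFORCE = 3_600_000      # approximate Singapore workforce

trained_per_year = TARGET_WORKERS / PROGRAM_YEARS
share_per_year = trained_per_year / WORKFORCE

print(f"Trained per year: {trained_per_year:,.0f}")          # 33,333
print(f"Share of workforce per year: {share_per_year:.2%}")  # 0.93%
```

Even under a generous reading of the timeline, the annual share stays below one percent, which is the gap the paragraph above is pointing at.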
Meanwhile, Amodei says 50% of entry-level white-collar jobs disrupted within 1-5 years. The program completes roughly when his disruption window peaks. The training launching in early 2026 will teach accountants AI-assisted financial reporting and compliance monitoring, and lawyers AI-assisted research, document review, and contract management. These are precisely the tasks that Opus 4.6 and GPT-5.3 Codex already handle autonomously, and that the next generation of models will handle better.
Singapore’s own data tells the story. According to the recent Singapore Digital Economy Report, AI adoption among small and medium enterprises jumped from 4.2% in 2023 to 14.5% in 2024. Among larger firms, it leaped from 44% to 62.5%. That adoption curve is accelerating faster than the training pipeline. Minister Teo acknowledged this risk directly: if AI follows the same path as previous technology waves, only a small group of companies at the frontier will get ahead, while smaller businesses take longer.
But AI is not following the same path. It is moving faster than any previous technology wave — by the explicit assessment of the people building it.
Singapore’s Tech.Pass program, attracting elite global AI talent with salary thresholds above $22,500 a month, reveals another dimension of the tension. The government is simultaneously importing the people who build AI, which accelerates capability, and training local workers to use AI, which assumes capability stabilizes long enough for training to remain relevant. Both policies make sense independently. Together, they illustrate the paradox: accelerating the technology while trying to help the workforce keep up with it.
Jessica Zhang from ADP, commenting on the Singapore Budget measures, identified the core challenge: “Without job redesign and practical training, the transition to AI risks widening skills gaps and undermining long-term talent development.” She is politely naming the three speeds problem. Training without a fundamental redesign of what work means is like running to catch a train that is already accelerating away from the platform.
Across every country I advise, and I have worked in more than twenty, I see the same pattern. The AI teams know what is coming. The C-suite acknowledges it privately. The policy response operates on a timeline that assumes the world of 2029 will resemble 2026 closely enough for plans made today to remain relevant.
That assumption is the vulnerability.
The structural problem nobody is naming
The three speeds are not a coordination failure. They are a structural impossibility.
AI capability improves at the speed of compute, capital, and competition. Institutional acknowledgment moves at the speed of evidence, consensus, and bureaucratic process. Policy response moves at the speed of legislation, implementation, and democratic accountability.
These speeds have never aligned for any technology. But previous transitions — the steam engine, electricity, the internet — had a critical feature that AI may lack: they moved slowly enough that institutions could eventually catch up. Workers displaced by automation in the 1980s retrained over a decade. The dot-com disruption of the late 1990s played out over years. Even the smartphone revolution took a decade to fully reshape industries.
Amodei is explicitly arguing that AI does not have this property. The tsunami metaphor is about speed. Not that the wave is coming, but that it is coming too fast for normal adaptive mechanisms to work. He said it himself: “You can’t just step in front of the train and stop it. The only move that’s going to work is steering the train — steer it 10 degrees in a different direction. That can be done. But we have to do it now.”
Barr acknowledged exactly this. His rapid displacement scenario is specifically defined by AI capabilities overwhelming the labor market far more quickly than it can adjust. The distinguishing feature of the doomsday scenario is not the power of AI. It is the speed.
And here is where I want to connect this to what I have been analyzing throughout this series. The global coordination problem (Week 12) was about nations failing to cooperate on AI governance. The creativity crisis (Week 13) was about innovation pipelines breaking when curiosity has zero cost. The three speeds problem is about something more fundamental: the architecture of human institutions is structurally incompatible with the rate of change AI is introducing.
It is not that governments are failing. It is that governance itself — the act of collective decision-making, implementation, and democratic accountability — operates on a clock that AI has already outpaced. This is not fixable by working harder or spending more. The clock speeds are determined by the nature of the systems themselves.
An AI lab can release a model that transforms an industry in weeks. A government needs years to study the impact, draft legislation, debate it, pass it, fund implementation, and measure outcomes. By the time that cycle completes, the model that prompted it has been replaced three times.
What the three speeds demand
I will not pretend I have a policy solution that closes the gap. Nobody does. The gap is structural, not a failure of imagination or political will.
But I can name what the gap demands.
For policymakers: Design for obsolescence. Every training program and every regulatory framework should be built on the assumption that it will need fundamental redesign within 18-24 months. Singapore’s model of starting with specific sectors and expanding is sound, but the expansion cadence needs to match AI capability acceleration, not bureaucratic planning cycles. Build review mechanisms that trigger redesign at capability milestones, not calendar dates. When a model ships that can autonomously perform the tasks your training program teaches, that is the trigger for redesign, not the next annual review.
For organizations: Stop planning for a stable skills landscape. The companies that navigate this will be those building adaptive capacity: the ability to absorb and deploy new capabilities as they emerge, rather than training for a fixed set of tools. The valuable competency is not how to use Claude. It is how to evaluate, adopt, and integrate whatever comes next, and how to redesign workflows around capabilities that did not exist six months ago. That is a meta-skill. And it is the only skill with a shelf life longer than the next model release.
For individuals: The three speeds problem means institutional support will always arrive late. Not because institutions do not care, but because they structurally cannot move fast enough. Your career resilience depends on your personal rate of adaptation exceeding the institutional rate of support. This means you cannot wait for your government’s training program, your company’s reskilling initiative, or your industry association’s certification update. You need to be learning what the institutions will be teaching two years from now. That sounds harsh. It is harsh. It is also honest.
For everyone: Watch the language. When an AI CEO says tsunami, when a Fed governor says unemployable, when a government says train 100,000 by 2029, read those as data points on different clocks. The diagnosis runs ahead. The acknowledgment catches up. The response falls behind. That pattern will hold for every country, every institution, every sector.
The question is not whether the three clocks synchronize. They will not.
The question is what you build — personally, organizationally, institutionally — when you know they will not.
The honest assessment
Amodei’s tsunami is real. He is building it, and he is telling you it is coming. That combination of builder and warner is unprecedented, and the fact that he admits he cannot fully control the commercial and geopolitical forces driving it forward should remove any remaining comfort.
The Fed’s acknowledgment is significant. When the institution responsible for employment stability formally models a scenario where large populations become essentially unemployable, the window for dismissing this as tech industry hype has closed. Central bankers do not use apocalyptic language unless they believe the scenario is plausible enough to require formal economic modeling.
Singapore’s response is exemplary. No country is doing more, faster, with more strategic sophistication. And even Singapore’s response operates on a timeline that may be overtaken by the technology it is preparing for.
Three speeds. Three clocks. One destination.
The tsunami, the warning, and the lifeboat are all real. They are just running on different schedules.
And the wave does not wait for the slowest clock.
This is part of “Framing the Future of Superintelligence,” a series documenting the transformation unfolding faster than anyone anticipated.
Dr. Elias Kairos Chen is the author of “Framing the Intelligence Revolution: How AI Is Already Transforming Your Life, Work, and World” and a strategic advisor on AI transformation to governments in more than twenty countries.