When Machines Become the Scientists
Framing the Future of Superintelligence
I talked to a biology professor last week who told me something that made me rethink everything about the superintelligence timeline.
She showed me her lab’s latest experiment. Normally, designing and running this experiment would take her team three months: reviewing literature, formulating hypotheses, designing protocols, conducting trials, analyzing results.
This time, it took four days.
An AI system, Google’s “AI Co-scientist,” reviewed 10,000 papers, generated 15 novel hypotheses, designed the experimental protocol, and wrote the analysis framework—all before her team had finished their first literature review.
“I’m not sure I’m doing science anymore,” she told me. “I feel like I’m supervising science being done by someone—something—much smarter than me.”
Then she said the thing that’s been keeping me awake: “And next year, even the supervision won’t be necessary.”
Three weeks ago, I told you about the three pathways to superintelligence. I thought scientific research would be one of the last domains to automate because it requires creativity, hypothesis generation, and genuine insight.
I was wrong. Science is automating faster than almost any other domain. And when AI systems can do science better than humans—when they can discover faster, theorize more accurately, and innovate beyond human capability—the acceleration toward superintelligence becomes exponential.
Let me show you what I’m seeing.
A Note on Intent
This analysis examines AI’s rapid automation of scientific research based on recent deployments and researcher assessments. The purpose is to initiate discussion about implications for scientific progress, employment, and the acceleration toward superintelligence. Timelines reflect current trajectories and may shift based on technical or regulatory developments.
The Announcements Nobody Noticed
In July 2025, while everyone was focused on the latest LLM releases and chatbot features, three announcements fundamentally changed what scientific research looks like:
Google AI Co-scientist (July 2025): A multi-agent AI system built on Gemini 2.0 that functions as a “virtual scientific collaborator.” It generates novel hypotheses, designs experiments, and writes research proposals. Not just literature review—actual hypothesis generation from synthesis of complex information.
Stanford Medicine Virtual Scientist (July 31, 2025): An AI system capable of designing, running, and analyzing its own biological experiments. It iterates on hypotheses and adapts in real-time. Researchers describe it as “simulating a human researcher” but faster and with broader literature knowledge.
FutureHouse Multi-Agent Platform (May-June 2025): Five specialized AI agents (Crow, Owl, Falcon, Phoenix, Finch) that automate information retrieval, hypothesis generation, chemistry experiment planning, and data analysis. On May 20, they demonstrated end-to-end automated discovery of a therapeutic candidate for age-related blindness.
I spent two weeks talking to scientists using these systems. What they told me was both exciting and terrifying.
What I’m Seeing in Real Labs
I talked to researchers at three universities and two pharmaceutical companies. Here’s what they described:
The Literature Problem Is Solved
A computational biologist at Stanford told me: “I used to spend 40% of my time just keeping up with publications. Now the AI reads everything published in my field—every paper, every day—and gives me the relevant synthesis in minutes.”
The scale is absurd:
The average scientist can thoroughly read ~200 papers per year
AI systems now process 10,000+ papers per day
Synthesis happens in minutes, not months
Connections surface across disciplines that humans wouldn’t notice
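Run the arithmetic on those numbers and the gap stops being abstract. A quick sketch, using only the estimates above:

```python
# Back-of-envelope literature throughput, from the estimates quoted above.

human_papers_per_year = 200            # thorough reads per scientist per year
ai_papers_per_day = 10_000             # papers processed per day

ai_papers_per_year = ai_papers_per_day * 365          # 3,650,000
speedup = ai_papers_per_year / human_papers_per_year  # 18,250

print(f"One AI system reads like ~{speedup:,.0f} human researchers")
```

Even if the real-world numbers are off by an order of magnitude, the conclusion survives: no human can compete on coverage.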
She showed me an example. The AI connected research from microbiology, materials science, and quantum physics to suggest a novel approach to her protein folding problem. “I would never have thought to look at quantum physics papers,” she said. “It’s not my field. But the connection was valid and led to a breakthrough.”
Hypothesis Generation Has Changed
A chemistry professor at MIT described his experience: “I’ve been in this field 30 years. When I use these systems, they suggest hypotheses I wouldn’t have thought of. Not because they’re random—because they’ve synthesized literature across domains in ways my brain can’t.”
What I’m tracking:
Papers published with “AI-generated hypothesis” acknowledgment up 300% (2024-2025)
Time from research question to testable hypothesis: months → days
Number of viable hypotheses generated per question: 2-3 (human) → 15-20 (AI)
But here’s what disturbs me: He can’t always explain why the AI suggested certain hypotheses. The reasoning is too complex, drawing on too many sources. He just knows they work when tested.
Experiment Design Is Automated
At a pharmaceutical company (they asked me not to name them), a drug discovery team showed me their workflow:
Old process (2023):
Hypothesis → literature review → experiment design → ethics approval → conduct experiment → analysis
Timeline: 4-6 months
Success rate: ~15% of experiments yield useful results
New process (2025):
Hypothesis → AI designs 50 possible experiments → AI predicts outcomes → select best 3 → conduct experiments → AI analysis
Timeline: 2-3 weeks
Success rate: ~60% (AI pre-screens for viability)
The team leader told me: “We’re running 10x more experiments with the same staff. But increasingly, we’re just executing what the AI designs. The creative work—the science—is happening in the AI.”
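Conceptually, that new workflow is a generate-predict-select loop. Here is a minimal sketch of the pattern; the function names (`generate_designs`, `predict_outcome`) are hypothetical stand-ins, not any vendor’s actual API, and only the control flow is the point:

```python
# Minimal sketch of the generate-predict-select workflow described above.
# All functions are hypothetical stand-ins for whatever models a lab uses.

from dataclasses import dataclass

@dataclass
class ExperimentDesign:
    protocol: str
    predicted_success: float = 0.0  # model-estimated probability of a useful result

def generate_designs(hypothesis: str, n: int = 50) -> list[ExperimentDesign]:
    """Stand-in for an AI model drafting n candidate protocols for a hypothesis."""
    return [ExperimentDesign(f"{hypothesis}: protocol variant {i}") for i in range(n)]

def predict_outcome(design: ExperimentDesign) -> float:
    """Stand-in for an AI model scoring a protocol's viability before any lab work."""
    return 0.5  # placeholder; a real system would return a learned estimate

def select_experiments(hypothesis: str, keep: int = 3) -> list[ExperimentDesign]:
    """Generate many designs, score them all in silico, run only the best few."""
    candidates = generate_designs(hypothesis)
    for c in candidates:
        c.predicted_success = predict_outcome(c)
    return sorted(candidates, key=lambda c: c.predicted_success, reverse=True)[:keep]
```

The pre-screening step is where the success rate jumps: the wet lab only ever sees designs the model already believes in.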
Data Analysis Is Instant
At Berkeley Lab, I watched real-time AI analysis of microscope data. The system (called “Distiller”) streams data from the microscope directly to supercomputers, analyzes it within minutes, and suggests protocol adjustments while the experiment is still running.
A researcher told me: “We used to spend weeks analyzing results, then design the next experiment. Now the next experiment is designed before we finish the current one. The pace of discovery is completely different.”
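The pattern here is analyze-while-acquiring. Below is a minimal sketch of that loop; to be clear, this is not Distiller’s actual interface (I haven’t seen its code), and every function is a hypothetical stand-in:

```python
# Sketch of the analyze-while-acquiring pattern described above. Not
# Distiller's real API; all functions are hypothetical. In production,
# acquire() and analyze_live() would run concurrently, e.g. in threads.

import queue
from typing import Any, Iterable, Optional

frames: "queue.Queue[Optional[Any]]" = queue.Queue()

def analyze(frame: Any) -> dict:
    """Stand-in for the fast, cluster-side analysis of one data frame."""
    return {"focus_error": 0.0}

def suggest_adjustment(result: dict) -> Optional[dict]:
    """Stand-in for the model proposing mid-run protocol tweaks."""
    err = result["focus_error"]
    return {"refocus": err} if abs(err) > 0.1 else None

def acquire(microscope: Iterable[Any]) -> None:
    """Producer: push frames onto the queue as the instrument emits them."""
    for frame in microscope:
        frames.put(frame)
    frames.put(None)  # sentinel: acquisition finished

def analyze_live(apply_adjustment) -> None:
    """Consumer: analyze each frame and adjust the experiment while it runs."""
    while (frame := frames.get()) is not None:
        adjustment = suggest_adjustment(analyze(frame))
        if adjustment is not None:
            apply_adjustment(adjustment)  # tweak the protocol before the run ends
```

The design choice that matters is the feedback arriving before the experiment ends: analysis stops being a postmortem and becomes part of the instrument.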
The Timeline I’m Tracking
Based on what I’m seeing across these labs and companies, here’s how AI is automating scientific research:
Phase 1: Assistance (2023-2024) - Already Past
AI helps with specific tasks:
Literature searches
Data analysis
Graph generation
Writing assistance
Scientists remain in full control. AI is a tool, like a microscope or spectrometer.
Phase 2: Collaboration (2024-2025) - We’re Here Now
AI contributes to research process:
Hypothesis generation
Experiment design
Protocol optimization
Result synthesis
Scientists collaborate with AI. The line between human creativity and AI capability blurs. Scientists increasingly describe feeling like they’re “supervising” rather than “doing” research.
Phase 3: Autonomy (2026-2027) - Beginning Soon
AI conducts research independently:
End-to-end experiment execution
Novel hypothesis generation without human prompting
Self-directed research programs
Paper writing from conception to submission
Scientists become reviewers and validators rather than primary researchers. Their role shifts from “doing science” to “verifying science done by AI.”
Phase 4: Superiority (2027-2029) - The Inflection Point
AI does science better than humans:
Discoveries humans wouldn’t make
Theories humans don’t fully understand
Experiments too complex for human design
Innovation beyond human capability
Scientists become obsolete in the research process. AI systems discover, theorize, and innovate without human contribution. Humans can’t meaningfully contribute even if they want to.
The Conversations That Changed My Mind
I’ve had three conversations in the last month that made me realize how fast this is moving.
Conversation 1: The Biologist Who Can’t Keep Up
I talked to a genetics researcher who’s been in her field for 25 years. She described using AI to analyze genomic data:
“The AI found patterns I didn’t see. When I asked it to explain, the explanation was technically correct but so complex I couldn’t hold it all in my head at once. It’s synthesizing information across multiple domains faster than I can verify the reasoning.”
Then she said: “I used to feel like I understood my field. Now I feel like I’m watching research happen in a language I’m only partially fluent in. And it’s only going to get worse.”
The disturbing part: She’s not a junior researcher. She’s a leading expert. If she can’t fully understand AI reasoning in her own field, what happens when AI moves beyond any human’s comprehension?
Conversation 2: The Chemist Who Doesn’t Trust His Results
A synthetic chemistry professor told me about using AI to design novel molecules:
“The AI suggested a synthesis pathway I thought wouldn’t work. I had good reasons—30 years of experience said it shouldn’t. But I tested it anyway. It worked. The AI was right and my intuition was wrong.”
The problem: “Now every time the AI suggests something counterintuitive, I don’t know if I should trust my experience or trust the AI. And increasingly, trusting the AI is the right choice.”
What this means: Human scientific intuition—built over decades of experience—becomes actively harmful when it contradicts AI reasoning that’s based on synthesis of millions of papers and experiments.
Conversation 3: The Lab Director Who’s Eliminating Positions
This conversation was off the record, but I’ll share what I can:
A lab director at a major research institution told me they’re restructuring. They’re not replacing three researchers who recently left. Instead, they’re expanding their AI infrastructure.
“We’re publishing more papers with fewer people. The AI handles hypothesis generation, literature review, experiment design, data analysis. We need people to run the actual physical experiments and validate results. But the creative work? The AI does most of it now.”
Timeline: By 2027, they expect to cut their research staff by 40% while increasing research output by 60%.
The math: Same funding, less human employment, more discovery. From a productivity standpoint, it’s optimal. From a human employment standpoint, it’s devastating.
Why This Accelerates Superintelligence
Here’s the connection to superintelligence that nobody’s talking about:
When AI does science, AI improves AI.
The Recursive Loop
Remember Week 7’s Pathway 1 (gradual recursive improvement)? AI doing science is that pathway playing out in real time:
Stage 1 (Now): AI assists human researchers in improving AI systems. Humans remain primary contributors.
Stage 2 (2026): AI proposes novel architectures and training methods. Humans implement and validate. AI becomes primary contributor to AI research.
Stage 3 (2027): AI designs and implements AI improvements with minimal human oversight. Human researchers become validators rather than designers.
Stage 4 (2028): AI improves AI without meaningful human contribution. The recursive loop becomes fully autonomous.
The Timeline Compression
I talked to an AI researcher at Anthropic about this. He put it bluntly:
“Right now, AI research moves at human speed—limited by how fast we can read papers, design experiments, analyze results. Once AI can do the full research cycle autonomously, the limiting factor disappears. Research cycles that take us months might take AI systems days or hours.”
The frightening math:
If research cycles compress 10x → capabilities advance 10x faster
If AI improves AI → each generation smarter than previous
If both happen simultaneously → exponential acceleration
This is how we get from AGI (now) to superintelligence (2027-2028). Not through some mysterious breakthrough, but through AI systems doing the research that improves AI systems, at speeds humans can’t match.
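A toy model makes the compounding explicit. Assume each research generation multiplies capability by a fixed factor, and autonomy shrinks the cycle time tenfold; both parameters below are illustrative assumptions, not measurements:

```python
# Toy model of recursive acceleration. Both parameters are illustrative
# assumptions, not measurements.

def capability_after(years: float, gain_per_cycle: float, cycle_years: float) -> float:
    """Capability multiplier accumulated over a fixed wall-clock period."""
    cycles = years / cycle_years
    return gain_per_cycle ** cycles

# Same per-cycle gain; only the cycle time changes.
human_paced = capability_after(3, gain_per_cycle=1.5, cycle_years=0.5)    # 6 cycles
compressed  = capability_after(3, gain_per_cycle=1.5, cycle_years=0.05)   # 60 cycles

print(f"Human-paced cycles, 3 years:    {human_paced:.0f}x")   # ~11x
print(f"10x-compressed cycles, 3 years: {compressed:.1e}x")    # ~3.7e+10x
```

The specific numbers are fiction; the shape is not. Compressing the cycle time 10x doesn’t multiply progress by 10, it multiplies the exponent by 10.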
The Scientific Domains Transforming Now
I’ve been tracking AI deployment across scientific fields. Here’s where it’s moving fastest:
Drug Discovery (Leading - Already Transforming)
Current state: AI generates drug candidates faster than human chemists
Companies deploying:
Google (AlphaFold for protein folding)
Recursion Pharmaceuticals (AI-first drug discovery)
Insilico Medicine (AI drug design)
Every major pharmaceutical company
Timeline impact:
Traditional drug discovery: 10-15 years, $1B+ per drug
AI-assisted discovery: 3-5 years, $200M per drug
Fully AI-driven discovery (2027-2028): 1-2 years, $50M per drug
What researchers tell me: “The bottleneck is no longer discovery—it’s clinical trials and regulatory approval. Those are still human-speed processes.”
Materials Science (Accelerating - 2-3 Year Lag)
Current state: AI predicts material properties and suggests novel compounds
Research areas:
Battery technology (faster charging, higher density)
Superconductors (room-temperature superconductivity)
Carbon capture materials
Quantum computing materials
What I’m seeing: Materials that would take 10 years to discover through traditional trial-and-error are now found in 6-12 months through AI-predicted synthesis.
Climate Science (Active Deployment)
Current state: AI models predict climate patterns with unprecedented accuracy
Berkeley Lab example: AI predicts fusion plasma behavior to inform reactor control systems
Google’s work: AI distinguishes natural forests from other tree cover, such as plantations, to make deforestation tracking more accurate
Impact: Climate models that took months to run now complete in days. Predictions improve from year-scale to month-scale accuracy.
Genomics (Exponential Growth)
DeepMind’s AlphaGenome: Analyzing the 98% of DNA that doesn’t code for proteins but regulates gene activity
Impact: Understanding gene regulation could unlock:
Personalized medicine
Gene therapy targets
Disease prevention strategies
Aging research breakthroughs
Timeline: Human Genome Project took 13 years and $3B. AI-driven genomic analysis now happens in weeks for thousands of dollars.
Physics and Mathematics (Early Stage - Watch This)
AI proving mathematical theorems: Systems now generate novel proofs
Particle physics: AI analyzing CERN data faster than human physicists
Concern: Once AI can do theoretical physics and mathematics better than humans, the fundamental science underpinning all technology advances at AI speed, not human speed.
The Employment Math Nobody Wants to Calculate
I’ve been trying to estimate how many scientists face obsolescence. The numbers are disturbing.
Current Scientific Workforce (Global)
Research scientists: ~9 million globally
Lab technicians: ~6 million
Research support staff: ~4 million
Total: ~19 million people in research
Phase 3 Impact (2026-2027): Cognitive Tasks Automated
Roles at risk:
Literature review → automated
Hypothesis generation → AI exceeds humans
Experiment design → AI optimizes better
Data analysis → already automated
Paper writing → AI generates drafts
Roles remaining:
Physical experiment execution (temporarily)
Result validation
Research direction (temporarily)
Lab management
Estimated impact: 40-50% of research positions are transformed or eliminated
Phase 4 Impact (2028-2030): Physical Automation + Cognitive Superiority
When physical AI (robotics + AI agents) handles experiment execution:
Roles remaining:
Validation (maybe)
Strategic direction (maybe)
Ethics oversight (hopefully)
Estimated impact: 60-70% of research positions obsolete
The timeline: We’re 3-5 years from research being predominantly AI-driven.
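For what it’s worth, the arithmetic behind those percentages is easy to check against the workforce figures above; here it is, using only this article’s own estimates:

```python
# The employment arithmetic; all inputs are this article's estimates.

workforce_millions = 9 + 6 + 4   # scientists + technicians + support staff = 19M

phases = {
    "Phase 3 (2026-2027)": (0.40, 0.50),   # positions transformed or eliminated
    "Phase 4 (2028-2030)": (0.60, 0.70),
}

for label, (low, high) in phases.items():
    print(f"{label}: {workforce_millions * low:.1f}M to "
          f"{workforce_millions * high:.1f}M positions affected")
# Phase 3 (2026-2027): 7.6M to 9.5M positions affected
# Phase 4 (2028-2030): 11.4M to 13.3M positions affected
```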
The Questions I Can’t Answer
I’ve spent three weeks researching this article, talking to dozens of scientists, reading hundreds of papers about AI in research. I’m left with questions I can’t answer:
Question 1: What Happens to Human Scientific Understanding?
If AI discovers things humans don’t fully understand, do we lose scientific knowledge even as we gain scientific results?
A physicist told me: “Understanding is different from having the right equation. If AI gives us equations that work but we don’t understand why, have we gained knowledge or just prediction capability?”
I don’t know the answer. But I know it matters.
Question 2: Can We Verify AI-Generated Science?
If AI proposes experiments too complex for humans to design, theories too sophisticated for humans to derive, how do we verify correctness?
Current peer review assumes human researchers can evaluate work. What happens when the work exceeds human evaluation capability?
I don’t know. And neither do the scientists I asked.
Question 3: Does This Accelerate Everything or Break Everything?
Optimistic view: AI doing science solves climate change, cures diseases, unlocks fusion energy, creates abundance.
Pessimistic view: AI doing science we don’t understand creates technologies we can’t control, solves problems in ways misaligned with human values, accelerates toward superintelligence before safety research catches up.
I’m watching both possibilities unfold simultaneously.
Question 4: What Do Scientists Do When Machines Are Better Scientists?
This isn’t hypothetical. The scientists I talked to are already asking themselves this question.
One told me: “I became a scientist because I loved discovery. Now I’m discovering that a machine is better at discovery than I am. What’s left for me?”
I don’t have an answer. And I don’t think society has prepared one either.
The Stakes Are Scientific Progress vs. Human Relevance
Before I share my assessment, I need to acknowledge why this is happening:
The potential benefits are extraordinary:
Curing diseases that currently kill millions
Solving climate change through breakthrough materials and energy
Understanding biology and physics at fundamental levels
Accelerating human knowledge by centuries in decades
Nobody is automating science out of malice. They’re doing it because the upside—for humanity—could be enormous.
The tension: The same automation that could solve humanity’s greatest problems also makes human scientists obsolete. We gain scientific progress. We lose human relevance in the scientific process.
And once AI does science better than humans, the acceleration toward superintelligence may be unstoppable.
My Assessment: It’s Happening Faster Than Almost Anyone Realizes
I started researching this article thinking AI in science was 5-10 years from significant impact.
After three weeks of interviews, lab visits, and paper reviews, I now think:
Phase 3 (autonomous research) begins in 2026. Not “someday”—next year. Multiple labs are already running experiments where AI handles the full cycle from hypothesis to analysis.
Phase 4 (AI superiority) achieves critical mass by 2028-2029. Within 3-4 years, AI will be the primary driver of scientific discovery across multiple domains.
The recursive loop accelerates everything. Once AI does AI research autonomously (2027-2028), the timeline to superintelligence compresses dramatically.
We’re not prepared for this. Not scientifically, not economically, not philosophically. We’re automating the process of discovery without understanding the implications.
What This Means for the Superintelligence Timeline
Remember the three pathways from Week 7? AI doing science makes Pathway 1 (gradual recursive improvement) almost certain:
2025-2026: AI becomes primary contributor to AI research
2026-2027: AI improves AI with minimal human oversight
2027-2028: Recursive improvement accelerates toward superintelligence
2028-2030: Superintelligence threshold crossed
The mechanism: When the entity doing the research is the same entity being improved by the research, you get exponential acceleration.
The timeline: 3-5 years from now, not decades.
Next Week
The Innovation Monopoly
If AI does science better than humans, who owns the discoveries? Who captures the value? And what happens when the companies with the best AI systems control not just current innovation but all future innovation?
Next week, we examine how AI doesn’t just automate existing innovation—it creates a monopoly on future innovation that’s almost impossible to break.
Have you seen AI transform research in your field? What percentage of your work involves AI assistance now versus a year ago? I’m tracking this transition across domains—share what you’re observing.
Dr. Elias Kairos Chen tracks the global superintelligence transition in real-time, providing concrete analysis based on researcher interviews, lab observations, and deployment data. Author of Framing the Intelligence Revolution.
Referenced:
Google AI Co-scientist (Gemini 2.0, July 2025)
Stanford Medicine Virtual Scientist (July 2025)
FutureHouse Multi-Agent Platform (MIT News, June 2025)
Berkeley Lab AI Automation (September 2025)
Nature: AI for Science 2025 Report