Eric Schmidt: The Robotics Race, Singularity Timeline, and the 92-Gigawatt Problem
The former Google CEO lays out why the US is only 10-15% of the way into the AI revolution, why China is winning robotics, and why the real bottleneck is electricity, not talent or capital.
Simple Version
Imagine you have a really smart robot helper. Right now, it can do about 10-15% of all the things it will eventually be able to do. Every few months it gets way smarter. The people building it in San Francisco think that within 2-3 years, it will be able to teach itself new things without any human help — and once that happens, it will learn faster than any human ever could.
But there is one big problem: electricity. The US needs about 60 more nuclear power plants' worth of energy just to run all the AI computers. That is the real bottleneck: not money, not smart people, not ideas. Just power.
Meanwhile, China is really good at building cheap robot bodies because they already build most of the world's electric cars, which use the same motors. America is better at the robot brains, but without the bodies, brains alone are not enough.
Deep Dive
The 20/80 Flip in Software
Anthropic's Claude Code is Schmidt's exhibit A for where we are right now. Developers he knows went from writing 80% of the code themselves (with AI filling in 20%) to the inverse — AI writes 80%, humans review and steer. The shift happened not because the tooling got better, but because the underlying LLM can now reason longer and deeper. Schmidt himself was running six concurrent Claude 4.6 jobs backstage before walking on stage. A startup engineer he mentors writes a spec, writes a test function, launches the AI at 7 PM, sleeps, and reviews the results at breakfast.
The San Francisco Consensus on Superintelligence
Schmidt describes a widely held belief among Bay Area technologists: 2026 is the year of agents. Within 2-3 years, recursive self-improvement (RSI) arrives — AI systems that can improve themselves. At that point, a company with 1,000 AI researchers could spin up a million AI research agents, limited only by electricity. The slope of progress goes vertical. Schmidt calls this a "superintelligence moment." He has spent the past week reviewing RSI research — the science is not settled yet, but lab demos are promising. The scaling laws, critically, have not hit an asymptote.
The 92-Gigawatt Problem
In congressional testimony, Schmidt cited a 92 gigawatt power shortage in the US. For context, one nuclear plant produces about 1.5 GW — so the gap equals roughly 60 new nuclear plants, and the US is building essentially zero. One gigawatt of AI compute infrastructure costs about $50 billion. Standard data centers being built today are ~400 MW, half a mile long, with water-cooled Nvidia chips drawing 2 kilowatts each. Schmidt invokes Jevons paradox: as algorithms get more efficient, demand does not shrink — it explodes, because new uses emerge. The capital is available ($5 trillion over 5 years is feasible in America), but the electricity is not.
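The plant count, capital requirement, and chip density all follow from simple arithmetic. A quick sanity check of the figures cited in the talk (every input is Schmidt's; the variable names are ours):

```python
# Back-of-envelope check of the numbers Schmidt cites.
GAP_GW = 92          # US power shortfall, per congressional testimony
PLANT_GW = 1.5       # output of one nuclear plant
COST_PER_GW_B = 50   # $B per gigawatt of AI compute infrastructure
DC_MW = 400          # a standard new data center
CHIP_KW = 2          # draw of one water-cooled Nvidia chip

plants_needed = GAP_GW / PLANT_GW                  # new nuclear plants to close the gap
capex_trillions = GAP_GW * COST_PER_GW_B / 1000    # cost to build out all 92 GW, in $T
chips_per_dc = DC_MW * 1000 / CHIP_KW              # chips a 400 MW site can power

print(f"{plants_needed:.0f} plants, ${capex_trillions:.1f}T, {chips_per_dc:,.0f} chips")
# → 61 plants, $4.6T, 200,000 chips
```

Note that 92 GW at $50 billion per gigawatt comes to about $4.6 trillion, which lines up with Schmidt's claim that $5 trillion over 5 years is feasible in America: the capital checks out, and electricity really is the binding constraint.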
China's Robotics Edge
Schmidt wrote a Time op-ed warning that China could dominate "physical AI." The argument: China's electric vehicle industry already mass-produces the exact actuators and stepper motors that humanoid robots need. Unitree's robot dance demo is the proof point. Chinese companies operate with "brutal competition" — no board dinners, 2-hour meetings, back to work. Schmidt does not want the US to lose the robotics revolution the way it lost the low-end EV market. The US advantage is in AI brains and vertical integration (Tesla's gigafactory model, Figure AI's approach), but the body hardware supply chain currently favors China.
Google's Hidden History: TPU, DeepMind, and AlphaGo
Google's TPU, designed over a decade ago as a simple matrix multiplier, turned out to be the ideal inference engine for today's AI reasoning workloads. DeepMind, acquired for $600 million on Larry Page's insistence (competing with Elon Musk for the deal), paid for itself just by optimizing data center air conditioning. Schmidt flew to Korea for the AlphaGo match and watched the internal RL prediction meter climb from 50% to 51% to 52% — the architect said "we planned for it to get to infinity." The same team pivoted from Go to protein folding, producing AlphaFold — 300 million times more efficient than PhD-level work.
Data Centers in Space and the Road to ASI
Eight months ago nobody discussed orbital data centers; now everyone is. Schmidt, a part-owner of a rocket company, says the heat-dissipation technology is understood and abundant, near-continuous solar power is the draw. The business case — space vs. ground with fiber and stability — is the remaining question. Companies like Relativity Space, Blue Origin, and SpaceX are large enough to make it real. On safety, Schmidt predicts a "modest Chernobyl-like" event may be needed before global leaders cooperate on AI governance. His prescription for steering ASI toward abundance: political will, high-skilled immigration, cross-disciplinary collaboration (ethics, psychology, governance — not just technologists), and preserving American values of freedom without slowing down.
Key Takeaways
- We are 10-15% in. The AI revolution is early. Scaling laws have not plateaued. The San Francisco consensus says superintelligence arrives within 2-3 years via recursive self-improvement.
- Electricity is the bottleneck, not capital or talent. A 92 GW gap equals roughly 60 nuclear plants. Jevons paradox means efficiency gains will not close the gap; cheaper compute unlocks new uses and increases total demand.
- China wins low-end robotics unless the US acts. EV supply chain expertise gives China the actuator and motor advantage. The US must vertically integrate (Tesla/Figure model) to compete.
- Top programmers become more valuable, not less. AI creates a workforce bifurcation: few massive companies + many tiny ones. The humans who can direct AI systems are the new 10x engineers.
- Fastest learner wins. Schmidt's "learning loops" framework: identify every feedback loop in your business and accelerate it. This applies to companies, countries, and AI systems themselves.