Theory · long form

Why Semiconductors Reshaped Global Power

Compute is the first commodity since uranium that is both critical and concentrated.

Three places on earth — Hsinchu (TSMC), Veldhoven (ASML), and Santa Clara (NVIDIA) — control the supply of the silicon that every frontier AI model trains on. No other strategic commodity in modern history has been this concentrated. Oil has at least ten major choke-points; rare-earth refining is more concentrated still, but rare earths have substitutes; leading-edge compute, for now, has none.

The concentration is structural, not accidental. EUV lithography requires hitting droplets of molten tin with a 50 kW laser, 50,000 times per second; ASML is the only company that has ever shipped a working machine. TSMC has spent 30 years and roughly $200B in capital expenditure learning to run those machines at scale, and that learning curve is not legible from outside the firm. NVIDIA's CUDA software platform is the result of a 2006 strategic decision that, by 2023, had left every other accelerator vendor effectively non-interoperable with the existing AI software stack.
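As a back-of-envelope illustration of the figures quoted above (the essay's numbers, not ASML's published specifications), a 50 kW laser pulsing at 50 kHz works out to roughly one joule delivered per tin droplet, fifty thousand times a second:

```python
# Rough arithmetic from the figures in the paragraph above; illustrative only,
# not an authoritative spec of the EUV light source.
laser_power_w = 50_000        # 50 kW average laser power
pulse_rate_hz = 50_000        # 50 kHz droplet / pulse rate

energy_per_pulse_j = laser_power_w / pulse_rate_hz   # ~1 J per tin droplet
droplets_per_hour = pulse_rate_hz * 3600             # 180 million droplets per hour

print(f"~{energy_per_pulse_j:.1f} J per droplet, {droplets_per_hour:,} droplets per hour")
```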

The consequence is that the rules of geopolitical competition have changed in a way few state planners anticipated. The 2022 U.S. chip export controls — and the 2024 expansion to HBM and lithography equipment — are the strategic equivalent of an oil embargo, applied against a state (China) that imports 60% of its semiconductors, including essentially all of the leading-edge parts it needs to train frontier AI. China's response — accelerated indigenization through SMIC and Huawei's HiSilicon, DRAM at CXMT, and the Loongson and other domestic CPU lines — has been substantial but slow, and the training efficiency of DeepSeek R1 (early 2025) reads, in retrospect, as a forced response to exactly that constraint.

The deeper question is whether semiconductors are durable as a strategic asset class. Three scenarios seem plausible. (1) The current pattern persists: compute remains expensive, frontier AI requires concentrated supply, and the three-city choke-point sets the rules for two decades. (2) Hardware diversifies: custom ASICs (Google TPU, Amazon Trainium, Microsoft Maia), neuromorphic and photonic accelerators, and Chinese-fab catch-up break the concentration over five to ten years. (3) Software shifts the bottleneck: efficiency gains from better training recipes (Mixture-of-Experts at scale, R1-style cheap RL post-training) reduce the marginal compute required per unit of capability, partially defanging the choke-point.
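To make scenario (3) concrete, here is a hedged back-of-envelope sketch. It uses the common approximation that dense-transformer training cost is about 6 × parameters × tokens FLOPs, and treats one of the levers named above — MoE routing that activates only a fraction of the parameters per token — as a simple multiplier. Every number below is hypothetical and illustrative, not a measurement of DeepSeek R1 or any other named model.

```python
# Illustrative arithmetic only: the 6*N*D rule is a standard rough estimate for
# dense-transformer training FLOPs; the parameter and token counts below are
# hypothetical, not figures for any real model.

def dense_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute for a dense model (~6 FLOPs per parameter per token)."""
    return 6.0 * params * tokens

def moe_training_flops(active_params: float, tokens: float) -> float:
    """MoE compute scales with the parameters *active* per token, not the total."""
    return 6.0 * active_params * tokens

dense = dense_training_flops(params=70e9, tokens=15e12)        # hypothetical dense run
sparse = moe_training_flops(active_params=20e9, tokens=15e12)  # hypothetical MoE run

print(f"dense : {dense:.2e} FLOPs")
print(f"MoE   : {sparse:.2e} FLOPs  ({dense / sparse:.1f}x less compute)")
```

Multipliers like these compound: each halving of compute per unit of capability weakens, without eliminating, the leverage of the three-city choke-point.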

The historical analogue is the 19th-century coal-fired Royal Navy. For about a century, anyone who wanted to project power around the world needed access to British coaling stations. The pattern held until oil-fired naval propulsion (1910s) and aviation broke the coaling-station logic. Coal as a strategic asset class lasted roughly a hundred years; semiconductors, as the same kind of asset, may last a comparable span. The question is what replaces them, and where.