
NVIDIA CEO Jensen Huang’s GTC Washington, D.C. keynote once again ignited investor enthusiasm. The stock opened higher and kept climbing, at one point surging 5.55% to an intraday high of $212.19, before closing at $207.04, up 2.99% on the day, for a total market capitalization of $5.03 trillion, the first time the company has crossed the $5 trillion mark.
Huang said that through 30 years of sustained investment, NVIDIA has built an accelerated computing system centered on GPUs and CUDA, breaking through the bottlenecks of Moore’s Law. By redesigning algorithm kernels and building more than 350 specialized acceleration libraries covering areas such as chip manufacturing, supply-chain optimization, medical imaging, genomics, and aviation simulation, NVIDIA has established the world’s deepest software-ecosystem moat. CUDA not only maintains long-term compatibility across hundreds of millions of devices but has also become a foundational language for scientific and industrial computing. A demonstration video generated entirely by mathematical simulation traced the evolution from the virtual fighter jet of 1993 to today’s AI and quantum computing, underscoring a fundamental shift in computing paradigms from sequential processing to parallel intelligence.
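To make that sequential-to-parallel shift concrete, below is a minimal CUDA sketch (purely illustrative, not taken from the keynote or from NVIDIA’s acceleration libraries): the same y = a*x + y update written first as a sequential CPU loop and then as a parallel GPU kernel, which is the basic pattern behind redesigning an algorithm kernel for accelerated computing. The problem size and launch configuration are arbitrary assumptions.

// Illustrative CUDA sketch: sequential loop vs. parallel kernel for y = a*x + y
#include <cstdio>
#include <cuda_runtime.h>

// Sequential version: the CPU updates one element at a time.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
}

// Parallel version: each GPU thread updates exactly one element.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                 // 1M elements (arbitrary size)
    const size_t bytes = n * sizeof(float);
    float *x, *y;
    cudaMallocManaged(&x, bytes);          // unified memory keeps the sketch short
    cudaMallocManaged(&y, bytes);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch ~4096 blocks of 256 threads so every element gets its own thread.
    saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);           // expect 2*1 + 2 = 4
    cudaFree(x);
    cudaFree(y);
    return 0;
}

The point of the contrast is the execution model: the CPU loop walks the data one element after another, while the GPU kernel assigns every element to its own thread and processes them in parallel.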
Huang noted that AI factories and computing infrastructure have redefined AI, transforming it from a tool into autonomous “workers” capable of executing tasks. The core capability lies in tokenizing information of every kind, such as language, images, genes, and proteins, into a unified representation and learning the semantic relationships among those tokens. To support AI’s continuous reasoning and thinking, NVIDIA introduced the concept of the “AI factory”: a specialized computing system built specifically to generate tokens, whose scale has given rise to mountains of chips and data center clusters. Unlike general-purpose data centers, AI factories demand extreme computing density, low-latency interconnects, and high energy efficiency, driving global capital expenditure toward AI infrastructure. The three scaling laws (pre-training, post-training, and inference-time reasoning) form a positive feedback loop running from intelligence to usage to computing power to stronger models, producing a compound annual growth rate of over 100% in computing demand and making AI factories a central theme in global tech investment.
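As a rough worked example of what a compound annual growth rate above 100% implies (the growth-rate figure is the keynote claim reported above; the five-year horizon is an illustrative assumption), in LaTeX:

% Demand after t years at annual growth rate r, starting from D_0:
D(t) = D_0\,(1+r)^t
% With r \ge 1 (a CAGR of at least 100%), demand at least doubles every year:
D(t) \ge D_0 \cdot 2^t, \qquad \text{e.g. } D(5) \ge 32\,D_0

In other words, demand compounding at that rate grows by at least a factor of 32 over five years.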
Facing the slowdown in transistor scaling, NVIDIA has adopted extreme co-design, restructuring everything from chips and interconnects to networks and software stacks. The Grace Blackwell architecture achieves super-sized GPUs through wafer-level packaging, while Spectrum-X Ethernet connects thousands of GPUs into a giga-scale, cross-data-center network that delivers nonlinear performance gains. A single rack system integrates 130 trillion transistors, 2 miles of copper cabling, and 1,024 HBM chips, weighs 2 tons, and sets the physical benchmark for the AI factory. Performance gains no longer rely solely on doubling transistor counts but come from architectural synergy, yielding a 10x leap in efficiency. This system represents the most comprehensive overhaul of computer architecture since the IBM System/360, marking NVIDIA’s transformation from a chipmaker into a definer of AI infrastructure.
Additionally, NVIDIA unveiled the NVLink-Q high-speed interconnect architecture, which supports data exchange of several terabytes per second between quantum processors and GPUs, addressing the massive data-transfer bottleneck of quantum error correction.
