Section IV · The Digital Revolution & Its Critics
Jensen Huang
Compute, Acceleration, and the Infrastructure of Artificial Intelligence
To understand Jensen Huang, you have to begin with a question about compute: what happens when processing power becomes the limiting factor in technological progress?
As software systems grew more complex, especially in graphics, simulation, and artificial intelligence, the demand for faster, more specialized computation increased. General-purpose CPUs, optimized to run a few instruction streams very quickly, were no longer sufficient for workloads that apply the same operation to millions of data elements at once.
Huang built for that constraint.
At the center of his worldview is a defining claim:
Specialized compute infrastructure enables new technological paradigms.
As co-founder and CEO of NVIDIA, Huang focused on graphics processing units (GPUs), chips originally designed to accelerate visual rendering. Over time, these chips proved well suited to massively parallel workloads, including the matrix operations at the core of machine learning and AI.
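The parallelism point can be made concrete with a toy sketch. SAXPY (computing `y = a*x + y` over whole vectors) is a classic GPU-friendly operation: each output element depends only on the inputs at its own index, so no element has to wait for any other. The Python below is illustrative only, not NVIDIA code; it shows the shape of the workload, which a GPU would run with one thread per element rather than a loop.

```python
def saxpy(a, xs, ys):
    """Scaled vector addition: the kind of element-wise operation
    a GPU spreads across thousands of threads at once.

    Each result a * x + y reads only the x and y at the same index,
    so every element could be computed simultaneously; this sketch
    just does it sequentially to show the data dependence.
    """
    return [a * x + y for x, y in zip(xs, ys)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [6.0, 9.0, 12.0]
```

A CPU executes such a loop one step at a time; a GPU's advantage is precisely that independent iterations like these can run in parallel.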
From this perspective, compute is foundational. The ability to process large volumes of data quickly determines what technologies are possible. As AI models grew in scale, GPUs became essential infrastructure for training and deploying these systems.
This created a new form of power:
Control over the hardware layer that enables advanced computation.
NVIDIA’s GPUs, combined with its software ecosystem, most notably the CUDA parallel programming platform, positioned the company at the center of AI development. Researchers, startups, and large technology firms rely on this infrastructure to build and run models.
This reflects a broader framework: Technological progress is constrained and shaped by compute capacity.
Huang’s strategy extended beyond hardware. By building integrated systems — chips, software tools, and developer ecosystems — NVIDIA created a platform that is difficult to replicate. The combination of performance and ecosystem lock-in reinforced its position.
Supporters see Huang as an architect of the AI era.
They argue that NVIDIA’s innovations enabled breakthroughs in machine learning, scientific computing, and real-time graphics. By investing early in GPU computing, the company helped unlock new capabilities across industries.
From this perspective, Huang’s career expands the analysis of economic power to include compute infrastructure as a critical resource.
Critics, however, raise important concerns.
They argue that concentration in the supply of advanced chips can create bottlenecks and dependencies. Access to high-performance compute may become uneven, favoring large firms and well-funded institutions.
Critics also point to systemic risks: when a small number of companies control key infrastructure, innovation pathways can narrow.
A deeper tension lies in the relationship between capability and access. As compute becomes more powerful, who gets to use it — and under what conditions? How are resources allocated in a world where demand for computation exceeds supply?
Huang’s continued focus on AI infrastructure, data centers, and accelerated computing reflects an effort to define this layer of the digital economy.
Jensen Huang did not invent computing. But he helped redefine its trajectory — demonstrating how specialized hardware and integrated ecosystems can shape the direction of technological change.
His legacy raises enduring questions: Who controls the compute infrastructure that powers AI and advanced technologies? How should access to these resources be distributed? And what are the implications of concentrating such foundational capabilities in a small number of firms?