Nvidia's $1 Trillion AI Vision Unveiled at GTC 2026

Since its inception in 2009, Nvidia GTC has evolved into a premier AI conference, mirroring the company’s growing influence and revenue. This year’s event marked a shift in focus toward expanding within existing markets rather than solely pursuing new ones.

Revenue Projection Soars to $1 Trillion

In a significant announcement, Nvidia raised its AI revenue forecast from $500 billion by 2026 to a staggering $1 trillion by 2027. CEO Jensen Huang cited surging demand for AI inference and the rapid growth of agentic workloads as the key drivers behind the projection.

The “Five-Layer Cake” of AI

Since December 2025, Huang has been articulating a “five-layer cake” model of AI. This framework illustrates the vertical complexity of AI, positioning it as critical infrastructure with layers ranging from energy requirements to end-user applications. The goal is to simplify understanding of the complete AI stack.

Understanding the AI Stack Layers

  • Energy: The foundational layer, increasingly recognized as a limiting factor for compute capacity.
  • Compute: Chips powered by energy, providing the processing power for AI tasks.
  • Infrastructure: Land, buildings, power delivery, cooling, and networking supporting the chips.
  • Models: AI models tailored to specific applications and use cases.
  • Applications: Delivering results to consumers, businesses, and governments.

Huang emphasized the need to reinvent the entire computing stack to support “the largest infrastructure buildout in human history,” positioning Nvidia as a foundational enabler for all AI computing.

New Platforms: Vera Rubin and Beyond

Nvidia unveiled the Vera Rubin platform, combining Vera CPUs and Rubin GPUs, scheduled for release soon. The Vera Rubin Pod, a rack-scale offering, promises up to 10x the performance of the Blackwell platform, potentially unlocking a $300 billion annual inference opportunity.

Groq Acquisition and Diversified Compute

A key component of the Vera Rubin Pod is the Groq LPX platform, leveraging the Groq 3 LM30 chip. Nvidia acquired Groq’s IP and talent for $20 billion in December 2025. This acquisition signals a move beyond GPU-centric solutions, incorporating diverse architectures like Groq’s language processing units (LPUs) with high SRAM bandwidth.

Nvidia is also expanding into CPU-only solutions with its 88-core Vera CPU, which it positions as competitive with Intel and AMD offerings, and into storage-oriented applications with the Bluefield 4 STX.

AI Factory and Space Computing

Nvidia DSX, based on Vera Rubin, is the latest platform for its AI Factory offering, providing a turnkey solution for hyperscalers and enterprises. The company also announced Space 1, a module designed for space applications, delivering up to 25x the AI performance of the H100 GPU.

Agentic AI and DLSS 5.0

Agentic AI is a growing focus, with the introduction of OpenClaw, an open-source agent running on Nvidia’s NemoClaw stack. Nvidia also announced DLSS 5.0, the latest version of its AI-assisted image-quality technology, which introduces neural rendering techniques and greater developer control.

Strong Demand and Future Growth

Jensen Huang summed up the momentum simply: “Demand for Nvidia GPUs is off the charts.” Driven by the transition to AI inference and agentic AI, the company now projects $1 trillion in revenue by 2027, double the $500 billion it had previously forecast for 2026.