How China’s DeepSeek Triggered Nvidia’s Biggest-Ever Stock Crash, and Why Jensen Huang Is Impressed

Dwijesh t

In January 2025, a relatively unknown Chinese AI startup shocked global markets and wiped nearly $590 billion off Nvidia’s valuation in a single trading session. The company behind the chaos was DeepSeek, a Hangzhou-based artificial intelligence firm whose breakthrough forced investors to rethink the economics of AI development and Nvidia’s dominance.

On January 27, 2025, Nvidia’s stock plunged 17%, marking the largest single-day market value loss in U.S. corporate history. The trigger wasn’t a new chip, but a new way of building AI.

Why DeepSeek Caused Nvidia’s Historic Sell-Off

DeepSeek released its R1 reasoning model, claiming performance comparable to OpenAI’s o1 at a fraction of the cost. According to the company, the DeepSeek-V3 base model underlying R1 was trained using just 2,048 Nvidia H800 GPUs, older chips designed to comply with U.S. export restrictions on sales to China.

Even more alarming for investors was the cost: under $6 million. At the time, U.S. tech giants were spending hundreds of millions of dollars and deploying tens of thousands of Nvidia’s most powerful H100 chips to train similar models.
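To see how a sub-$6 million figure can arise from a 2,048-GPU cluster, here is a rough back-of-envelope sketch. The GPU count comes from DeepSeek’s claim; the training duration and the per-GPU-hour rental rate are illustrative assumptions, not the company’s disclosed accounting.

```python
# Back-of-envelope estimate of training cost from cluster size.
# Only NUM_GPUS comes from DeepSeek's claim; the other numbers are
# illustrative assumptions chosen for this sketch.

NUM_GPUS = 2048                   # H800 GPUs, per DeepSeek's claim
TRAINING_DAYS = 57                # assumed wall-clock training time
RENTAL_RATE_PER_GPU_HOUR = 2.00   # assumed rental price in dollars per GPU-hour

gpu_hours = NUM_GPUS * TRAINING_DAYS * 24
estimated_cost = gpu_hours * RENTAL_RATE_PER_GPU_HOUR

print(f"GPU-hours: {gpu_hours:,}")                # ~2.8 million GPU-hours
print(f"Estimated cost: ${estimated_cost:,.0f}")  # ~$5.6 million
```

Under these assumptions the bill lands around $5.6 million, which is why the headline number struck investors as plausible rather than promotional.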

Wall Street’s fear was simple but brutal: if cutting-edge AI could be built cheaply on older hardware, demand for Nvidia’s premium GPUs could collapse.

Jensen Huang’s Surprising Response

Rather than pushing back, Nvidia CEO Jensen Huang took an unexpected stance. Speaking at CES 2026 on January 5, Huang praised DeepSeek’s work as “really, really exciting.”

He framed the breakthrough as a triumph of smart engineering over brute-force computing, noting that DeepSeek proved AI innovation doesn’t require unlimited hardware, just better architecture. Huang also credited the company with accelerating the global shift toward open-source AI development.

The Inference Advantage

Crucially, Huang argues that training efficiency doesn’t weaken Nvidia’s long-term outlook. Instead, he believes it strengthens it.

According to Huang, advanced reasoning models like DeepSeek R1 require up to 100 times more computing power during inference, the stage where AI models are actually used. Greater efficiency lowers barriers to adoption, encouraging more companies and consumers to deploy AI at scale, which ultimately increases demand for Nvidia chips.
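A simple sketch makes the logic of that argument concrete. Every number below is an assumption chosen for illustration, except the roughly 100× inference factor cited above; it is not Nvidia data.

```python
# Illustrative model of Huang's argument: cheaper training plus
# inference-heavy reasoning workloads can raise total GPU demand.

def total_gpu_demand(models_deployed: int,
                     train_cost_per_model: float,
                     inference_cost_per_model: float) -> float:
    """Total compute demand = training for each model + ongoing inference for each."""
    return models_deployed * (train_cost_per_model + inference_cost_per_model)

# Before: a handful of expensive-to-train models with modest inference loads (assumed).
before = total_gpu_demand(models_deployed=5,
                          train_cost_per_model=100.0,
                          inference_cost_per_model=10.0)

# After: training is assumed ~10x cheaper, so far more teams deploy models,
# and reasoning-style inference consumes ~100x more compute per model (per the article).
after = total_gpu_demand(models_deployed=50,
                         train_cost_per_model=10.0,
                         inference_cost_per_model=10.0 * 100)

print(before, after)  # 550.0 vs 50500.0: total demand rises despite cheaper training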

Where Things Stand in 2026

By January 2026, Nvidia’s stock had largely recovered as investors embraced Huang’s long-term vision for AI infrastructure growth. Meanwhile, DeepSeek continues to push boundaries, recently publishing research on Manifold-Constrained Hyper-Connections (mHC) to further reduce AI training costs.

Nvidia, for its part, is already moving ahead with its Vera Rubin platform, which promises up to 5× performance gains and is designed to keep the company ahead even in an era of extreme efficiency.

What began as a market panic may ultimately prove to be a catalyst reshaping AI economics while reinforcing Nvidia’s central role in the industry’s future.
