DeepSeek’s Disruption: The Impact on Nvidia and the Semiconductor Industry
28 January 2025
Please note that VanEck may have positions in the firms referenced herein.
DeepSeek’s recent claim of high-end results at a fraction of typical costs has rattled investors and raised questions about Nvidia’s dominance. However, it’s important to see the forest for the trees: hyperscalers and enterprises could now shift focus to leveraging existing AI infrastructure for inference, where Nvidia still holds significant advantages with its software ecosystem and next-gen products. Meanwhile, the broader semiconductor market is ripe for diversification, with specialized ASICs and other chipmakers poised to capture demand. As a result, a balanced investment approach across the semiconductor landscape remains the most prudent strategy.
- The Breakthrough Claim: DeepSeek’s R1 Model
- The Hyperscalers’ Massive CapEx on AI Training
- Training vs. Inference in the Semiconductor Industry
- DeepSeek’s Disruption: Concerns for Nvidia and Other Chipmakers
- Two Ways to Interpret the DeepSeek Story (and the Case for Diversification)
- Nvidia’s Continued Strengths and the Move to Inference
- Why We Favor a Diversified Approach
The Breakthrough Claim: DeepSeek’s R1 Model
- DeepSeek’s R1 supposedly delivers near state-of-the-art reasoning performance at a fraction of the cost.
- DeepSeek says it used less advanced chips than OpenAI and other leading AI labs, yet still achieved impressive results at a significantly lower training cost.
- This prompts a key industry question: If new models can be trained so cheaply, do hyperscalers still need more massive GPU investments to achieve top performance?
The Hyperscalers’ Massive CapEx on AI Training
- Over the past two years, major cloud providers (AWS, Azure, Google Cloud, etc.) have poured billions into data centers outfitted with Nvidia GPUs to handle AI training at scale.
- The open question has been how these hyperscalers plan to monetize that infrastructure.
- AI researchers’ commentary now suggests that performance gains from ever-bigger models may be reaching diminishing returns, nudging the market from investing in bigger clusters toward monetizing existing models.
- The balance of power in the GPU market has started to shift from Nvidia’s near-total control toward the hyperscalers’ growing purchasing leverage.
Note: Each hyperscaler has different paths to monetization—through cloud subscriptions, consumer apps, enterprise services, etc.—which reduces the likelihood that open-source alone will displace them.
Training vs. Inference in the Semiconductor Industry
- The industry is moving from a training-dominated focus to the inference phase, where real-world applications and monetization happen.
- Tesla offers a perfect illustration:
- They train their Full Self-Driving (FSD) models in big data centers.
- Inference happens in each car (edge computing)—with the car running the model locally without constant round trips to the cloud.
- This edge-based inference is monetized through subscription fees (e.g., $99/month for FSD).
DeepSeek’s Disruption: Concerns for Nvidia and Other Chipmakers
- DeepSeek’s announcement—training a high-performance model on cheaper hardware—caused market jitters about future GPU demand.
- R1 appears to do more with less compute, challenging previous assumptions around scaling laws.
- The concern: if hyperscalers can achieve state-of-the-art results with less costly or alternative hardware, Nvidia’s growth in data center GPUs specifically could slow. This sentiment contributed to recent stock price drops across AI chipmakers.
Two Ways to Interpret the DeepSeek Story (and the Case for Diversification)
Despite the negative headlines, two main perspectives point to the wisdom of diversifying semiconductor investments—for example, through an ETF like the VanEck Semiconductor ETF (SMH).
If DeepSeek’s Claims Are Fully True…
- DeepSeek’s R1 shows that you can train top-tier models with cheaper hardware.
- This doesn’t eliminate Nvidia, but it accelerates a timeline where Nvidia’s near-total dominance in AI training normalizes.
- As training hardware demand broadens, other chip designers (e.g., ASIC makers, Broadcom, Intel, AMD, specialized startups) can gain footholds—especially in inference, where purpose-built chips are cost- and power-efficient.
- Takeaway: Even if GPU training demand levels out, there is still significant opportunity across the diverse semiconductor ecosystem (e.g., fabless designers).
If DeepSeek’s Claims Are Overstated…
- Perhaps DeepSeek’s achievement isn’t as groundbreaking as it seems, or there are undisclosed constraints.
- Still, the long-term AI cycle naturally shifts to monetizing models via inference.
- That inference stage often favors specialized ASICs, smaller GPU instances, and other accelerators (including CPU enhancements), broadening competition with Nvidia’s high-end GPUs.
- Takeaway: Nvidia may continue leading in training, but, as AI matures, more players will compete for different parts of the AI hardware stack. A diversified semis strategy remains prudent.
Nvidia’s Continued Strengths and the Move to Inference
- Nvidia remains a highly innovative leader in AI hardware and software.
- They have announced new products designed for inference workloads—like next-gen GPU architectures (e.g., Blackwell) and specialized platforms that bridge training and inference.
- As the market transitions, Nvidia’s data center GPU business will likely see more normalized growth—but their comprehensive ecosystem (hardware, CUDA software, enterprise partnerships) still positions them as a key player.
- In parallel, other chipmakers are ramping up, and a broader selection of ASICs and CPUs is emerging, enabling a variety of cost-effective inference solutions.
Why We Favor a Diversified Approach
- Training vs. Inference: Industry focus is shifting from massive training (where Nvidia dominated) to inference (where more players will have competitive offerings).
- Monetization: AI is moving into real-world deployments and subscriptions (e.g., Tesla’s FSD), highlighting the importance of efficient inference hardware and networks.
- Nvidia’s Role: Nvidia is still well-positioned with leading GPU and software solutions, plus new products targeting inference. However, the days of unbounded data center GPU demand may be giving way to a more balanced, multi-vendor market.
- Investment Strategy: In this environment, diversifying across the semiconductor sector (e.g., via ETFs like SMH and SMHX) can hedge against potential shifts in market leadership—from Nvidia’s GPUs to specialized ASICs, fabless designers, and other hardware innovators.
Ultimately, the AI hardware landscape remains dynamic. While Nvidia is poised to remain a major force, DeepSeek’s story—and the broader shift to inference—underscores the value of broad exposure to the entire semiconductor value chain.