Semiconductor Industry Updates: Hyperscalers Go Vertical and Policy Clouds Linger
30 July 2025
Read Time: 10+ min
3 Key Takeaways This Month
- AI Buildout Accelerates Across Sectors: From hyperscalers to automakers, AI infrastructure spend continues to ramp. Meta and Amazon unveiled new AI chip strategies while traditional players like Marvell and Micron are seeing momentum in HBM and connectivity.
- Policy Moves Remain a Wildcard: As CHIPS Act disbursements pick up, labor shortages and Investment Accelerator bottlenecks persist. Meanwhile, export restrictions on advanced lithography tools tighten.
- Ecosystem Consolidation Picks Up: M&A chatter intensifies across the fabless and equipment space, with investors eyeing design efficiency and vertical integration as key to next-phase gains.
Semiconductor Industry Snapshot
June-July Highlights
- Meta’s In-House Chip Strategy Unveiled: Meta formally introduced its in-house inference chip “Artemis,” part of a broader effort to optimize LLM performance and reduce reliance on NVIDIA. The chip will support the company’s Llama 3 and upcoming Llama 4 deployments.
- Amazon Doubles Down on Custom AI Stack: Amazon confirmed the internal rollout of Trainium2 and Inferentia3, both designed to handle model training and inference at AWS scale. Rumors suggest the company is also evaluating Arm-based server CPUs to replace x86 in key workloads.
- Micron Reports HBM3E Ramp: Micron announced volume production of HBM3E for NVIDIA’s Blackwell platform. With demand surging from both hyperscalers and AI startups, Micron’s memory capacity expansion is ahead of schedule.
- ASML Hit by New Restrictions: The Dutch government moved to further limit ASML’s exports of advanced EUV tools to China, following U.S. lobbying. This creates potential revenue uncertainty for ASML but reinforces nearshoring demand in allied regions.
- Marvell Gains on AI Networking: Marvell’s earnings call highlighted record growth in AI-related connectivity, with particular strength in custom interconnects for hyperscaler data centers. The company noted expanding design wins in 800G switching and custom ASICs.
Top Semiconductor Stories
Meta Enters AI Silicon Arena
Meta’s launch of “Artemis,” its in-house AI inference chip, signals a new phase in hyperscaler silicon independence. The chip, developed over two years, will handle inference for Meta’s internal LLMs. Meta joins Google, Amazon, and Microsoft in the push to reduce reliance on NVIDIA, although initial benchmarks suggest Artemis will be deployed for targeted workloads rather than general-purpose computing.
CHIPS Act Progress Shadowed by Labor Crunch
CHIPS Act funding has now surpassed $40 billion in approvals, with significant projects in Arizona, Texas, and New York entering the build-out phase. However, execution delays persist due to labor shortages in engineering. The U.S. Investment Accelerator has faced criticism for its opaque approval processes and unclear benchmarks, which have stalled timelines for key fabs, including Samsung Austin and TSMC Phase 2.
Semiconductor M&A Rumblings
Industry observers have noted an increase in chatter around consolidation in the fabless design and IP space. Companies with a strong AI adjacency (e.g., IP vendors, RF specialists) are reportedly exploring strategic options amid the rising costs of next-gen node development. While no major deals have closed, investor attention has shifted to capital efficiency and platform scale.
Sector Headwinds & Tailwinds
What’s Working:
- Micron: Capitalizing on the HBM3E ramp for NVIDIA and AI data center demand. Positive pricing trends in DRAM and NAND also support margins.
- Marvell: Benefiting from rising demand for high-speed connectivity and custom ASIC design services in AI cloud deployments.
- AMD: Momentum continues after Microsoft’s MI325X adoption, with reports of additional wins at Meta and Oracle.
What’s Challenged:
- NVIDIA: While demand remains high, growing hyperscaler self-reliance and China export restrictions are creating long-term questions about volume visibility.
- ASML: Regulatory hurdles continue to cast a shadow over revenue exposed to China. Meanwhile, nearshoring may delay tool deployments beyond 2026.
- GlobalFoundries: Lags peers in high-end AI logic manufacturing and faces elevated fixed costs as U.S. fab buildouts proceed without major AI design wins.
Looking Ahead
Expect continued investment in custom AI silicon strategies as LLM workloads diversify. Meta, Amazon, and Microsoft are all expected to detail second-gen roadmaps by fall. Meanwhile, the CHIPS Act’s effectiveness will hinge on resolving labor bottlenecks and clarifying funding approvals. Keep an eye on potential M&A in the fabless design space as firms seek scale and IP depth to remain competitive at 3nm and below.
Catch Up on Last Month’s Updates Below
Tightening Tariffs and CHIPS Act Changes
3 Key Takeaways in May
- Tariff Pause Signals Temporary Relief: The Trump administration paused new Section 301 tariffs on Chinese semiconductors amid negotiations. This provides short-term breathing room for supply chains but leaves long-term policy friction unresolved.
- Google I/O Spotlights Custom AI Chips: Google’s unveiling of its TPU v6 and Gemini 2.5 platform underscores the hyperscaler shift toward in-house AI silicon, boosting demand for advanced memory, packaging, and compute innovations.
- Hyperscalers Intensify AI Silicon Race: From Microsoft’s AMD-powered builds to Amazon’s acceleration of Trainium2, hyperscalers are shaping next-gen silicon strategy, transforming the competitive landscape for semiconductor players.
Semiconductor Industry Snapshot
April Recap Highlights
- The U.S. paused the rollout of expanded Section 301 semiconductor tariffs as part of renewed negotiations with China. This signals a potential opening in trade tensions but continues to fuel uncertainty around long-term supply chain stability.
- Google I/O 2025 introduced TPU v6 chips, co-developed with Broadcom, delivering major improvements in training and inference for large models like Gemini 2.5. The emphasis on in-house silicon aligns with ongoing hyperscaler trends toward vertical integration.
- Microsoft announced a major AI infrastructure buildout using both AMD’s new MI325X GPUs and NVIDIA’s Blackwell Ultra chips, confirming robust demand for advanced compute silicon.
- Amazon is reportedly fast-tracking Trainium2 and Inferentia3 development as part of a broader strategy to reduce dependence on external GPU vendors.
- On the policy front, labor and investment concerns persist. While CHIPS Act projects continue to advance, the U.S. Investment Accelerator's oversight remains a wildcard for timelines and execution confidence.
Top Semiconductor Stories
Tariff Pause Reflects Strategic De-escalation
In a notable shift, the U.S. Trade Representative announced a temporary halt to new Section 301 tariffs on Chinese semiconductor imports. The move is intended to allow more constructive negotiations with Beijing and avoid immediate supply chain disruption. Semiconductor players with heavy China exposure, such as Qorvo and ON Semiconductor, are seeing short-term relief. However, the long-term policy direction remains uncertain, especially amid ongoing congressional scrutiny.
Hyperscalers Expand Custom Silicon Ambitions
At Google I/O, the release of TPU v6, built for Gemini 2.5 and future LLMs, highlights how hyperscalers are increasingly designing their own silicon. Co-development with Broadcom signals rising interest in semi-custom models. Microsoft’s AI stack, now incorporating AMD’s MI325X alongside NVIDIA’s Blackwell GPUs, shows a multi-vendor approach to compute. Meanwhile, Amazon is accelerating development of Trainium2 and Inferentia3. These moves reinforce demand for HBM, advanced packaging, and fabless logic expertise.
CHIPS Act Oversight Looms Over Execution
While more than $30 billion in CHIPS Act funding has been earmarked, the U.S. Investment Accelerator’s new review authority is already creating friction. Industry insiders warn that delays may affect critical fabs, particularly TSMC Arizona and Samsung Texas. Though the review authority is intended to improve project efficiency, the lack of clarity around approval processes is eroding private-sector confidence and timeline visibility.
Sector Headwinds & Tailwinds
What’s Working:
- Micron: Continuing to benefit from HBM demand driven by AI growth, with additional momentum from IoT and edge memory segments.
- Broadcom: Strengthening position as a custom silicon partner to hyperscalers.
- AMD: Riding strong sentiment following Microsoft’s endorsement of the MI325X for AI infrastructure builds.
What’s Challenged:
- NVIDIA: Hyperscalers developing their own silicon in partnership with companies like Broadcom creates uncertainty around NVIDIA’s near-term revenue visibility.
- Qorvo & ON Semiconductor: Despite the tariff pause, long-term China exposure remains a risk.
- Labor Shortages: With many CHIPS Act projects now in the execution phase, the talent bottleneck, especially for roles requiring four-year technical degrees, remains a concern.
Looking Ahead
The next phase of the semiconductor cycle will be shaped by how three forces converge: policy, hyperscaler strategy, and supply chain resilience. Stay tuned for updates on U.S.-China tariff negotiations, formal AI chip roadmaps from Amazon and Meta, and clarification from the U.S. Investment Accelerator on project approvals.