When Intel and Microsoft announced their plan to build a ‘custom processor’ on Intel’s 18A fabrication process in early 2024, neither company even hinted at the purpose of that silicon, leaving plenty of room for guesswork and interpretation by industry observers. Today, SemiAccurate broke the silence about Intel Foundry’s 18A customers, reporting that Intel Foundry (IF) is on track to produce an AI processor on 18A or 18A-P for Microsoft.
So far, Intel Foundry has officially landed only one major external customer for its 18A manufacturing technology: Microsoft. But while we tend to think of Microsoft as a cloud and software giant, the company has quite a potent hardware development (or at least hardware definition) team that builds custom silicon for a variety of data center applications, including Cobalt CPUs, DPUs, and Maia AI accelerators, to name a few. As it turns out, one of Microsoft’s next-generation AI processors will reportedly be made by Intel Foundry.
If true, the deal would give Microsoft access to a US-based chip supply chain that isn’t as exposed to the capacity constraints that we see with both chip manufacturing and advanced packaging at TSMC. Additionally, the deal could be seen as favorable to Microsoft in other ways, given the US government’s investment in Intel.
In the absence of details, we can only speculate which of the next-generation Maia processors will be produced by IF, but this would be a big development for Intel. Since we are dealing with data center-grade silicon, we are talking about processors with a fairly large die size. Hence, if they are on track to be produced at Intel Foundry, the company’s 18A (or 18A-P, with 8% higher performance) fabrication process is projected to be good enough not only for Intel itself (which is on track to ramp its Xeon 6+ ‘Clearwater Forest’ in 2026), but also for its foundry customers. The contract could be a sign of good yields on Intel’s node: yields have an outsized impact on a large processor like Maia, so if Intel’s node had yield issues, Microsoft would likely have opted for a product based on a smaller die instead.
Microsoft’s original Maia 100 processor is a massive 820 mm^2 piece of silicon that packs 105 billion transistors, making it larger than Nvidia’s H100 (814 mm^2) or B200/B300 compute chiplets (750 mm^2). While the lion’s share of Microsoft’s Azure offerings for AI run on Nvidia’s AI accelerators, the company is investing heavily to co-optimize its hardware and software to achieve higher performance while increasing efficiency and thus lowering the total cost of ownership. As such, Maia is an important project for Microsoft.
Assuming that the Microsoft AI processors to be made by Intel Foundry continue to use near-reticle-sized compute dies, Intel’s 18A manufacturing process is on track to achieve a defect density low enough to ramp such chips with decent yields. Of course, Microsoft could partition its next-gen AI processor into several smaller compute chiplets linked through Intel’s EMIB or Foveros technologies, but that could impact performance and efficiency, so we are most likely talking about a big die, or dies, close to the reticle limit of EUV tools, which is 858 mm^2.
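To see why die size makes yield so sensitive to defect density, a simple Poisson defect model (Y = exp(−D0 · A)) is a common first-order approximation. The defect densities below are purely illustrative assumptions, not measured values for Intel's 18A node:

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson model: Y = exp(-D0 * A)."""
    area_cm2 = die_area_mm2 / 100.0  # convert mm^2 to cm^2
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Compare a near-reticle die (Maia 100 is 820 mm^2) against a hypothetical
# 400 mm^2 chiplet at several assumed defect densities (defects per cm^2).
for d0 in (0.1, 0.3, 0.5):
    big = poisson_yield(820, d0)
    small = poisson_yield(400, d0)
    print(f"D0={d0}/cm^2: 820 mm^2 die -> {big:.0%}, 400 mm^2 die -> {small:.0%}")
```

At an assumed D0 of 0.3 defects/cm^2, the 820 mm^2 die yields only a single-digit percentage of good candidates while the 400 mm^2 die fares several times better, which is why a customer would walk away from a node that could not demonstrate low defect density on large dies.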
To de-risk such a large component, Intel and Microsoft are almost certainly running DTCO loops, in which Intel tunes transistor and metal stack parameters for Maia’s workloads and performance targets. In addition, Microsoft could embed spare compute arrays or redundant MAC blocks into the next-gen Maia layout to enable post-manufacturing fusing or repair, which is what companies like Nvidia do with their designs.
Meanwhile, the big question is what exactly Intel Foundry will produce for Microsoft, and when. Based on the latest rumors, Microsoft is currently working on its next-generation processor, codenamed Braga (Maia 200?), which will use TSMC’s 3nm node and HBM4 memory and is supposedly due in 2026, as well as Clea (Maia 300?), due later.