
What kind of chip will OpenAI make?

2025-09-26


OpenAI is reportedly developing a custom AI accelerator with the help of Broadcom, apparently in an effort to reduce its reliance on Nvidia and lower the cost of its GPT-series models.

According to a report by the Financial Times, citing sources familiar with the matter, the mystery $10 billion customer Broadcom CEO Hock Tan teased on Thursday's earnings call is none other than Sam Altman's AI hype factory, OpenAI.

While Broadcom doesn't disclose its clientele, it's an open secret that the company's intellectual property forms the basis of much of its custom cloud silicon.

Tan told analysts on Thursday's call that Broadcom is currently serving three XPU customers, with a fourth on the horizon.

He said, "Last quarter, one of these potential customers placed a production order with Broadcom, so we listed them as a qualified XPU customer, and in fact, they have secured over $10 billion in orders for AI racks based on our XPUs. Given this, we now expect our fiscal 2026 AI revenue outlook to improve significantly compared to last quarter's guidance."

OpenAI has been rumored for some time to be internally developing a chip to replace Nvidia and AMD GPUs. Sources told the Financial Times that the chip, expected to debut sometime next year, will primarily be for internal use and will not be available to external customers.

Whether this means the chip will be used for training rather than inference, or simply that it will power OpenAI's inference and API servers rather than accelerator-based virtual machines (as Google and AWS do with their TPUs and Trainium accelerators), remains an open question.

What it will look like

While we don't know how OpenAI plans to use its first-generation silicon, Broadcom's involvement provides some clues as to what it might ultimately look like.

Broadcom produces a range of foundational technologies needed to build large-scale AI computing systems, from the serializers/deserializers (SerDes) used to move data from one chip to another, to the network switches and co-packaged optical interconnects needed to scale from a single chip to thousands, to the 3D packaging technology required to build multi-die accelerators. If you're interested, we've delved deeper into each of these technologies here.

OpenAI will likely combine all of these technologies with Broadcom's 3.5D eXtreme Dimension system-in-package technology (3.5D XDSiP), which is a likely candidate for the accelerator itself.

The architecture, in many ways reminiscent of AMD's MI300-series accelerators more than anything we've seen from Nvidia so far, involves stacking advanced compute dies on a base die that contains the chip's underlying logic and memory controllers. Inter-package communication, meanwhile, is handled by discrete I/O dies. This modular approach means customers can incorporate as much or as little of their own intellectual property into the design as they prefer, leaving Broadcom to fill in the gaps.

Broadcom's largest 3.5D XDSiP design will support a pair of 3D compute stacks, two I/O dies, and up to 12 HBM stacks on a single 6,000 mm² package. Initial product shipments are expected to begin next year, coinciding with OpenAI's first chip.

In addition to Broadcom's XDSiP technology, we wouldn't be surprised if OpenAI leverages Broadcom's Tomahawk 6 series switches and co-packaged optics for scale-up and scale-out networking. We've explored this topic in depth here. However, because Broadcom favors Ethernet as the protocol for both networking paradigms, OpenAI doesn't have to use Broadcom for everything.

Missing MACs

While Broadcom's 3.5D XDSiP appears to be a likely candidate for OpenAI's first in-house chip, it's not a complete solution on its own. The AI startup still needs to provide, or at least license, a compute architecture equipped with high-performance matrix multiply-accumulate (MAC) units (sometimes called MMEs or Tensor cores).

The compute units will need some additional control logic and, ideally, some vector units, but most important for AI is a sufficiently powerful matrix unit with access to ample high-bandwidth memory.
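To make concrete what a matrix MAC unit actually does: it computes D = A × B + C over small tiles, accumulating partial products as operands stream in from high-bandwidth memory. Here's a minimal NumPy sketch of that tiled multiply-accumulate pattern; it's purely illustrative of the operation, not a representation of OpenAI's or Broadcom's actual design.

```python
import numpy as np

def matrix_mac(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> np.ndarray:
    """One multiply-accumulate step: returns a @ b + c."""
    return a @ b + c

# Toy example: a 4x8 by 8x4 matmul computed as a series of MAC operations
# over 2-wide tiles of the shared K dimension, the way an accelerator
# streams operand tiles from memory into its matrix unit.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
b = rng.standard_normal((8, 4))

acc = np.zeros((4, 4))
for k in range(0, 8, 2):
    # Each step fuses a partial matmul with the running accumulator.
    acc = matrix_mac(a[:, k:k+2], b[k:k+2, :], acc)

# Accumulating over tiles reproduces the full matrix product.
assert np.allclose(acc, a @ b)
```

Real accelerators perform this in fixed-size hardware arrays (often in low-precision formats like FP8 or BF16), but the dataflow is the same: stream tiles in, accumulate, write results back out.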

Since Broadcom will be responsible for providing nearly everything else, OpenAI's chip team can focus entirely on optimizing the compute architecture for its internal workloads, making the entire process far less daunting.

This is why cloud providers and hyperscalers tend to license many of their accelerator designs from merchant chip vendors. There's no point in reinventing the wheel when you can reinvest those resources in your core competencies.

What if it wasn't OpenAI?

With Altman planning to invest hundreds of billions of dollars (much of it other people's money) in AI infrastructure under his Stargate initiative, the idea that Broadcom's new $10 billion customer would be OpenAI is unsurprising.

However, the startup isn't the only company rumored to be working with Broadcom on custom AI accelerators. You may recall that late last year, The Information reported that Apple would be Broadcom's next major XPU customer, with its chip, code-named "Baltra," set to launch in 2026.

Since then, Apple has pledged to invest $500 billion and hire 20,000 employees to bolster its domestic manufacturing capabilities. These investments include a manufacturing facility in Texas that will produce AI-accelerated servers based on Apple's own chips.

Source: compiled from The Register

Reference Link https://www.theregister.com/2025/09/05/openai_broadcom_ai_chips/


