Marvell Announces 112G SerDes, Built on TSMC 5nm


So far we have three products in the market built on TSMC’s N5 process: the Huawei Kirin 9000 5G SoC, found in the Mate 40 Pro, the Apple A14 SoC, found in the iPhone 12 family, and the Apple M1 SoC, which is in the new MBA/MBP and Mac Mini. We can now add another to that list, but it’s not a standard SoC: here we have IP for a SerDes connection, now validated and ready for licensing in TSMC N5. Today Marvell is announcing its DSP-based 112G SerDes solution for licensing.

Modern chip-to-chip networking infrastructure relies on high speed SerDes connections to enable a variety of different protocols at a range of speeds, typically in Ethernet, fiber optics, storage, and connectivity fabrics. Current high-speed connections rely on 56G connections, and so moving up to 112G enables double the speed. Several companies have 112G IP available, however Marvell is the first to enable it in 5nm, ensure it is hardware validated, and offer it for licensing.

These sorts of connections have a number of measurements by which they can be compared to other 112G solutions: the goal is not only to meet the standard, but to offer a solution that uses less power and has a lower potential error rate, especially for high-speed, high-reliability infrastructure applications. Marvell claims that its new solution reduces energy per bit transferred by up to 25% compared to its equivalent TSMC 7nm offerings, while operating under tight power/thermal constraints and tolerating >40dB of channel insertion loss.
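To put the energy-per-bit claim in perspective, a short sketch of how pJ/bit translates into per-lane power at 112 Gb/s. The 5.0 pJ/bit baseline is an assumed placeholder for illustration, not a published Marvell figure; only the 25% reduction and the 112 Gb/s rate come from the announcement.

```python
# Illustrative only: the baseline energy-per-bit figure below is an
# assumption, not a published number. Power = energy/bit * bit rate.

baseline_pj_per_bit = 5.0                   # assumed 7nm figure (hypothetical)
n5_pj_per_bit = baseline_pj_per_bit * 0.75  # Marvell's claimed 25% reduction

bitrate = 112e9                             # 112 Gb/s per lane

power_7nm_mw = baseline_pj_per_bit * 1e-12 * bitrate * 1e3
power_5nm_mw = n5_pj_per_bit * 1e-12 * bitrate * 1e3
print(power_7nm_mw)  # 560 mW per lane at the assumed baseline
print(power_5nm_mw)  # 420 mW per lane after the 25% reduction
```

At data-center scale, with hundreds of lanes per switch, a per-lane saving of this order compounds quickly, which is why pJ/bit is the headline metric for SerDes IP.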

We typically expect data to travel down a connection like this as a series of ones and zeros, i.e. a 1-bit symbol which can be a 0 or a 1, known as NRZ (non-return to zero) - however Marvell’s solution enables 2-bit symbols, i.e. 00, 01, 10, or 11, known as PAM4 (4-level Pulse Amplitude Modulation). This doubles the bandwidth at the same symbol rate, but does require some extra circuitry. PAM4 has been enabled at lower SerDes speeds and at 112G before, but not on TSMC N5. As we move to even faster speeds, PAM4 will become a necessity. Regular readers may recall that NVIDIA’s RTX 3090 uses PAM4 signaling (on N7) to enable over 1000 GB/s of bandwidth with Micron’s GDDR6X – it can also run in NRZ mode for lower power if needed.
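The NRZ-versus-PAM4 distinction above can be sketched in a few lines: NRZ maps each bit to one of two voltage levels, while PAM4 maps each pair of bits to one of four levels, halving the symbol count for the same payload. This is a conceptual illustration, not Marvell's implementation; the Gray-coded level mapping is an assumption (though it is the common choice, since it limits each symbol error to a single bit error).

```python
# Conceptual sketch of NRZ vs PAM4 line coding (not any vendor's actual
# implementation). Levels are normalized to the range [-1, 1].

def nrz_encode(bits):
    """NRZ: 1 bit per symbol, two voltage levels."""
    return [1.0 if b else -1.0 for b in bits]

def pam4_encode(bits):
    """PAM4: 2 bits per symbol, four voltage levels (Gray-coded map assumed)."""
    levels = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [levels[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(len(nrz_encode(bits)))   # 8 symbols for 8 bits
print(len(pam4_encode(bits)))  # 4 symbols for the same 8 bits
```

Half the symbols at the same symbol rate means double the data rate - the trade-off being that four levels are closer together than two, so PAM4 needs more sophisticated equalization and error correction on the receive side, which is where the DSP in "DSP-based SerDes" comes in.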


Image from Micron

Marvell says it is already engaged with its custom ASIC customers across multiple markets with the 112G implementation. Alongside the new 112G SerDes, the company says it is set to enable a complete suite of PHYs, switches, DPUs, custom processors, controllers, and accelerators built on 5nm, and that this initial offering is but the first step.

Related Reading

2023 Interposers: TSMC Hints at 3400mm2 + 12x HBM in one Package


High-performance computing chip designs have been pushing ultra-high-end packaging technologies to their limits in recent years. One solution to the industry’s extreme bandwidth requirements has been the shift towards large designs integrated onto silicon interposers, directly connected to high-bandwidth-memory (HBM) stacks.

TSMC has been evolving their CoWoS-S packaging technology over the years, enabling designers to create bigger and beefier designs with bigger logic dies, and more and more HBM stacks. One limitation for such complex designs has been the reticle limit of lithography tools.

Recently, TSMC has been raising its interposer size limit, going from 1.5x to 2x and on to a projected 3x reticle size with up to 8 HBM stacks for 2021 products.

As part of TSMC’s 2020 Technology Symposium, the company has now teased further evolution of the technology, projecting 4x reticle size interposers in 2023, housing a total of up to 12 HBM stacks.

Although by 2023 we’re sure to have much faster HBM memory, a 12-stack implementation with the currently fastest HBM2E modules - Samsung’s 3200MT/s Flashbolt - would represent at least 4.92TB/s of memory bandwidth, many times more than even the most complex designs today.
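The 4.92TB/s figure follows directly from the HBM interface width: each HBM2E stack has a 1024-bit interface, so per-stack bandwidth is the transfer rate times the bus width, divided by 8 to convert bits to bytes. A quick back-of-the-envelope check:

```python
# Sanity-checking the 4.92 TB/s figure from standard HBM2E parameters.

transfers_per_s = 3200e6   # 3200 MT/s (Samsung Flashbolt HBM2E)
bus_width_bits = 1024      # interface width per HBM stack
stacks = 12

# bits/s per stack -> bytes/s -> GB/s
per_stack_gbs = transfers_per_s * bus_width_bits / 8 / 1e9
total_tbs = per_stack_gbs * stacks / 1000

print(per_stack_gbs)  # 409.6 GB/s per stack
print(total_tbs)      # 4.9152 TB/s across 12 stacks
```

At 409.6GB/s per stack, twelve stacks land at 4.9152TB/s, matching the article's "at least 4.92TB/s" figure.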

Carousel image credit: NEC SX-Aurora TSUBASA with 6 HBM2 Stacks

Related Reading

TSMC’s Version of EMIB is ‘LSI’: Currently in Pre-Qualification


Whilst process node technologies and Moore’s Law are slowing down, manufacturers and chip designers are looking to creative new solutions to further enable device and performance scaling. Advanced packaging technologies are one such area where we’ve seen tremendous innovation over the past few years, such as the introduction of silicon interposers with integrated HBM memory, or the shift towards modularisation through chiplet designs.

Silicon interposers pose cost challenges, as they are expensive and require quite a large silicon footprint, whilst chiplet designs that use conventional packaging on organic substrates are limited in I/O bandwidth and power efficiency. One solution has been the industry’s introduction of intermediary silicon dies that connect two logic chips together - but only over a limited area, rather than the full footprint of a silicon interposer. Intel’s EMIB (Embedded Multi-die Interconnect Bridge) has recently been the most talked-about implementation of such a technology.