PCIe Eyes Road Ahead with AI, Automotive


Artificial intelligence (AI) is getting more attention than ever thanks to the rapid emergence of ChatGPT, so it should be no surprise that well-established, incumbent technologies, such as the Peripheral Component Interconnect Express (PCIe), are poised to play a critical role.

PCIe has become somewhat foundational in enterprise computing, with the well-established Non-Volatile Memory Express (NVMe) and rapidly maturing Compute Express Link (CXL) both leveraging the now ubiquitous interconnect—and with the latter enabling PCIe to become better at delivering needed bandwidth.

The ubiquity of the interconnect positions it well for new opportunities—PCIe is well-understood and proven, so it’s no surprise that it’s seen as a key enabler for AI workloads in data centers. But just as enterprise computing technologies like SSDs and Ethernet have been gaining traction in the modern vehicle to support infotainment, advanced driver assistance systems (ADAS) and autonomy, the automotive market is also on the PCIe roadmap.

Overall, the PCIe architecture has a lot of growth opportunity in several verticals where applications and systems are increasingly demanding improved performance, power efficiency, flexibility and embedded security, according to a recently published report from ABI Research, “PCI Express market vertical opportunity.”

The research firm is forecasting that the total addressable market for PCIe technology will reach $10 billion by 2027 thanks to high-growth opportunities in automotive and network edge verticals. The ABI report expects the automotive industry will benefit greatly from widespread PCIe technology adoption. That adoption not only enables the consolidation of electrical/electronic (E/E) domains, but also helps mission-critical applications in autonomous vehicles meet safety and efficiency requirements.

Not surprisingly, the data center will contribute to sustained long-term demand for new PCIe-tech deployment to enable high-performance applications, which coincides with high rates of AI adoption, while power efficiency and security are also key drivers.

As heterogeneous hardware becomes ubiquitous, PCIe will be used to meet complex Open Radio Access Network (Open RAN, or ORAN) workloads, the report said, and it’s also expected to perform well in the mobile devices vertical as a discrete component interconnect necessary for keeping up with the quick pace of market innovation.

With so many opportunities for PCIe technology to address different workloads, industries and use cases, the PCI Special Interest Group (PCI-SIG) is going full tilt to bring the next iteration of the specification to market. Version 0.3 of PCIe 7.0 was recently released to SIG members, with the full specification release targeted for 2025.

Data rate set to keep doubling

Al Yanes (Source: PCI-SIG)

The PCI-SIG intends to have the next version support emerging applications like 800G Ethernet, AI/ML, cloud and quantum computing, as well as data-intensive markets like hyperscale data centers, high-performance computing (HPC), edge computing and military/aerospace applications.

Anticipated features of PCIe 7.0 include a 128 GT/s data rate and up to 512 GB/s bi-directionally in a x16 configuration, continued delivery of the low-latency and high-reliability targets, and improved power efficiency, all while maintaining backwards compatibility with all previous generations of PCIe technology.
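As a rough sanity check, the headline figures follow directly from the per-lane data rate and the lane count. The sketch below ignores FLIT-mode encoding and protocol overhead, so it gives the raw ceiling only; the helper function is illustrative, not from any spec:

```python
def pcie_bandwidth_gbs(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate unidirectional raw PCIe bandwidth in GB/s.

    Treats each transfer as one payload bit (encoding and protocol
    overhead ignored), so GB/s = GT/s * lanes / 8.
    """
    return gt_per_s * lanes / 8.0

# PCIe 7.0 at 128 GT/s on a x16 link:
unidirectional = pcie_bandwidth_gbs(128, lanes=16)  # 256.0 GB/s
bidirectional = 2 * unidirectional                  # 512.0 GB/s, the figure cited above
```

The same arithmetic reproduces the PCIe 6.0 number quoted later in the article: 64 GT/s across 16 lanes yields 128 GB/s each way, or 256 GB/s bidirectionally.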

In an exclusive interview with EE Times, PCI-SIG President Al Yanes said PCIe 5.0 remains the focus of most compliance testing with 6.0 starting to get added to the mix (the latter was published in January 2022, which means the specification is getting a full update every three years).

“We have a good cadence,” Yanes said. “Three seems to be the magic number, as far as us being able to execute a new technology node.”

He added that the cadence enables vendors to get a good return on investment for their development.

PCIe 7.0 is looking to be more of a “rinse and repeat” because PCIe 6.0 was more of a revolutionary change, in large part because of the move to Pulse Amplitude Modulation 4-level (PAM4) signaling, he said.

With PCIe 7.0, the SIG isn’t reinventing the wheel, and it has a roadmap that provides the clarity necessary for NVMe and CXL to move forward, Yanes said. “We have consistent delivery of technology.”

The PCI-SIG has outlined a roadmap for the interconnect that provides clarity for PCIe developers and other specifications like CXL and NVMe. (Source: PCI-SIG)

While it’s a long way off, PCIe 8.0 could be potentially more revolutionary given the advances in connectors and cabling.

In the meantime, the PCI-SIG goal is to explore new opportunities, including the cabling standards in the automotive segment.

A big part of meeting the needs of any workload is the “speeds and feeds” of the specification, Yanes said. “We have so much flexibility and we have so much room for growth.”

PCIe 6.0 offers double the bandwidth of its predecessor to deliver a raw data rate of 64 GT/s and up to 256 GB/s via x16 configuration.

Data movement is the impetus behind CXL, which is a key enabler of AI.

“Any data movement technology is going to want to go with PCI Express because of these huge bandwidth opportunities and flexibility,” Yanes said, adding that PCIe enables flexibility because if there’s a high demand for bandwidth without significant I/O, you can move to higher frequencies and fewer pins.
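Yanes’ point about trading pins for frequency can be illustrated with the same back-of-the-envelope arithmetic (a sketch that ignores encoding overhead; the helper is hypothetical):

```python
def link_bandwidth_gbs(gt_per_s: float, lanes: int) -> float:
    """Approximate unidirectional raw bandwidth: GB/s = GT/s * lanes / 8."""
    return gt_per_s * lanes / 8.0

# Doubling the per-lane rate lets a design halve its lane (pin) count
# while keeping the same aggregate bandwidth:
narrow_fast = link_bandwidth_gbs(128, lanes=8)   # 128.0 GB/s on 8 lanes
wide_slow = link_bandwidth_gbs(64, lanes=16)     # 128.0 GB/s on 16 lanes
```

A x8 link at the PCIe 7.0 rate matches a x16 link at the PCIe 6.0 rate, which is the kind of variation Yanes alludes to next.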

There are many variations, he added.

While CXL adds features, the PCI-SIG is focused on the physical speeds and efficiency, Yanes said. “We deliver the bandwidth, we deliver the efficiency of the protocol, and we deliver the efficiency of power, on top of this bandwidth.”

PCIe bandwidth can be expected to double every three years with each new iteration. (Source: PCI-SIG)

CXL enables PCIe to deliver bandwidth

All data eventually goes to memory, and that’s where CXL plays on top of PCIe: It’s all about getting the data to the right memory or storage device more efficiently.

CXL 1.0 was based on PCIe 5.0, but CXL is evolving on its own based on usage models, CXL co-inventor and CXL Consortium founding member Debendra Das Sharma told EE Times in an exclusive interview.

The first generation covered accelerators and memory expansion, while the second iteration added more switching, he said. With the latest generation, there’s now a fabric-like topology and a lot more usage models.

Debendra Das Sharma (Source: CXL Consortium)

“We went from a small-scale pooling to large-scale pooling,” Das Sharma said.

Overall, CXL 3.0 focused a lot on the protocol side, while also taking advantage of the increased speeds of PCIe 6.0, he added. “That really gave us a lot of opportunities to build these larger scale systems in CXL 3.”

Das Sharma sees the cadence of CXL aligning well with that of PCIe. In the same interview, CXL Consortium technical task force co-chair Mahesh Wagh said it was important to scale functionality in CXL and start with the basics and prioritize features based on usage models.

He added that despite being relatively new, CXL has come a long way in a short time, guided by more than two decades of standards with backwards compatibility in mind.

“Recoupment of investment is something we think about very seriously and make sure that our roadmap covers,” Wagh said.

Das Sharma doesn’t see the cadence of either PCIe or CXL slowing, although experience tells him it’s hard to predict the future, including the speeds and feeds, despite the achievements over the past decades.

Mahesh Wagh (Source: CXL Consortium)

“People have been predicting the death of this backwards compatibility evolution for many generations now,” he said. “And yet we find out a way to not just extend it but extend it in a very healthy manner.”

Wagh said it’s important to have a line of sight to the next speed bump, and it helps that there’s a great deal of overlap between those working on CXL and PCIe. “That synergy is working really well between CXL and PCIe.”

There’s also collaboration with those working on the recently published Universal Chiplet Interconnect Express (UCIe), the die-to-die interconnect standard, Das Sharma added. “We do look for synergies across the board and that helps the whole industry.” He said it makes sense that the CXL Consortium is separate because it’s trying to solve a specific problem.

CXL has transformed PCIe from a memory bandwidth consumer to a producer of bandwidth as well because it makes memory bandwidth available to the system, according to Das Sharma. “It opens up a lot of exciting things in the PCIe world itself.”

That world includes AI workloads that are memory intensive and can benefit from the advances in the latest iteration.

CXL has been incrementally updated over three iterations, with the addition of pooling and switching in CXL 2.0. (Source: CXL Consortium)

PCIe moves AI workloads

In an exclusive interview with EE Times, Lou Termullo, product manager for Rambus’ high-speed interface controllers, including PCIe and CXL, said there’s a huge amount of data that needs to be transferred and a ton of computation that needs to happen, and AI is driving the thirst for bandwidth.

PCIe is the de facto high-speed data interconnect standard for servers: many systems-on-chip (SoCs) connect via PCIe, as do the accelerators and smart network interface cards (NICs) used for AI and machine learning.

These NICs are more than just network cards, Termullo said, because they have a data processing unit (DPU) and some even have switches. This allows computing, including AI workloads, to be offloaded, he said, freeing the CPU to focus on its own computations.

“The thirst for bandwidth is there, and the technical challenge is getting harder and harder,” Termullo said. “But the standards and the ecosystem are really stepping up to the plate.”

Frank Ferro (Source: Rambus)

In the same interview, Frank Ferro, senior director of product management, said Rambus is seeing quite a bit of pull for PCIe on all application-specific IC (ASICs). “It’s pretty much every chip I’m working on right now with customers.”

It’s common for a chip to pair PCIe with either HBM2E or HBM3 (high-bandwidth memory), or with Graphics Double Data Rate (GDDR) memory, he said. “Due to the long design cycles, PCIe 6.0 is coming on strong.”

The obvious reason for high-performance systems, including AI, to jump onto the PCIe 6.0 bandwagon early is the available bandwidth, but connectivity matters too, Ferro said. “Whether it’s an accelerator card or a NIC, the amount of data that we’re pumping through is growing.”

The industry is at a point where there’s plenty of CPU bandwidth, but not enough memory bandwidth to keep up with the CPU, and that’s where PCIe helps to enable memory bandwidth or throughput into ASICs, he said. “Every customer I have wants more performance.”

Termullo added that many companies are still designing with earlier versions of PCIe; not all are shifting to the next generation. It’s the bleeding edge, including accelerators, high-performance smart NICs and high-end enterprise SSDs that are going to transition to the latest generation of PCIe and potentially CXL, he said.

Another enterprise standard aims for automotive

Aside from AI and other high-performance, data-center applications, automotive is a high priority for PCIe—and it’s not that far to travel, given that many of the technologies PCIe works with, including Ethernet and NVMe, already find themselves in the modern vehicle. As data requirements grow, memory and storage content are growing in the form of SSDs and DRAM.

The automotive industry prefers proven and reliable technologies to meet functional safety requirements, so it makes sense that PCIe would become the interconnect in vehicles, especially as computing architectures consolidate and virtualize. Storage devices in the car, like SSDs, are being shared with multiple hosts, just as they are in the data center.

“What helps us is that automotive needs more bandwidth,” Yanes said. “Automotive needs to process data.”

PCIe’s journey into automotive isn’t unlike the smartphone segment where the demand for data movement increased and the interconnect stepped up by addressing power consumption, he said. “Once we solved our power issue, we became the technology favorite for that space.”

The modern car generates a great deal of data, thanks to all the onboard sensors and using components that already take advantage of PCIe close to the processor and the host memory.

“We’re built for that,” Yanes said. “PCIe is ubiquitous.”
