The Golden Age of Custom Silicon Draws Near: Part 3


Last in a three-part series. Read part one here and part two here.

Traditional chip developers spend billions on R&D, and their budgets keep rising, largely because chip design itself is getting more expensive. While system-on-chip (SoC) development costs are indeed increasing, they may not be increasing at a frightening pace, even for advanced nodes. Furthermore, not all chips need leading-edge nodes.

According to International Business Strategies (IBS) estimates recently published by the Semiconductor Industry Association, the design cost of a complex 5nm-class SoC is over 80% higher than that of a 7nm-class SoC, totaling over $540 million. Costs are set to roughly triple again when companies that design highly complex chips, such as AMD, Intel and Nvidia, move to even more advanced process technologies.

Chip design costs. (Source: SIA)
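
As a rough sanity check, the arithmetic implied by these figures can be worked through in a few lines. This is a back-of-envelope sketch, not IBS’s actual cost model; the 1.8× and 3× multipliers come from the claims above.

```python
# Back-of-envelope arithmetic implied by the IBS/SIA figures cited above.
# The multipliers come from the article's claims, not from the IBS model.
cost_5nm = 540e6                 # reported 5nm-class SoC design cost, USD
cost_7nm = cost_5nm / 1.8        # "over 80% higher" implies roughly $300M at 7 nm
cost_next = cost_5nm * 3         # "about 3x more expensive" at the next node

print(f"Implied 7nm-class design cost: ${cost_7nm / 1e6:.0f}M")
print(f"Projected next-node design cost: ${cost_next / 1e9:.2f}B")
```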

“Most high-performance AI chips are being designed in leading-edge process nodes, ranging from 7 nm and 5 nm down to, most recently, 3 nm,” said Sudhir Mallya, a marketing executive at Alphawave Semi. “The mask costs alone for these are in the $20 million-plus range, and putting together a whole chip program including IP, front-end and physical design, and software can easily go into the $100 million to $200 million range.”

Dan Hutcheson of TechInsights agrees that chip development is getting more expensive, but he disagrees that it is becoming prohibitively so. Thousands of chip designs at nodes below 28 nm were completed in 2022 alone, while the number of designs at the most advanced nodes approached 100.

“Last year, there were many thousands of new designs below 28 nm and a number approaching 100 for the most leading edge,” he said. “According to TechInsights’ Design Completion data, the top 3 most advanced nodes in 2022 have grown at a 35% CAGR since 2018.”

John Koeter, a marketing and strategy VP for the solutions group at Synopsys, said he believes these estimates are too high.

“There are a lot of startups that are getting into this market, and they are surely not spending $500 million to develop a chip. When you see some of those bigger numbers, what I have been told is that it is really like, ‘Let’s say for the Qualcomm Snapdragon platform, where you have many different chips, you develop all the IP internally and all the software for an entire platform of chips.’ In this case, the first chip is taking on the full burden of creating an entire platform. For a company first coming in, in my opinion, it is definitely not even close to that kind of range.”

Indeed, platform costs, which may include several microarchitectures, multiple chips, a variety of IP and loads of software, can be dramatically high. Nvidia’s R&D investments in its Ada Lovelace and Hopper products illustrate the point, according to Peddie.

“Nvidia has invested over $2 billion in their latest design,” he said. “Startups do not have that kind of money, or the staff. GPUs are more than just the chip; there is a mountain of software behind them, not the least of which are drivers and special operations like matrix math-based DLSS/XeSS/FSR, [but also] debuggers and compilers.”

AWS Graviton 3 processor. (Source: AWS)

Meanwhile, development of embedded software accounts for about 40% of a complex SoC’s cost. And while applications like GPUs need complex drivers, the burden may be lighter for AI accelerators: most of today’s AI chips are designed to run a specific set of workloads with specific formats, whereas gaming GPUs are expected to run hundreds, if not thousands, of titles flawlessly.

That said, while building Ada Lovelace and Hopper was expensive, building an application-specific integrated circuit (ASIC) with a particular workload in mind is probably considerably cheaper.

“Cost really depends on the complexity of the design as costs rise with complexity,” Curren said. “Certainly, we can be talking about tens of millions of pounds to develop a complex design today, including the costs of engineering, IP, packaging and of course the mask sets.”

Hutcheson believes that the prohibitively high cost of chip design could be a myth created to keep newcomers away.

“The smart companies figured out that all the warnings about design costs being too high were a myth intended to keep competitors out,” Hutcheson explained. “If I apply some math to calculate the mythological design costs times the designs scaled across all the nodes in 2022, the sum of the products comes close enough to the $1 trillion revenue target the semiconductor industry is not supposed to achieve until 2030. The additional fact that these companies are jumping at the chance to get the latest EDA tools and access to foundries is de facto proof of the falseness of the design-cost claim.”
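
Hutcheson’s back-of-envelope can be illustrated with a quick calculation. Only the roughly 100 leading-edge designs and the $540 million and $4 million cost figures come from this article; the other per-node costs and design counts below are hypothetical placeholders, not TechInsights data.

```python
# Illustration of Hutcheson's argument: multiply the oft-quoted per-node
# design costs by 2022 design counts and sum across nodes. Only the ~100
# leading-edge designs and the $540M/$4M figures come from the article;
# every other number is a hypothetical placeholder.
cost_and_count_by_node = {
    "leading edge (5nm-class)": (540e6, 100),
    "7nm-class":                (300e6, 300),       # hypothetical
    "16nm/28nm-class":          (50e6, 10_000),     # hypothetical
    "mature nodes":             (4e6, 100_000),     # hypothetical
}

implied_spend = sum(cost * count for cost, count in cost_and_count_by_node.values())
print(f"Implied aggregate design spend: ${implied_spend / 1e12:.2f}T")  # ~ $1T
```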

Most SoCs developed by newer entrants to chip design cost nowhere near $500 million.

Freund estimates that the cost of a relatively sophisticated AI SoC ranges from $80 million to $200 million. Hutcheson said that, because not all chips need FinFET transistors (i.e., nodes at 16 nm and below), the cost of an average design in 2022 was around $4 million.

“The average design cost less than $4 million in 2022, while the average cost-per-design has grown at 6% over the last five years, matching the overall growth of the semiconductor industry,” Hutcheson said. “It has been the one-two punch of innovations [of EDA tools] and the ecosystem business model that have kept design costs in control.”
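
Both growth figures are easy to sanity-check with the standard CAGR formula. A minimal sketch, assuming the roughly $4 million 2022 average quoted above:

```python
# Sanity checks on the quoted growth figures using the standard CAGR formula.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Back out the implied 2017 baseline from a ~$4M average in 2022 at 6%/year:
baseline_2017 = 4e6 / 1.06**5
print(f"Implied 2017 average design cost: ${baseline_2017 / 1e6:.1f}M")  # ~$3.0M
print(f"Check: 2017-2022 CAGR = {cagr(baseline_2017, 4e6, 5):.1%}")      # 6.0%

# A 35% CAGR from 2018 to 2022 means leading-edge design counts
# roughly tripled over four years:
print(f"Leading-edge growth multiple, 2018-2022: {1.35**4:.1f}x")        # ~3.3x
```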

Some designs aren’t meant to be expensive, such as the custom image signal processors used in Vivo and Xiaomi smartphones. Yet these chips ship in quantities exceeding tens of millions of units every quarter.

A number of smartphone manufacturers are “also getting into doing [their] own application processors, or in some cases, camera image processors,” Koeter said.

Large fabless chip designers like Apple, AMD, Nvidia and Qualcomm tend to adopt leading-edge nodes as soon as they can, which leads to relatively quick depreciation of fab costs and, in turn, lower quotes. This makes advanced process technologies more attractive to new entrants and will unlock 7nm and 5nm custom SoC development for companies that cannot afford the endeavor now.

AI: Making chip development more accessible

Because developing custom silicon is gaining in popularity, companies are finding it challenging to assemble sizable design teams, according to Synopsys. Traditional chip designers employ thousands of engineers, a scale that startups can’t reach quickly, even if they have enough money.

Were there no shortcuts, development of custom silicon by companies not traditionally in the SoC business would be impossible. But AI-enabled EDA tools are providing a shortcut.

AI-enabled EDA tools from companies like Synopsys and Cadence “can lower costs and speed up development,” according to Freund.

AI for chip design. (Source: Synopsys)

To speed up verification of high-volume parts, modern EDA tools use AI to reach coverage targets faster for each IP block, to evaluate candidate floorplans and quickly converge on an optimal chip layout, and to generate the right test patterns for manufactured chips.
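
A toy sketch of the kind of search loop such tools automate appears below. The parameter space, scoring function and random-search strategy are invented for illustration; commercial tools such as DSO.ai and Cadence Cerebrus use reinforcement learning driven by real place-and-route runs, not anything this simple.

```python
# Toy illustration of AI-driven design-space exploration. All parameters
# and the scoring function are made up; real EDA tools evaluate candidates
# with actual synthesis and place-and-route runs.
import random

def ppa_score(utilization, aspect_ratio, clock_ns):
    """Hypothetical stand-in for a full place-and-route PPA evaluation."""
    power = 1.0 / clock_ns + 0.3 * utilization
    area = aspect_ratio / utilization
    timing_penalty = max(0.0, 0.8 - clock_ns) * 10  # penalize too-aggressive clocks
    return power + area + timing_penalty             # lower is better

best_score, best_params = float("inf"), None
for _ in range(1_000):                               # random search as a baseline
    params = (random.uniform(0.5, 0.9),              # placement utilization
              random.uniform(0.8, 1.2),              # floorplan aspect ratio
              random.uniform(0.6, 1.5))              # clock period, ns
    score = ppa_score(*params)
    if score < best_score:
        best_score, best_params = score, params

print(f"Best PPA score {best_score:.2f} at util/aspect/clock = {best_params}")
```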

“AI-enabled EDA tools are absolutely critical to today’s uber-complex designs,” Hutcheson said. “Otherwise, it is like trying to win Le Mans today in a 1923 Bentley.”

The Synopsys.ai platform is probably the best-known AI-enabled suite of EDA software. In about two years, more than 200 chip designs were placed and routed using the Synopsys DSO.ai program, the company announced in mid-May. Meanwhile, Ansys and Cadence are implementing similar capabilities into their suites.

Ansys said it’s been leveraging AI to improve performance and handle more complex multiphysics simulations, which is critical for things like place-and-route.

“AI is clearly a major and not-so-recent development in design tools and methodologies,” Bianchi said. “We, at Ansys, have used machine learning and embedded AI algorithms under the hood to improve performance and increase our ability to handle increasingly larger and more complex multiphysics simulations. More recently, we have started to equip the design teams with AI/ML solutions to increase designers’ ability to explore larger design spaces, optimize across a large number of parameters, and identify better solutions.”

In fact, AI-enabled tools work particularly well for companies aiming to design something very specialized and very complex relatively quickly, Cadence’s Kittrell said.

“Hyperscalers also often need design services as they are tackling big semiconductor design challenges from a ‘cold start,’” he said. “Similarly, they are very interested in generative AI productivity-enhancing tools that automate and scale chip design, such as Cadence Cerebrus, which enable them to meet power, performance and area [PPA] goals with minimum effort.”

A critical factor for successful AI application is high-quality training data.

“One example is the use of AI to optimize power delivery networks [PDNs] in large SoCs,” Bianchi explained. “One important and critical element for AI is the quality of the data it uses for training—this is where we, as simulation leaders, excel with our ability to provide the most ‘true to physics’ dataset for AI/ML to perform at its best.”

“Large EDA companies like Cadence, Synopsys and Siemens have started using AI in their tools,” said Oleh Krutko, general manager of imec.IC-link. “Also, some open-source tools have started using AI. In design flows, we don’t see it applied that much yet, but many ideas are there to do so. I believe AI will be super important in chip design in the coming years.”

Of course, AI has limitations

But there are limitations even for AI.

“There are features of AI within the EDA tools that we use today, and we do and have always used forms of inference to drive our design choices,” Curren said. “However, AI is all about learning from the past and applying those lessons to predict the future, whereas silicon design is all about doing something totally new with new capabilities—for example, twice as many transistors as last time—so there is only so much that AI can do.”

For now, Tenstorrent uses AI to develop its performance simulation software. This doesn’t have a direct impact on chip design, but it improves the team’s productivity.

“We are increasingly integrating AI-powered tools into our workflows in software development, because they have become crucial in enhancing our productivity,” Sokorac said. “Currently, we are testing an application of AI for estimating performance in simple operations. Looking ahead, our plan is to expand this approach to encompass comprehensive performance modeling.”

Can’t design yourself? Design-to-order

Not all companies can afford to assemble a chip design team, develop their own silicon and manage its supply chain throughout its lifetime. But they can still take advantage of custom SoCs for their workloads.

This is where contract chip designers like Sondrel come into play. They tend to have multiple teams across the world that can develop turnkey ASICs or SoCs of varying complexity, tailored to particular workloads and possibly containing the customer’s own IP. Furthermore, they offer supply chain management services. The number of clients for such companies is growing, according to Sondrel’s Curren.

“The problem with off-the-shelf is that rivals can buy exactly the same chips,” he said. “What we have seen is a shift from semiconductor companies commissioning ASICs to system companies (i.e., those who will sell a product incorporating the chip rather than the chip itself). This gives them a lot more leverage on the cost of the design [and gives them the chance] to define the chip to most efficiently and effectively differentiate their products.”

A turnkey SoC lifecycle. (Source: Sondrel)

Consumer electronics companies make products that sell for years and in huge quantities, yet only Apple and Samsung have huge chip-development divisions. Others either order semi-custom SoCs from companies like AMD (as Microsoft and Sony do) or go with custom ASICs or SoCs designed by contract chip developers.

In the consumer electronics space, “they are usually using a hybrid approach, where they specify the architecture and then they turn it over to an ASIC vendor to do the implementation and manufacturing of the chip,” Koeter said. “In my experience, it is relatively rare for a consumer [electronics] company to do a full turnkey SoC design.”

Creating a chip in-house requires both a skilled design team and advanced software, both of which are costly. If a company isn’t consistently producing new chips, there will be periods when the team sits idle, awaiting the next project. During that downtime, process nodes may advance, complicating IP reuse and adding to software costs. Furthermore, the risk of a faulty design increases if a team lacks experience with the latest nodes or can’t work together efficiently.

“And risk is a very important consideration as a new chip project costs many millions to take from start to shipping silicon,” Curren said. “We have full-time teams of in-house experts who have worked together on many projects so that the whole process runs smoothly with minimal risk.”

Sondrel sees new customers from different verticals, including “8K, AI/ML, automotive, image recognition, HPC and security,” he said. These clients typically require ultra-complex SoCs containing billions of transistors to achieve their desired performance and feature set. Development of such SoCs may be costly, but it pays off in many cases.

“A move from general-purpose chips, which are inefficient, to specialized devices that are not only more efficient but also provide the opportunity for differentiating functionality [makes sense],” Curren said. “The downside, of course, is the need to fund the development, which can be very high. But if the ROI is compelling (which is often true for systems companies that can get good leverage on the price per device), then it is worth it. Although a development can be tens of millions, if you can reduce the unit price from $500 to under $50, you can save a lot of money over the product’s lifetime.”
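
The break-even math behind Curren’s example is straightforward. A minimal sketch, assuming a hypothetical $30 million development cost (the quote says only “tens of millions”):

```python
# Break-even sketch for the quoted example: cut the unit price from $500
# to under $50 via a custom SoC. The $30M NRE figure is a hypothetical
# midpoint of "tens of millions," not a Sondrel number.
nre = 30e6                   # assumed one-time development (NRE) cost, USD
saving_per_unit = 500 - 50   # unit-cost reduction from the quote

break_even_units = nre / saving_per_unit
print(f"Break-even volume: {break_even_units:,.0f} units")  # ~67,000 units
# At consumer-electronics volumes of millions of units, lifetime savings
# dwarf the development cost.
```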

The custom SoC is a new normal

After dozens of CPU and GPU developers went extinct in the 1990s and early 2000s, it was hard to imagine some 15 years ago that the industry would see the rise of not only chip-design companies like Apple, Ampere Computing and Tenstorrent, but also custom-chip development in general.

Apple M2 Max processor. (Source: Apple)

While hyperscalers, AI startups and the automotive and mobile verticals clearly lead the way, Arm (the biggest IP licensor in the industry) sees custom silicon coming from all industries and for all kinds of applications, from 5G radios to large-core-count server SoCs.

“We see custom SoC projects as a broad trend across all industries, driven by compute and efficiency demands, which continue to put pressure on SoC designs,” O’Driscoll said. “These companies tend to have a very good understanding of their applications, use-cases and what they need from the SoCs they are deploying. This insight is informing their ability to design application-specific compute SoCs based on the IP model.”

“We expect that custom silicon will continue to gain traction in the next few years because of several factors,” Mallya said. “First, with the advent of AI, the demand for high-performance computing and specialized processing is growing rapidly across many different industries—from data centers to automotive to healthcare. Second, advances in design tools and methodologies, such as machine learning and automated design, together with the rise of chiplets are making it easier and faster to create custom silicon solutions. Finally, the availability of advanced foundry and packaging technologies is expanding, with more options for companies to choose from.”


