AI Software Developers Extol the Power of Standards


AI software and its industry have grown so painfully big and complex that standards are needed to simplify things and ease the discomfort.

That was the key message from participants in a panel discussion on identifying and solving developers’ pain points, which was held during EE Times’s recent AI Everywhere Forum.

Andrew Richards (Source: Autosens Conference 2016/Bernal Revert)

“All AI systems, from tinyML to supercomputers, need software, but the state of AI software across the industry is at best variable,” said panel moderator Nitin Dahad, editor-in-chief of embedded.com and an EE Times correspondent. “What are the biggest pain points developers face today with AI software stacks? What problems are common to all applications, and where should we start trying to fix them?”

Standardization may be the start of a fix, according to panelists.

“Industry standards let you plug things together, right?” said Codeplay CEO Andrew Richards, who also became a VP at Intel after it bought the specialist software company in 2022. “If you think about something like USB or Wi-Fi, they just make things work together, even though they’re designed by different companies, and different technologies, different skill sets.”

Richards and his fellow panelists had other issues on their minds, too:

  • Fragmentation of frameworks;
  • Performance;
  • The need to debug data and the lack of data debuggers;
  • Managing the disparate professionals and skill sets needed for AI software, along with the complexity of the applications themselves; and
  • Code portability.

“The answer to all of these questions, from my point of view, is industry standards,” Richards said. “And that’s what we do with SYCL, and that’s what we’re doing across the oneAPI project, what we’ve been doing with other standards: enabling different sector people to work together.”

He encouraged developers to become active in the groups writing the standards, including The Khronos Group and the oneAPI Community Forum.

“If you look at the standards and you think, ‘Oh, that’s not a good fit for us,’ come and join,” he said. “You can change them. You can actually change the standards and take them down a direction that you want, and it’s going to work much better for you.”

David Kanter (Source: MLCommons)

Beyond writing or rewriting standards, though, developers need to actually try them, Richards said.

“What we find with SYCL is people go, ‘It’s never going to work, it’s never going to work,’ and then you try,” he said. “And it does actually work, and you actually get really good performance. Now we can run massive, large-scale software.”
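For developers curious what “just trying it” looks like, the snippet below is a minimal SYCL 2020 sketch (illustrative only, not code discussed on the panel; it assumes a conformant SYCL 2020 compiler such as Intel’s DPC++ from the oneAPI toolchain). The same kernel source is dispatched to whatever accelerator the runtime discovers, whether CPU or GPU, regardless of vendor.

```cpp
// Minimal SYCL 2020 vector-add sketch (illustrative only).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The default selector picks whatever device is available:
    // a GPU, a CPU, or another accelerator, from any vendor.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers hand ownership of the host data to the SYCL runtime.
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor acc_a(buf_a, h, sycl::read_only);
            sycl::accessor acc_b(buf_b, h, sycl::read_only);
            sycl::accessor acc_c(buf_c, h, sycl::write_only, sycl::no_init);
            // The same kernel source runs unchanged on any conformant device.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                acc_c[i] = acc_a[i] + acc_b[i];
            });
        });
    }  // Buffer destruction synchronizes results back to the host vectors.

    std::cout << "c[0] = " << c[0] << "\n";  // Expect 3
    return 0;
}
```

Compiled with a conformant toolchain, the identical source can target hardware from different vendors, which is the plug-things-together property Richards compares to USB and Wi-Fi.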

Don’t forget standards for data, said David Kanter, executive director of MLCommons, a consortium formed to grow ML from a research field into a mature industry via benchmarks, public datasets and best practices.

“One thing I would say is, standards for data? Not a thing, right?” he said.

Kanter cited the example of building an ML model for speech. There are no standardized inputs developers always use for speech, he said, so everyone has a different pipeline munging things in different ways.
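Kanter’s point is easy to picture. The sketch below is purely illustrative (the structure, field names and values are assumptions, not anything cited on the panel): two teams feeding “speech” to the same kind of model can quietly bake in different front-end choices, and nothing in today’s tooling flags the mismatch.

```cpp
// Illustrative sketch: two teams preparing audio for the "same" speech
// model often bake in different front-end choices, so their datasets
// and trained models are not interchangeable.
#include <cstdint>
#include <iostream>

struct AudioFrontEnd {
    uint32_t sample_rate_hz;   // resampling target
    uint32_t frame_length_ms;  // analysis window
    uint32_t frame_stride_ms;  // hop between windows
    uint32_t num_mel_bins;     // spectrogram resolution
    bool     per_channel_norm; // normalization scheme
};

int main() {
    // Team A: server-side pipeline tuned for a large model.
    AudioFrontEnd team_a{16000, 25, 10, 80, true};
    // Team B: tinyML pipeline, with some pre-processing done on-device.
    AudioFrontEnd team_b{8000, 30, 20, 40, false};

    // With no data standard, every mismatch below is a silent
    // incompatibility between datasets, models, and deployments.
    std::cout << "mel bins: " << team_a.num_mel_bins
              << " vs " << team_b.num_mel_bins << "\n";
    return 0;
}
```

Until shared data standards exist, differences like these have to be documented and reconciled by hand, pipeline by pipeline.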

Anders Hardebring (Source: Imagimob)

“It also depends on your target,” Kanter said. “So, to talk about tinyML, in some cases you may have an on-device component that can do some portion of the pre-processing. Sometimes you don’t. So, there’s a lot of things there where standards could totally help. That’s another example of an area where software has a rich history, but we don’t see it on the data side, and sort of need to develop that intuition and those muscles.”

The two parts of AI applications—software and data—are mismatched in their development, panelists said, leading to post-spinach Popeye “muscles” of code, while the ML training information remains a 98-pound weakling.

“We believe that the biggest pain point is to be able to collect enough amounts of data—well annotated, high quality—and to have the software to do that,” said Anders Hardebring, the CEO of Imagimob, a development platform for ML on edge devices.

Popeye muscles or no, software development always lags behind new hardware, Hardebring noted.

Alex Grbic, VP of software engineering at chip company Untether AI, agreed with Hardebring. “In order to take advantage of the novel architectures, the whole reason these new architectures are coming out is to meet a certain performance requirement that more traditional architectures can’t,” he said. “But they are complex, right?”

Alex Grbic (Source: Untether AI)

Untether’s customers who use frameworks like TensorFlow don’t want to do the underlying parallel programming needed for spatial architectures, he added.

“And, in that case, they rely heavily on the software to take advantage of that,” Grbic said. “And we’re the ones that provide it.”

While standards are still being worked out, Untether’s own software simplifies the use of its hardware and delivers the performance gains the architecture makes possible.

 


