Toolkit supports automotive-quality AI




NXP Semiconductors has rolled out a new deep learning toolkit called eIQ Auto, seeking to set itself apart from competitors by making its tools “automotive-quality.” NXP’s goal is to make it easier for AV designers to implement deep learning in vehicles.

The development of autonomous vehicles (AV) does not necessarily require either artificial intelligence or deep learning. Simply put, not all AVs need to be AI-driven. And yet the rapid advancements and improved accuracy of deep learning are alluring to developers seeking to improve their highly automated vehicles.

The difficulty of validating the safety of AI-driven AVs persists, however. Safety researchers worry about the “black-box” nature of deep learning, and that’s only one of several thorny issues. It remains uncertain whether AV designers can verify and validate a continuously learning AI system. Nor is it clear whether an AI feature, once deployed in dedicated hardware inside a vehicle, will behave the same as it did when developed and trained on a larger, more powerful computer system.

Despite such concerns, both AV and safety communities recognize that AI is not a topic they can avoid discussing.

With the release of draft specifications for UL 4600 last week, Phil Koopman, CTO of Edge Case Research, told us, “We will go after full autonomy head-on.”

UL 4600, a safety standard for evaluating autonomous products currently under development at Underwriters Laboratories, neither assumes nor mandates that deep learning be deployed inside AVs. But the standard covers the validation of machine learning and any other autonomy functions used in life-critical applications.

Automotive-grade software toolkit for deep learning

Against this backdrop, NXP Semiconductors introduced its eIQ Auto deep learning toolkit.

“Most deep-learning frameworks and neural nets developed thus far are used for consumer applications such as vision, speech and natural language,” observed Ali Osman Ors, director of Automotive AI Strategy and Partnerships at NXP Semiconductors. Such frameworks are not necessarily developed with life-critical applications in mind.

(Source: NXP)

NXP, a leading automotive chip supplier, is going a step further by making its software toolkit compliant with Automotive Software Process Improvement and Capability dEtermination (A-SPICE). A-SPICE is a set of guidelines developed by German automakers to improve software development processes.

NXP explained that its eIQ Auto toolkit — designed specifically for NXP’s S32V234 processor — will help AV developers “optimize the embedded hardware development of deep learning algorithms and accelerate the time to market.”

Asked if there are similar auto-grade toolkits available for deep learning, Ors said, “Some car OEMs might have designed their own tools in-house. But as far as I know, I haven’t seen other automotive chip vendors offering automotive-quality software toolkits like ours for deep learning.”

Pruning, quantization and compressing

The process of data preparation and training (learning) and the process of AI deployment (inference) on embedded systems are well understood.

Today, AV developers are said to be collecting data at 4 GB per second as their test vehicles drive on public roads. Cleaning up and annotating such a huge volume of data and preparing it as training data can be very costly. In some cases, the data-labeling process alone can financially cripple algorithm developers and AV startups.

But equally challenging to AV designers, though little discussed publicly, are the arduous tasks involved in optimizing a trained AI model and converting it for deployment (on inference engines). Ors explained that NXP’s tool accelerates the process of “quantization, pruning and compressing” the neural network.

By pruning, Ors means removing redundant connections in the neural network architecture by cutting out unimportant weights. A pruned model inevitably loses some accuracy, so it must be fine-tuned after pruning to restore that accuracy.
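To make the idea concrete, here is a minimal sketch of magnitude-based pruning — a common, generic approach in which the smallest-magnitude weights are treated as unimportant and zeroed out. This is purely illustrative; NXP has not disclosed how eIQ Auto prunes networks, and the function and threshold scheme below are assumptions.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Illustrative only: generic magnitude-based pruning, not NXP's
    (undisclosed) eIQ Auto implementation.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)        # number of weights to cut
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the cutoff threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([[0.9, -0.02], [0.03, -1.1]])
pruned = magnitude_prune(w, sparsity=0.5)
# The two smallest-magnitude weights (0.02 and 0.03) are zeroed;
# in practice the pruned model is then fine-tuned to recover accuracy.
```

After a step like this, the surviving nonzero weights are retrained (fine-tuned), as the article notes, to win back the accuracy lost to pruning.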

Next up, quantization creates an “efficient computing process,” said Ors. It involves bundling weights by clustering or rounding them so that the same number of connections can be represented using less memory. Another common technique is converting floating-point weights to a fixed-point representation by rounding. As with pruning, the model must be fine-tuned after quantization.
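The float-to-fixed-point conversion Ors describes can be sketched as below: weights are scaled, rounded to 8-bit integers, and stored in a quarter of the memory of 32-bit floats. The symmetric scaling scheme and bit width are illustrative assumptions, not details of eIQ Auto.

```python
import numpy as np

def quantize_fixed_point(weights, bits=8):
    """Round float weights to symmetric signed fixed-point integers.

    A generic sketch of post-training quantization by rounding;
    the scale and bit-width choices are illustrative, not eIQ Auto's.
    """
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax     # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy evaluation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, scale = quantize_fixed_point(w)
approx = dequantize(q, scale)
# q occupies 1 byte per weight instead of 4; `approx` is close to `w`
# but not exact, which is why fine-tuning follows quantization.
```

The small rounding error visible in `approx` is exactly the accuracy loss that, per the article, fine-tuning after quantization is meant to recover.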

AV designers evaluate the accuracy of the converted model by running test data (which the deep learning system hasn’t seen before) and further fine-tune the model.

(Source: NXP)

Partitioning workload

Beyond that, eIQ Auto, said NXP, “partitions the workload and selects the optimum compute engine for each part of the neural network.” It speeds up what would otherwise be a hand-crafted inference engine, because the tool can help AV designers figure out which tasks run best on the CPU, DSP or GPU, Ors explained. He added that eIQ Auto can’t be used with non-NXP devices, because the tool must be intimately familiar with what’s happening inside the processor.
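The partitioning idea can be illustrated with a toy greedy scheduler that assigns each layer to whichever engine runs it fastest. The engine names, per-layer cost numbers and greedy policy are all invented for illustration; the real tool relies on detailed, proprietary knowledge of NXP silicon.

```python
# Hypothetical per-layer benchmark costs (milliseconds) on each
# compute engine. All numbers and names are made up for illustration;
# eIQ Auto's actual cost model is not public.
layer_costs = {
    "conv1":   {"cpu": 12.0, "dsp": 3.0, "gpu": 4.0},
    "conv2":   {"cpu": 20.0, "dsp": 5.0, "gpu": 4.5},
    "softmax": {"cpu": 0.4,  "dsp": 0.9, "gpu": 1.2},
}

def partition(costs):
    """Greedily assign each layer to its cheapest compute engine."""
    return {layer: min(engines, key=engines.get)
            for layer, engines in costs.items()}

plan = partition(layer_costs)
# Large convolutions land on the DSP/GPU accelerators, while the
# lightweight softmax stays on the CPU, avoiding offload overhead.
```

A production partitioner would also weigh data-transfer costs between engines and memory constraints, which is precisely why the tool must, as Ors says, be intimately familiar with the target processor.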

>> Continue reading the next section, “Where AI is applied inside AV”, on page two of this article originally published on our sister site, EE Times: “NXP Touts Auto-Grade AI Toolkit for AVs.”

The post Toolkit supports automotive-quality AI appeared first on Embedded.com.





Original article: Toolkit supports automotive-quality AI
Author: Junko Yoshida