CEVA introduces new configurable sensor hub DSP architecture




In a world in which multiple sensors are being designed into almost everything, processing all of the data inputs, or sensor fusion, is becoming an increasingly important part of the system. To address this, CEVA has introduced SensPro, a configurable, high-performance sensor hub DSP architecture that combines scalar and parallel processing for floating-point and integer data types with deep learning training and inferencing.

Growth in the number and variety of sensors in modern systems, and their substantially different computation needs, is the reason CEVA said it set out to design a new architecture from the ground up. It constructed SensPro as a configurable, holistic architecture capable of handling intensive workloads with a combination of scalar processing, vector processing, and AI acceleration, while applying the latest micro-architecture design techniques of deep pipelining, parallelism, and multi-tasking.

Explaining the new product family to embedded.com, CEVA’s senior director of artificial intelligence and computer vision marketing, Jeff VanWashenova, said, “This is the first sensor hub for multiple sensors, based on successful technologies and the strength of our existing portfolio, such as the NeuPro AI processor, the XM6 vision processor, and the BX2 scalar DSP. It’s highly configurable with three core configurations, and it features a mature software toolset.” He added that next-generation silicon needs to analyze data, fuse data to build a coherent model, and then deliver contextual awareness.

The SensPro family provides the specialized processors to efficiently handle the different types of sensors in smartphones, robotics, automotive, AR/VR headsets, voice assistants, and smart home devices, as well as in emerging industrial and medical applications being transformed by initiatives such as Industry 4.0. These sensors, which include cameras, radar, lidar, time-of-flight (ToF) sensors, microphones, and inertial measurement units (IMUs), generate a multitude of data types and bit rates derived from imaging, sound, RF, and motion, which can be used to create a fully 3D, contextually aware device.

Dimitrios Damianos, technology & market analyst in the sensing division at Yole Développement (Yole), commented, “The proliferation of sensors in intelligent systems continues to increase, providing more precise modeling of the environment and context. Sensors are becoming smarter, and the goal is not to get more and more data from them, but higher quality of data especially in cases of environment/surround perception such as: environmental sensor hubs that use a combo of microphones, pressure, humidity, inertial, temperature and gas sensors (smart homes/offices) as well as situational awareness in ADAS/AV where many sensors (radar, lidar, cameras, IMU, ultrasonic, etc) must work together to make sense of their surroundings.”

Yole adds that the challenge is to process and fuse the different types of data coming from these different types of sensors. Using a mix of scalar and vector processing and floating- and fixed-point math, coupled with an advanced micro-architecture, SensPro offers system and SoC designers a unified processor architecture to address the needs of any contextually aware multi-sensor device.

CEVA SensPro block diagram
SensPro’s configurable and self-contained architecture brings together scalar and parallel processing for floating point and integer data types, as well as deep learning training and inferencing. (Image: CEVA)

Built to maximize performance per watt for complex multi-sensor processing use cases, the SensPro architecture offers a combination of the high-performance single- and half-precision floating-point math required for high-dynamic-range signal processing, point-cloud creation, and deep neural network (DNN) training, along with the large amount of 8- and 16-bit parallel processing capacity required for voice, imaging, DNN inference, and simultaneous localization and mapping (SLAM). SensPro incorporates CEVA’s widely used CEVA-BX scalar DSP, which offers a seamless migration path from single-sensor system designs to multi-sensor, contextually aware designs.

The new sensor hub uses a highly configurable 8-way VLIW architecture, allowing it to be tuned to address a wide range of applications. Its micro-architecture combines scalar and vector processing units and incorporates an advanced, deep pipeline enabling operating speeds of 1.6 GHz at a 7 nm process node.

SensPro incorporates a CEVA-BX2 scalar processor for control code execution, scoring 4.3 CoreMark/MHz. It adopts a wide, scalable SIMD processor architecture for parallel processing and is configurable with up to 1024 8×8 MACs, 256 16×16 MACs, dedicated 8×2 binary neural network support, and 64 single-precision and 128 half-precision floating-point MACs. This allows it to deliver 3 TOPS for 8×8 network inferencing, 20 TOPS for binary neural network inferencing, and 400 GFLOPS of floating-point arithmetic. Other key features of SensPro include a memory architecture providing 400 GB/s of bandwidth, a 4-way instruction cache, a 2-way vector data cache, DMA, and queue and buffer managers that offload data transactions from the DSP.
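As a sanity check, those headline figures line up with a simple back-of-envelope calculation, assuming (our assumption, not CEVA's stated methodology) that each MAC performs two operations per cycle, a multiply and an accumulate, at the quoted 1.6 GHz clock:

/* Back-of-envelope check of the quoted SensPro throughput figures.
 * Assumption (not from CEVA): 1 MAC = 2 ops (multiply + accumulate) per cycle. */
#include <stdio.h>

int main(void)
{
    const double clock_hz    = 1.6e9; /* 1.6 GHz at a 7 nm node, per the article */
    const double ops_per_mac = 2.0;   /* multiply + accumulate */

    double int8_tops   = 1024.0 * ops_per_mac * clock_hz / 1e12; /* ~3.3 TOPS   */
    double fp16_gflops =  128.0 * ops_per_mac * clock_hz / 1e9;  /* ~410 GFLOPS */

    printf("8x8 MACs:  %.1f TOPS   (quoted: 3 TOPS)\n", int8_tops);
    printf("FP16 MACs: %.0f GFLOPS (quoted: 400 GFLOPS)\n", fp16_gflops);
    return 0;
}

The 20-TOPS binary-network figure depends on how operations in the dedicated 8×2 binary units are counted, so it is not reproduced here.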

CEVA_SensPro-configuration-table
Initially, SensPro DSPs will be available in three configurations, each including a CEVA-BX2 scalar processor and various vector units configured for optimal use-case handling. (Image: CEVA)

SensPro is accompanied by a set of software and development tools to expedite system designs, including an LLVM C/C++ compiler, an Eclipse-based integrated development environment (IDE), an OpenVX API, software libraries for OpenCL, the CEVA deep neural network (CDNN) graph compiler, CEVA-CV imaging functions, the CEVA-SLAM software development kit and vision libraries, ClearVox noise reduction, WhisPro speech recognition, MotionEngine sensor fusion, and the SenslinQ software framework.

CEVA SensPro software
Complementary software and libraries for SensPro (Image: CEVA)
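To give a sense of how application code could target the vision side of this toolchain, the sketch below builds a trivial graph using only standard Khronos OpenVX 1.x calls; any CEVA-specific headers, kernel extensions, and build flow are not described in the article and are not shown here.

/* Minimal OpenVX graph sketch using standard Khronos API calls only.
 * CEVA-specific integration (headers, extensions, deployment) is outside
 * the scope of this example. */
#include <VX/vx.h>
#include <stdio.h>

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* Source and destination images: VGA, 8-bit grayscale. */
    vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

    /* One node: a 3x3 Gaussian blur, a standard OpenVX kernel. */
    vxGaussian3x3Node(graph, input, output);

    /* Verification lets the implementation map the graph onto the target
     * (e.g. a vision DSP); processing then executes it. */
    if (vxVerifyGraph(graph) == VX_SUCCESS)
        vxProcessGraph(graph);
    else
        printf("Graph verification failed\n");

    vxReleaseImage(&input);
    vxReleaseImage(&output);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}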

CEVA told us that the sensor hub architecture is a natural progression of its roadmap: from camera vision processing, to AI processing, and then to motion sensing with its acquisition of Hillcrest Labs last year. It subsequently introduced its SenslinQ hardware IP and software platform to allow communication between cores, and it became clear that the roadmap needed to put everything in one device. SensPro provides that self-contained sensor hub, with an on-device processor that unifies multi-sensor processing, AI, and sensor fusion in a single solution.

VanWashenova said the first use will be in automotive, where CEVA has a lead customer. “But we’ll also be targeting many other applications, including driver monitoring, delivery robots, drones, AR and wearables, surveillance and home entertainment.”

The analyst’s take

In his report following the Linley Spring Processor Conference, Mike Demler said CEVA has always offered configurable and customizable DSPs, but SensPro marks a departure from its earlier products, which target a single application such as audio processing or computer vision. “SensPro addresses two industry trends: on-device AI and smart machines. Just like humans, smart machines must employ multiple senses to properly perceive their environment. Several chip vendors target them by offering powerful camera-based neural-network processors, but they lack the DSP capabilities for sensor fusion.” He added that its three initial preconfigured models are suitable for numerous consumer and industrial systems, but licensees will appreciate the ability to customize their designs when that option arrives in a future release.

Original article: CEVA introduces new configurable sensor hub DSP architecture
Author: Nitin Dahad