CES 2022 news: Redefining high performance AI/ML processing for edge AI & edge compute devices with new processor architecture
06 January 2022
CEVA Redefines High Performance AI/ML Processing for Edge AI & Edge Compute Devices with its NeuPro-M Heterogeneous & Secure Processor Architecture
CEVA is redefining high performance AI/ML processing for edge AI & edge compute devices with its NeuPro-M heterogeneous & secure processor architecture. The 3rd generation NeuPro AI/ML architecture offers scalable performance of 20 to 1,200 TOPS at SoC & Chiplet levels, and cuts memory bandwidth by 6x. It targets broad use of AI/ML in automotive, industrial, 5G networks & handsets, surveillance cameras & edge compute.
CEVA, a leading licensor of wireless connectivity & smart sensing technologies & integrated IP solutions, announced today at CES 2022 its latest-generation NeuPro-M processor architecture for artificial intelligence and machine learning (AI/ML) inference workloads. Targeting the broad markets of edge AI and edge compute, NeuPro-M is a self-contained heterogeneous architecture composed of multiple specialised co-processors and configurable hardware accelerators that seamlessly and simultaneously process diverse workloads of Deep Neural Networks (DNNs), boosting performance by 5-15x compared with its predecessor. An industry first, NeuPro-M supports both system-on-chip (SoC) and Heterogeneous SoC (HSoC) scalability to achieve up to 1,200 TOPS, and offers optional robust secure boot and end-to-end data privacy.
NeuPro-M compliant processors initially include the following pre-configured cores:
• NPM11 – single NeuPro-M engine, up to 20 TOPS at 1.25GHz
• NPM18 – eight NeuPro-M engines, up to 160 TOPS at 1.25GHz
Illustrating its leading-edge performance, a single NPM11 core, when processing a ResNet50 convolutional neural network, achieves a 5x performance increase and 6x memory bandwidth reduction versus its predecessor, which results in exceptional power efficiency of up to 24 TOPS per watt.
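The quoted NPM11 figures can be sanity-checked with simple arithmetic. The sketch below (derived from the numbers in this release, not vendor data) shows that 20 TOPS peak throughput at 24 TOPS per watt implies a core power budget of under one watt:

```python
# Back-of-the-envelope check of the quoted NPM11 figures (illustrative
# arithmetic only, not vendor data).

PEAK_TOPS = 20.0       # NPM11 peak throughput (from this release)
TOPS_PER_WATT = 24.0   # quoted power efficiency (from this release)

# Power implied by the two quoted numbers: throughput / efficiency
implied_power_w = PEAK_TOPS / TOPS_PER_WATT
print(f"Implied NPM11 power at peak: {implied_power_w:.2f} W")
```

The same relation works in reverse: a fixed edge power budget divided into the efficiency figure bounds the achievable throughput.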
Building on the success of its predecessors, NeuPro-M can process all known neural network architectures and natively supports next-generation networks such as transformers, 3D convolution, self-attention and all types of recurrent neural networks. NeuPro-M has been optimised to process more than 250 neural networks, more than 450 AI kernels and more than 50 algorithms. The embedded vector processing unit (VPU) ensures future-proof, software-based support of new neural network topologies and new advances in AI workloads. Furthermore, the CDNN offline compression tool can increase the FPS/Watt of NeuPro-M by a factor of 5-10x for common benchmarks, with minimal impact on accuracy.
Ran Snir, Vice President & General Manager of the Vision Business Unit at CEVA commented: “The artificial intelligence and machine learning processing requirements of edge AI and edge compute are growing at an incredible rate, as more and more data is generated and sensor-related software workloads continue to migrate to neural networks for better performance and efficiencies. With the power budget remaining the same for these devices, we need to find new and innovative methods of utilising AI at the edge in these increasingly sophisticated systems. NeuPro-M is designed on the back of our extensive experience deploying AI processors and accelerators in millions of devices, from drones to security cameras, smartphones and automotive systems. Its innovative, distributed architecture and shared memory system controllers reduce bandwidth and latency to an absolute minimum and provide superb overall utilisation and power efficiency. With the ability to connect multiple NeuPro-M compliant cores in a SoC or Chiplet to address the most demanding AI workloads, our customers can take their smart edge processor designs to the next level.”
The NeuPro-M heterogeneous architecture is composed of function-specific co-processors and load-balancing mechanisms that are the main contributors to the huge leap in performance and efficiency compared with its predecessor. By distributing control functions to local controllers and implementing local memory resources in a hierarchical manner, NeuPro-M achieves data-flow flexibility that results in more than 90% utilisation and protects the different co-processors and accelerators against data starvation at any given time. Optimal load balancing is obtained by the CDNN framework applying various data-flow schemes adapted to the specific network, the desired bandwidth, the available memory and the target performance.
NeuPro-M architecture highlights include:
• Main grid array consisting of 4K MACs (multiply-accumulate units), with mixed precision of 2-16 bits
• Winograd transform engine for weights and activations, reducing convolution time by 2x and allowing 8-bit convolution processing with <0.5% precision degradation
• Sparsity engine to avoid operations with zero-value weights or activations per layer, for up to 4x performance gain, while reducing memory bandwidth and power consumption
• Fully programmable Vector Processing Unit, for handling new unsupported neural network architectures with all data types, from 32-bit Floating Point down to 2-bit Binary Neural Networks (BNN)
• Configurable Weight and Data compression down to 2-bits while storing to memory, and real-time decompression upon reading, for reduced memory bandwidth
• Dynamically configured two-level memory architecture to minimise power consumption attributed to data transfers to and from an external SDRAM
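The sparsity engine in the list above exploits a general property of pruned and quantised networks: a multiply-accumulate whose weight or activation is zero contributes nothing and can be skipped. The sketch below illustrates the idea in software (generic algorithm code, not CEVA's implementation; a hardware engine does this detection per layer, in parallel):

```python
# Illustrative sketch of zero-skipping, the principle behind a sparsity
# engine. Not CEVA code: hardware performs this per layer, in parallel.

def sparse_dot(weights, activations):
    """Dot product that also counts the MACs actually performed."""
    acc, macs_done = 0, 0
    for w, a in zip(weights, activations):
        if w == 0 or a == 0:   # zero operand: the MAC is skipped entirely
            continue
        acc += w * a
        macs_done += 1
    return acc, macs_done

w = [0, 3, 0, 0, 2, 0, 0, 1]   # 5 of 8 weights pruned to zero
a = [4, 1, 7, 0, 2, 9, 0, 5]
result, macs = sparse_dot(w, a)
print(result, macs)            # 3*1 + 2*2 + 1*5 = 12, using only 3 MACs
```

With highly sparse layers, compute and the memory traffic for zero operands both shrink, which is where the quoted "up to 4x" gain would come from.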
To illustrate the benefit of these innovative features in the NeuPro-M architecture, concurrent use of the orthogonal mechanisms of the Winograd transform, the Sparsity engine and low-resolution 4x4-bit activations delivers more than a 3x reduction in cycle count for networks such as ResNet50 and YOLOv3.
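The Winograd transform referred to above trades multiplies for cheaper additions. A minimal 1-D F(2,3) example (the textbook algorithm, not CEVA's implementation) computes two outputs of a 3-tap convolution with 4 multiplies instead of the 6 a direct convolution needs; the 2-D version used for 3x3 convolutions roughly halves multiply count, matching the "2x" figure quoted:

```python
# Textbook 1-D Winograd F(2,3), shown only to illustrate the transform
# this release refers to -- generic algorithm code, not CEVA's design.

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (can be precomputed once per filter)
    G1 = (g0 + g1 + g2) / 2
    G2 = (g0 - g1 + g2) / 2
    # The 4 multiplies (a direct 3-tap convolution would need 6)
    m1 = (d0 - d2) * g0
    m2 = (d1 + d2) * G1
    m3 = (d2 - d1) * G2
    m4 = (d1 - d3) * g2
    # Output transform: additions only
    return [m1 + m2 + m3, m2 - m3 - m4]

# Matches the direct 3-tap convolution on the same data:
d, g = [1.0, 2.0, 3.0, 4.0], [1.0, 0.5, 0.25]
direct = [d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
          d[1]*g[0] + d[2]*g[1] + d[3]*g[2]]
print(winograd_f23(d, g), direct)  # both [2.75, 4.5]
```

Because Winograd, sparsity skipping and low-precision activations each attack a different part of the workload, their savings compound, which is how the combination reaches the quoted >3x cycle-count reduction.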
As neural network weights and biases, the data set and the network topology become key intellectual property (IP) of the owner, there is a strong need to protect these from unauthorised use. The NeuPro-M architecture supports secure access in the form of optional root of trust, authentication and cryptographic accelerators.
For the automotive market, NeuPro-M cores and the CEVA Deep Neural Network (CDNN) deep learning compiler and software toolkit comply with the ISO 26262 ASIL-B automotive functional safety standard and meet the stringent quality assurance standards IATF 16949 and Automotive SPICE.
Together with CEVA’s multi-award-winning neural network compiler – CDNN – and its robust software development environment, NeuPro-M provides a fully programmable hardware/software AI development environment for customers to maximise their AI performance. CDNN includes innovative software that can fully utilise the customer’s customised NeuPro-M hardware to optimise power, performance & bandwidth. The CDNN software also includes a memory manager for memory reduction, optimal load-balancing algorithms, and wide support for various network formats including ONNX, Caffe, TensorFlow, TensorFlow Lite, PyTorch and more. CDNN is compatible with common open-source frameworks, including Glow, TVM, Halide and TensorFlow, and includes model optimisation features such as layer fusion and post-training quantization, all while using precision-conservation methods.
NeuPro-M is available for licensing to lead customers today and for general licensing in Q2 this year. NeuPro-M customers can also benefit from Heterogeneous SoC design services from CEVA to help integrate and support system design and chiplet development. For further information, visit: https://www.ceva-dsp.com/product/ceva-neupro-m/