Tensor Unit Efficiency Details for RISC-V AI Processing IP Published

28 June 2024

To underline the credentials of its All-In-One artificial intelligence (AI) processing IP, Semidynamics has published data on the efficiency levels attained by the IP's tensor unit.

The data relates to the IP running a Llama 2 7-billion-parameter generative AI large language model (LLM), aggregated and broken down by tensor shape. There are six different tensor shapes in Llama 2, and utilisation is above 80% for most of them, a level that could not be expected from other architectures currently on offer. This stems from the approach taken in Semidynamics' IP. Instead of the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU) being discrete blocks that must interface with one another over a bus interconnect, the company integrates all three into a single, scalable processing element.
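For readers who want to relate the per-shape figures to the underlying arithmetic, the sketch below shows how a per-shape utilisation metric is typically computed, using the publicly documented Llama 2 7B projection dimensions (hidden size 4096, feed-forward size 11008). The peak-MAC figure, token count and measured cycle counts are placeholders for illustration only, not Semidynamics' published data, and the shape list is not the exact set of six shapes the article refers to.

```python
# Illustrative sketch: per-shape tensor-unit utilisation for Llama 2 7B matmuls.
# All performance numbers below are hypothetical placeholders.

from dataclasses import dataclass

PEAK_MACS_PER_CYCLE = 1024  # assumed tensor-unit peak throughput (hypothetical)


@dataclass
class MatmulShape:
    name: str
    m: int  # rows of activations (tokens being processed)
    k: int  # inner (reduction) dimension
    n: int  # output features


def utilisation(shape: MatmulShape, measured_cycles: float) -> float:
    """Fraction of peak MAC throughput achieved: ideal cycles / measured cycles."""
    ideal_cycles = (shape.m * shape.k * shape.n) / PEAK_MACS_PER_CYCLE
    return ideal_cycles / measured_cycles


# Main weight-matrix shapes in one Llama 2 7B decoder layer,
# assuming a prefill of 512 tokens (an illustrative choice).
shapes = [
    MatmulShape("attention Q/K/V/O projection", 512, 4096, 4096),
    MatmulShape("MLP gate/up projection",       512, 4096, 11008),
    MatmulShape("MLP down projection",          512, 11008, 4096),
]

for s in shapes:
    # measured_cycles would come from hardware counters or the vendor's data;
    # here it is a placeholder set to 1.2x the ideal cycle count.
    measured_cycles = 1.2 * (s.m * s.k * s.n) / PEAK_MACS_PER_CYCLE
    print(f"{s.name}: {utilisation(s, measured_cycles):.0%} of peak")
```

In this framing, a shape reaches high utilisation when the tensor unit spends nearly all of its cycles performing useful multiply-accumulates for that matrix multiply, rather than stalling on data movement.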

“Our new All-In-One AI IP not only delivers outstanding AI performance but is also so much easier to program as there is now just one software stack instead of three. Developers can use the RISC-V stack they already know and they do not have to worry about software-managed local SRAMs, or DMAs,” states Roger Espasa, CEO at Semidynamics.

