Myrtle selected to provide benchmark code for machine learning efficiency test and measurement

13 December 2018


Deep learning company Myrtle Software have been selected to develop a speech recognition benchmark for MLPerf, a new machine learning benchmarking initiative backed by Google, Baidu and others.

A collaboration of technology giants and researchers from numerous universities – including Harvard, Stanford and the University of California, Berkeley – MLPerf aims to drive progress in machine learning by developing a suite of fair and reliable benchmarks for emerging AI hardware and software platforms.

Myrtle have been selected to provide the reference code that will serve as the benchmark standard for the Speech Recognition category. The code is a new implementation of two AI models, ‘DeepSpeech 1’ and ‘DeepSpeech 2’, building on models originally developed by Baidu.
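
As a rough illustration only (the actual MLPerf reference code is open source and is not reproduced here), a DeepSpeech-2-style network pairs a convolutional front end over audio spectrograms with a stack of bidirectional recurrent layers producing per-frame character logits for CTC decoding. A minimal PyTorch sketch, with placeholder layer sizes, might look like this:

    import torch
    import torch.nn as nn

    class DeepSpeech2Sketch(nn.Module):
        """Illustrative sketch, not the MLPerf reference implementation."""
        def __init__(self, n_features=161, n_classes=29, hidden=512):
            super().__init__()
            # Convolutional front end over (batch, 1, features, time) spectrograms
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=(41, 11), stride=(2, 2), padding=(20, 5)),
                nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=(21, 11), stride=(2, 1), padding=(10, 5)),
                nn.ReLU(),
            )
            # With the default 161 features, the two stride-2 convolutions
            # reduce the feature axis to 41, giving an RNN input of 32 * 41
            self.rnn = nn.GRU(input_size=32 * 41, hidden_size=hidden,
                              num_layers=3, bidirectional=True, batch_first=True)
            self.fc = nn.Linear(2 * hidden, n_classes)

        def forward(self, spectrograms):
            x = self.conv(spectrograms)               # (batch, 32, 41, time')
            b, c, f, t = x.size()
            x = x.view(b, c * f, t).transpose(1, 2)   # (batch, time', 32 * 41)
            x, _ = self.rnn(x)
            return self.fc(x)                         # per-frame character logits for CTC

    model = DeepSpeech2Sketch()
    logits = model(torch.randn(4, 1, 161, 200))       # 4 utterances, 200 spectrogram frames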

Peter Baldwin, CEO of Myrtle, said: “We are honoured to be providing the reference implementations for the speech-to-text category of MLPerf. Myrtle has a world-class machine learning (ML) group and we are pleased to be able to provide the code as open source so that everyone can benefit from it.”

This is the first time the AI community has come together to develop a series of reliable, transparent and vendor-neutral ML benchmarks that highlight performance differences between ML algorithms and cloud configurations. The new benchmarking suite will be used to measure training speeds and inference times for a range of ML tasks.
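
To give a sense of the two quantities being measured, the simplified sketch below times training speed (wall-clock time to reach a target quality) and average inference latency. The function names and evaluation hooks are hypothetical placeholders, not MLPerf’s actual measurement harness:

    import time
    import torch

    def time_to_target(model, train_step, eval_metric, target, max_steps=100_000):
        """Seconds of wall-clock training until eval_metric(model) reaches target
        (assumes a higher metric is better, e.g. accuracy)."""
        start = time.perf_counter()
        for step in range(max_steps):
            train_step(model)
            if step % 1_000 == 0 and eval_metric(model) >= target:
                break
        return time.perf_counter() - start

    @torch.no_grad()
    def inference_latency(model, batch, warmup=10, iters=100):
        """Average forward-pass latency in milliseconds over `iters` runs."""
        model.eval()
        for _ in range(warmup):      # warm-up runs exclude one-off startup costs
            model(batch)
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        return 1_000 * (time.perf_counter() - start) / iters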

Myrtle’s Speech Recognition benchmark builds on proven experience in this field. The company’s core R&D team have sped up Mozilla’s Deep Speech implementation 100-fold when training on LibriSpeech, demonstrating their practical experience of training and deploying AI and ML algorithms.

