Breakthrough in deep learning

14 June 2017


Adapting a technique for rapid data lookup, scientists have discovered a way to slash the amount of computation required for deep learning.

The basic building block of a deep-learning network is an artificial neuron. Though originally conceived in the 1950s as models for the biological neurons in living brains, artificial neurons are in fact just mathematical functions: equations that act upon an incoming piece of data and transform it into an output.
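To make that concrete, a single artificial neuron can be written in a few lines. The sketch below is purely illustrative; the weights, bias and ReLU activation are assumptions for the example, not details from the research.

```python
# A minimal sketch of an artificial neuron: a function that maps an
# input vector to one output via a weighted sum and a nonlinearity.
import numpy as np

def neuron(x, weights, bias):
    """Weighted sum of the inputs followed by a ReLU nonlinearity."""
    pre_activation = np.dot(weights, x) + bias
    return max(0.0, pre_activation)

# Example: a neuron with three inputs (illustrative numbers only).
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron(x, w, bias=0.2))
```

A deep network stacks many layers of such neurons, and evaluating every one of them for every input is where the computational cost comes from.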

The research addresses one of the biggest issues facing tech giants such as Google, Facebook and Microsoft as they race to build, train and deploy massive deep-learning networks for a growing body of products as diverse as self-driving cars, language translators and intelligent replies to emails.


Rice University computer scientist Anshumali Shrivastava and graduate student Ryan Spring have shown that techniques from ‘hashing,’ a tried-and-true data-indexing method, can be adapted to dramatically reduce the computational overhead for deep learning. Hashing involves the use of smart hash functions that convert data into manageable small numbers called hashes. These are stored in tables that work much like the index in a printed book.
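The indexing idea itself is simple. In the sketch below, a hash function condenses each item into a small number that selects a bucket in a table, so a lookup inspects only one bucket instead of scanning everything. The hash function, bucket count and data are made-up illustrations, not details from the research.

```python
# Illustrative sketch of hashing as a data-indexing method.
NUM_BUCKETS = 8

def bucket_of(item: str) -> int:
    # Python's built-in hash() condenses the item into an integer;
    # the modulo keeps it within the table's range.
    return hash(item) % NUM_BUCKETS

table = {b: [] for b in range(NUM_BUCKETS)}
for word in ["neuron", "hashing", "backpropagation", "gradient"]:
    table[bucket_of(word)].append(word)

# To find "hashing", inspect only its bucket, much like jumping
# straight to the right entry from a book's index.
print("hashing" in table[bucket_of("hashing")])  # True
```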


"Our approach blends two techniques – a clever variant of locality-sensitive hashing and sparse backpropagation – to reduce computational requirements without significant loss of accuracy," Spring said. "For example, in small-scale tests we found we could reduce computation by as much as 95 percent and still be within 1 percent of the accuracy obtained with standard approaches."


Research by Rice University

