Machine learning: real-time phone image retouching
22 August 2017
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and Google have presented a new system that can automatically retouch images in the style of a professional photographer.
The data captured by today’s digital cameras is often treated as the raw material of a final image. Before uploading pictures to social networking sites, even casual cellphone photographers might spend a minute or two balancing colour and tuning contrast with one of the many popular image-processing programs now available.
This system, on the other hand, is so energy-efficient that it can run on a smartphone, and so fast that it can display retouched images in real time – so that the photographer can see the final version of the image while still framing the shot.
The same system can also speed up existing image-processing algorithms. In tests involving a new Google algorithm for producing high-dynamic-range images, which capture subtleties of colour lost in standard digital images, the new system produced results that were visually indistinguishable from those of the algorithm in about one-tenth the time – again, fast enough for real-time display.
The system relies on machine learning, meaning that it learns to perform tasks by analysing training data; for each new task, it was trained on thousands of pairs of images, raw and retouched.
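As a purely illustrative sketch of this kind of pairwise training (not the researchers' actual neural network, which is far more sophisticated), even a simple affine colour transform can be "learned" from a raw/retouched pair by least squares:

```python
import numpy as np

def fit_affine_colour_transform(raw, retouched):
    """Fit a 4x3 affine transform mapping raw RGB pixels to
    retouched RGB pixels by linear least squares.
    raw, retouched: float arrays of shape (H, W, 3)."""
    X = raw.reshape(-1, 3)
    X = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    Y = retouched.reshape(-1, 3)
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)      # (4, 3) solution
    return A

def apply_transform(image, A):
    """Apply a fitted affine colour transform to an image."""
    X = image.reshape(-1, 3)
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ A).reshape(image.shape)

# Toy "retouching" style: darken slightly and warm the colour balance.
rng = np.random.default_rng(0)
raw = rng.random((32, 32, 3))
retouched = raw * 0.8 + np.array([0.05, 0.02, 0.0])

A = fit_affine_colour_transform(raw, retouched)
pred = apply_transform(raw, A)
print(np.abs(pred - retouched).max())   # near-zero fit error
```

A real retouching style is far from affine, which is why the actual system learns a rich, spatially varying model from thousands of such pairs rather than a single global transform.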
The work builds on an earlier project from the MIT researchers, in which a cellphone would send a low-resolution version of an image to a web server. The server would send back a ‘transform recipe’ that could be used to retouch the high-resolution version of the image on the phone, reducing bandwidth consumption.
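The transform-recipe idea can be sketched in miniature as follows, assuming a deliberately simplified recipe (a per-channel gain and bias fitted on the low-resolution pair; the actual recipes in the earlier work were considerably richer):

```python
import numpy as np

def downsample(img, factor=4):
    """Naive box downsampling by averaging factor x factor blocks."""
    h, w, c = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def server_recipe(low_raw, low_retouched):
    """'Server' side: run the expensive pipeline only at low resolution,
    then summarise its effect as a tiny per-channel gain/bias recipe."""
    gains, biases = [], []
    for ch in range(3):
        g, b = np.polyfit(low_raw[..., ch].ravel(),
                          low_retouched[..., ch].ravel(), 1)
        gains.append(g)
        biases.append(b)
    return np.array(gains), np.array(biases)

def client_apply(high_raw, recipe):
    """'Client' side: apply the compact recipe at full resolution."""
    gains, biases = recipe
    return high_raw * gains + biases

rng = np.random.default_rng(1)
high_raw = rng.random((256, 256, 3))
# Stand-in for an expensive server-side retouching pipeline:
target = high_raw * np.array([0.9, 0.85, 0.8]) + 0.05

low_raw = downsample(high_raw)          # only this travels over the network
low_retouched = downsample(target)
recipe = server_recipe(low_raw, low_retouched)
result = client_apply(high_raw, recipe)
print(np.abs(result - target).max())    # recipe reproduces the full-res edit
```

The bandwidth saving comes from the asymmetry: the phone uploads a small thumbnail and downloads a recipe of a few numbers, yet the edit is applied to every pixel of the full-resolution image locally.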
“Google heard about the work I’d done on the transform recipe,” says Michaël Gharbi, an MIT graduate student in electrical engineering and computer science and first author on both papers. “They themselves did a follow-up on that, so we met and merged the two approaches. The idea was to do everything we were doing before, but, instead of having to process everything on the cloud, to learn it. And the first goal of learning it was to speed it up.”
Jon Barron, a senior scientist at Google Research, said: “This technology has the potential to be very useful for real-time image enhancement on mobile platforms. Using machine learning for computational photography is an exciting prospect but is limited by the severe computational and power constraints of mobile phones. This paper may provide us with a way to sidestep these issues and produce new, compelling, real-time photographic experiences without draining your battery or giving you a laggy viewfinder experience.”
For more information, visit the Massachusetts Institute of Technology website.