1 in 4 Brits in ‘distraction danger’ when wearing headphones, according to research

14 May 2019


Call for ‘hearable’ technologies to help improve safety, as new research from Audio Analytic shows that around a quarter of Brits have put themselves in danger in the last year while wearing headphones.

New research shows that a staggering 24% of adult Brits admit they have put themselves in danger over the past 12 months when wearing headphones or earphones while walking, running or cycling. Examples included stepping out into a road, bumping into somebody or not hearing an emergency vehicle approaching.

Commissioned by Audio Analytic, a leading AI technology company focused on sound recognition, the research explored the risks that people are exposed to every day by being distracted from their surroundings while listening to music, audiobooks, podcasts or the like on the move. According to the report, more than 12 million British adults have put themselves at risk of an accident in the past year because they couldn't hear danger approaching.

The risk increases among younger consumers. More than one in four Americans and one in three Brits aged between 18 and 34 have put themselves in harm's way when wearing headphones, with 37% admitting to at least one hazardous situation over the past year, many of them multiple times. Survey data was not collected for under-18s, but EPDT expects that the proportions for school-age children, who regularly use headphones, would be at least as high as those for the 18-34 age range.

Despite these admissions, the research found the majority of people claim to be aware of the dangers of wearing headphones or earphones in public. 97% of Brits consider it dangerous to wear headphones or earphones when driving, while other activities deemed dangerous include cycling (96%), running (90%) and commuting on public transport (67%).

Dr Chris Mitchell, CEO and founder of Audio Analytic, comments: “A worrying number of people are putting their lives at risk every day when wearing headphones and shutting down the sense of hearing. Many of us wear headphones to block out the world and increase our focus, but that brings the risk of losing awareness of our surroundings. Missing important information in our environment can ultimately expose us to dangerous situations – and more needs to be done to prevent accidents from happening. We believe contextually-aware AI technology can be an enabler of this.”

Nick Lloyd, Acting Head of Road Safety at RoSPA – the UK’s Royal Society for the Prevention of Accidents – comments: “Whether you are driving, cycling, jogging or walking, being distracted increases your risk of an accident. Listening to music on the move is part of modern life, but using headphones reduces your ability to hear approaching vehicles and puts you at increased risk of harm, especially if riding, walking or running on roads without vehicle segregation. Noise cancelling headphones are currently designed to isolate people from their surroundings; incorporating technology that can enable them to actively alert the wearer to possible dangers, and increase awareness, clearly has potential to improve safety.”

Dr Richard Lichenstein from the University of Maryland added: “Our analysis of accident reports showed that a warning was sounded before the crash in 29% of accidents involving pedestrians wearing headphones. Headphone use has become common for a significant proportion of pedestrians – and headphones with noise cancelling features have become more popular. If noise cancelling headphones can now be designed to recognise warning sounds and actively alert the wearer to danger, or automatically alter sound transmission to increase awareness, then there is the potential to reduce injury risk among headphone wearers.”

The survey results highlight demand for headphones to make use of artificial intelligence, with 83% of Brits wanting their audio devices to recognise and alert them to the sound of emergency vehicle sirens. Other important sounds Brits want their headphones to recognise include smoke/fire alarms (88%), the start of important announcements, such as train platform changes (83%), and, perhaps alarmingly, gun shots (81%).


A further 86% of people agreed that dynamic noise cancellation, where headphones preserve battery life by automatically turning noise cancellation on when it is needed, would be useful in a range of locations such as the home, commuting and at the gym. In addition, 57% would purchase hearables that had dynamic sound equalisation, which enables the hearables to optimise the audio experience for different acoustic environments.

Dr Chris Mitchell continued: “Modern headphones with active noise cancellation can increase the risk of distraction danger, but these devices also offer a solution. They are fitted with external microphones, presenting an opportunity to add intelligent sound recognition to ensure contextual awareness. When the earphones themselves can hear and recognise important sounds, like a siren, car horn or even a doorbell or somebody talking, the devices can alert the wearer or instantly change settings to allow more sound through to enhance awareness. In addition, by better understanding the world around us, wearables with sound recognition could also enhance sound quality and better manage battery power. Advanced AI tech can make the next headphones we buy intelligent enough to understand context. We can then lose ourselves in our music without losing touch with the world around us.”
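To illustrate the kind of detect-and-react behaviour Dr Mitchell describes, the short Python sketch below shows one way headphone firmware might route sound-recognition results to alerts or pass-through settings. The classifier and headphone controls are hypothetical placeholders for illustration only, not Audio Analytic's actual ai3 API:

SAFETY_SOUNDS = {"siren", "car_horn", "smoke_alarm"}   # sounds that warrant an alert
AMBIENT_SOUNDS = {"doorbell", "speech"}                # sounds that warrant pass-through only

def on_audio_frame(frame, classifier, headphones):
    """Run sound recognition on one frame from the external microphones
    and adjust the headphones' behaviour accordingly (illustrative only)."""
    labels = set(classifier.recognise(frame))          # e.g. {"siren"}
    if labels & SAFETY_SOUNDS:
        headphones.enable_passthrough()                # let outside sound through
        headphones.play_alert(sorted(labels & SAFETY_SOUNDS)[0])
    elif labels & AMBIENT_SOUNDS:
        headphones.enable_passthrough()                # raise awareness, no alert needed
    else:
        headphones.restore_user_settings()             # back to normal listening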

Other findings from the report include:

•    73% of Brits own two or more pairs of headphones (in addition to those supplied with phones)
•    Almost half of respondents (48%) wear headphones for over two hours a day
•    56% are ‘excited by’ or ‘think it would be useful’ to have artificial intelligence on hearables; only 12% would be worried about it or will avoid it
•    86% of respondents are willing to sacrifice battery life in return for more intelligent features
•    The industry has passed the wired vs wireless tipping point, with wireless devices now more popular among consumers, especially amongst those spending more than £76

Download the full ‘Global 2019 Hearables Report: AI Attitudes and Expectations’.

The survey: headphone owners would give up 24% of battery capacity for AI features
The extensive survey of 6,000 consumers in the UK and US found a strong appetite for artificial intelligence – and in particular sound recognition – on headphones, earphones and earbuds. The majority of consumers were positive about the role of AI with many willing to sacrifice a quarter of battery capacity for valuable capabilities.

Collectively known as ‘hearables’, these devices could gain intelligent, context-sensing capabilities that enhance listening experiences, manage battery life and keep wearers safe.

The survey was carried out by an independent consumer research company on Audio Analytic's behalf and answers some key questions facing brands active in the dynamic hearables market, including:

•    Which form factor (on-ear, in-ear, over-ear) or connectivity (wired, wireless, true wireless) are the most popular among segments of the market?
•    Which of the 30 leading brands are most popular among consumers?
•    How much did consumers last spend and when did they buy?
•    How much do they plan to spend next time and when do they plan to buy again?
•    How much time do consumers spend using headphones, earphones and earbuds and where do they use them the most?
•    How does the amount consumers last spent on headphones or their age affect attitudes to current battery performance and the amount they would sacrifice for AI?
•    How do the various segments of the market feel about artificial intelligence in general?
•    Do consumers want a voice assistant running on their hearables?
•    Have consumers put themselves in danger by wearing headphones and how do they perceive potentially dangerous situations?
•    Do consumers want headphones, earphones or earbuds to recognise sounds around them?
•    Do consumers want their devices to dynamically adjust noise cancellation and sound equalisation settings based on their acoustic environment?

Methodology
The survey was conducted among 6,012 consumers in the UK and USA (3,008 in the UK and 3,004 in the USA).


The research was conducted independently by Sapio Research on behalf of Audio Analytic at the end of March 2019. Respondent breakdown is representative across gender, geography and age. In this study, the margin of error is ±1.3% at the 95% confidence level: the chances are 95 in 100 that a survey result does not differ, plus or minus, by more than 1.3 percentage points from the result that would have been obtained if interviews had been conducted with all persons in both countries.
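For a sample of 6,012 respondents, this figure is consistent with the standard margin-of-error formula for a proportion. A minimal sketch of the calculation in Python, assuming a simple random sample and the worst-case proportion of 50%:

from math import sqrt

n = 6012   # total respondents (UK + USA)
p = 0.5    # worst-case proportion, which maximises the margin of error
z = 1.96   # z-score for a 95% confidence level

margin = z * sqrt(p * (1 - p) / n)
print(f"Margin of error: +/-{margin:.1%}")   # roughly +/-1.3%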

According to UK ONS statistics (https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/datasets/populationestimatesforukenglandandwalesscotlandandnorthernireland), there are 51,312,680 UK residents over 18 years of age. The survey found that 24% of respondents felt they had put themselves in a dangerous situation at least once in the last 12 months while wearing headphones, earphones or earbuds.
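Applying that 24% figure to the ONS population estimate reproduces the ‘more than 12 million’ headline quoted earlier; a quick back-of-envelope check in Python, using only the two figures cited above:

uk_adults = 51_312_680    # UK residents aged 18+ (ONS estimate cited above)
at_risk_share = 0.24      # respondents reporting at least one dangerous situation

print(f"{uk_adults * at_risk_share:,.0f} adults")   # about 12.3 million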

About Audio Analytic
Audio Analytic is the pioneer of intelligent sound recognition, using advanced, edge-based AI to provide consumer technology with a wide sense of hearing. The company is on a mission to map the world of sounds, empowering a new generation of smart products to hear the sounds around us. By transferring our sense of hearing to consumer products and digital personal assistants we give them the ability to react to the world around us, helping satisfy our entertainment, safety, security, wellbeing and communication needs.

Audio Analytic’s ai3 sound recognition software enables device manufacturers and chip companies to equip products with Artificial Audio Intelligence, recognising and automatically responding to our growing list of sound profiles. At the heart of the company is a technology platform made up of two synergistic parts:

•    Alexandria is the world’s largest commercially-usable audio dataset for machine learning, featuring millions of audio files that are structured taxonomically with full data provenance.
•    AuditoryNET is a highly-optimised deep neural network for sound recognition, which models the ideophonic features of sounds.

Behind the core technology is a team of nearly 50 experts in acoustics, data, machine learning and embedded software engineering.

The company has successfully licensed its technology to major global brands, including two of the world’s biggest companies plus Hive, Iliad, Sengled and others.

