Researchers at the University of Washington have developed a watch that can monitor a user’s environment for important sounds, such as a fire alarm or a microwave beeping, identify the sounds, and then inform the user through a subtle vibration.
“This technology provides people with a way to experience sounds that require an action – such as getting food from the microwave when it beeps. But these devices can also enhance people’s experiences and help them feel more connected to the world,” said Dhruv Jain, a researcher involved in the study, who is himself hard of hearing. “I use the watch prototype to notice birds chirping and waterfall sounds when I am hiking. It makes me feel present in nature. My hope is that other deaf and hard-of-hearing people who are interested in sounds will also find SoundWatch helpful.”
During development, the researchers trained a machine-learning classifier on a dataset of common sounds, including a door knock and a dog barking. The smartwatch sends a sound recording to the user’s smartphone, where it is analyzed and identified; if the sound is relevant, the phone sends a message back to the watch so that it can alert the user.
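The phone-side half of that pipeline might look something like the minimal sketch below. The sound labels, confidence values, and `classify_sound` stub are illustrative assumptions, not SoundWatch’s actual model or API:

```python
# Sketch of a phone-side classify-and-notify step (hypothetical, not
# the SoundWatch implementation). A real system would run a trained
# model on audio features; here a lookup table stands in for it.

RELEVANT_SOUNDS = {"fire alarm", "microwave beep", "door knock", "dog bark"}

def classify_sound(audio_clip):
    """Stand-in for an on-phone ML classifier: returns (label, confidence)."""
    fake_model = {
        b"beep": ("microwave beep", 0.92),
        b"knock": ("door knock", 0.88),
        b"hum": ("background noise", 0.40),
    }
    return fake_model.get(audio_clip, ("unknown", 0.0))

def handle_recording(audio_clip, min_confidence=0.75):
    """Decide whether the watch should vibrate, and with what message."""
    label, confidence = classify_sound(audio_clip)
    if label in RELEVANT_SOUNDS and confidence >= min_confidence:
        return {"vibrate": True, "label": label}
    return {"vibrate": False}
```

Only confidently identified, relevant sounds trigger a message back to the watch, which keeps irrelevant background noise from causing constant vibrations.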
This system takes advantage of the superior battery life and processing power of smartphones compared with smartwatches, making classification faster than if it occurred on the watch itself. So far, the researchers have tested the system with eight deaf or hard-of-hearing volunteers, who used it in a variety of settings.
They reported that the system helped increase their awareness of important sounds, such as a car horn, but that it still misclassifies some sounds and can be slow to identify others. Further development should help iron out these issues.
“Disability is highly personal, and we want these devices to allow people to have deeper experiences,” said Jain. “We’re now looking into ways for people to personalize these systems for their own specific needs. We want people to be notified about the sounds they care about – a spouse’s voice versus general speech, the back door opening versus the front door opening, and more.”
The findings were presented on October 28th at the 2020 ACM conference on computing and accessibility.
Via: University of Washington