A new research initiative aims to make voice recognition technology more useful for people with diverse speech patterns and disabilities.
The Speech Accessibility Project, which launched Monday, is spearheaded by the University of Illinois at Urbana-Champaign. Amazon, Apple, Google, Meta and Microsoft are all supporting the project, along with a handful of nonprofit disability organizations.
Speech recognition, which can be found in voice assistants like Siri and Alexa as well as translation tools, has become a part of many people’s everyday lives. But these systems don’t always recognize certain speech patterns, particularly those associated with disabilities. That includes speech affected by Lou Gehrig’s disease (also known as ALS), Parkinson’s disease, cerebral palsy and Down syndrome. As a result, many people may not be able to effectively use these speech technologies.
The Speech Accessibility Project will work to change this by creating a dataset of representative speech samples that can be used to train machine learning models, so they can better understand a range of speech patterns.
“One of the groups that would benefit the most [from speech technology] are people who have physical disabilities of many different kinds. And too often, those are the people for whom the speech technology doesn’t work,” said Mark Hasegawa-Johnson, a professor of electrical and computer engineering at UIUC who’s leading the project.
“Speech technology relies on training data,” he added. “It’s an artificial intelligence technology, so it requires us to have enough data to be able to develop technology that will actually work for people with a particular kind of speech pattern. And too often in the past, we just haven’t had enough information about the speech patterns of people with different kinds of disabilities or with different kinds of atypical speech patterns.”
The project will collect speech samples from people who represent a diversity of speech patterns. Researchers at the University of Illinois will recruit paid volunteers to submit recorded voice samples and will compile them into a “private, de-identified” dataset for training those machine learning models. Initially, the project will focus on American English.
The Davis Phinney Foundation, which supports people with Parkinson’s, and Team Gleason, which serves people with ALS, will support the endeavor. Community organizations will help with recruiting participants and user testing, and will offer feedback throughout the project. Anyone looking to get involved in the Speech Accessibility Project can visit the website.
Many tech companies, including those associated with this project, have been working to make their products and services more accessible to all users.
Google has rolled out apps like Lookout, which helps people who are blind or low-vision identify objects and currency, as well as Project Relate, which is designed to help people with speech impairments more easily communicate with others. Apple launched a People Detection feature in 2020 that lets blind and low-vision iPhone and iPad users know how close someone is to them, and updated its VoiceOver screen reader this year to support over 20 more languages. Facebook, now under parent company Meta, has worked to improve photo descriptions for blind and visually impaired users, while also rolling out automatic captions on Instagram’s IGTV, Stories and feed videos.
Amazon has added various accessibility features to its Echo line of smart speakers and displays, such as speech-to-text and “Show and Tell,” which helps people with vision impairments identify everyday household objects. And Microsoft made waves in 2018 when it launched the Xbox Adaptive Controller, a device designed to help gamers of all abilities play, followed by the Surface Adaptive Kit in 2021, which includes a variety of bumpy decals to identify keycaps, ports and cables. In May, it unveiled an Adaptive Mouse, four Adaptive Buttons and the Microsoft Adaptive Hub for wirelessly pairing various input devices.