Many of us take verbal communication for granted, but vocal and speech difficulties are common symptoms of disorders such as Multiple Sclerosis (MS), Amyotrophic Lateral Sclerosis (ALS, also known as Lou Gehrig's disease), Chronic Obstructive Pulmonary Disease (COPD), and Parkinson's Disease. The cumulative diagnosed population for these disorders reaches into the millions.
We're seeking to develop technology that picks up on small tongue movements, translating fragments of silent speech into machine-readable language.
Our approach uses vibrational sensing to detect when and where the tongue is moving, allowing us to track silent speech patterns and apply machine learning to identify those patterns and interpret them as language.
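To make the pipeline concrete, here is a minimal sketch of classifying windowed vibration features with a nearest-centroid model. The signal data, window size, and labels are illustrative stand-ins, not the project's actual sensing hardware or model:

```python
# Hypothetical sketch: windowed RMS-energy features from a vibration
# signal, classified by nearest centroid. All names and values here are
# illustrative assumptions, not the project's real pipeline.
import numpy as np

def window_features(signal, window=64):
    """Split a 1-D vibration signal into windows and compute RMS energy per window."""
    n = len(signal) // window
    frames = signal[: n * window].reshape(n, window)
    return np.sqrt((frames ** 2).mean(axis=1))  # one feature per window

def fit_centroids(feature_vectors, labels):
    """Average the feature vectors for each label (nearest-centroid training)."""
    return {lab: np.mean([f for f, l in zip(feature_vectors, labels) if l == lab], axis=0)
            for lab in set(labels)}

def predict(centroids, features):
    """Assign the label whose centroid is closest in feature space."""
    return min(centroids, key=lambda lab: np.linalg.norm(centroids[lab] - features))

# Toy data: "strong" tongue-movement patterns vs. "weak" ones.
rng = np.random.default_rng(0)
strong = [window_features(rng.normal(0, 1.0, 640)) for _ in range(5)]
weak = [window_features(rng.normal(0, 0.1, 640)) for _ in range(5)]
model = fit_centroids(strong + weak, ["strong"] * 5 + ["weak"] * 5)
print(predict(model, window_features(rng.normal(0, 1.0, 640))))
```

A real system would use richer features (e.g. spectral coefficients) and a trained sequence model rather than centroids, but the shape of the pipeline, sense, window, featurize, classify, is the same.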