Every word you say is controlled by electrical nerve signals from your brain, which tell your lips, throat, and tongue exactly how to say it. Now, scientists are trying to tap into those silent speech commands.
Listening to the sound of silence. I'm Bob Hirshon and this is Science Update.
You've heard of reading lips. Now, NASA scientists are reading throats. Or more precisely, the nerve signals that tell your throat and tongue to form words.
Chuck Jorgensen is Chief of Neuroengineering at the NASA-Ames Research Center in Mountain View, California. By placing sensors on the chin and Adam's apple, his team can identify several simple words when a speaker merely mouths them, or does even less.
Some folks will choose to have their mouth completely closed, and the only thing that's going on is tiny movements of the tongue or tension that they have in their vocal cords.
He says the technology could help astronauts understand each other on space flights, where differences in the atmosphere and gravity make it hard to speak and hear clearly. It could also be useful in emergencies.
So if someone's muscles, for example, have deteriorated because of microgravity, or if they're physically injured so they can't speak, there is the possibility of directly tapping the nervous system and still controlling the emergency devices that they might need.
Here on Earth, the system could help pilots and air traffic controllers communicate over loud noise. And someday, it might serve as a translator for patients with vocal cord damage. I'm Bob Hirshon for AAAS, the Science Society.
Making Sense of the Research
The good thing about speech is that it's an easy, spontaneous way to communicate that almost everyone is proficient at, and that enables extremely complex, rich ideas to be transmitted quickly. Most alternatives (from typing to Morse code to semaphore flags) require skill, have limited capacity for carrying information, and/or involve a code that has to be learned.
The drawback to speech is that you have to be heard. Normally, this isn't a problem. But for a person with vocal cord damage, that can be a barrier. Ditto for astronauts in space, where the low gravity causes bodily changes that make speech difficult to understand. Air traffic controllers and military personnel are often working in such deafening noise that even their shouts can't be understood.
Jorgensen's system gets around this by tapping into the nerve signals that tell your throat and tongue to form words. It turns out that these signals can be read no matter how quietly a person is speaking. In fact, most of us unconsciously send these signals when we're only thinking about words—for example, when reading silently to ourselves. According to Jorgensen, that's because people learn language by learning to pronounce words out loud. (Notice how small children usually read out loud when they read to themselves.) It's only later that we learn how to think and read in a language without making noise—and we do it by suppressing the signals coming into our throat and tongue. The signals don't go away; we just keep them from resulting in an action.
Right now, the device is in its early stages. Jorgensen has programmed the system to recognize the numbers 0 through 9, and a few short command words like "Go." Using these numbers and commands, the subjects have been able to control a web browser by subvocally spelling out words, letter by letter, using a matrix (with numbered rows and columns) that corresponds to the alphabet. For example, if the letter A were in row 1, column 1, then "1,1" would mean A. The letter B might be "1,2," and so on. It's painstakingly slow, but it works, which means that getting it to recognize more complex words and commands is only a matter of time and technology.
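The row-and-column spelling scheme can be sketched in a few lines of Python. The story doesn't specify the actual grid NASA used, so this hypothetical version simply fills a grid with the letters A through Z in order, five per row:

```python
import string

COLS = 5  # hypothetical layout: 5 letters per row

# Build the spelling matrix: letter at (row, col), both 1-indexed,
# so (1, 1) is A, (1, 2) is B, (2, 1) is F, and so on.
matrix = {}
for i, letter in enumerate(string.ascii_uppercase):
    row, col = divmod(i, COLS)
    matrix[(row + 1, col + 1)] = letter

def decode(pairs):
    """Turn a sequence of subvocalized (row, col) pairs into text."""
    return "".join(matrix[p] for p in pairs)

# As in the example above: "1,1" means A, "1,2" means B.
print(decode([(1, 1), (1, 2)]))  # prints "AB"
```

Spelling out even a short word this way takes several subvocalized digit pairs per letter, which is why the article calls the process painstakingly slow.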
The challenge is in trying to separate the nerve signals that create these words from other nerve signals in the body. In other words, you want to pick up the nerve signals telling your throat and tongue to speak, but not the ones telling you to swallow, or to scrape the peanut butter off the roof of your mouth. As Jorgensen explains, it's "like trying to listen to a conversation in a crowded room."
And speaking of crowded rooms, another possible application for the device is for silent communication in public places. This system may make it possible for people to communicate without speaking out loud, which could serve everyone from party guests trying to remember an old friend's name to world leaders conducting sensitive negotiations. If so, it may not be too long before students start getting detention for talking subvocally in class.
Now try to answer these questions:
- What is subvocal speech?
- How can you measure subvocal speech without detecting sounds?
- What are some other possible applications for this system?
- What advantages would a subvocal speech system have over other forms of non-verbal communication? What are its limitations?
Photos of the system can be found on the NASA-Ames website.
Vocal Vowels, an Exploratorium online exhibit, explores the mechanics of speech production.
The University of Maryland's Vocal Tract Visualization Lab conducts imaging studies of the mechanics of human speech.