Thursday, April 29, 2010

Wheelchairs that Listen: Voice Controlled Wheelchairs

Author Ed Grabianowski writes about how thought-controlled wheelchairs work on the "How Stuff Works" website.

Complete tetraplegia: In many ways, it is the worst possible medical diagnosis, short of imminent death.

Total physical paralysis from the neck down can result from spinal cord injuries or diseases such as Amyotrophic Lateral Sclerosis (also known as Lou Gehrig's disease). Sufferers become totally dependent upon others, and they often feel isolated because they have lost the ability to talk.

Most of us take for granted the ability to walk from one room to another, but for the severely disabled, even this common action requires assistance from someone else.
Imagine, then, that a completely paralyzed person could control a motorized wheelchair simply by thinking about it. By bypassing damaged nerves, such a device could open many doors to independence for disabled people.

In this article, we'll examine a company that is working to make that "what if" into reality. We'll also find out how the same technology could restore speech to people unable to talk.

Whenever you perform a physical action, neurons in your brain generate minute electric signals. These signals move from the brain and travel along axons and dendrites, passing through your nervous system.

When they reach the right area of the body, motor neurons activate the necessary muscles to complete the action. Almost every signal passes through the bundle of nerves inside the spinal cord before moving on to other parts of the body.

When the spinal cord is severely damaged or cut, the break in the nervous system prevents the signals from getting where they need to be. In the case of neuromuscular disease, the motor neurons stop functioning -- the signals are still being sent, but there's no way for the body to translate them into actual muscle action.

How can we solve the problem of a faulty nervous system? One way is to intercept signals from the brain before they are interrupted by a break in the spinal cord or degenerated neurons. This is the solution that the thought-controlled wheelchair will put to use.

Physicist Stephen Hawking suffers from Amyotrophic Lateral Sclerosis. Hawking has near complete paralysis but retains enough muscle control to allow him to press a button with his right hand. A computer screen displays a series of icons that allow control of his wheelchair, doors and appliances in his house. He can select items on the screen by pressing the button when a moving cursor passes over the correct area of the screen.

Hawking speaks in a similar manner. The screen displays the alphabet, with a cursor moving over it. He presses the button at the appropriate letter. Once he has constructed a complete sentence, he can send the text to the voice synthesizer built into his chair.
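The scanning approach described above can be sketched in code. The following Python example is a minimal, hypothetical illustration of single-switch scanning: a highlight steps through the alphabet and a single switch press selects whichever letter is highlighted at that moment. The function names, the polling scheme, and the simulated switch are assumptions for illustration, not details of Hawking's actual system.

```python
import string

def scan_and_select(items, button_pressed):
    """Step a highlight through `items` repeatedly; return the item that is
    highlighted at the moment `button_pressed()` first reports True.

    `button_pressed` stands in for polling the single physical switch.
    A real interface would also display the highlighted item on screen and
    dwell on it for a fixed interval before moving on."""
    while True:
        for item in items:
            if button_pressed():
                return item

def demo():
    # Simulated switch: "press" on the 8th poll, which lands on the letter H.
    polls = iter([False] * 7 + [True])
    letter = scan_and_select(string.ascii_uppercase, lambda: next(polls))
    print("Selected:", letter)

demo()
```

Selected letters would be appended to a buffer until the user builds a complete sentence and sends it to the synthesizer, exactly as described above.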

Hawking’s ability to move a finger on his right hand differentiates him from many other victims of paralysis or disease, who are unable to communicate or interact with control systems at all.

Ambient Audeo System

Michael Callahan and Thomas Coleman founded Ambient, the company that develops and markets the Audeo system.

Audeo was initially envisioned as a way for severely disabled people to communicate, but Ambient expanded the system to include the ability to control a wheelchair or interact with a computer.

The Audeo is based on the idea that neurological signals sent from the brain to the throat area to initiate speech still get there even if the spinal cord is damaged or the motor neurons and muscles in the throat no longer work properly.

Thus, even if you can't form understandable words, neurological signals that represent the intended speech exist. This is known as subvocal speech. Everyone performs subvocal speech -- if you think a word or sentence without saying it out loud, your brain still sends the signals to your mouth and throat.

A lightweight receiver on the subject's neck (a small array of sensors attached near the Adam's apple area) intercepts these signals. It functions much like an electroencephalogram, a device that can receive neurological signals when placed on a subject's scalp.

The Audeo receives specific speech-related signals because it is placed directly on the neck and throat area. The sensors in the receiver detect the tiny electric potentials that represent neurological activity. It then encrypts those signals before sending them wirelessly to a computer.
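The receiver-side flow described here (detect tiny potentials, encrypt them, send them wirelessly) could look roughly like the sketch below. This is purely illustrative Python: the sampling function, the packet layout, and the use of the `cryptography` package's Fernet cipher are assumptions, not details of Ambient's hardware or protocol.

```python
import json
import time
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# In a real device the key would be provisioned securely; here we just generate one.
KEY = Fernet.generate_key()
cipher = Fernet(KEY)

def read_sensor_window(num_samples=256):
    """Placeholder for reading one window of samples from the neck sensors.
    Real hardware would return digitized electrode voltages."""
    return [0.0] * num_samples  # dummy data

def build_packet():
    """Bundle a window of samples with a timestamp, then encrypt it
    before it is handed to the wireless link."""
    payload = json.dumps({
        "timestamp": time.time(),
        "samples": read_sensor_window(),
    }).encode("utf-8")
    return cipher.encrypt(payload)

def send_wirelessly(packet):
    """Stand-in for the radio link to the computer."""
    print(f"sending {len(packet)} encrypted bytes")

send_wirelessly(build_packet())
```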

The computer processes the signals and interprets what the user intended to say or do. The computer then sends command signals to the wheelchair or to a voice processor.

Here is an example of the Audeo system in action: You want to say, "Hello, how are you?" and say it silently in your mind. Your brain sends signals to the motor neurons in your mouth and throat. The signals are the same as the ones that would be sent if you had really said it out loud.

The Audeo receiver placed on your throat registers the signals and sends them to the computer. The computer knows the signals for different words and phonemes (the smallest units of spoken sound), so it interprets the signals and processes them into a sentence. It works in much the same way as voice-recognition software.

The computer finishes the process by sending an electronic signal to a set of speakers. The speakers then "say" the phrase. If you want to control a wheelchair, the process is similar, except you learn certain subvocal phrases that the computer interprets as control commands rather than spoken words.
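The flow just described (signals in, recognized words out, then speech) can be sketched roughly as follows. This is a hypothetical Python illustration, not Ambient's actual software: the feature extraction, the `WORD_TEMPLATES` table, and the nearest-template matching are all assumptions standing in for whatever signal processing and recognition the real Audeo performs.

```python
import numpy as np

# Hypothetical "templates": average feature vectors recorded while the user
# subvocalized each known word during a training phase (values are made up).
WORD_TEMPLATES = {
    "hello": np.array([0.9, 0.1, 0.4]),
    "how":   np.array([0.2, 0.8, 0.3]),
    "are":   np.array([0.5, 0.5, 0.9]),
    "you":   np.array([0.1, 0.3, 0.7]),
}

def extract_features(raw_signal):
    """Reduce a window of raw electrode samples to a small feature vector.
    A real system would use far richer features; this just uses mean
    amplitude, variance, and zero-crossing rate as placeholders."""
    zero_crossings = np.mean(np.abs(np.diff(np.sign(raw_signal)))) / 2
    return np.array([np.mean(np.abs(raw_signal)), np.var(raw_signal), zero_crossings])

def recognize(raw_signal):
    """Match the signal's features to the closest trained word template."""
    features = extract_features(raw_signal)
    return min(WORD_TEMPLATES, key=lambda w: np.linalg.norm(features - WORD_TEMPLATES[w]))

def speak(words):
    """Stand-in for the speech synthesizer: a real system would hand the
    sentence to a text-to-speech engine and play it through speakers."""
    print("Synthesized speech:", " ".join(words))

# Toy usage: pretend we captured four signal windows from the throat sensors.
windows = [np.random.randn(256) for _ in range(4)]
speak([recognize(w) for w in windows])
```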

The user thinks, "forward," and the Audeo processes that signal as a command to move the wheelchair forward. Audeo uses a National Instruments CompactRIO controller to collect the data coming from the sensors.

Embedded LabVIEW software then crunches the numbers and converts the signals into control functions, such as synthesized words or wheelchair controls. Ambient has developed the communication aspect of Audeo to the point that users can create continuous speech, rather than speaking one word at a time.
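For wheelchair control the last step is simpler than speech: a recognized label just has to be mapped to a drive command. The sketch below is again hypothetical; the command names, the `DriveCommand` structure, and the speed values are illustrative assumptions, not Ambient's or National Instruments' actual APIs.

```python
from dataclasses import dataclass

@dataclass
class DriveCommand:
    left_speed: float   # normalized: -1.0 (full reverse) to 1.0 (full forward)
    right_speed: float

# Hypothetical mapping from recognized subvocal labels to motor commands.
COMMAND_MAP = {
    "forward": DriveCommand(0.5, 0.5),
    "back":    DriveCommand(-0.3, -0.3),
    "left":    DriveCommand(0.2, 0.5),
    "right":   DriveCommand(0.5, 0.2),
    "stop":    DriveCommand(0.0, 0.0),
}

def dispatch(label, send_to_motors):
    """Forward a recognized label to the wheelchair's motor controller.
    Unknown labels are ignored rather than guessed, so a misrecognized
    word cannot move the chair."""
    command = COMMAND_MAP.get(label)
    if command is not None:
        send_to_motors(command)

# Toy usage: print the command instead of driving real hardware.
dispatch("forward", send_to_motors=print)
```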

NASA's Subvocal Speech Research

NASA is developing subvocal control for potential use by astronauts. Astronauts on spacewalks or in the International Space Station work in noisy environments doing jobs that often don't leave their hands free to control computer systems.

Voice-recognition programs don't work well in these situations because all the background noise makes voice commands difficult to interpret. NASA hopes the use of subvocal signals will circumvent this problem.

Other Options

For paralyzed patients who retain some mobility of their head and neck, there are other options for controlling a wheelchair. Most of them involve pushing or turning the head, or moving the shoulders. However, those with the most severe kinds of paralysis can't use these control mechanisms. Solutions that detect eye or face movement offer some control, though not without drawbacks.

Eyeglass-mounted sensors can detect movement of the cheek. This can be used like clicking on a computer mouse -- the user raises the cheek when a cursor on a computer screen passes over the correct function or letter.

Other systems detect eye movements for a more robust set of commands. Moving the eyes to the right would signal a wheelchair to turn right, for example. However, these systems require a disciplined user because they tend to mistake regular eye movements for "command" eye movements.

Swedish developer Tobii Technology is creating a system that detects exactly what the eyes are looking at. Instead of waiting for a moving cursor, the user just looks at a computer screen. Looking at various icons allows for communication and even game playing.
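Gaze-driven interfaces like this typically rely on "dwell time" to separate ordinary glances from deliberate selections: an icon is only activated if the gaze rests on it for some minimum period. The Python sketch below illustrates that idea with made-up gaze samples; the timing constants and sample format are assumptions, and it is not based on Tobii's actual software.

```python
DWELL_TIME = 1.0      # seconds the gaze must rest on an icon (assumed)
SAMPLE_PERIOD = 0.1   # seconds between gaze samples (assumed)

def select_by_dwell(gaze_samples):
    """Return the first icon the user dwells on long enough, or None.

    `gaze_samples` is an iterable of icon names (or None when the gaze is
    between icons), one sample per SAMPLE_PERIOD."""
    needed = int(DWELL_TIME / SAMPLE_PERIOD)
    current, count = None, 0
    for icon in gaze_samples:
        if icon is not None and icon == current:
            count += 1
            if count >= needed:
                return icon
        else:
            current, count = icon, 1 if icon is not None else 0
    return None

# Toy usage: brief glances at "GAMES" are ignored; the long look at "SPEAK" selects it.
samples = ["GAMES"] * 3 + [None] * 2 + ["SPEAK"] * 12
print(select_by_dwell(samples))   # -> "SPEAK"
```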

While NASA's system could also be extremely beneficial for disabled people, the agency has other applications in mind, including the ability to speak silently on a cell phone and uses in military or security operations where speaking out loud would be disruptive.

NASA's subvocal system requires two sensors attached to the user's neck, and the system has to be trained to recognize a particular user's subvocal speech patterns. It takes about an hour of work to train six to 10 words, and the system as of 2006 was limited to 25 words and 38 phonemes.

In an early experiment, NASA's system achieved higher than 90 percent accuracy after "training" the software. The system controlled a Web browser and did a Google search for the term "NASA."
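The "training" step and an accuracy figure like the one quoted above can be made concrete with a small sketch: collect several labeled examples per word, average them into templates, then measure how often fresh examples are matched to the right word. This is purely illustrative Python with synthetic data, not NASA's method.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["left", "right", "alpha", "omega", "space", "tenth"]

def word_center(word):
    """Deterministic per-word 'true' feature vector, so training and test
    examples of the same word cluster together (synthetic stand-in)."""
    return np.array([ord(c) % 7 for c in word[:4].ljust(4)], dtype=float)

def synthetic_example(word):
    """Fake feature vector for one utterance: the word's center plus noise."""
    center = word_center(word)
    return center + rng.normal(scale=0.2, size=center.shape)

# "Training": average several examples of each word into a template.
templates = {w: np.mean([synthetic_example(w) for _ in range(20)], axis=0) for w in VOCAB}

def classify(features):
    """Nearest-template classification, as in the earlier speech sketch."""
    return min(templates, key=lambda w: np.linalg.norm(features - templates[w]))

# Evaluate on fresh examples and report accuracy.
tests = [(w, synthetic_example(w)) for w in VOCAB for _ in range(50)]
accuracy = np.mean([classify(x) == w for w, x in tests])
print(f"accuracy: {accuracy:.0%}")
```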
