Flexible device could treat hearing loss without batteries

Some people are born with hearing loss, while others acquire it with age, infections or long-term noise exposure. In many instances, the tiny hairs in the inner ear’s cochlea that allow the brain to recognize electrical pulses as sound are damaged. As a step toward an advanced artificial cochlea, researchers reporting in ACS Nano have developed a conductive membrane that translated sound waves into matching electrical signals when implanted inside a model ear, without requiring external power.

An electrically conductive membrane implanted inside a model ear simulates cochlear hairs by converting sound waves into electrical pulses; wiring connects the prototype to a device that collects the output current signal.

When the hair cells inside the inner ear stop working, there’s no way to reverse the damage. Currently, treatment is limited to hearing aids or cochlear implants. But these devices require external power sources and can have difficulty amplifying speech correctly so that it’s understood by the user. One possible solution is to simulate healthy cochlear hairs, converting noise into the electrical signals processed by the brain as recognizable sounds. To accomplish this, researchers have previously tried self-powered piezoelectric materials, which become charged when they’re compressed by the pressure that accompanies sound waves, and triboelectric materials, which produce friction and static electricity when moved by these waves. However, these devices aren’t easy to fabricate and don’t produce enough signal across the frequencies involved in human speech. So, Yunming Wang and colleagues from the University of Wuhan wanted a simple way to make a material that uses both compression and friction for an acoustic sensing device with high efficiency and sensitivity across a broad range of audio frequencies.
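To make the hybrid mechanism concrete, here is a minimal toy model in Python of a sensor whose output combines a piezoelectric term that tracks sound pressure with a triboelectric term that tracks membrane motion. The gain constants and the simple additive model are illustrative assumptions, not values or equations from the paper.

```python
import numpy as np

# Toy model (illustrative assumptions only): the membrane's output voltage is
# treated as a piezoelectric term that tracks sound pressure plus a
# triboelectric term that tracks the membrane's motion, 90 degrees out of
# phase. The gains below are made-up numbers, not values from the paper.
PIEZO_GAIN = 1.0   # arbitrary units per unit pressure (assumed)
TRIBO_GAIN = 0.5   # arbitrary units per unit sliding motion (assumed)

def output_voltage(t, freq_hz, amplitude=1.0):
    """Estimate the sensor's response to a pure tone of the given frequency."""
    omega = 2 * np.pi * freq_hz
    pressure = amplitude * np.sin(omega * t)  # compression from the sound wave
    motion = amplitude * np.cos(omega * t)    # nanoparticle/polymer sliding
    return PIEZO_GAIN * pressure + TRIBO_GAIN * motion

t = np.linspace(0.0, 0.05, 5000)        # 50 ms of signal
v = output_voltage(t, freq_hz=170.0)    # tone near typical speech frequencies
print(f"peak-to-peak output: {v.max() - v.min():.2f} (arbitrary units)")
```

In this linear sketch the two contributions simply add; the real membrane’s response depends on its microstructure and would be measured, not assumed.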

To create a piezo-triboelectric material, the researchers mixed barium titanate nanoparticles coated with silicon dioxide into a conductive polymer, which they dried into a thin, flexible film. Next, they removed the silicon dioxide shells with an alkaline solution. This step left behind a sponge-like membrane with spaces around the nanoparticles, allowing them to jostle around when hit by sound waves. In tests, the researchers showed that contact between the nanoparticles and the polymer increased the membrane’s electrical output by 55% compared with the pristine polymer. When they sandwiched the membrane between two thin metal grids, the acoustic sensing device produced a maximum electrical signal at 170 hertz, a frequency within the range of most adults’ voices. Finally, the researchers implanted the device inside a model ear and played a music file. They recorded the electrical output and converted it into a new audio file, which closely resembled the original version. The researchers say their self-powered device is sensitive to the wide acoustic range needed to hear most sounds and voices.
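A simple way to check such a device is to locate the frequency of maximum response and score how closely the recovered audio tracks the original. The sketch below shows one plausible analysis in Python; the synthetic signals, array names and sample rate are assumptions standing in for real recordings.

```python
import numpy as np

# Hypothetical analysis sketch: given the device's recorded output and the
# original audio (both as sampled arrays), find the frequency of maximum
# response and score how closely the recovered signal tracks the original.
SAMPLE_RATE = 44_100  # Hz (assumed)

def peak_frequency(signal, rate=SAMPLE_RATE):
    """Return the dominant frequency (Hz) in a recorded signal via FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

def similarity(original, recovered):
    """Pearson correlation between the original and recovered waveforms."""
    a = (original - original.mean()) / original.std()
    b = (recovered - recovered.mean()) / recovered.std()
    return float(np.mean(a * b))

# Example with a synthetic 170 Hz tone standing in for real recordings:
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
original = np.sin(2 * np.pi * 170 * t)
recovered = 0.9 * original + 0.1 * np.random.randn(t.size)  # noisy copy
print(peak_frequency(recovered))                  # ~170.0
print(round(similarity(original, recovered), 3))  # close to 1.0
```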

Sources: https://www.acs.org/ and https://pubmed.ncbi.nlm.nih.gov/

Human Internal Verbalizations Understood Instantly By Computers

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words. The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.
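The article does not detail MIT’s model, so the sketch below is only a stand-in: a k-nearest-neighbors classifier mapping synthetic electrode feature vectors to word labels, illustrating the general signal-to-word step. The channel count, vocabulary, and data are all invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative stand-in only: the actual system uses a trained machine-learning
# model; here a k-nearest-neighbors classifier shows the basic idea of mapping
# per-window electrode features to word labels. All data below is synthetic.
N_ELECTRODES = 7          # assumed channel count for the wearable
WORDS = ["up", "down", "left", "right", "select"]

rng = np.random.default_rng(0)
# Pretend training set: 40 feature vectors per word, one cluster per word.
X = np.vstack([rng.normal(loc=i, scale=0.4, size=(40, N_ELECTRODES))
               for i in range(len(WORDS))])
y = np.repeat(WORDS, 40)

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A new window of neuromuscular features is classified into a word:
new_window = rng.normal(loc=2, scale=0.4, size=(1, N_ELECTRODES))
print(model.predict(new_window))  # -> ['left']
```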

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” adds Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

Source: http://news.mit.edu/