Paper-Thin Loudspeaker

MIT engineers have developed a paper-thin loudspeaker that can turn any surface into an active audio source. This thin-film loudspeaker produces sound with minimal distortion while using a fraction of the energy required by a traditional loudspeaker. The hand-sized loudspeaker the team demonstrated, which weighs about as much as a dime, can generate high-quality sound no matter what surface the film is bonded to.

To achieve these properties, the researchers pioneered a deceptively simple fabrication technique, which requires only three basic steps and can be scaled up to produce ultrathin loudspeakers large enough to cover the inside of an automobile or to wallpaper a room. Used this way, the thin-film loudspeaker could provide active noise cancellation in clamorous environments, such as an airplane cockpit, by generating sound of the same amplitude but opposite phase; the two sounds cancel each other out. The flexible device could also be used for immersive entertainment, perhaps by providing three-dimensional audio in a theater or theme park ride. And because it is lightweight and requires such a small amount of power to operate, the device is well-suited for applications on smart devices where battery life is limited.
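The cancellation principle described above can be sketched numerically: an anti-noise signal with the same amplitude but opposite phase sums with the original noise to silence. This is only a toy illustration of the idea (the sample rate, tone frequency, and amplitude are assumptions, not values from the study):

```python
import numpy as np

fs = 48_000                                  # assumed sample rate in Hz
t = np.arange(fs) / fs                       # one second of samples
noise = 0.8 * np.sin(2 * np.pi * 120 * t)    # e.g. a 120 Hz engine drone

anti_noise = -noise                          # equal amplitude, opposite phase
residual = noise + anti_noise                # what a listener would hear

print(f"peak noise level:    {np.max(np.abs(noise)):.3f}")    # 0.800
print(f"peak residual level: {np.max(np.abs(residual)):.3f}") # 0.000
```

Real noise-cancellation systems must measure the incoming sound and generate the anti-noise in real time, but the underlying arithmetic is exactly this destructive interference.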

MIT researchers have developed an ultrathin loudspeaker that can turn any rigid surface into a high-quality, active audio source. The straightforward fabrication process they introduced can enable the thin-film devices to be produced at scale.

“It feels remarkable to take what looks like a slender sheet of paper, attach two clips to it, plug it into the headphone port of your computer, and start hearing sounds emanating from it. It can be used anywhere. One just needs a smidgeon of electrical power to run it,” says Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology, leader of the Organic and Nanostructured Electronics Laboratory (ONE Lab), director of MIT.nano, and senior author of the paper.

Bulović wrote the paper with lead author Jinchi Han, a ONE Lab postdoc, and co-senior author Jeffrey Lang, the Vitesse Professor of Electrical Engineering.
The research has been published in IEEE Transactions on Industrial Electronics.


Acoustic Fabric

Having trouble hearing? Just turn up your shirt. That’s the idea behind a new “acoustic fabric” developed by engineers at MIT and collaborators at the Rhode Island School of Design. The team has designed a fabric that works like a microphone, converting sound first into mechanical vibrations, then into electrical signals, similarly to how our ears hear. All fabrics vibrate in response to audible sounds, though these vibrations are on the scale of nanometers — far too small to ordinarily be sensed. To capture these imperceptible signals, the researchers created a flexible fiber that, when woven into a fabric, bends with the fabric like seaweed on the ocean’s surface.

The fiber is designed from a “piezoelectric” material that produces an electrical signal when bent or mechanically deformed, providing a means for the fabric to convert sound vibrations into electrical signals. The fabric can capture sounds ranging in decibel level from a quiet library to heavy road traffic, and determine the precise direction of sudden sounds like handclaps. When woven into a shirt’s lining, the fabric can detect a wearer’s subtle heartbeat features. The fibers can also be made to generate sound, such as a recording of spoken words, that another fabric can detect. A study detailing the team’s design appears in Nature. Lead author Wei Yan, who helped develop the fiber as an MIT postdoc, sees many uses for fabrics that hear.
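The quiet-library-to-road-traffic range above spans a large span of physical pressures. A small sketch makes this concrete, assuming typical sound levels of roughly 30 dB SPL for a quiet library and 85 dB SPL for heavy traffic (these specific figures are illustrative assumptions, not values from the study); dB SPL is defined relative to a reference pressure of 20 micropascals:

```python
P_REF = 20e-6  # reference pressure for dB SPL, in pascals

def spl_to_pressure(db_spl: float) -> float:
    """Convert a sound pressure level in dB SPL to RMS pressure (Pa),
    using L = 20 * log10(p / P_REF) inverted."""
    return P_REF * 10 ** (db_spl / 20)

library = spl_to_pressure(30)   # ~6.3e-4 Pa
traffic = spl_to_pressure(85)   # ~0.36 Pa
print(f"quiet library: {library:.2e} Pa")
print(f"heavy traffic: {traffic:.2e} Pa")
print(f"pressure ratio: {traffic / library:.0f}x")  # ~562x
```

A fiber sensor covering this range must therefore resolve pressures differing by nearly three orders of magnitude, which is part of why capturing nanometer-scale fabric vibrations is notable.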

“Wearing an acoustic garment, you might talk through it to answer phone calls and communicate with others,” says Yan, who is now an assistant professor at the Nanyang Technological University in Singapore. “In addition, this fabric can imperceptibly interface with the human skin, enabling wearers to monitor their heart and respiratory condition in a comfortable, continuous, real-time, and long-term manner.”

Yan’s co-authors include Grace Noel, Gabriel Loke, Tural Khudiyev, Juliette Marion, Juliana Cherston, Atharva Sahasrabudhe, Joao Wilbert, Irmandy Wicaksono, and professors John Joannopoulos and Yoel Fink at MIT, along with collaborators from the Rhode Island School of Design (RISD), Lei Zhu from Case Western Reserve University, Chu Ma from the University of Wisconsin at Madison, and Reed Hoyt of the U.S. Army Research Institute of Environmental Medicine.


Turn Stem Cells Into Bone Using Nothing More Than Sound

Researchers have used sound waves to turn stem cells into bone cells, in a tissue engineering advance that could one day help patients regrow bone lost to cancer or degenerative disease. The innovative stem cell treatment from RMIT researchers in Australia offers a smart way forward for overcoming some of the field’s biggest challenges, through the precision power of high-frequency sound waves.

Tissue engineering is an emerging field that aims to rebuild bone and muscle by harnessing the human body’s natural ability to heal itself. A key challenge in regrowing bone is the need for large amounts of bone cells that will thrive and flourish once implanted in the target area. To date, experimental processes to change adult stem cells into bone cells have used complicated and expensive equipment and have struggled with mass production, making widespread clinical application unrealistic. Additionally, the few clinical trials attempting to regrow bone have largely used stem cells extracted from a patient’s bone marrow – a highly painful procedure.

In a new study published in the journal Small, the RMIT research team showed stem cells treated with high-frequency sound waves turned into bone cells quickly and efficiently. Importantly, the treatment was effective on multiple types of cells including fat-derived stem cells, which are far less painful to extract from a patient. Co-lead researcher Dr Amy Gelmi said the new approach was faster and simpler than other methods.

A magnified image showing adult stem cells in the process of turning into bone cells after treatment with high-frequency sound waves. Green colouring shows the presence of collagen, which the cells produce as they become bone cells.

“The sound waves cut the treatment time usually required to get stem cells to begin to turn into bone cells by several days,” said Gelmi, a Vice-Chancellor’s Research Fellow at RMIT. “This method also doesn’t require any special ‘bone-inducing’ drugs and it’s very easy to apply to the stem cells. Our study found this new approach has strong potential to be used for treating the stem cells, before we either coat them onto an implant or inject them directly into the body for tissue engineering.”

The high-frequency sound waves used in the stem cell treatment were generated on a low-cost microchip device developed by RMIT. Co-lead researcher Distinguished Professor Leslie Yeo and his team have spent over a decade researching the interaction of sound waves at frequencies above 10 MHz with different materials. The sound wave-generating device they developed can be used to precisely manipulate cells, fluids or materials. “We can use the sound waves to apply just the right amount of pressure in the right places to the stem cells, to trigger the change process,” Yeo said. “Our device is cheap and simple to use, so could easily be upscaled for treating large numbers of cells simultaneously – vital for effective tissue engineering.”


Could Sound Replace Pacemakers and Insulin Pumps?

Imagine a future in which crippling epileptic seizures, faltering hearts and diabetes could all be treated not with scalpels, stitches and syringes, but with sound. Though it may seem the stuff of science fiction, a new study shows that this has solid real-world potential.

Sonogenetics – the use of ultrasound to non-invasively manipulate neurons and other cells – is a nascent field of study that remains obscure amongst non-specialists, but if it proves successful it could herald a new era in medicine.

In the new study published in Nature Communications, researchers from the Salk Institute for Biological Studies in California, US, describe a significant leap forward for the field, documenting their success in engineering mammalian cells to be activated using ultrasound. The team say their method, which they used to activate human cells in a dish and brain cells inside living mice, paves the way toward non-invasive versions of deep brain stimulation, pacemakers and insulin pumps.

“Going wireless is the future for just about everything,” says senior author Dr Sreekanth Chalasani, an associate professor in Salk’s Molecular Neurobiology Laboratory. “We already know that ultrasound is safe, and that it can go through bone, muscle and other tissues, making it the ultimate tool for manipulating cells deep in the body.”

Chalasani is the mastermind who first established the field of sonogenetics a decade ago. He discovered that ultrasound, sound waves beyond the range of human hearing, can be harnessed to control cells. Since sound is a form of mechanical energy, he surmised that if brain cells could be made mechanically sensitive, then they could be modified with ultrasound.

In 2015 his research group provided the first successful demonstration of the theory, adding a protein to cells of a roundworm, Caenorhabditis elegans, that made them sensitive to low-frequency ultrasound and thus enabled them to be activated at the behest of researchers.

Chalasani and his colleagues set out to search for a new protein that would work in mammals. Although a few proteins were already known to be ultrasound sensitive, no existing candidates were sensitive at the clinically safe frequency of 7 MHz – so this was where the team set their sights. Their search led them to the human channel protein TRPA1. To test whether TRPA1 could activate cell types of clinical interest in response to ultrasound, the team used a gene therapy approach to add the genes for human TRPA1 to a specific group of neurons in the brains of living mice. When they then administered ultrasound to the mice, only the neurons with the TRPA1 genes were activated.

Clinicians treating conditions including Parkinson’s disease and epilepsy currently use deep brain stimulation, which involves surgically implanting electrodes in the brain, to activate certain subsets of neurons. Chalasani says that sonogenetics could one day replace this approach—the next step would be developing a gene therapy delivery method that can cross the blood-brain barrier, something that is already being studied. Perhaps sooner, he says, sonogenetics could be used to activate cells in the heart, as a kind of pacemaker that requires no implantation.


Flexible Device Could Treat Hearing Loss Without Batteries

Some people are born with hearing loss, while others acquire it with age, infections or long-term noise exposure. In many instances, the tiny hairs in the inner ear’s cochlea that convert sound waves into the electrical pulses the brain recognizes as sound are damaged. As a step toward an advanced artificial cochlea, researchers reporting in ACS Nano describe a conductive membrane that translated sound waves into matching electrical signals when implanted inside a model ear, without requiring external power.

An electrically conductive membrane implanted inside a model ear simulates cochlear hairs by converting sound waves into electrical pulses; wiring connects the prototype to a device that collects the output current signal.

When the hair cells inside the inner ear stop working, there’s no way to reverse the damage. Currently, treatment is limited to hearing aids or cochlear implants. But these devices require external power sources and can have difficulty amplifying speech correctly so that it’s understood by the user. One possible solution is to simulate healthy cochlear hairs, converting noise into the electrical signals processed by the brain as recognizable sounds. To accomplish this, previous researchers have tried self-powered piezoelectric materials, which become charged when they’re compressed by the pressure that accompanies sound waves, and triboelectric materials, which produce friction and static electricity when moved by these waves. However, the devices aren’t easy to make and don’t produce enough signal across the frequencies involved in human speech. So, Yunming Wang and colleagues from the University of Wuhan wanted a simple way to fabricate a material that used both compression and friction for an acoustic sensing device with high efficiency and sensitivity across a broad range of audio frequencies.

To create a piezo-triboelectric material, the researchers mixed barium titanate nanoparticles coated with silicon dioxide into a conductive polymer, which they dried into a thin, flexible film. Next, they removed the silicon dioxide shells with an alkaline solution. This step left behind a sponge-like membrane with spaces around the nanoparticles, allowing them to jostle around when hit by sound waves.

In tests, the researchers showed that contact between the nanoparticles and polymer increased the membrane’s electrical output by 55% compared to the pristine polymer. When they sandwiched the membrane between two thin metal grids, the acoustic sensing device produced a maximum electrical signal at 170 hertz, a frequency within the range of most adults’ voices. Finally, the researchers implanted the device inside a model ear and played a music file. They recorded the electrical output and converted it into a new audio file, which displayed a strong similarity to the original version. The researchers say their self-powered device is sensitive to the wide acoustic range needed to hear most sounds and voices.
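The record-and-reconstruct test above can be sketched in miniature: simulate a device output dominated by the membrane's 170 Hz response, then locate that peak with an FFT, as one would when analyzing the recorded electrical signal. All signal parameters here (sample rate, noise level) are assumptions for illustration, not values from the paper:

```python
import numpy as np

fs = 8_000                            # assumed sample rate in Hz
t = np.arange(fs) / fs                # one second of samples
rng = np.random.default_rng(0)

# Simulated electrical output: a 170 Hz tone (the device's reported
# response peak) plus a small amount of measurement noise.
output = np.sin(2 * np.pi * 170 * t) + 0.1 * rng.standard_normal(fs)

# Find the dominant frequency in the recorded signal.
spectrum = np.abs(np.fft.rfft(output))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {peak:.0f} Hz")   # 170 Hz
```

With one second of data the FFT bins are spaced 1 Hz apart, so the 170 Hz component lands exactly on a bin and stands far above the noise floor.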