Alzheimer’s: Why Certain Frequencies Blast Amyloid Plaques

In 1906, a German psychiatrist and neuroanatomist performed an autopsy on the brain of a patient who displayed abnormal symptoms while alive. Over the course of several years, this woman’s behavior, as well as her speech and language, became erratic. She forgot who people were, became paranoid, and, as her condition worsened, suffered total memory loss. When her doctor dissected her brain, he found unusual plaques and neurofibrillary tangles in her cerebral cortex. He quickly alerted his colleagues to this “peculiar severe disease.” The doctor was Alois Alzheimer. More than a century later, the medical community is still trying to understand Alzheimer’s disease (AD), a neurodegenerative brain disorder. But early studies have demonstrated that we may be able to mitigate some of the damage created by AD simply by exposing people to certain waves of sound and light.

Li-Huei Tsai, a neuroscientist and the director of the Picower Institute for Learning and Memory in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, has spent the past three decades working to understand and treat neurodegenerative diseases, in particular AD.

“It has not turned out to be a disease attributable to just one runaway protein or just one gene,” Li-Huei explained in a 2021 op-ed in The Boston Globe. “In fact, although Alzheimer’s is referred to by a single name, we in the Alzheimer’s research community don’t yet know how many different types of Alzheimer’s there may be, and, therefore, how many different treatments might ultimately prove necessary across the population.”

AD researchers have traditionally pursued small-molecule pharmaceuticals and immunotherapies that target a single errant protein, amyloid. But Li-Huei believes Alzheimer’s to be a broader systemic breakdown, and she has thought about more encompassing, and hopefully more effective, treatments. For several years now, her lab has pursued novel approaches using the aesthetic interventions of light and sound. We know the influence that light and sound have on the human body. People suffering from seasonal affective disorder benefit from light therapy. Blue light before bed stimulates our brain and disrupts sleep. Sound vibrations change our physiology. But how might this work on a brain experiencing AD?
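In published work, Tsai’s lab has focused on non-invasive sensory stimulation at gamma frequencies, most prominently 40 Hz light flicker and 40 Hz sound, an approach sometimes called GENUS (gamma entrainment using sensory stimuli). As a purely illustrative sketch of what such a stimulus looks like in software (not the lab’s actual protocol, and with arbitrary choices such as the 1 kHz carrier tone and 10-second duration), the Python snippet below generates a 40 Hz amplitude-modulated tone and a 40 Hz click train:

```python
import numpy as np
from scipy.io import wavfile

# Illustrative sketch of a 40 Hz auditory stimulus of the kind used in
# gamma-frequency sensory stimulation studies. All parameters here
# (carrier tone, duration, click width) are arbitrary demonstration
# choices, not the protocol used by the Tsai lab.
SAMPLE_RATE = 44_100          # samples per second
DURATION_S = 10               # seconds of stimulus
GAMMA_HZ = 40                 # stimulation frequency

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE

# Option 1: a 1 kHz tone amplitude-modulated at 40 Hz.
carrier = np.sin(2 * np.pi * 1000 * t)
envelope = 0.5 * (1 + np.sin(2 * np.pi * GAMMA_HZ * t))  # swings 0..1 at 40 Hz
am_tone = carrier * envelope

# Option 2: a 40 Hz click train made of 1 ms clicks.
click_train = np.zeros_like(t)
period = SAMPLE_RATE // GAMMA_HZ          # samples between click onsets
click_len = int(0.001 * SAMPLE_RATE)      # 1 ms click
for start in range(0, len(t), period):
    click_train[start:start + click_len] = 1.0

# Write both stimuli to WAV files for playback.
wavfile.write("am_40hz.wav", SAMPLE_RATE, (am_tone * 32767).astype(np.int16))
wavfile.write("clicks_40hz.wav", SAMPLE_RATE, (click_train * 32767).astype(np.int16))
```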

Source: https://www.fastcompany.com/

Ultrasound metasurfaces move large objects without touching them

Engineers at the University of Minnesota have developed a new system that can move objects without making physical contact. The technique involves ultrasound waves acting on specialized surfaces to push or even pull objects in different directions, which could help in manufacturing and robotics.

AI Diagnoses Illness Based On the Sound of Your Voice

Voices offer lots of information. Turns out, they can even help diagnose an illness — and researchers are working on an app for that. The National Institutes of Health is funding a massive research project to collect voice data and develop an AI that could diagnose people based on their speech. Everything from your vocal cord vibrations to breathing patterns when you speak offers potential information about your health, says laryngologist Dr. Yael Bensoussan, the director of the University of South Florida’s Health Voice Center and a leader on the study.

“We asked experts: Well, if you close your eyes when a patient comes in, just by listening to their voice, can you have an idea of the diagnosis they have?” Bensoussan says. “And that’s where we got all our information.”

Someone who speaks low and slowly might have Parkinson’s disease. Slurring is a sign of a stroke. Scientists could even diagnose depression or cancer. The team will start by collecting the voices of people with conditions in five areas: neurological disorders, voice disorders, mood disorders, respiratory disorders and pediatric disorders like autism and speech delays. The project is part of the NIH’s Bridge to Artificial Intelligence (Bridge2AI) program, which launched over a year ago with more than $100 million in funding from the federal government, with the goal of creating large-scale health care databases for precision medicine.
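The project has not published its modeling pipeline, but the general recipe for this kind of work is to turn each recording into acoustic features (pitch, timbre, a rough voicing measure) and feed them to a classifier. The sketch below is a hypothetical, minimal version of that idea using the open-source librosa and scikit-learn libraries; the file names, labels, and feature choices are all illustrative assumptions, not the Bridge2AI approach.

```python
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def voice_features(path: str) -> np.ndarray:
    """Extract a small, illustrative feature vector from one recording."""
    y, sr = librosa.load(path, sr=16_000)                 # mono, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # timbre summary
    f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # pitch track
    zcr_mean = librosa.feature.zero_crossing_rate(y).mean()  # crude noisiness cue
    return np.concatenate([
        mfcc.mean(axis=1),
        [np.nanmean(f0), np.nanstd(f0)],                  # pitch level and variability
        [zcr_mean],
    ])

# Hypothetical labeled recordings (0 = control, 1 = condition of interest).
paths = ["control_01.wav", "control_02.wav", "patient_01.wav", "patient_02.wav"]
labels = np.array([0, 0, 1, 1])

X = np.vstack([voice_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X)[:, 1])   # toy in-sample probabilities
```

A real screening model would of course need far more data, careful validation across recording conditions, and clinically meaningful labels; the point here is only the shape of the pipeline.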

“We were really lacking large, what we call, open source databases,” Bensoussan says. “Every institution kind of has their own database of data. But to create these networks and these infrastructures was really important to then allow researchers from other generations to use this data.” This isn’t the first time researchers have used AI to study human voices, but it’s the first time data will be collected on this level — the project is a collaboration between USF, Cornell and 10 other institutions. “We saw that everybody was kind of doing very similar work but always at a smaller level,” Bensoussan says. “We needed to do something as a team and build a network.”

The ultimate goal is an app that could help bridge gaps in access for rural or underserved communities, by helping general practitioners refer patients to specialists. Long term, iPhones or Alexa could detect changes in your voice, such as a cough, and advise you to seek medical attention.

Source: https://www.npr.org/

Sound Plus Electrical Stimulation to Treat Chronic Pain

A University of Minnesota (U of M) Twin Cities-led team has found that electrical stimulation of the body combined with sound activates the brain’s somatosensory or “tactile” cortex, increasing the potential for using the technique to treat chronic pain and other sensory disorders. The researchers tested the non-invasive technique on animals and are planning clinical trials on humans in the near future. During the study, published in the Journal of Neural Engineering, the researchers played broadband sound while electrically stimulating different parts of the body in guinea pigs. They found that the combination of the two activated neurons in the brain’s somatosensory cortex, which is responsible for touch and pain sensations throughout the body.
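The study’s stimulus-generation details are not given here, but the acoustic half of the pairing, broadband sound, is easy to approximate in software. The sketch below is a generic illustration with arbitrary parameters (band limits, duration, playback level), not the stimulus used in the U of M experiments:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

# Minimal sketch of a broadband acoustic stimulus. The 0.1-16 kHz band,
# 5-second duration, and output level are illustration choices only.
SAMPLE_RATE = 44_100
DURATION_S = 5

rng = np.random.default_rng(0)
white = rng.standard_normal(SAMPLE_RATE * DURATION_S)   # flat-spectrum noise

# Band-limit the noise so playback hardware is not asked to reproduce
# energy outside the audible range.
sos = butter(4, [100, 16_000], btype="bandpass", fs=SAMPLE_RATE, output="sos")
broadband = sosfiltfilt(sos, white)
broadband /= np.max(np.abs(broadband))                  # normalize to +/-1

wavfile.write("broadband_noise.wav", SAMPLE_RATE,
              (0.3 * broadband * 32767).astype(np.int16))
```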

While the researchers used needle stimulation in their experiments, one could achieve similar results using electrical stimulation devices, such as transcutaneous electrical nerve stimulation (TENS) units, which are widely available. The researchers hope that their findings will lead to a treatment for chronic pain that’s safer and more accessible than drug-based approaches.

“Chronic pain is a huge issue for a lot of people, and for most, it’s not sufficiently treatable,” said Cory Gloeckner, lead author on the paper, a Ph.D. alumnus of the U of M Department of Biomedical Engineering and an assistant professor at John Carroll University. “Right now, one of the ways that we try to treat pain is opioids, and we all know that doesn’t work out well for many people. This, on the other hand, is a non-invasive, simple application. It’s not some expensive medical device that you have to buy in order to treat your pain. It’s something that we think would be available to pretty much anyone because of its low cost and simplicity.”

The researchers plan to continue investigating this “multimodal” approach to treating different neurological conditions, potentially integrating music therapy in the future to see how they can further modify the somatosensory cortex.

Source: https://twin-cities.umn.edu/

Paper-Thin Loudspeaker

MIT engineers have developed a paper-thin loudspeaker that can turn any surface into an active audio source. This thin-film loudspeaker produces sound with minimal distortion while using a fraction of the energy required by a traditional loudspeaker. The hand-sized loudspeaker the team demonstrated, which weighs about as much as a dime, can generate high-quality sound no matter what surface the film is bonded to.

To achieve these properties, the researchers pioneered a deceptively simple fabrication technique, which requires only three basic steps and can be scaled up to produce ultrathin loudspeakers large enough to cover the inside of an automobile or to wallpaper a room. Used this way, the thin-film loudspeaker could provide active noise cancellation in clamorous environments, such as an airplane cockpit, by generating sound of the same amplitude but opposite phase; the two sounds cancel each other out. The flexible device could also be used for immersive entertainment, perhaps by providing three-dimensional audio in a theater or theme park ride. And because it is lightweight and requires such a small amount of power to operate, the device is well-suited for applications on smart devices where battery life is limited.
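The cancellation principle described above can be shown numerically in a few lines. The following is a minimal sketch on synthetic tones (with no real-time noise estimation or acoustic modeling), not how a production noise-cancelling system is built:

```python
import numpy as np

# Destructive interference: an anti-phase copy of a waveform sums with the
# original to (ideally) silence. Real active noise cancellation must also
# estimate and track the noise in real time, which this sketch skips.
SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE          # one second of samples

noise = 0.8 * np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 310 * t)
anti_noise = -noise                               # same amplitude, opposite phase

residual = noise + anti_noise
print("peak noise level:    ", np.max(np.abs(noise)))
print("peak residual level: ", np.max(np.abs(residual)))   # essentially zero

# With a small timing (phase) error the cancellation degrades noticeably:
delay_samples = 5                                 # ~0.1 ms at 48 kHz
residual_delayed = noise + np.roll(anti_noise, delay_samples)
print("peak residual with 0.1 ms timing error:", np.max(np.abs(residual_delayed)))
```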

“It feels remarkable to take what looks like a slender sheet of paper, attach two clips to it, plug it into the headphone port of your computer, and start hearing sounds emanating from it. It can be used anywhere. One just needs a smidgeon of electrical power to run it,” says Vladimir Bulović, the Fariborz Maseeh Chair in Emerging Technology, leader of the Organic and Nanostructured Electronics Laboratory (ONE Lab), director of MIT.nano, and senior author of the paper.

Bulović wrote the paper with lead author Jinchi Han, a ONE Lab postdoc, and co-senior author Jeffrey Lang, the Vitesse Professor of Electrical Engineering.
The research has been published in IEEE Transactions on Industrial Electronics.

Source: https://news.mit.edu

Acoustic Fabric

Having trouble hearing? Just turn up your shirt. That’s the idea behind a new “acoustic fabric” developed by engineers at MIT and collaborators at the Rhode Island School of Design. The team has designed a fabric that works like a microphone, converting sound first into mechanical vibrations, then into electrical signals, similarly to how our ears hear. All fabrics vibrate in response to audible sounds, though these vibrations are on the scale of nanometers — far too small to ordinarily be sensed. To capture these imperceptible signals, the researchers created a flexible fiber that, when woven into a fabric, bends with the fabric like seaweed on the ocean’s surface.

The fiber is designed from a “piezoelectric” material that produces an electrical signal when bent or mechanically deformed, providing a means for the fabric to convert sound vibrations into electrical signals. The fabric can capture sounds ranging in decibel level from a quiet library to heavy road traffic, and determine the precise direction of sudden sounds like handclaps. When woven into a shirt’s lining, the fabric can detect a wearer’s subtle heartbeat features. The fibers can also be made to generate sound, such as a recording of spoken words, that another fabric can detect. A study detailing the team’s design appears in Nature. Lead author Wei Yan, who helped develop the fiber as an MIT postdoc, sees many uses for fabrics that hear.
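The summary does not describe how the fabric localizes sounds, but a standard way to estimate the direction of a sharp sound with two spatially separated sensors is to measure the time difference of arrival via cross-correlation. The sketch below illustrates that generic technique on synthetic signals; the 0.20 m sensor spacing and all signal parameters are assumptions for the example, not values from the MIT study:

```python
import numpy as np
from scipy.signal import correlate

# Generic time-difference-of-arrival (TDOA) estimate for a sharp sound
# (e.g., a handclap) recorded by two separated acoustic sensors. This is
# a textbook technique, not the processing used in the fabric paper.
SAMPLE_RATE = 48_000
SPEED_OF_SOUND = 343.0        # m/s
SENSOR_SPACING = 0.20         # metres between the two sensors (assumed)

# Synthetic clap: a short decaying burst, arriving 12 samples later at sensor B.
clap = np.random.default_rng(1).standard_normal(200) * np.exp(-np.arange(200) / 30)
pad = np.zeros(1000)
true_delay = 12
sensor_a = np.concatenate([pad, clap, pad])
sensor_b = np.roll(sensor_a, true_delay)

# Cross-correlate; the lag of the peak is the estimated sample delay.
xcorr = correlate(sensor_b, sensor_a, mode="full")
lag = np.argmax(xcorr) - (len(sensor_a) - 1)
tdoa = lag / SAMPLE_RATE

# Convert the delay to an angle of arrival relative to the sensor baseline.
angle = np.degrees(np.arcsin(np.clip(tdoa * SPEED_OF_SOUND / SENSOR_SPACING, -1, 1)))
print(f"estimated delay: {lag} samples, angle of arrival: {angle:.1f} degrees")
```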

“Wearing an acoustic garment, you might talk through it to answer phone calls and communicate with others,” says Yan, who is now an assistant professor at the Nanyang Technological University in Singapore. “In addition, this fabric can imperceptibly interface with the human skin, enabling wearers to monitor their heart and respiratory condition in a comfortable, continuous, real-time, and long-term manner.”

Yan’s co-authors include Grace Noel, Gabriel Loke, Tural Khudiyev, Juliette Marion, Juliana Cherston, Atharva Sahasrabudhe, Joao Wilbert, Irmandy Wicaksono, and professors John Joannopoulos and Yoel Fink at MIT, along with collaborators from the Rhode Island School of Design (RISD), Lei Zhu from Case Western Reserve University, Chu Ma from the University of Wisconsin at Madison, and Reed Hoyt of the U.S. Army Research Institute of Environmental Medicine.

Source: https://news.mit.edu/

Turn Stem Cells Into Bone Using Nothing More Than Sound

Researchers have used sound waves to turn stem cells into bone cells, in a tissue engineering advance that could one day help patients regrow bone lost to cancer or degenerative disease. The innovative stem cell treatment from RMIT researchers in Australia offers a smart way forward for overcoming some of the field’s biggest challenges, through the precision power of high-frequency sound waves.

Tissue engineering is an emerging field that aims to rebuild bone and muscle by harnessing the human body’s natural ability to heal itself. A key challenge in regrowing bone is the need for large amounts of bone cells that will thrive and flourish once implanted in the target area. To date, experimental processes to change adult stem cells into bone cells have used complicated and expensive equipment and have struggled with mass production, making widespread clinical application unrealistic. Additionally, the few clinical trials attempting to regrow bone have largely used stem cells extracted from a patient’s bone marrow – a highly painful procedure.

In a new study published in the journal Small, the RMIT research team showed stem cells treated with high-frequency sound waves turned into bone cells quickly and efficiently. Importantly, the treatment was effective on multiple types of cells including fat-derived stem cells, which are far less painful to extract from a patient. Co-lead researcher Dr Amy Gelmi said the new approach was faster and simpler than other methods.

A magnified image showing adult stem cells in the process of turning into bone cells after treatment with high-frequency sound waves. Green colouring shows the presence of collagen, which the cells produce as they become bone cells.

“The sound waves cut the treatment time usually required to get stem cells to begin to turn into bone cells by several days,” said Gelmi, a Vice-Chancellor’s Research Fellow at RMIT. “This method also doesn’t require any special ‘bone-inducing’ drugs and it’s very easy to apply to the stem cells. Our study found this new approach has strong potential to be used for treating the stem cells, before we either coat them onto an implant or inject them directly into the body for tissue engineering.”

The high-frequency sound waves used in the stem cell treatment were generated on a low-cost microchip device developed by RMIT. Co-lead researcher Distinguished Professor Leslie Yeo and his team have spent over a decade researching the interaction of sound waves at frequencies above 10 MHz with different materials. The sound wave-generating device they developed can be used to precisely manipulate cells, fluids or materials. “We can use the sound waves to apply just the right amount of pressure in the right places to the stem cells, to trigger the change process,” Yeo said. “Our device is cheap and simple to use, so could easily be upscaled for treating large numbers of cells simultaneously – vital for effective tissue engineering.”

Source: https://www.rmit.edu.au/

Could Sound Replace Pacemakers and Insulin Pumps?

Imagine a future in which crippling epileptic seizures, faltering hearts and diabetes could all be treated not with scalpels, stitches and syringes, but with sound. Though it may seem the stuff of science fiction, a new study shows that this has solid real-world potential.

Sonogenetics – the use of ultrasound to non-invasively manipulate neurons and other cells – is a nascent field of study that remains obscure amongst non-specialists, but if it proves successful it could herald a new era in medicine.

In the new study published in Nature Communications, researchers from the Salk Institute for Biological Studies in California, US, describe a significant leap forward for the field, documenting their success in engineering mammalian cells to be activated using ultrasound. The team say their method, which they used to activate human cells in a dish and brain cells inside living mice, paves the way toward non-invasive versions of deep brain stimulation, pacemakers and insulin pumps.

“Going wireless is the future for just about everything,” says senior author Dr Sreekanth Chalasani, an associate professor in Salk’s Molecular Neurobiology Laboratory. “We already know that ultrasound is safe, and that it can go through bone, muscle and other tissues, making it the ultimate tool for manipulating cells deep in the body.”

Chalasani is the mastermind who first established the field of sonogenetics a decade ago. He discovered that ultrasound (sound waves beyond the range of human hearing) can be harnessed to control cells. Since sound is a form of mechanical energy, he surmised that if brain cells could be made mechanically sensitive, then they could be modified with ultrasound.

In 2015 his research group provided the first successful demonstration of the theory, adding a protein to cells of a roundworm, Caenorhabditis elegans, that made them sensitive to low-frequency ultrasound and thus enabled them to be activated at the behest of researchers.

Chalasani and his colleagues set out to search for a new protein that would work in mammals. Although a few proteins were already known to be ultrasound sensitive, no existing candidates were sensitive at the clinically safe frequency of 7 MHz – so this was where the team set their sights. Their search identified TRPA1, a channel protein that activates a range of human cell types, as a promising candidate. To test whether the TRPA1 protein could activate cell types of clinical interest in response to ultrasound, the team used a gene therapy approach to add the genes for human TRPA1 to a specific group of neurons in the brains of living mice. When they then administered ultrasound to the mice, only the neurons with the TRPA1 genes were activated.

Clinicians treating conditions including Parkinson’s disease and epilepsy currently use deep brain stimulation, which involves surgically implanting electrodes in the brain, to activate certain subsets of neurons. Chalasani says that sonogenetics could one day replace this approach—the next step would be developing a gene therapy delivery method that can cross the blood-brain barrier, something that is already being studied. Perhaps sooner, he says, sonogenetics could be used to activate cells in the heart, as a kind of pacemaker that requires no implantation.

Source: https://www.salk.edu/
AND
https://cosmosmagazine.com/

Flexible device could treat hearing loss without batteries

Some people are born with hearing loss, while others acquire it with age, infections or long-term noise exposure. In many instances, the tiny hairs in the inner ear’s cochlea that allow the brain to recognize electrical pulses as sound are damaged. As a step toward an advanced artificial cochlea, researchers report in ACS Nano a conductive membrane that translated sound waves into matching electrical signals when implanted inside a model ear, without requiring external power.

An electrically conductive membrane implanted inside a model ear simulates cochlear hairs by converting sound waves into electrical pulses; wiring connects the prototype to a device that collects the output current signal.

When the hair cells inside the inner ear stop working, there’s no way to reverse the damage. Currently, treatment is limited to hearing aids or cochlear implants. But these devices require external power sources and can have difficulty amplifying speech correctly so that it’s understood by the user. One possible solution is to simulate healthy cochlear hairs, converting noise into the electrical signals processed by the brain as recognizable sounds. To accomplish this, previous researchers have tried self-powered piezoelectric materials, which become charged when they’re compressed by the pressure that accompanies sound waves, and triboelectric materials, which produce friction and static electricity when moved by these waves. However, the devices aren’t easy to make and don’t produce enough signal across the frequencies involved in human speech. So, Yunming Wang and colleagues from the University of Wuhan wanted a simple way to fabricate a material that used both compression and friction for an acoustic sensing device with high efficiency and sensitivity across a broad range of audio frequencies.

To create a piezo-triboelectric material, the researchers mixed barium titanate nanoparticles coated with silicon dioxide into a conductive polymer, which they dried into a thin, flexible film. Next, they removed the silicon dioxide shells with an alkaline solution. This step left behind a sponge-like membrane with spaces around the nanoparticles, allowing them to jostle around when hit by sound waves. In tests, the researchers showed that contact between the nanoparticles and polymer increased the membrane’s electrical output by 55% compared to the pristine polymer. When they sandwiched the membrane between two thin metal grids, the acoustic sensing device produced a maximum electrical signal at 170 hertz, a frequency within the range of most adults’ voices. Finally, the researchers implanted the device inside a model ear and played a music file. They recorded the electrical output and converted it into a new audio file, which displayed a strong similarity to the original version. The researchers say their self-powered device is sensitive to the wide acoustic range needed to hear most sounds and voices.
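Two of the reported checks, locating the frequency of maximum output and judging how closely the reconstructed audio matches the original, can be expressed generically in a short script. The sketch below uses synthetic placeholder data (the Gaussian “resonance” centered at 170 Hz and the noisy test tone are invented for illustration), not the authors’ measurements or analysis code:

```python
import numpy as np

# 1) Peak of a (synthetic) frequency response measured at swept frequencies.
freqs_hz = np.linspace(20, 3000, 300)
response = np.exp(-((freqs_hz - 170) / 120) ** 2)     # placeholder resonance near 170 Hz
peak_freq = freqs_hz[np.argmax(response)]
print(f"peak electrical output near {peak_freq:.0f} Hz")

# 2) Normalized cross-correlation between original and reconstructed audio.
def similarity(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Return ~1.0 when the two waveforms have the same shape."""
    a = (original - original.mean()) / original.std()
    b = (reconstructed - reconstructed.mean()) / reconstructed.std()
    return float(np.dot(a, b) / len(a))

rng = np.random.default_rng(2)
original = np.sin(2 * np.pi * 170 * np.arange(48_000) / 48_000)
reconstructed = original + 0.1 * rng.standard_normal(original.size)  # mildly noisy copy
print(f"waveform similarity: {similarity(original, reconstructed):.2f}")
```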

Source: https://www.acs.org/
AND
https://pubmed.ncbi.nlm.nih.gov/