Brain Implant Could Be the Next Computer Mouse

Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig’s disease, which causes progressive paralysis. She can still make sounds, but her words have become unintelligible, leaving her reliant on a writing board or iPad to communicate.

Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases like “I don’t own my home” and “It’s just tough” at a rate approaching normal speech.

That is the claim in a paper published over the weekend on the website bioRxiv by a team at Stanford University. The study has not been formally reviewed by other researchers. The scientists say their volunteer, identified only as “subject T12,” smashed previous records by using the brain-reading implant to communicate at a rate of 62 words a minute, three times the previous best.

Philip Sabes, a researcher at the University of California, San Francisco, who was not involved in the project, called the results a “big breakthrough” and said that experimental brain-reading technology could be ready to leave the lab and become a useful product soon.

“The performance in this paper is already at a level which many people who cannot speak would want, if the device were ready,” says Sabes. “People are going to want this.” People without speech deficits typically talk at a rate of about 160 words a minute. Even in an era of keyboards, thumb-typing, emojis, and internet abbreviations, speech remains the fastest form of human-to-human communication.

The new research was carried out at Stanford University. The preprint, published January 21, began drawing extra attention on Twitter and other social media because of the death the same day of its co-lead author, Krishna Shenoy, from pancreatic cancer.

Shenoy had devoted his career to improving the speed of communication through brain interfaces, carefully maintaining a list of records on his laboratory website. In 2019, another volunteer Shenoy worked with managed to use his thoughts to type at a rate of 18 words a minute, a record performance at the time.

Source: https://www.technologyreview.com/

Humanity May Reach Singularity Within Just 7 Years

In the world of artificial intelligence, the idea of “singularity” looms large. This slippery concept describes the moment AI advances beyond human control and rapidly transforms society. The tricky thing about AI singularity (and why it borrows terminology from black hole physics) is that it’s enormously difficult to predict where it begins and nearly impossible to know what’s beyond this technological “event horizon.”

However, some AI researchers are on the hunt for signs of reaching singularity, measured by AI progress toward skills and abilities comparable to a human’s. One such metric, defined by Translated, a Rome-based translation company, is an AI’s ability to translate speech with the accuracy of a human. Language is one of the most difficult challenges for AI, so a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).

“That’s because language is the most natural thing for humans,” Translated CEO Marco Trombetti said at a conference in Orlando, Florida, in December. “Nonetheless, the data Translated collected clearly shows that machines are not that far from closing the gap.”

The company tracked its AI’s performance from 2014 to 2022 using a metric called “Time to Edit,” or TTE, which measures the time it takes professional human editors to fix AI-generated translations compared with human ones. Over that eight-year period, analyzing over a billion post-edits, Translated’s AI showed a slow but undeniable improvement as it closed the gap toward human-level translation quality.

On average, it takes a human translator roughly one second to edit each word of another human translator, according to Translated. In 2015, it took professional editors approximately 3.5 seconds per word to check a machine-translated (MT) suggestion; today that number is just 2 seconds. If the trend continues, Translated’s AI will be as good as human-produced translation by the end of the decade (or even sooner).
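That projection is essentially a straight-line extrapolation of the reported numbers. The sketch below is only an illustration, not Translated’s own methodology: it fits a line through the two TTE figures quoted above (3.5 seconds per word in 2015 and 2 seconds today, taking “today” to mean 2022, the last year of the study) and estimates when that line would cross the roughly one-second-per-word human baseline.

```python
# Illustrative only: extrapolate Time to Edit (TTE) toward the ~1 s/word
# human baseline, using the two data points quoted in the article.

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    return slope, mean_y - slope * mean_x

years = [2015, 2022]   # reported TTE data points (assuming "today" = 2022)
tte = [3.5, 2.0]       # seconds per word to check an MT suggestion
human_baseline = 1.0   # seconds per word to edit another human's translation

slope, intercept = fit_line(years, tte)
crossing_year = (human_baseline - intercept) / slope
print(f"TTE reaches {human_baseline} s/word around {crossing_year:.0f}")
```

Run as written, it prints a crossing year of about 2027, consistent with the company’s “end of the decade (or even sooner)” claim; choosing different data points or a non-linear trend would shift that date.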


AI Diagnoses Illness Based On the Sound of Your Voice

Voices offer lots of information. Turns out, they can even help diagnose an illness — and researchers are working on an app for that. The National Institutes of Health is funding a massive research project to collect voice data and develop an AI that could diagnose people based on their speech. Everything from your vocal cord vibrations to breathing patterns when you speak offers potential information about your health, says laryngologist Dr. Yael Bensoussan, the director of the University of South Florida’s Health Voice Center and a leader on the study.

“We asked experts: Well, if you close your eyes when a patient comes in, just by listening to their voice, can you have an idea of the diagnosis they have?” Bensoussan says. “And that’s where we got all our information.”

Someone who speaks low and slowly might have Parkinson’s disease. Slurring is a sign of a stroke. Scientists could even diagnose depression or cancer. The team will start by collecting the voices of people with conditions in five areas: neurological disorders, voice disorders, mood disorders, respiratory disorders and pediatric disorders like autism and speech delays. The project is part of the NIH’s Bridge to AI program, which launched over a year ago with more than $100 million in funding from the federal government, with the goal of creating large-scale health care databases for precision medicine.

“We were really lacking large, what we call, open source databases,” Bensoussan says. “Every institution kind of has their own database of data. But to create these networks and these infrastructures was really important to then allow researchers from other generations to use this data.” This isn’t the first time researchers have used AI to study human voices, but it’s the first time data will be collected on this level — the project is a collaboration between USF, Cornell and 10 other institutions. “We saw that everybody was kind of doing very similar work but always at a smaller level,” Bensoussan says. “We needed to do something as a team and build a network.”

The ultimate goal is an app that could help bridge access to rural or underserved communities, by helping general practitioners refer patients to specialists. Long term, iPhones or Alexa could detect changes in your voice, such as a cough, and advise you to seek medical attention.

Source: https://www.npr.org/

How To Create Speech From Brain Signals

“In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid.

Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak.

Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.)

“It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group.

The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.

Experts said the new work represented a “proof of principle,” a preview of what may be possible after further experimentation and refinement. The system was tested on people who speak normally; it has not been tested in people whose neurological conditions or injuries, such as common strokes, could make the decoding difficult or impossible. For the new trial, scientists at the University of California, San Francisco, and U.C. Berkeley recruited five people who were in the hospital being evaluated for epilepsy surgery.

Many people with epilepsy do poorly on medication and opt to undergo brain surgery. Before operating, doctors must first locate the “hot spot” in each person’s brain where the seizures originate; this is done with electrodes that are placed in the brain, or on its surface, and listen for telltale electrical storms. Pinpointing this location can take weeks. In the interim, patients go through their days with electrodes implanted in or near brain regions that are involved in movement and auditory signaling. These patients often consent to additional experiments that piggyback on those implants.

Five such patients at U.C.S.F. agreed to test the virtual voice generator. Each had been implanted with one or two electrode arrays: stamp-size pads, containing hundreds of tiny electrodes, that were placed on the surface of the brain. As each participant recited hundreds of sentences, the electrodes recorded the firing patterns of neurons in the motor cortex. The researchers associated those patterns with the subtle movements of the patient’s lips, tongue, larynx and jaw that occur during natural speech. The team then translated those movements into spoken sentences.
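The passage above describes a two-stage decoding pipeline: neural recordings are first mapped to articulator movements (lips, tongue, larynx, jaw), and those movements are then mapped to speech. The sketch below only illustrates that structure under invented assumptions; the array shapes are made up, random numbers stand in for real recordings, and simple ridge regressions stand in for whatever models the researchers actually trained.

```python
# Hypothetical sketch of a two-stage decoder: neural features -> articulatory
# kinematics -> acoustic features. All shapes, data, and models are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in training data (random values for illustration only):
#   X_neural:   features from the electrode arrays, one row per time step
#   Y_artic:    articulator kinematics (e.g., lip aperture, tongue position)
#   Y_acoustic: acoustic features of the sentences the participant recited
n_steps, n_channels, n_articulators, n_acoustic = 5000, 256, 32, 80
X_neural = rng.normal(size=(n_steps, n_channels))
Y_artic = rng.normal(size=(n_steps, n_articulators))
Y_acoustic = rng.normal(size=(n_steps, n_acoustic))

# Stage 1: decode articulator movements from cortical activity.
stage1 = Ridge(alpha=1.0).fit(X_neural, Y_artic)

# Stage 2: map decoded movements to acoustics, which a vocoder would
# render as audible speech.
stage2 = Ridge(alpha=1.0).fit(stage1.predict(X_neural), Y_acoustic)

# Inference: a new stretch of neural recording passes through both stages.
new_recording = rng.normal(size=(100, n_channels))
predicted_acoustics = stage2.predict(stage1.predict(new_recording))
print(predicted_acoustics.shape)  # (100, 80)
```

Splitting the problem this way follows the description above: the system decodes the motor commands for vocal movements first, and only then turns those movements into sound.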

Source: https://www.nytimes.com/