AI Diagnoses Illness Based On the Sound of Your Voice

Voices offer lots of information. Turns out, they can even help diagnose an illness — and researchers are working on an app for that. The National Institutes of Health is funding a massive research project to collect voice data and develop an AI that could diagnose people based on their speech. Everything from your vocal cord vibrations to breathing patterns when you speak offers potential information about your health, says laryngologist Dr. Yael Bensoussan, the director of the University of South Florida’s Health Voice Center and a leader on the study.

“We asked experts: Well, if you close your eyes when a patient comes in, just by listening to their voice, can you have an idea of the diagnosis they have?” Bensoussan says. “And that’s where we got all our information.”

Someone who speaks low and slowly might have Parkinson’s disease. Slurring is a sign of a stroke. Scientists could even diagnose depression or cancer. The team will start by collecting the voices of people with conditions in five areas: neurological disorders, voice disorders, mood disorders, respiratory disorders and pediatric disorders like autism and speech delays. The project is part of the NIH’s Bridge2AI program, which launched over a year ago with more than $100 million in federal funding and the goal of creating large-scale health care databases for precision medicine.
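The article does not describe the project's actual pipeline, but the basic idea of mapping voice traits to diagnoses can be sketched in a few lines. Below is a minimal, hypothetical Python example: it summarizes a recording as pitch, timbre and loudness statistics (the kinds of features where "low and slow" speech would show up) and trains an off-the-shelf classifier. The file names, labels and model choice are illustrative assumptions, not the study's methods.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def voice_features(wav_path):
    """Summarize a recording as pitch, timbre and loudness statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)       # fundamental-frequency track
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre descriptors
    rms = librosa.feature.rms(y=y)                      # loudness envelope
    return np.concatenate([
        [f0.mean(), f0.std()],                 # "low and slow" speech shows up here
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [rms.mean(), rms.std()],
    ])

# Hypothetical labeled recordings; a real study would need thousands of them.
paths = ["patient_001.wav", "patient_002.wav"]
labels = ["parkinsons", "control"]

X = np.vstack([voice_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```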

“We were really lacking large what we call open source databases,” Bensoussan says. “Every institution kind of has their own database of data. But to create these networks and these infrastructures was really important to then allow researchers from other generations to use this data.” This isn’t the first time researchers have used AI to study human voices, but it’s the first time data will be collected at this scale — the project is a collaboration between USF, Cornell and 10 other institutions. “We saw that everybody was kind of doing very similar work but always at a smaller level,” Bensoussan says. “We needed to do something as a team and build a network.”

The ultimate goal is an app that could help bridge gaps in access for rural and underserved communities by helping general practitioners refer patients to specialists. In the long term, devices like iPhones or Alexa could detect changes in your voice, such as a cough, and advise you to seek medical attention.

Source: https://www.npr.org/

How to Construct Machines as Small as Cells

If you want to build a fully functional nanosized robot, you need to incorporate a host of capabilities, from complicated electronic circuits and photovoltaics to sensors and antennas. But just as importantly, if you want your robot to move, you need it to be able to bend.

Cornell researchers have created micron-sized shape memory actuators that enable atomically thin two-dimensional materials to fold themselves into 3D configurations. All they require is a quick jolt of voltage. And once the material is bent, it holds its shape – even after the voltage is removed. As a demonstration, the team created what is potentially the world’s smallest self-folding origami bird. And it’s not a lark.
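The key property here is latching: a brief voltage pulse sets a new fold angle, and the fold persists with no power applied. A toy state model makes that behavior concrete; the threshold voltage and angle mapping below are illustrative assumptions, not measured device parameters.

```python
from dataclasses import dataclass

@dataclass
class ShapeMemoryHinge:
    fold_angle_deg: float = 0.0   # retained fold angle
    set_threshold_v: float = 1.0  # assumed voltage needed to reprogram the fold

    def apply_pulse(self, volts: float, target_angle_deg: float) -> None:
        """A jolt above threshold reprograms the fold; below it, nothing changes."""
        if abs(volts) >= self.set_threshold_v:
            self.fold_angle_deg = target_angle_deg
        # Once the pulse ends, the angle is simply retained: no power to hold shape.

hinge = ShapeMemoryHinge()
hinge.apply_pulse(volts=1.5, target_angle_deg=90.0)  # quick jolt folds the panel
hinge.apply_pulse(volts=0.0, target_angle_deg=0.0)   # voltage removed: shape holds
assert hinge.fold_angle_deg == 90.0
```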

The group’s paper, “Micrometer-Sized Electrically Programmable Shape Memory Actuators for Low-Power Microrobotics,” was published in Science Robotics and featured on the cover. The paper’s lead author is postdoctoral researcher Qingkun Liu. The project is led by Itai Cohen, professor of physics, and Paul McEuen, the John A. Newman Professor of Physical Science, both in the College of Arts and Sciences.

“We humans, our defining characteristic is we’ve learned how to build complex systems and machines at human scales, and at enormous scales as well,” said McEuen. “But what we haven’t learned how to do is build machines at tiny scales. And this is a step in that basic, fundamental evolution in what humans can do, of learning how to construct machines that are as small as cells.”

McEuen and Cohen’s ongoing collaboration has so far generated a throng of nanoscale machines and components, each seemingly faster, smarter and more elegant than the last.

“We want to have robots that are microscopic but have brains on board. So that means you need to have appendages that are driven by complementary metal-oxide-semiconductor (CMOS) transistors, basically a computer chip on a robot that’s 100 microns on a side,” Cohen said.

Imagine a million fabricated microscopic robots released from a wafer, folding themselves into shape, crawling free and going about their tasks, even assembling into more complicated structures. That’s the vision.

Source: https://news.cornell.edu/

Nanorobots Injected Into Human Bodies

In 1959, former Cornell physicist Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom,” in which he described the opportunity for shrinking technology, from machines to computer chips, to incredibly small sizes. Well, the bottom just got more crowded. A Cornell-led collaboration has created the first microscopic robots that incorporate semiconductor components, allowing them to be controlled – and made to walk – with standard electronic signals. These robots, roughly the size of a paramecium, provide a template for building even more complex versions that utilize silicon-based intelligence, can be mass produced, and may someday travel through human tissue and blood.

The collaboration is led by Itai Cohen, professor of physics, Paul McEuen, the John A. Newman Professor of Physical Science – both in the College of Arts and Sciences – and their former postdoctoral researcher Marc Miskin, who is now an assistant professor at the University of Pennsylvania.

The walking robots are the latest iteration, and in many ways an evolution, of Cohen and McEuen’s previous nanoscale creations, from microscopic sensors to graphene-based origami machines. The new robots are about 5 microns thick (a micron is one-millionth of a meter), 40 microns wide and range from 40 to 70 microns in length. Each bot consists of a simple circuit made from silicon photovoltaics – which essentially functions as the torso and brain – and four electrochemical actuators that function as legs. As basic as the tiny machines may seem, creating the legs was an enormous feat.
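To make that division of labor concrete (a photovoltaic torso serving as power source and brain, four actuator legs for locomotion), here is a toy sketch of the described architecture. The leg names, gait order and step size are assumptions for illustration, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class MicroRobot:
    position_um: float = 0.0
    legs: list = field(default_factory=lambda: [
        "front_left", "front_right", "back_left", "back_right"])

    def pulse_leg(self, leg: str, step_um: float = 1.0) -> None:
        """Light on the photovoltaic 'torso' routes charge to one leg, which flexes."""
        if leg not in self.legs:
            raise ValueError(f"unknown leg: {leg}")
        self.position_um += step_um / len(self.legs)  # each flex nudges the body forward

    def walk(self, cycles: int) -> None:
        # One toy gait cycle: actuate each leg in turn, front pair then back pair.
        for _ in range(cycles):
            for leg in self.legs:
                self.pulse_leg(leg)

bot = MicroRobot()
bot.walk(cycles=10)
print(f"traveled {bot.position_um:.1f} microns")  # -> traveled 10.0 microns
```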

“In the context of the robot’s brains, there’s a sense in which we’re just taking existing semiconductor technology and making it small and releasable,” said McEuen, who co-chairs the Nanoscale Science and Microsystems Engineering (NEXT Nano) Task Force, part of the provost’s Radical Collaboration initiative, and directs the Kavli Institute at Cornell for Nanoscale Science.

“But the legs did not exist before,” McEuen said. “There were no small, electrically activatable actuators that you could use. So we had to invent those and then combine them with the electronics.”

The team’s paper, “Electronically Integrated, Mass-Manufactured, Microscopic Robots,” was published in Nature.

Sources: https://news.cornell.edu/ and https://thenextweb.com/