How to Teach a Robot to Laugh at the Right Time

Laughter comes in many forms, from a polite chuckle to a contagious howl of mirth. Scientists are now developing an AI system that aims to recreate these nuances of humour by laughing in the right way at the right time. The team behind the laughing robot, which is called Erica, say that the system could improve natural conversations between people and AI systems.

“We think that one of the important functions of conversational AI is empathy,” said Dr Koji Inoue, of Kyoto University, the lead author of the research, published in Frontiers in Robotics and AI. “So we decided that one way a robot can empathise with users is to share their laughter.”

Inoue and his colleagues set out to teach their AI system the art of conversational laughter. They gathered training data from more than 80 speed-dating dialogues between male university students and the robot, who was initially teleoperated by four female amateur actors.

The dialogue data was annotated for solo laughs, social laughs (where humour isn’t involved, such as in polite or embarrassed laughter) and laughter of mirth. This data was then used to train a machine learning system to decide whether to laugh, and to choose the appropriate type. It might feel socially awkward to mimic a small chuckle, but empathetic to join in with a hearty laugh. Based on the audio files, the algorithm learned the basic characteristics of social laughs, which tend to be more subdued, and mirthful laughs, with the aim of mirroring these in appropriate situations.
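As a rough illustration of the kind of two-stage decision described above (first whether to laugh at all, then which laugh to produce), the sketch below trains a classifier on a handful of made-up prosodic features. The features, data and the choice of a random forest are illustrative assumptions only and do not reproduce the models used in the study.

```python
# Illustrative sketch (not the authors' model): decide whether the robot should
# laugh and, if so, which type of laugh to produce.
# The feature names and training data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features extracted from the user's most recent utterance:
# [user_laugh_detected, laugh_energy, pitch_variance, utterance_length_s]
X_train = np.array([
    [1, 0.90, 0.70, 2.1],   # hearty user laugh
    [1, 0.20, 0.30, 1.0],   # polite chuckle
    [0, 0.00, 0.40, 3.5],   # no laughter at all
    [1, 0.85, 0.60, 1.8],   # another hearty laugh
])
# Labels: 0 = do not laugh, 1 = social laugh, 2 = mirthful (shared) laugh
y_train = np.array([2, 1, 0, 2])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

def choose_laugh(features):
    """Return the laugh type the robot should produce for one utterance."""
    label = clf.predict([features])[0]
    return {0: "no laugh", 1: "social laugh", 2: "mirthful laugh"}[label]

print(choose_laugh([1, 0.80, 0.65, 1.9]))  # expected: "mirthful laugh"
```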

“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy because, as you know, most laughter is actually not shared at all,” said Inoue. “We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”

The team tested out Erica’s “sense of humour” by creating four short dialogues for it to share with a person, integrating the new shared-laughter algorithm into existing conversation software. These were compared to scenarios where Erica didn’t laugh at all or emitted a social laugh every time she detected laughter.

The clips were played to 130 volunteers who rated the shared-laughter algorithm most favourably for empathy, naturalness, human-likeness and understanding. The team said laughter could help create robots with their own distinct character. “We think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” said Inoue, although he added that it could take more than 20 years before it would be possible to have a “casual chat with a robot like we would with a friend.”

Source: https://www.theguardian.com/

How to Detect Diabetes Early Enough To Reverse It

Diabetes is a severe and growing metabolic disorder. It already affects hundreds of thousands of people in Switzerland. A sedentary lifestyle and an excessively rich diet damage the beta cells of the pancreas, promoting the onset of this disease. If detected early enough, its progression could be reversed, but diagnostic tools that allow for early detection are lacking. A team from the University of Geneva (UNIGE) in collaboration with several other scientists, including teams from the HUG, has discovered that a low level of the sugar 1,5-anhydroglucitol in the blood is a sign of a loss in functional beta cells. This molecule, easily identified by a blood test, could be used to identify the development of diabetes in people at risk, before the situation becomes irreversible. These results can be found in the Journal of Clinical Endocrinology & Metabolism.

In Switzerland, almost 500,000 people suffer from diabetes. This serious metabolic disorder is constantly increasing due to the combined effect of a lack of physical activity and an unbalanced diet. If it is detected early enough, at the pre-diabetes stage, progression to established diabetes can be counteracted by adopting an appropriate lifestyle. Unfortunately, one third of patients already have cardiovascular, renal or neuronal complications at the time of diagnosis, which reduces their life expectancy.

When diabetes starts to develop but no symptoms are yet detectable, some of the beta cells of the pancreas (in green) disappear (right image) compared with a healthy individual (left image). This previously undetectable decrease can be identified by measuring the level of 1,5-anhydroglucitol in the blood.

‘‘Identifying the transition from pre-diabetes to diabetes is complex, because the status of the affected cells, which are scattered in very small quantities in the core of an organ located under the liver, the pancreas, is impossible to assess quantitatively by non-invasive investigations. We therefore opted for an alternative strategy: to find a molecule whose levels in the blood would be associated with the functional mass of these beta cells in order to indirectly detect their alteration at the pre-diabetes stage, before the appearance of any symptoms,’’ explains Pierre Maechler, a Professor in the Department of Cell Physiology and Metabolism and in the Diabetes Centre of the UNIGE Faculty of Medicine, who led this work.

Several years ago, the scientists set out to identify such a molecule, one whose blood level could reveal pre-diabetes. The first step was to analyse thousands of molecules in healthy, pre-diabetic and diabetic mouse models. By combining powerful molecular biology techniques with a machine learning system (artificial intelligence), the research team was able to identify, from among thousands of molecules, the one that best reflects a loss of beta cells at the pre-diabetic stage: 1,5-anhydroglucitol, a small sugar whose decrease in the blood indicates a deficit in beta cells.
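The sketch below gives a flavour of that screening step: rank a large panel of candidate blood molecules by how strongly their levels track disease stage, and check that the informative one surfaces at the top. The simulated data, molecule names and random-forest ranking are assumptions for illustration and do not reproduce the UNIGE pipeline.

```python
# Illustrative sketch: ranking blood metabolites by how well they separate
# healthy, pre-diabetic and diabetic samples. All data are simulated.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
molecules = [f"metabolite_{i}" for i in range(1, 1000)] + ["1,5-anhydroglucitol"]

# Simulated metabolomic profiles: 60 mice x 1000 molecules. The last molecule
# is deliberately made to drop as disease progresses (0 healthy, 1 pre-diabetic, 2 diabetic).
stage = rng.integers(0, 3, size=60)
X = rng.normal(size=(60, len(molecules)))
X[:, -1] -= stage * 1.5

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, stage)
ranking = pd.Series(model.feature_importances_, index=molecules).sort_values(ascending=False)
print(ranking.head(5))  # the informative molecule should rank at or near the top
```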

Source: https://www.unige.ch/

Mind-controlled Robots

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own. Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface Laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard. This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity.

To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
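The sketch below illustrates that feedback loop in a deliberately simplified form: a stand-in EEG error detector vetoes candidate robot behaviours until one is accepted. A plain elimination scheme replaces the inverse reinforcement learning used in the study, and the error detector, candidate clearances and preferred value are all hypothetical.

```python
# Illustrative sketch (not the EPFL system): treat the EEG "error message" as
# binary feedback and narrow down the obstacle clearance the user prefers.
import random

candidate_clearances_cm = [2, 5, 10, 15, 20]   # hypothetical behaviours the robot can try
preferred_cm = 10                              # unknown to the robot; used only to simulate the EEG

def eeg_reports_error(executed_cm):
    """Stand-in for an error-potential classifier trained on the patient's EEG:
    fires when the executed movement deviates noticeably from what the user wanted."""
    return abs(executed_cm - preferred_cm) > 3

remaining = list(candidate_clearances_cm)
for attempt in range(1, 6):
    choice = random.choice(remaining)
    if eeg_reports_error(choice):
        remaining.remove(choice)               # penalise behaviours that triggered an error signal
        print(f"attempt {attempt}: {choice} cm -> error potential detected, discarded")
    else:
        print(f"attempt {attempt}: {choice} cm -> accepted")
        break
```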

The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”

Source: https://actu.epfl.ch/

How to Predict Stress at Atomic Scale

The amount of stress a material can withstand before it cracks is critical information when designing aircraft, spacecraft, and other structures. Aerospace engineers at the University of Illinois Urbana-Champaign used machine learning for the first time to predict stress in copper at the atomic scale.

According to Huck Beng Chew and his doctoral student Yue Cui, materials, such as copper, are very different at these very small scales.

Left: Machine learning based on artificial neural networks as constitutive laws for atomic stress predictions. Right: Quantifying the local stress state of grain boundaries from atomic coordinate information

“Metals are typically polycrystalline in that they contain many grains,” Chew said. “Each grain is a single crystal structure where all the atoms are arranged neatly and very orderly. But the atomic structure of the boundary where these grains meet can be very complex and tends to have very high stresses.”

These grain boundary stresses are responsible for the fracture and fatigue properties of the metal, but until now such detailed atomic-scale stress measurements were confined to molecular dynamics simulation models. Data-driven approaches based on machine learning enabled the researchers to quantify, for the first time, the grain boundary stresses in actual metal specimens imaged by electron microscopy.

“We used molecular dynamics simulations of copper grain boundaries to train our machine learning algorithm to recognize the arrangements of the atoms along the boundaries and identify patterns in the stress distributions within different grain boundary structures,” Cui said. Eventually, the algorithm was able to predict very accurately the grain boundary stresses from both simulation and experimental image data with atomic-level resolution.
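A minimal sketch of that regression idea follows: a small neural network learns to map descriptors of an atom’s local environment to a stress value. The descriptors, synthetic data and network size are illustrative assumptions, not the Illinois group’s model.

```python
# Illustrative sketch: regress per-atom stress from local atomic-environment
# descriptors, in the spirit of training on molecular dynamics data. All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_atoms = 5000
# Hypothetical descriptors of each atom's neighbourhood near a grain boundary,
# e.g. [nearest-neighbour distance, coordination number, local misorientation]
X = rng.normal(size=(n_atoms, 3))
# Synthetic "MD-computed" stress with a nonlinear dependence on the descriptors
y = 2.0 * X[:, 0] ** 2 - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, n_atoms)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print(f"held-out R^2 on unseen atoms (synthetic data): {model.score(X_te, y_te):.2f}")
```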

“We tested the accuracy of the machine learning algorithm with lots of different grain boundary structures until we were confident that the approach was reliable,” Cui explained. The task was more challenging than they imagined, and they had to include physics-based constraints in their algorithms to achieve accurate predictions with limited training data.

“When you train the machine learning algorithm on specific grain boundaries, you will get extremely high accuracy in the stress predictions of these same boundaries,” Chew said, “but the more important question is, can the algorithm then predict the stress state of a new boundary that it has never seen before?” For Chew, the answer is yes, and very well in fact.

Source: https://aerospace.illinois.edu/

Machine-learning Accelerates Discovery of Materials for 3D Printing

The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses. To cut down on the time it takes to discover these new materials, researchers at MIT have developed a data-driven process that uses machine learning to optimize new 3D printing materials with multiple characteristics, like toughness and compression strength.

By streamlining materials development, the system lowers costs and lessens the environmental impact by reducing the amount of chemical waste. The machine learning algorithm could also spur innovation by suggesting unique chemical formulations that human intuition might miss.
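One common way to realise such a closed loop is surrogate-model-guided search, sketched below with a synthetic stand-in for the print-and-test experiment. The objective, ingredient bounds and simple optimistic acquisition rule are assumptions for illustration; the MIT system’s actual optimizer is not reproduced here.

```python
# Illustrative sketch: a model-guided search over ink formulations, where a
# Gaussian process surrogate proposes the next formulation to "print and test".
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_experiment(x):
    """Hypothetical lab measurement: a weighted score combining toughness and
    compression strength for a formulation x of three ingredient fractions."""
    toughness = -(x[0] - 0.3) ** 2
    strength = -(x[1] - 0.6) ** 2 + 0.2 * x[2]
    return 0.5 * toughness + 0.5 * strength + rng.normal(0, 0.01)

# Seed with a few random formulations, then let the surrogate propose the rest.
X = rng.uniform(0, 1, size=(5, 3))
y = np.array([run_experiment(x) for x in X])

for _ in range(25):                                    # each pass = one automated mix-and-test cycle
    gp = GaussianProcessRegressor().fit(X, y)
    candidates = rng.uniform(0, 1, size=(500, 3))
    mean, std = gp.predict(candidates, return_std=True)
    pick = candidates[np.argmax(mean + std)]           # simple optimistic acquisition rule
    X = np.vstack([X, pick])
    y = np.append(y, run_experiment(pick))

best = X[np.argmax(y)]
print(f"best formulation found: {np.round(best, 2)}, score {y.max():.3f}")
```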

“Materials development is still very much a manual process. A chemist goes into a lab, mixes ingredients by hand, makes samples, tests them, and comes to a final formulation. But rather than having a chemist who can only do a couple of iterations over a span of days, our system can do hundreds of iterations over the same time span,” says Mike Foshey, a project manager in the Computational Design and Fabrication Group (CDFG) of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-lead author of the paper.

Additional authors include co-lead author Timothy Erps, a technical associate in CDFG; Mina Konaković Luković, a CSAIL postdoc; Wan Shou, a former MIT postdoc who is now an assistant professor at the University of Arkansas; senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT; and Hanns Hagen Geotzke, Herve Dietsch, and Klaus Stoll of BASF. The research was published today in Science Advances.

Source: https://phys.org/

Google Launches a Dermatology AI App in EU

Billions of times each year, people turn to Google’s web search box for help figuring out what’s wrong with their skin. Now, Google is preparing to launch an app that uses image recognition algorithms to provide more expert and personalized help. A brief demo at the company’s developer conference last month showed the service suggesting several possible skin conditions based on uploaded photos.

Machines have matched or outperformed expert dermatologists in studies in which algorithms and doctors scrutinize images from past patients. But there’s little evidence from clinical trials deploying such technology, and no AI image analysis tools are approved for dermatologists to use in the US, says Roxana Daneshjou, a Stanford dermatologist and researcher in machine learning and health.

“Many don’t pan out in the real-world setting,” she says.

Google’s new app isn’t clinically validated yet either, but the company’s AI prowess and recent buildup of its health care division make its AI dermatology app notable. Still, the skin service will start small—and far from its home turf and largest market in the US. The service is not likely to analyze American skin blemishes any time soon.

At the developer conference, Google’s chief health officer, Karen DeSalvo, said the company aims to launch what it calls a dermatology assist tool in the European Union as soon as the end of this year. A video of the app suggesting that a mark on someone’s arm could be a mole featured a caption saying it was an approved medical device in the EU. The same note added a caveat: “Not available in the US.”

Google says its skin app has been “CE marked as a Class I medical device in the EU,” meaning it can be sold in the bloc and other countries recognizing that standard. The company would have faced relatively few hurdles to secure that clearance, says Hugh Harvey, managing director at Hardian Health, a digital health consultancy in the UK. “You essentially fill in a form and self-certify,” he says. Google’s conference last month took place a week before tighter EU rules took effect; Harvey says these require many health apps, likely including Google’s, to show, among other things, that the app is effective. Preexisting apps have until 2025 to comply with the new rules.

Source: https://www.wired.com/

What is the Human Cortex?

The cerebral cortex is the thin surface layer of the brain found in vertebrate animals that has evolved most recently, showing the greatest variation in size among different mammals (it is especially large in humans). Each part of the cerebral cortex is six-layered (e.g., L2), with different kinds of nerve cells (e.g., spiny stellate) in each layer. The cerebral cortex plays a crucial role in most higher-level cognitive functions, such as thinking, memory, planning, perception, language, and attention. Although there has been some progress in understanding the macroscopic organization of this very complicated tissue, its organization at the level of individual nerve cells and their interconnecting synapses is largely unknown.

Petabyte connectomic reconstruction of a volume of human neocortex. Left: Small subvolume of the dataset. Right: A subgraph of 5000 neurons and excitatory (green) and inhibitory (red) connections in the dataset. The full graph (connectome) would be far too dense to visualize.

Mapping the structure of the brain at the resolution of individual synapses requires high-resolution microscopy techniques that can image biochemically stabilized (fixed) tissue. We collaborated with brain surgeons at Massachusetts General Hospital in Boston (MGH) who sometimes remove pieces of normal human cerebral cortex when performing a surgery to cure epilepsy in order to gain access to a site in the deeper brain where an epileptic seizure is being initiated. Patients anonymously donated this tissue, which is normally discarded, to our colleagues in the Lichtman lab. The Harvard researchers cut the tissue into ~5300 individual 30 nanometer sections using an automated tape collecting ultra-microtome, mounted those sections onto silicon wafers, and then imaged the brain tissue at 4 nm resolution in a customized 61-beam parallelized scanning electron microscope for rapid image acquisition.

Imaging the ~5300 physical sections produced 225 million individual 2D images. The team then computationally stitched and aligned this data to produce a single 3D volume. While the quality of the data was generally excellent, these alignment pipelines had to robustly handle a number of challenges, including imaging artifacts, missing sections, variation in microscope parameters, and physical stretching and compression of the tissue. Once aligned, a multiscale flood-filling network pipeline was applied (using thousands of Google Cloud TPUs) to produce a 3D segmentation of each individual cell in the tissue. Additional machine learning pipelines were applied to identify and characterize 130 million synapses, classify each 3D fragment into various “subcompartments” (e.g., axon, dendrite, or cell body), and identify other structures of interest such as myelin and cilia. Automated reconstruction results were imperfect, so manual efforts were used to “proofread” roughly one hundred cells in the data. Over time, the scientists expect to add additional cells to this verified set through additional manual efforts and further advances in automation.
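As a toy illustration of one downstream step, classifying reconstructed fragments into subcompartments, the sketch below trains a classifier on three made-up per-fragment summary features. The features, synthetic data and choice of gradient boosting are assumptions; the actual pipelines operate on full 3D morphology and image content.

```python
# Illustrative sketch: label segmented 3D fragments as axon, dendrite or soma
# from simple geometric summaries. All data below are simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
# Hypothetical per-fragment features: [mean radius (um), length (um), branch count]
axons     = np.column_stack([rng.normal(0.5, 0.1, 200), rng.normal(80, 20, 200), rng.poisson(2, 200)])
dendrites = np.column_stack([rng.normal(1.2, 0.3, 200), rng.normal(40, 10, 200), rng.poisson(6, 200)])
somata    = np.column_stack([rng.normal(8.0, 1.5, 200), rng.normal(15, 5, 200),  rng.poisson(1, 200)])

X = np.vstack([axons, dendrites, somata])
y = np.array([0] * 200 + [1] * 200 + [2] * 200)   # 0 = axon, 1 = dendrite, 2 = soma

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict([[0.6, 90.0, 3]]))              # expected: [0] (axon-like fragment)
```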

Source: https://ai.googleblog.com/

Machine Learning Predicts Heart Failure

Every year, roughly one out of eight U.S. deaths is caused at least in part by heart failure. One of acute heart failure’s most common warning signs is excess fluid in the lungs, a condition known as “pulmonary edema.” A patient’s exact level of excess fluid often dictates the doctor’s course of action, but making such determinations is difficult and requires clinicians to rely on subtle features in X-rays that sometimes lead to inconsistent diagnoses and treatment plans.

To better handle that kind of nuance, a group led by researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has developed a machine learning model that can look at an X-ray to quantify how severe the edema is, on a four-level scale ranging from 0 (healthy) to 3 (very, very bad). The system determined the right level more than half of the time, and correctly diagnosed level 3 cases 90 percent of the time.
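A minimal sketch of a four-level severity classifier is shown below, using a small PyTorch image backbone and one dummy training step. The architecture, hyperparameters and data handling are assumptions for illustration; the CSAIL model was also trained jointly with the text of radiology reports, as described later in this article.

```python
# Illustrative sketch: a 4-class ("level 0-3") chest X-ray classifier.
# Dataset, transforms and training details are assumptions, not the CSAIL model.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # small backbone (use pretrained=False on older torchvision)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # grayscale X-rays
model.fc = nn.Linear(model.fc.in_features, 4)  # edema severity levels 0-3

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 8 single-channel 224x224 images.
images = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```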

Working with Beth Israel Deaconess Medical Center (BIDMC) and Philips, the team plans to integrate the model into BIDMC’s emergency-room workflow this fall.

“This project is meant to augment doctors’ workflow by providing additional information that can be used to inform their diagnoses as well as enable retrospective analyses,” says PhD student Ruizhi Liao, who was the co-lead author of a related paper with fellow PhD student Geeticka Chauhan and MIT professors Polina Golland and Peter Szolovits.

The team says that better edema diagnosis would help doctors manage not only acute heart issues, but other conditions like sepsis and kidney failure that are strongly associated with edema.

As part of a separate journal article, Liao and colleagues also took an existing public dataset of X-ray images and developed new annotations of severity labels that were agreed upon by a team of four radiologists. Liao’s hope is that these consensus labels can serve as a universal standard to benchmark future machine learning development.

An important aspect of the system is that it was trained not just on more than 300,000 X-ray images, but also on the corresponding text of reports about the X-rays that were written by radiologists. “By learning the association between images and their corresponding reports, the method has the potential for a new way of automatic report generation from the detection of image-driven findings,” says Tanveer Syeda-Mahmood, a researcher not involved in the project who serves as chief scientist for IBM’s Medical Sieve Radiology Grand Challenge. “Of course, further experiments would have to be done for this to be broadly applicable to other findings and their fine-grained descriptors.”

Chauhan, Golland, Liao and Szolovits co-wrote the paper with MIT Assistant Professor Jacob Andreas, Professor William Wells of Brigham and Women’s Hospital, Xin Wang of Philips, and Seth Berkowitz and Steven Horng of BIDMC.

Source: https://news.mit.edu/

Why RNA Is A Better Measure Of A Patient’s Current Health Than DNA

By harnessing the combined power of next-generation sequencing (NGS), machine learning and the dynamic nature of RNA, we’re able to accurately measure the dynamic immune response and capture a more comprehensive picture of what’s happening at the site of the solid tumor. In the beginning, there was RNA – the first genetic molecule.

In the primordial soup of chemicals that represented the beginning of life, ribonucleic acid (RNA) had the early job of storing information, likely with the support of peptides. Today, RNA’s cousin – deoxyribonucleic acid – or DNA, has taken over most of the responsibilities of passing down genetic information from cell-to-cell, generation-to-generation. As a result, most early health technologies were developed to analyze DNA. But, RNA is a powerful force. And its role in storing information, while different from its early years, has no less of an impact on human health and is gaining more mindshare in our industry.

RNA is often considered a messenger molecule, taking the information coded in our DNA and transcribing it into cellular directives that result in downstream biological signals and protein-level changes. And for this reason, RNA is becoming known not only as a drug target but, perhaps more importantly, as a barometer of health.

3D illustration of part of an RNA chain

How and why is RNA so useful? First, RNA is labile — changing in both sequence and abundance in response to genetic and epigenetic changes, but also to external factors such as disease, therapy, exercise, and more. This is in contrast to DNA, which is generally static, changing little after conception.

Next, RNA is a more accurate snapshot of disease progression. When mutations do occur at the DNA level, these do not always result in downstream biological changes. Often, the body is able to compensate by repairing the mutation or overcome it by using redundancies in the pathway in which the gene resides. By instead evaluating RNA, we get one step closer to understanding the real impact disease is imparting on our body.

Finally, RNA is abundant. In most human cells, while only two copies of DNA are present, hundreds of thousands of mRNA molecules are present, representing more than 10,000 different species of RNA. Because even rare transcripts are present in multiple copies, biological signals can be confidently detected in RNA when the right technology is used.

Source: https://medcitynews.com/

The U.S. Wastes $161B Worth Of Food Every Year. A.I. Is Helping Us Fix That

“When you see pictures of food waste, it just blows you away,” said Stefan Kalb, a former food wholesaler. “I mean, shopping cart after shopping cart of food waste. What happens with the merchandisers when they walk through the store, and they’re pulling products that have expired, is that they’ll put it in a shopping cart and just roll it to the back. It’s almost one of those dystopian [movie] pictures … cartons of milk just piled up in a grocery cart. The ones that didn’t make it.”

In the United States, somewhere between 30% and 40% of the food that’s produced is wasted. That’s the equivalent of $161 billion every single year. The U.S. throws away twice as much food as any other developed country in the world. There are all sorts of reasons this is a problem, but A.I. could help solve it.

Kalb’s company is one of several startups — let’s call them the “Internet of Groceries” — using some impressively smart machine learning tools to help with this significant problem. Kalb is the co-founder of Shelf Engine, a company that uses analytics to help retailers better examine the historical order and sales data on their products so as to make better decisions about what to order. This means reduced waste and bigger margins. The company also buys back unsold stock, thereby guaranteeing the sale for a retailer.
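A classic way to frame that ordering micro-decision is the newsvendor model: order up to the demand quantile that balances the cost of waste against the margin lost on an empty shelf. The sketch below uses made-up sales history and costs and is not Shelf Engine’s proprietary method.

```python
# Illustrative sketch: a newsvendor-style order decision from historical daily sales.
import numpy as np

daily_sales = np.array([42, 38, 55, 47, 40, 61, 44, 39, 52, 48, 45, 58])  # hypothetical history

unit_cost, unit_price = 1.20, 3.00
underage = unit_price - unit_cost    # margin lost per unit of unmet demand
overage = unit_cost                  # money wasted per unsold (expired) unit

# Classic newsvendor critical ratio: order up to this quantile of the demand distribution.
critical_ratio = underage / (underage + overage)
order_quantity = int(np.ceil(np.quantile(daily_sales, critical_ratio)))

print(f"critical ratio = {critical_ratio:.2f}, suggested order = {order_quantity} units")
```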

“We haven’t previously automated this micro-decision that is happening at the grocery store with the buyer,” said Kalb. “The buyer of the store is predicting how much to order — and of what. It’s a very hard decision, and they’re doing it for hundreds and thousands of items. You have these category buyers that just walk through the store to decide how they’re gonna change their bread order or their produce order or their milk order. They’re making these micro-decisions, and it’s costing them tremendous money. If we can automate that part, then we can really make a large impact in the world.”

Source: https://www.digitaltrends.com/