AI is Dreaming Up Drugs that No One Has Ever Seen

At 82 years old, with an aggressive form of blood cancer that six courses of chemotherapy had failed to eliminate, “Paul” appeared to be out of options. With each long and unpleasant round of treatment, his doctors had been working their way down a list of common cancer drugs, hoping to hit on something that would prove effective—and crossing them off one by one. The usual cancer killers were not doing their job.

With nothing to lose, Paul’s doctors enrolled him in a trial set up by the Medical University of Vienna in Austria, where he lives. The university was testing a new matchmaking technology developed by a UK-based company called Exscientia that pairs individual patients with the precise drugs they need, taking into account the subtle biological differences between people.

The researchers took a small sample of tissue from Paul (his real name is not known because his identity was obscured in the trial). They divided the sample, which included both normal cells and cancer cells, into more than a hundred pieces and exposed them to various cocktails of drugs. Then, using robotic automation and computer vision (machine-learning models trained to identify small changes in cells), they watched to see what would happen. In effect, the researchers were doing what the doctors had done: trying different drugs to see what worked. But instead of putting a patient through multiple months-long courses of chemotherapy, they were testing dozens of treatments all at the same time.

The approach allowed the team to carry out an exhaustive search for the right drug. Some of the medicines didn’t kill Paul’s cancer cells. Others harmed his healthy cells. Paul was too frail to take the drug that came out on top. So he was given the runner-up in the matchmaking process: a cancer drug marketed by the pharma giant Johnson & Johnson that Paul’s doctors had not tried because previous trials had suggested it was not effective at treating his type of cancer.
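
In effect, the ranking boils down to a per-drug selectivity score: how much of the tumour a drug kills versus how much it harms healthy cells. The Python sketch below is a minimal, hypothetical illustration of that idea; the drug names, numbers, and scoring rule are invented and are not Exscientia’s actual pipeline.

```python
# Hypothetical sketch: rank candidate drugs by how selectively they kill
# cancer cells while sparing healthy cells in an ex vivo tissue sample.
# All names and numbers are invented for illustration.

drug_responses = {
    # drug name: (fraction of cancer cells killed, fraction of healthy cells harmed)
    "drug_A": (0.92, 0.40),
    "drug_B": (0.85, 0.10),
    "drug_C": (0.30, 0.05),
}

def selectivity_score(cancer_kill: float, healthy_harm: float) -> float:
    """Higher is better: reward tumour kill, penalise damage to healthy cells."""
    return cancer_kill - healthy_harm

ranked = sorted(
    drug_responses.items(),
    key=lambda item: selectivity_score(*item[1]),
    reverse=True,
)

best, runner_up = ranked[0], ranked[1]
print("Top match:", best[0], "| runner-up:", runner_up[0])
```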

It worked. Two years on, Paul was in complete remission—his cancer was gone. The approach is a big change for the treatment of cancer, says Exscientia’s CEO, Andrew Hopkins: “The technology we have to test drugs in the clinic really does translate to real patients.”

Source: https://www.technologyreview.com/

What is generative AI?

Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.

Generative AI systems fall under the broad category of machine learning, and here’s how one such system—ChatGPT—describes what it can do:

Ready to take your creativity to the next level? Look no further than generative AI! This nifty form of machine learning allows computers to generate all sorts of new and exciting content, from music and art to entire virtual worlds. And it’s not just for fun: generative AI has plenty of practical uses too, like creating new product designs and optimizing business processes. So why wait? Unleash the power of generative AI and see what amazing creations you can come up with!

Did anything in that paragraph seem off to you? Maybe not. The grammar is perfect, the tone works, and the narrative flows. That’s why ChatGPT—the GPT stands for generative pretrained transformer—is receiving so much attention right now. It’s a free chatbot that can generate an answer to almost any question it’s asked. Developed by OpenAI, and released for testing to the general public in November 2022, it’s already considered the best AI chatbot ever. And it’s popular too: over a million people signed up to use it in just five days. Starry-eyed fans posted examples of the chatbot producing computer code, college-level essays, poems, and even halfway-decent jokes.

GPT-3 Could Make Google Search Engine Obsolete

According to The Economist, improved algorithms, powerful computers, and an increase in digitized data have fueled a revolution in machine learning, with new techniques in the 2010s resulting in "rapid improvements in tasks" including manipulating language. Software models are trained to learn by using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain". One architecture used in natural language processing (NLP) is the Transformer, a deep learning neural network first introduced in 2017. GPT-n models are based on this Transformer architecture. A number of NLP systems are capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.

On June 11, 2018, OpenAI researchers and engineers posted their original paper on generative language models: artificial intelligence systems that could be pre-trained on an enormous and diverse corpus of text, in a process they called generative pre-training (GP). The authors described how language understanding performance in natural language processing (NLP) was improved in GPT-n through a process of "generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task." This eliminated the need for human supervision and for time-intensive hand-labeling.
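
As an illustration of what a pretrained Transformer language model of this family does, the sketch below generates text from a prompt with GPT-2 via the open-source Hugging Face transformers library. This is a generic, assumed example for readers, not code from the OpenAI paper.

```python
# Illustrative example: text generation with a pretrained Transformer (GPT-2),
# using the Hugging Face `transformers` library (pip install transformers torch).
# Generic demo code, not the code from the 2018 OpenAI paper.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative pre-training of a language model on a diverse corpus"
outputs = generator(prompt, max_length=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```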

In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was claimed to be the "largest language model ever published at 17 billion parameters." It performed better than any other language model at a variety of tasks which included summarizing texts and answering questions.

Voice, a Biomarker to Detect Diseases

To the general public, it may sound like something out of a science fiction movie: diagnosing serious diseases such as cancer by listening to someone’s voice. But in fact, researchers funded by the National Institutes of Health (NIH) are now investigating whether changes in a person’s voice could serve as a new biomarker in clinical care for detecting illnesses early.

The project, called Voice as a Biomarker of Health, will include 12 research institutions and is being funded by the Bridge to Artificial Intelligence (Bridge2AI) program out of the NIH Common Fund. The project will use machine learning to build a database of vocal biomarkers, and then use the science of acoustic analysis to identify changes—such as pitch, amplitude, cadence, and words per minute—that could become a low-cost diagnostic tool, alongside other clinical tests.
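
As a rough illustration of the acoustic measurements involved, the sketch below extracts a pitch track and an amplitude envelope from a recording with the open-source librosa library. The file name and feature choices are assumptions made for illustration, not the Bridge2AI study protocol.

```python
# Illustrative sketch: extract simple acoustic features (pitch, amplitude)
# from a voice recording with librosa (pip install librosa).
# "voice_sample.wav" is a hypothetical file; the feature choices are assumptions.
import librosa
import numpy as np

y, sr = librosa.load("voice_sample.wav", sr=None)

# Fundamental-frequency (pitch) track over time
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7")
)

# Amplitude envelope via root-mean-square energy
rms = librosa.feature.rms(y=y)[0]

print("median pitch (Hz):", np.nanmedian(f0))
print("mean RMS amplitude:", rms.mean())
```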

How to Teach a Robot to Laugh at the Right Time

Laughter comes in many forms, from a polite chuckle to a contagious howl of mirth. Scientists are now developing an AI system that aims to recreate these nuances of humour by laughing in the right way at the right time. The team behind the laughing robot, which is called Erica, say that the system could improve natural conversations between people and AI systems.

“We think that one of the important functions of conversational AI is empathy,” said Dr Koji Inoue, of Kyoto University, the lead author of the research, published in Frontiers in Robotics and AI. “So we decided that one way a robot can empathise with users is to share their laughter.”

Inoue and his colleagues set out to teach their AI system the art of conversational laughter. They gathered training data from more than 80 speed-dating dialogues between male university students and the robot, which was initially teleoperated by four female amateur actors.

The dialogue data was annotated for solo laughs, social laughs (where humour isn’t involved, such as in polite or embarrassed laughter) and laughter of mirth. This data was then used to train a machine learning system to decide whether to laugh, and to choose the appropriate type. It might feel socially awkward to mimic a small chuckle, but empathetic to join in with a hearty laugh. Based on the audio files, the algorithm learned the basic characteristics of social laughs, which tend to be more subdued, and mirthful laughs, with the aim of mirroring these in appropriate situations.
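
The decision step described above amounts to a small classifier: given features of the user’s laugh, predict whether to respond and with which type of laugh. The scikit-learn sketch below is a hypothetical stand-in; the features, labels and data are invented and differ from the model used in the paper.

```python
# Hypothetical sketch of the decision step: given features of a user's laugh,
# predict the response ("no_laugh", "social_laugh" or "mirthful_laugh").
# Features, labels and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors: [duration_s, mean_pitch_hz, loudness_db]
X_train = np.array([
    [0.3, 180.0, 55.0],
    [1.2, 240.0, 70.0],
    [0.2, 160.0, 50.0],
    [1.5, 260.0, 72.0],
])
y_train = ["social_laugh", "mirthful_laugh", "no_laugh", "mirthful_laugh"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_laugh = np.array([[1.1, 250.0, 68.0]])
print("Respond with:", clf.predict(new_laugh)[0])
```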

“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy because as you know, most laughter is actually not shared at all,” said Inoue. “We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”

The team tested out Erica’s “sense of humour” by creating four short dialogues for it to share with a person, integrating the new shared-laughter algorithm into existing conversation software. These were compared to scenarios where Erica didn’t laugh at all or emitted a social laugh every time she detected laughter.

The clips were played to 130 volunteers who rated the shared-laughter algorithm most favourably for empathy, naturalness, human-likeness and understanding. The team said laughter could help create robots with their own distinct character. “We think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” said Inoue, although he added that it could take more than 20 years before it would be possible to have a “casual chat with a robot like we would with a friend.”

Source: https://www.theguardian.com/

How to Detect Diabetes Early Enough to Reverse It

Diabetes is a severe and growing metabolic disorder. It already affects hundreds of thousands of people in Switzerland. A sedentary lifestyle and an excessively rich diet damage the beta cells of the pancreas, promoting the onset of this disease. If detected early enough, its progression could be reversed, but diagnostic tools that allow for early detection are lacking. A team from the University of Geneva (UNIGE), in collaboration with several other scientists, including teams from the HUG, has discovered that a low level of the sugar 1,5-anhydroglucitol in the blood is a sign of a loss in functional beta cells. This molecule, easily identified by a blood test, could be used to identify the development of diabetes in people at risk, before the situation becomes irreversible. These results can be found in the Journal of Clinical Endocrinology & Metabolism.

In Switzerland, almost 500,000 people suffer from diabetes. This serious metabolic disorder is constantly increasing due to the combined effect of a lack of physical activity and an unbalanced diet. If detected early enough at the pre-diabetes stage, progression to established diabetes can be counteracted by adopting an appropriate lifestyle. Unfortunately, one third of patients already have cardiovascular, renal or neuronal complications at the time of diagnosis, which impacts their life expectancy.

When diabetes starts to develop but no symptoms are yet detectable, part of the beta cells of the pancreas (in green) disappear (right image) compared to a healthy individual (left image). This previously undetectable decrease could be identified by measuring the level of 1,5-anhydroglucitol in the blood.

‘‘Identifying the transition from pre-diabetes to diabetes is complex, because the status of the affected cells, which are scattered in very small quantities in the core of an organ located under the liver, the pancreas, is impossible to assess quantitatively by non-invasive investigations. We therefore opted for an alternative strategy: to find a molecule whose levels in the blood would be associated with the functional mass of these beta cells in order to indirectly detect their alteration at the pre-diabetes stage, before the appearance of any symptoms,’’ explains Pierre Maechler, a Professor in the Department of Cell Physiology and Metabolism and in the Diabetes Centre of the UNIGE Faculty of Medicine, who led this work.

Several years ago, scientists embarked on the identification of such a molecule able to detect pre-diabetes. The first step was to analyse thousands of molecules in healthy, pre-diabetic and diabetic mouse models. By combining powerful molecular biology techniques with a machine learning system (artificial intelligence), the research team was able to identify, from among thousands of molecules, the one that best reflects a loss of beta cells at the pre-diabetic stage: namely 1,5-anhydroglucitol, a small sugar, whose decrease in blood would indicate a deficit in beta cells.
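
Conceptually, this is a feature-ranking problem: among thousands of measured molecules, find the one that best separates healthy from pre-diabetic samples. The sketch below illustrates the idea with synthetic data and a random forest in scikit-learn; it is not the study’s actual analysis.

```python
# Illustrative sketch: rank candidate blood molecules by how well they separate
# healthy from pre-diabetic samples, using random-forest feature importances.
# The data are synthetic; the study's actual metabolomics analysis differs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_molecules = 200, 1000
X = rng.normal(size=(n_samples, n_molecules))   # blood levels of 1,000 molecules
y = rng.integers(0, 2, size=n_samples)          # 0 = healthy, 1 = pre-diabetic

# Make one molecule (index 42) genuinely informative, mimicking a marker
# such as 1,5-anhydroglucitol whose level drops in pre-diabetes.
X[y == 1, 42] -= 1.5

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("Top candidate biomarker indices:", top)
```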

Source: https://www.unige.ch/

Mind-controlled Robots

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own. Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface Laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard. This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
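
A much-simplified sketch of that feedback loop is shown below. It assumes a pre-trained EEG classifier for error-related signals (stubbed out here) and uses an invented set of candidate actions with a toy scoring update; it is illustrative only and not the EPFL implementation.

```python
# Simplified, illustrative sketch of the brain-feedback loop described above.
# Assumes a pre-trained EEG classifier for error-related signals (stubbed out);
# the candidate actions and scoring update are invented, not the EPFL algorithm.

ACTIONS = ["move_closer_to_object", "move_farther_from_object", "keep_distance"]

def detects_error(eeg_epoch) -> bool:
    """Stub for an EEG error-potential classifier ('No, not like that')."""
    return eeg_epoch.get("error_potential", False)

scores = {action: 0.0 for action in ACTIONS}   # robot's current preference over actions

def choose_action() -> str:
    return max(scores, key=scores.get)

def update_from_feedback(action, eeg_epoch, penalty=1.0, reward=0.1):
    # Trial-and-error update: penalize actions that trigger the error signal,
    # slightly reinforce those that do not.
    if detects_error(eeg_epoch):
        scores[action] -= penalty
    else:
        scores[action] += reward

# One illustrative interaction: the patient objects to the first attempt.
attempt = choose_action()
update_from_feedback(attempt, {"error_potential": True})
print("Next attempt:", choose_action())
```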

The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”

Source: https://actu.epfl.ch/

How to Predict Stress at Atomic Scale

The amount of stress a material can withstand before it cracks is critical information when designing aircraft, spacecraft, and other structures. Aerospace engineers at the University of Illinois Urbana-Champaign used machine learning for the first time to predict stress in copper at the atomic scale.

According to Huck Beng Chew and his doctoral student Yue Cui, materials such as copper behave very differently at these very small scales.

Left: Machine learning based on artificial neural networks as constitutive laws for atomic stress predictions. Right: Quantifying the local stress state of grain boundaries from atomic coordinate information.

“Metals are typically polycrystalline in that they contain many grains,” Chew said. “Each grain is a single crystal structure where all the atoms are arranged neatly and very orderly. But the atomic structure of the boundary where these grains meet can be very complex and tends to have very high stresses.”

These grain boundary stresses are responsible for the fracture and fatigue properties of the metal, but until now, such detailed atomic-scale stress measurements were confined to molecular dynamics simulation models. A data-driven approach based on machine learning enabled the study to quantify, for the first time, the grain boundary stresses in actual metal specimens imaged by electron microscopy.

“We used molecular dynamics simulations of copper grain boundaries to train our machine learning algorithm to recognize the arrangements of the atoms along the boundaries and identify patterns in the stress distributions within different grain boundary structures,” Cui said. Eventually, the algorithm was able to predict very accurately the grain boundary stresses from both simulation and experimental image data with atomic-level resolution.
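
In spirit, the model is a regression from features of an atom’s local neighborhood to a stress value. The sketch below illustrates such a mapping with synthetic data and a small neural network in scikit-learn; the actual study trained on descriptors derived from molecular dynamics simulations of copper grain boundaries.

```python
# Illustrative sketch: regress per-atom stress from local-environment features,
# in the spirit of the approach described above. The data are synthetic; the
# study trained on molecular dynamics simulations of copper grain boundaries.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_atoms, n_features = 5000, 8                 # e.g. neighbor distances/angles per atom
X = rng.normal(size=(n_atoms, n_features))
true_weights = rng.normal(size=n_features)
y = X @ true_weights + 0.1 * rng.normal(size=n_atoms)   # synthetic "atomic stress"

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X, y)

# Predict the stress of a previously unseen local atomic environment
new_environment = rng.normal(size=(1, n_features))
print("Predicted atomic stress:", model.predict(new_environment)[0])
```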

“We tested the accuracy of the machine learning algorithm with lots of different grain boundary structures until we were confident that the approach was reliable,” Cui explained. The task was more challenging than they imagined, and they had to include physics-based constraints in their algorithms to achieve accurate predictions with limited training data.

“When you train the machine learning algorithm on specific grain boundaries, you will get extremely high accuracy in the stress predictions of these same boundaries,” Chew said, “but the more important question is, can the algorithm then predict the stress state of a new boundary that it has never seen before?” For Chew, the answer is yes, and very well in fact.

Source: https://aerospace.illinois.edu/

Machine-learning Accelerates Discovery of Materials for 3D Printing

The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses. To cut down on the time it takes to discover these new materials, researchers at MIT have developed a data-driven process that uses machine learning to optimize new 3D printing materials with multiple characteristics, like toughness and compression strength.

By streamlining materials development, the system lowers costs and lessens the environmental impact by reducing the amount of chemical waste. The machine learning algorithm could also spur innovation by suggesting unique chemical formulations that human intuition might miss.

“Materials development is still very much a manual process. A chemist goes into a lab, mixes ingredients by hand, makes samples, tests them, and comes to a final formulation. But rather than having a chemist who can only do a couple of iterations over a span of days, our system can do hundreds of iterations over the same time span,” says Mike Foshey, a project manager in the Computational Design and Fabrication Group (CDFG) of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-lead author of the paper.
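
Those hundreds of iterations amount to a closed-loop optimization over formulations: propose a candidate mixture, measure its performance, update a surrogate model, and pick the next candidate. The sketch below is a hypothetical version of such a loop using a Gaussian process surrogate; the objective function and search space are invented and are not the MIT/BASF pipeline.

```python
# Hypothetical sketch of a data-driven formulation-optimization loop:
# propose a candidate mixture, "measure" its performance, refit a surrogate
# model, pick the next candidate. The objective and search space are invented;
# this is not the MIT/BASF pipeline.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def measure_performance(formulation):
    """Stand-in for printing and testing a sample (toughness/strength trade-off)."""
    a, b = formulation
    return -(a - 0.6) ** 2 - (b - 0.3) ** 2   # toy objective, optimum at (0.6, 0.3)

rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 2))                  # initial random formulations
y = np.array([measure_performance(x) for x in X])

for _ in range(20):                           # 20 closed-loop iterations
    surrogate = GaussianProcessRegressor().fit(X, y)
    candidates = rng.uniform(size=(200, 2))
    mean, std = surrogate.predict(candidates, return_std=True)
    next_formulation = candidates[np.argmax(mean + std)]   # upper-confidence-bound pick
    X = np.vstack([X, next_formulation])
    y = np.append(y, measure_performance(next_formulation))

print("Best formulation found:", X[np.argmax(y)], "score:", y.max())
```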

Additional authors include co-lead author Timothy Erps, a technical associate in CDFG; Mina Konaković Luković, a CSAIL postdoc; Wan Shou, a former MIT postdoc who is now an assistant professor at the University of Arkansas; senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT; and Hanns Hagen Geotzke, Herve Dietsch, and Klaus Stoll of BASF. The research was published today in Science Advances.

Source: https://phys.org/

Google Launches a Dermatology AI App in EU

Billions of times each year, people turn to Google’s web search box for help figuring out what’s wrong with their skin. Now, Google is preparing to launch an app that uses image recognition algorithms to provide more expert and personalized help. A brief demo at the company’s developer conference last month showed the service suggesting several possible skin conditions based on uploaded photos.
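
At its core, such a tool is an image classifier that returns the most likely conditions for an uploaded photo. The sketch below shows the general pattern with a generic pretrained network from torchvision; the model, label set, and file name are placeholders, since Google has not published its system’s code.

```python
# Illustrative sketch: top-k image classification on a skin photo with a generic
# pretrained network from torchvision. The model, labels and file name are
# placeholders; this is not Google's dermatology system, whose code is not public.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()

preprocess = weights.transforms()                       # resizing/normalization for this model
image = Image.open("skin_photo.jpg").convert("RGB")     # hypothetical uploaded photo

with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
    probs = logits.softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(weights.meta["categories"][idx], f"{p.item():.2%}")
```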

Machines have matched or outperformed expert dermatologists in studies in which algorithms and doctors scrutinize images from past patients. But there’s little evidence from clinical trials deploying such technology, and no AI image analysis tools are approved for dermatologists to use in the US, says Roxana Daneshjou, a Stanford dermatologist and researcher in machine learning and health.

“Many don’t pan out in the real world setting,” she says.

Google’s new app isn’t clinically validated yet either, but the company’s AI prowess and recent buildup of its health care division make its AI dermatology app notable. Still, the skin service will start small—and far from its home turf and largest market in the US. The service is not likely to analyze American skin blemishes any time soon.

At the developer conference, Google’s chief health officer, Karen DeSalvo, said the company aims to launch what it calls a dermatology assist tool in the European Union as soon as the end of this year. A video of the app suggesting that a mark on someone’s arm could be a mole featured a caption saying it was an approved medical device in the EU. The same note added a caveat: “Not available in the US.”

Google says its skin app has been “CE marked as a Class I medical device in the EU,” meaning it can be sold in the bloc and other countries recognizing that standard. The company would have faced relatively few hurdles to secure that clearance, says Hugh Harvey, managing director at Hardian Health, a digital health consultancy in the UK. “You essentially fill in a form and self-certify,” he says. Google’s conference last month took place a week before tighter EU rules took effect that Harvey says require many health apps, likely including Google’s, to show that an app is effective, among other things. Preexisting apps have until 2025 to comply with the new rules.

Source: https://www.wired.com/