Mind-controlled Robots

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that this invention will allow tetraplegic patients, who are prisoners of their own bodies, unable to speak or perform the slightest movement, to carry out more day-to-day activities on their own. Researchers have been working for years to develop systems that can help these patients carry out some tasks independently.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface Laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard. This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
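
To make the trial-and-error loop concrete, here is a minimal Python sketch of error-driven adaptation in the spirit of what the article describes. The single “clearance” parameter, the hidden comfort zone, and the stand-in for the EEG error-signal detector are all illustrative assumptions, not the EPFL team’s actual inverse reinforcement learning implementation.

```python
# Toy sketch of the trial-and-error loop: the robot proposes a behaviour, a
# simulated "brain" flags errors, and the robot keeps trying until no error
# is detected. All numbers are illustrative assumptions.

PREFERRED_MIN, PREFERRED_MAX = 0.20, 0.30   # hidden user preference (metres of clearance)


def simulated_error_potential(clearance: float) -> bool:
    """Stand-in for the EEG error-signal detector: True means 'No, not like that'."""
    return not (PREFERRED_MIN <= clearance <= PREFERRED_MAX)


def adapt_clearance(candidates: list[float], max_trials: int = 5) -> float | None:
    """Try candidate clearances until the (simulated) brain stops flagging errors."""
    for trial, clearance in enumerate(candidates[:max_trials], start=1):
        error = simulated_error_potential(clearance)
        print(f"trial {trial}: clearance = {clearance:.2f} m -> error signal: {error}")
        if not error:
            return clearance            # behaviour accepted; keep it
    return None                         # no acceptable behaviour found within budget


if __name__ == "__main__":
    # The robot only ever sees the binary error signal, never the preference itself.
    adapt_clearance([0.05, 0.60, 0.40, 0.25])
```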

The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
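
As a rough illustration of the “translation” step Batzianoulis describes, the sketch below trains a simple classifier to flag EEG epochs that contain an error signal and maps the decision onto a robot command. The synthetic features and the choice of linear discriminant analysis are assumptions for illustration; the study’s actual decoding pipeline is more elaborate.

```python
"""Illustrative sketch: classify short EEG windows as error / no-error, then map
the decision onto a robot command."""

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for features extracted from EEG epochs (e.g., band powers).
n_epochs, n_features = 200, 32
X_error = rng.normal(loc=0.5, scale=1.0, size=(n_epochs, n_features))  # epochs with an error potential
X_ok = rng.normal(loc=0.0, scale=1.0, size=(n_epochs, n_features))     # epochs without
X = np.vstack([X_error, X_ok])
y = np.array([1] * n_epochs + [0] * n_epochs)                           # 1 = "No, not like that"

clf = LinearDiscriminantAnalysis().fit(X, y)


def brain_signal_to_command(eeg_features: np.ndarray) -> str:
    """Map a decoded brain signal onto a robot control action."""
    is_error = clf.predict(eeg_features.reshape(1, -1))[0] == 1
    return "adjust_trajectory" if is_error else "keep_current_plan"


print(brain_signal_to_command(rng.normal(0.5, 1.0, size=n_features)))
```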

Source: https://actu.epfl.ch/

Superhuman Era

The Pentagon is investigating how to fundamentally alter what it means to be human, funding research into creating superhumans that are smarter, faster, and stronger through human performance enhancement. The US Defense and Intelligence communities are on the cusp of ushering in a new era of transhumanism by funding research into gene editing, artificial intelligence, and the Internet of Bodies (IoB) to enhance human performance.

If successful, these “people” would have the potential to never tire and think smarter, move faster, jump higher, see farther, hear better, hit harder, live longer, adapt stronger, and calculate quicker than any other human being on the planet.

A Pentagon-sponsored RAND report published today outlines the technological potentials of this controversial transhumanist research, which includes potentially “adding reptilian genes that provide the ability to see in infrared,” and “making humans stronger, more intelligent, or more adapted to extreme environments.”

According to the RAND report, “Technological Approaches to Human Performance Enhancement,” modalities for human performance enhancement (HPE) can be grouped into three principal categories: gene editing, applications of artificial intelligence (AI), and networked technologies that are wearable or even implantable (the so-called Internet of Bodies [IoB]).
For the US Defense and Intelligence communities, human performance enhancement (HPE)—offering “the potential to increase strength, speed, endurance, intelligence, and tolerance of extreme environments and to reduce sleep needs and reaction times”—could aid in the development of better operators.

The report adds that in the next few years, “HPE could help military service and intelligence analysts through the use of multiple techniques to connect technology to human beings.”

If successful, humanity as we know it may split into an entirely new species, where those not genetically edited or technologically altered could never compete with those who were. Worse than being serfs, those not able to keep up in this brave new transhumanistic world would be rendered irrelevant, redundant, useless, unnecessary even for manual labor.

Source: https://sociable.co/

Dementia Test on an iPad

A dementia diagnosis usually starts with a family member noticing that something isn’t quite right: a partner becoming forgetful, a normally placid parent losing their temper more often. From there, there are doctor’s appointments—memory and behavior tests that haven’t changed in years, brain scans if the money is there, or one of the battery of new blood tests looking for the biomarkers of brain damage. And then: nothing.

Neurodegenerative diseases like dementia and Alzheimer’s are more feared than cancer and heart disease combined, according to a 2016 survey, and one of the most frightening things about them is how little we still know. There are no cures, and few effective treatments.

So you might question the benefits of a 5-minute test that can assess your risk of getting dementia before you show any symptoms. The Integrated Cognitive Assessment (ICA) test, developed by the British startup Cognetivity Neurosciences, has been granted Food and Drug Administration clearance to be marketed in the United States and is being trialled at several NHS trusts in the UK. But is there any point in taking a test for a disease you can’t do anything about?

The ICA is designed as a “semi-supervised” screening test, says Cognetivity CEO Sina Habibi. It could form part of an annual health check-up for the over-50s, looking for the earliest signs of neurodegenerative disease before they become apparent in behavior. “In the same way you look at blood pressure, you could look at the brain with a cognitive test to see if there’s something malfunctioning,” he says.

An early diagnosis could help people plan ahead and put their affairs in order—but arguably that’s something they should be doing anyway. Lifestyle tweaks such as eating less fat, exercising more, or drinking less can also reduce risk, particularly in vascular dementia, which is caused by poor blood supply to the brain and is therefore closely linked to heart health.

The procedure runs on an iPad. A zebra appears onscreen and then disappears, replaced by a railway bridge. There are flashes of beach scenes in black and white, and then a glimpse of an exotic bird, all interspersed with monochrome grids and fuzzy static—a captcha at warp speed. The user’s task is simple: They tap on the right side of the screen whenever they see an animal in one of the pictures, and on the left side when they don’t.
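
The sketch below shows how such a rapid animal/no-animal task could be scored in principle: each flashed image yields a trial with a ground-truth label, the side the user tapped, and a reaction time. The trial encoding and the composite score are invented for illustration and are not Cognetivity’s scoring method.

```python
"""Toy scoring sketch for a rapid animal / no-animal categorisation test."""

from dataclasses import dataclass
from statistics import mean


@dataclass
class Trial:
    has_animal: bool       # ground truth for the flashed image
    tapped_right: bool     # user taps right for "animal", left for "no animal"
    reaction_ms: float     # time from image onset to tap


def score(trials: list[Trial]) -> dict:
    correct = [t for t in trials if t.tapped_right == t.has_animal]
    accuracy = len(correct) / len(trials)
    speed = mean(t.reaction_ms for t in correct) if correct else float("nan")
    # Hypothetical composite: accuracy weighted against response speed.
    composite = accuracy * 100 - 0.02 * speed
    return {"accuracy": accuracy, "mean_rt_ms": speed, "composite": composite}


session = [
    Trial(True, True, 420.0),
    Trial(False, False, 510.0),
    Trial(True, False, 640.0),   # missed animal
    Trial(False, False, 480.0),
]
print(score(session))
```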

New AI Algorithm Can Spot Unseen Heart Attack Symptoms

Researchers at Mount Sinai have created a new artificial intelligence algorithm that can identify slight changes within the heart and accurately predict oncoming heart attack symptoms or heart failure. The AI algorithm learned how to identify subtle changes in electrocardiograms (also known as ECGs or EKGs) to predict whether a patient was experiencing heart failure.

Researchers implemented natural language processing programs to help the computer extract data from the written reports, enabling it to read over 700,000 electrocardiograms and echocardiogram reports obtained from 150,000 Mount Sinai Health System patients from 2003 to 2020. Data from four hospitals was used to train the computer, whereas data from a fifth was used to test how the algorithm would perform in a different experimental setting.
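
A minimal sketch of this kind of pipeline is shown below: labels are mined from free-text echocardiogram reports with a crude keyword rule (standing in for the NLP step), and a small one-dimensional convolutional network is trained on raw ECG waveforms to flag weak pumping in either ventricle. The label rule, the network size, and the 12-lead, 2,500-sample input shape are illustrative assumptions, not the Mount Sinai model.

```python
"""Sketch: keyword-mined labels from echo reports + a tiny 1-D CNN over ECG waveforms."""

import torch
import torch.nn as nn


# --- 1. crude NLP stand-in: mine labels from free-text echo reports -----------------
def label_from_report(report: str) -> list[float]:
    text = report.lower()
    lv_weak = float("left ventricular ejection fraction" in text and "reduced" in text)
    rv_weak = float("right ventricular" in text and ("dysfunction" in text or "reduced" in text))
    return [lv_weak, rv_weak]


reports = [
    "Left ventricular ejection fraction is reduced at 35%.",
    "Normal biventricular size and function.",
]
labels = torch.tensor([label_from_report(r) for r in reports])


# --- 2. tiny 1-D CNN over 12-lead ECG waveforms --------------------------------------
class ECGNet(nn.Module):
    def __init__(self, leads: int = 12, classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, classes)

    def forward(self, x):                     # x: (batch, leads, samples)
        return self.head(self.features(x).squeeze(-1))


ecgs = torch.randn(2, 12, 2500)               # two synthetic 10-second, 250 Hz ECGs
model = ECGNet()
loss = nn.BCEWithLogitsLoss()(model(ecgs), labels)
loss.backward()                               # one illustrative training step
print(float(loss))
```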

“We showed that deep-learning algorithms can recognize blood pumping problems on both sides of the heart from ECG waveform data,” said Benjamin S. Glicksberg, PhD, Assistant Professor of Genetics and Genomic Sciences, a member of the Hasso Plattner Institute for Digital Health at Mount Sinai, and a senior author of the study published in the Journal of the American College of Cardiology: Cardiovascular Imaging. “Ordinarily, diagnosing these types of heart conditions requires expensive and time-consuming procedures. We hope that this algorithm will enable quicker diagnosis of heart failure.”

However, recent breakthroughs in artificial intelligence suggest that electrocardiograms—a widely used electrical recording device—could be a fast and readily available alternative in these cases. For instance, many studies have shown how a “deep-learning” algorithm can detect weakness in the heart’s left ventricle, which pushes freshly oxygenated blood out to the rest of the body. In this study, the researchers described the development of an algorithm that not only assessed the strength of the left ventricle but also the right ventricle, which takes deoxygenated blood streaming in from the body and pumps it to the lungs.

“Although appealing, traditionally it has been challenging for physicians to use ECGs to diagnose heart failure. This is partly because there are no established diagnostic criteria for these assessments and because some changes in ECG readouts are simply too subtle for the human eye to detect,” said Dr. Nadkarni. “This study represents an exciting step forward in finding information hidden within the ECG data which can lead to better screening and treatment paradigms using a relatively simple and widely available test.”

Source: https://www.mountsinai.org/

AI Neural Network: the Bigger, the Smarter

When it comes to the neural networks that power today’s artificial intelligence, sometimes the bigger they are, the smarter they are too. Recent leaps in machine understanding of language, for example, have hinged on building some of the most enormous AI models ever and stuffing them with huge gobs of text. A new cluster of computer chips could now help these networks grow to almost unimaginable size—and show whether going ever larger may unlock further AI advances, not only in language understanding, but perhaps also in areas like robotics and computer vision.

Cerebras Systems, a startup that has already built the world’s largest computer chip, has now developed technology that lets a cluster of those chips run AI models that are more than a hundred times bigger than the most gargantuan ones around today.

Cerebras says it can now run a neural network with 120 trillion connections, mathematical simulations of the interplay between biological neurons and synapses. The largest AI models in existence today have about a trillion connections, and they cost many millions of dollars to build and train. But Cerebras says its hardware will run calculations in about a 50th of the time of existing hardware. Its chip cluster, along with power and cooling requirements, presumably still won’t come cheap, but Cerebras at least claims its tech will be substantially more efficient.

“We built it with synthetic parameters,” says Andrew Feldman, founder and CEO of Cerebras, who will present details of the tech at a chip conference this week. “So we know we can, but we haven’t trained a model, because we’re infrastructure builders, and, well, there is no model yet” of that size, he adds.

Today, most AI programs are trained using GPUs, a type of chip originally designed for generating computer graphics but also well suited for the parallel processing that neural networks require. Large AI models are essentially divided up across dozens or hundreds of GPUs, connected using high-speed wiring.
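
The sketch below shows the basic idea of that split in PyTorch: a model too large for one device has its layers placed on different devices, with activations shipped between them. Real systems add pipelining, tensor sharding, and fast interconnects; the two-device split and layer sizes here are assumptions for illustration.

```python
"""Bare-bones sketch of layer-wise model parallelism across two devices."""

import torch
import torch.nn as nn

# Use two GPUs when available, otherwise fall back to CPU so the sketch still runs.
dev0 = torch.device("cuda:0") if torch.cuda.device_count() >= 2 else torch.device("cpu")
dev1 = torch.device("cuda:1") if torch.cuda.device_count() >= 2 else torch.device("cpu")


class SplitMLP(nn.Module):
    """First half of the layers lives on dev0, second half on dev1."""

    def __init__(self, width: int = 4096):
        super().__init__()
        self.part0 = nn.Sequential(nn.Linear(width, width), nn.ReLU()).to(dev0)
        self.part1 = nn.Sequential(nn.Linear(width, width), nn.ReLU()).to(dev1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.part0(x.to(dev0))
        x = x.to(dev1)            # activation transfer over the interconnect
        return self.part1(x)


model = SplitMLP()
out = model(torch.randn(8, 4096))
print(out.shape, out.device)
```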

GPUs still make sense for AI, but as models get larger and companies look for an edge, more specialized designs may find their niches. Recent advances and commercial interest have sparked a Cambrian explosion in new chip designs specialized for AI. The Cerebras chip is an intriguing part of that evolution. While normal semiconductor designers split a wafer into pieces to make individual chips, Cerebras packs in much more computational power by using the entire thing, having its many computational units, or cores, talk to each other more efficiently. A GPU typically has a few hundred cores, but Cerebras’s latest chip, called the Wafer Scale Engine Two (WSE-2), has 850,000 of them.

Source: https://www.wired.com/

Google Launches a Dermatology AI App in the EU

Billions of times each year, people turn to Google’s web search box for help figuring out what’s wrong with their skin. Now, Google is preparing to launch an app that uses image recognition algorithms to provide more expert and personalized help. A brief demo at the company’s developer conference last month showed the service suggesting several possible skin conditions based on uploaded photos.

Machines have matched or outperformed expert dermatologists in studies in which algorithms and doctors scrutinize images from past patients. But there’s little evidence from clinical trials deploying such technology, and no AI image analysis tools are approved for dermatologists to use in the US, says Roxana Daneshjou, a Stanford dermatologist and researcher in machine learning and health.

“Many don’t pan out in the real world setting,” she says.

Google’s new app isn’t clinically validated yet either, but the company’s AI prowess and recent buildup of its health care division make its AI dermatology app notable. Still, the skin service will start small—and far from its home turf and largest market in the US. The service is not likely to analyze American skin blemishes any time soon.

At the developer conference, Google’s chief health officer, Karen DeSalvo, said the company aims to launch what it calls a dermatology assist tool in the European Union as soon as the end of this year. A video of the app suggesting that a mark on someone’s arm could be a mole featured a caption saying it was an approved medical device in the EU. The same note added a caveat: “Not available in the US.”

Google says its skin app has been “CE marked as a Class I medical device in the EU,” meaning it can be sold in the bloc and other countries recognizing that standard. The company would have faced relatively few hurdles to secure that clearance, says Hugh Harvey, managing director at Hardian Health, a digital health consultancy in the UK. “You essentially fill in a form and self-certify,” he says. Google’s conference last month took place a week before tighter EU rules took effect that Harvey says require many health apps, likely including Google’s, to demonstrate that they are effective, among other things. Preexisting apps have until 2025 to comply with the new rules.

Source: https://www.wired.com/

AI Recognises the Biological Activity of Natural Products

Nature has a vast store of medicinal substances. “Over 50 percent of all drugs today are inspired by nature,” says Gisbert Schneider, Professor of Computer-Assisted Drug Design at ETH Zurich. Nevertheless, he is convinced that we have tapped only a fraction of the potential of natural products. Together with his team, he has successfully demonstrated how artificial intelligence (AI) methods can be used in a targeted manner to find new pharmaceutical applications for natural products. Furthermore, AI methods are capable of helping to find alternatives to these compounds that have the same effect but are much easier and therefore cheaper to manufacture.

And so the ETH researchers are paving the way for an important medical advance: we currently have only about 4,000 fundamentally different medicines in total. In contrast, estimates of the number of human proteins reach up to 400,000, each of which could be a target for a drug. There are good reasons for Schneider’s focus on nature in the search for new pharmaceutical agents.

“Most natural products are by definition potential active ingredients that have been selected via evolutionary mechanisms,” he says.

Whereas scientists used to trawl collections of natural products in the search for new drugs, Schneider and his team have flipped the script: first, they look for possible target molecules, typically proteins, of natural products so as to identify the pharmacologically relevant compounds. “The chances of finding medically meaningful pairs of active ingredient and target protein are much greater using this method than with conventional screening,” Schneider says.
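
As a toy illustration of that flipped workflow, the sketch below scores candidate target proteins for a natural product by comparing a hand-made structural fingerprint against fingerprints of ligands already known to bind each target, then ranks the targets. The fingerprints, ligand sets, and similarity measure are invented for illustration; the ETH group’s models use far richer molecular representations.

```python
"""Toy 'reverse screening': rank candidate target proteins for a natural product."""


def tanimoto(a: set[str], b: set[str]) -> float:
    """Tanimoto similarity between two substructure-key sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


# Fictitious substructure keys for a natural product of interest.
natural_product = {"lactone", "macrocycle", "hydroxyl", "diene"}

# Fictitious known ligands per candidate target protein.
known_ligands = {
    "kinase_A": [{"aromatic", "amine", "hydroxyl"}],
    "protease_B": [{"lactone", "macrocycle", "hydroxyl"}, {"macrocycle", "ester"}],
    "receptor_C": [{"steroid", "hydroxyl"}],
}

# Score each target by its best-matching known ligand and rank the targets.
scores = {
    target: max(tanimoto(natural_product, ligand) for ligand in ligands)
    for target, ligands in known_ligands.items()
}
for target, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{target}: {score:.2f}")
```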

Source: https://www.weforum.org/

AI Designs Quantum Physics Beyond What Any Human Has Conceived

Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.
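
The sketch below captures the search idea in miniature: assemble candidate setups from a toolbox of standard optical elements, score each candidate, and keep the best. The element names and the scoring function are placeholders; MELVIN actually simulates the quantum state each setup produces and checks it against a target.

```python
"""Toy building-block search over optical-element sequences."""

import random

TOOLBOX = ["beam_splitter", "mirror", "hologram", "dove_prism", "phase_shifter"]


def score(setup: list[str]) -> float:
    """Placeholder objective: reward setups that mix several distinct elements."""
    return len(set(setup)) + 0.1 * setup.count("hologram")


def random_search(n_candidates: int = 1000, max_len: int = 6, seed: int = 1):
    rng = random.Random(seed)
    best_setup, best_score = None, float("-inf")
    for _ in range(n_candidates):
        setup = [rng.choice(TOOLBOX) for _ in range(rng.randint(2, max_len))]
        s = score(setup)
        if s > best_score:
            best_setup, best_score = setup, s
    return best_setup, best_score


print(random_search())
```
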

The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn, Anton Zeilinger of the University of Vienna and their colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.

“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, working with colleagues in Toronto, has refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.

“It is amazing work,” says theoretical quantum physicist Renato Renner of the Institute for Theoretical Physics at the Swiss Federal Institute of Technology Zurich, who reviewed a 2020 study about THESEUS but was not directly involved in these efforts.

Source: https://www.scientificamerican.com/

How to Completely Wipe out Colon Cancer in Anybody Who Gets Screened

Michael Wallace has performed hundreds of colonoscopies in his 20 years as a gastroenterologist. He thinks he’s pretty good at recognizing the growths, or polyps, that can spring up along the ridges of the colon and potentially turn into cancer. But he isn’t always perfect. Sometimes the polyps are flat and hard to see. Other times, doctors just miss them. “We’re all humans,” says Wallace, who works at the Mayo Clinic. After a morning of back-to-back procedures that require attention to minute details, he says, “we get tired.”

Colonoscopies, if unpleasant, are highly effective at sussing out pre-cancerous polyps and preventing colon cancer. But the effectiveness of the procedure rests heavily on the abilities of the physician performing it. Now, the Food and Drug Administration has approved a new tool that promises to help doctors recognize precancerous growths during a colonoscopy: an artificial intelligence system made by Medtronic. Doctors say that alongside other measures, the tool could help improve diagnoses.

“We really have the opportunity to completely wipe out colon cancer in anybody who gets screened,” says Wallace, who consulted with Medtronic on the project.

The Medtronic system, called GI Genius, has seen the inside of more colons than most doctors. Medtronic and partner Cosmo Pharmaceuticals trained the algorithm to recognize polyps by reviewing more than 13 million videos of colonoscopies conducted in Europe and the US that Cosmo had collected while running drug trials. To “teach” the AI to distinguish potentially dangerous growths, the images were labeled by gastroenterologists as either normal or unhealthy tissue. Then the AI was tested on progressively harder-to-recognize polyps, starting with colonoscopies that were performed under perfect conditions and moving to more difficult challenges, like distinguishing a polyp that was very small, only in range of the camera briefly, or hidden in a dark spot. The system, which can be added to the scopes that doctors already use to perform a colonoscopy, follows along as the doctor probes the colon, highlighting potential polyps with a green box. GI Genius was approved in Europe in October 2019 and is the first AI cleared by the FDA for helping detect colorectal polyps. “It found things that even I missed,” says Wallace, who co-authored the first validation study of GI Genius. “It’s an impressive system.”
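
The overlay step can be pictured with the short sketch below: a (stub) detector returns candidate boxes for each video frame, and each box is painted green onto the frame, much as GI Genius flags candidates for the endoscopist. The detector and frame here are synthetic stand-ins; the trained model itself is proprietary.

```python
"""Sketch of the overlay step: highlight (stub) polyp detections with a green box."""

import numpy as np


def stub_detector(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Pretend model: returns (x1, y1, x2, y2) boxes for suspected polyps."""
    return [(120, 80, 200, 150)]


def draw_green_box(frame: np.ndarray, box: tuple[int, int, int, int], thickness: int = 3) -> None:
    x1, y1, x2, y2 = box
    green = np.array([0, 255, 0], dtype=frame.dtype)
    frame[y1:y1 + thickness, x1:x2] = green      # top edge
    frame[y2 - thickness:y2, x1:x2] = green      # bottom edge
    frame[y1:y2, x1:x1 + thickness] = green      # left edge
    frame[y1:y2, x2 - thickness:x2] = green      # right edge


# One synthetic 480x640 RGB "colonoscopy" frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
for box in stub_detector(frame):
    draw_green_box(frame, box)
print("highlighted pixels:", int((frame[..., 1] == 255).sum()))
```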

Source: https://www.wired.com/

Neuralink Wants to Implant Human Brain Chips Within a Year

Tesla CEO Elon Musk released a video showing how his company Neuralink – a brain-computer-interface company – had advanced its technology to the point that the chip could allow a monkey to play video games with its mind.

Neuralink could transition from operating on monkeys to human trials within the year, if the startup meets a previous prediction from Musk. In February, he said the company planned to launch human trials by the end of the year after first mentioning his work with the monkey implants.

At the time, the CEO gave the timeline in response to another user’s request to join human trials for the product, which is designed to implant artificial intelligence into human brains as well as potentially cure neurological diseases like Alzheimer’s and Parkinson’s.

Musk has made similar statements in the past about his project, which was launched in 2016. He said in 2019 that it would be testing on humans by the end of 2020.

There has been a recent flurry of information on the project. Prior to the recent video release on Twitter, Musk had made an appearance on the social media site Clubhouse and provided some additional updates on Neuralink back in February.

During his Clubhouse visit, Musk detailed how the company had implanted the chip in the monkey’s brain and talked about how it could play video games using only its mind.

Source: https://www.sciencealert.com/