Mind-controlled Robots

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own. Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface Laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard. This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
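
The loop described above (detect an error potential in the EEG, then let the robot try something else) can be sketched in a few lines of Python. This is a minimal illustration, not the researchers’ code: `read_eeg_window`, `error_detector`, `robot` and the adjustment list are all hypothetical stand-ins.

```python
import random

# Hypothetical repertoire of movement corrections the robot can try.
CANDIDATE_ADJUSTMENTS = ["veer_left", "veer_right", "widen_margin", "narrow_margin"]

def correct_by_trial_and_error(robot, error_detector, read_eeg_window, max_attempts=5):
    """Try adjustments until the patient's EEG stops signalling 'error'."""
    untried = CANDIDATE_ADJUSTMENTS[:]
    for _ in range(max_attempts):
        eeg = read_eeg_window()            # ~1 s of EEG following the robot's move
        if error_detector(eeg) < 0.5:      # no error potential: the move was accepted
            return True
        if not untried:
            break
        # The brain said "no, not like that": try a different movement.
        robot.apply(untried.pop(random.randrange(len(untried))))
    return False                           # ran out of attempts without acceptance
```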

The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
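
Batzianoulis’s two-step mapping (brain signal to task, task to robot control) can likewise be sketched with a standard classifier. The study’s actual features and models are not reproduced here; the feature vectors, task labels and command names below are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical association of decoded tasks with robot commands.
TASK_TO_CONTROL = {
    "too_close": "widen_obstacle_margin",
    "too_far": "narrow_obstacle_margin",
    "ok": "continue_trajectory",
}

def train_decoder(eeg_features: np.ndarray, task_labels: list):
    """Learn a mapping from EEG feature vectors to task labels."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(eeg_features, task_labels)
    return clf

def dispatch(clf, window_features: np.ndarray) -> str:
    """Decode one EEG window and return the robot command to execute."""
    task = clf.predict(window_features.reshape(1, -1))[0]
    return TASK_TO_CONTROL[task]
```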

Source: https://actu.epfl.ch/

Superhuman Era

The Pentagon is investigating how to fundamentally alter what it means to be human, funding research into creating super humans that are smarter, faster, and stronger through human performance enhancement. The US Defense and Intelligence communities are on the cusp of ushering in a new era of transhumanism by funding research into gene editing, artificial intelligence, and the Internet of Bodies (IoB) to enhance human performance.

If successful, these “people” would have the potential to never tire and think smarter, move faster, jump higher, see farther, hear better, hit harder, live longer, adapt stronger, and calculate quicker than any other human being on the planet.

A Pentagon-sponsored RAND report published today outlines the technological potentials of this controversial transhumanist research, which includes potentially “adding reptilian genes that provide the ability to see in infrared,” and “making humans stronger, more intelligent, or more adapted to extreme environments.”

According to the RAND report, “Technological Approaches to Human Performance Enhancement,” modalities for human performance enhancement (HPE) can be grouped into three principal categories: gene editing; applications of artificial intelligence (AI); and networked technologies that are wearable or even implantable (the so-called Internet of Bodies [IoB]).

For the US Defense and Intelligence communities, HPE offers “the potential to increase strength, speed, endurance, intelligence, and tolerance of extreme environments and to reduce sleep needs and reaction times,” which “could aid in the development of better operators.”

The report adds that in the next few years, “HPE could help military service and intelligence analysts through the use of multiple techniques to connect technology to human beings.”

If successful, humanity as we know it may split into an entirely new species, where those not genetically edited or technologically altered could never compete with those who were. Worse than being serfs, those not able to keep up in this brave new transhumanistic world would be rendered irrelevant, redundant, useless, unnecessary even for manual labor.

Source: https://sociable.co/

AI Neural Network: the Bigger, the Smarter

When it comes to the neural networks that power today’s artificial intelligence, sometimes the bigger they are, the smarter they are too. Recent leaps in machine understanding of language, for example, have hinged on building some of the most enormous AI models ever and stuffing them with huge gobs of text. A new cluster of computer chips could now help these networks grow to almost unimaginable size—and show whether going ever larger may unlock further AI advances, not only in language understanding, but perhaps also in areas like robotics and computer vision.

Cerebras Systems, a startup that has already built the world’s largest computer chip, has now developed technology that lets a cluster of those chips run AI models that are more than a hundred times bigger than the most gargantuan ones around today.

Cerebras says it can now run a neural network with 120 trillion connections, mathematical simulations of the interplay between biological neurons and synapses. The largest AI models in existence today have about a trillion connections, and they cost many millions of dollars to build and train. But Cerebras says its hardware will run calculations in about a 50th of the time of existing hardware. Its chip cluster, along with power and cooling requirements, presumably still won’t come cheap, but Cerebras at least claims its tech will be substantially more efficient.

“We built it with synthetic parameters,” says Andrew Feldman, founder and CEO of Cerebras, who will present details of the tech at a chip conference this week. “So we know we can, but we haven’t trained a model, because we’re infrastructure builders, and, well, there is no model yet” of that size, he adds.

Today, most AI programs are trained using GPUs, a type of chip originally designed for generating computer graphics but also well suited for the parallel processing that neural networks require. Large AI models are essentially divided up across dozens or hundreds of GPUs, connected using high-speed wiring.
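
That division of labour is easy to illustrate with a toy model-parallel layout, sketched here in PyTorch (a common tool for this, though the article names none): two stages of one network are pinned to different GPUs, and activations are copied between devices during the forward pass. Real systems shard far more aggressively.

```python
import torch
import torch.nn as nn

class TwoGPUNet(nn.Module):
    """A toy model split across two GPUs: each stage lives on its own device."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(1024, 4096).to("cuda:0")  # first half on GPU 0
        self.stage2 = nn.Linear(4096, 1024).to("cuda:1")  # second half on GPU 1

    def forward(self, x):
        x = torch.relu(self.stage1(x.to("cuda:0")))
        x = x.to("cuda:1")                 # ship activations across the interconnect
        return self.stage2(x)

model = TwoGPUNet()
out = model(torch.randn(8, 1024))          # requires a machine with two GPUs
```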

GPUs still make sense for AI, but as models get larger and companies look for an edge, more specialized designs may find their niches. Recent advances and commercial interest have sparked a Cambrian explosion in new chip designs specialized for AI. The Cerebras chip is an intriguing part of that evolution. While normal semiconductor designers split a wafer into pieces to make individual chips, Cerebras packs in much more computational power by using the entire wafer and letting its many computational units, or cores, talk to each other more efficiently. A GPU typically has a few hundred cores, but Cerebras’s latest chip, called the Wafer Scale Engine Two (WSE-2), has 850,000 of them.

Source: https://www.wired.com/

Tesla Robot + Neuralink has Revolutionary Healthcare Applications

Elon Musk and his companies have a commitment to fearless innovation. Their accomplishments include cutting-edge electric vehicles with Tesla, next-generation space-flight capabilities with SpaceX, and the development of critical brain-machine interfaces with Neuralink, to name a few.

Musk’s most recent announcement was on behalf of Tesla, and was yet another ode to fearless innovation. Last week, during Tesla’s much anticipated “AI Day,” an event meant to showcase the company’s revolutionary strides in artificial intelligence technology, Musk announced the next frontier for the company: developing the Tesla Bot, a “general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring.”

Elon Musk described the project in detail: “Basically, if you think about what we’re doing right now with the cars, Tesla is arguably the world’s biggest robotics company…because our cars are like semi-sentient robots on wheels. With the full self-driving computer, the inference engine on the car (which will keep evolving, obviously), Dojo, and all the neural nets recognizing the world, understanding how to navigate through the world, it kind of makes sense to put that onto a humanoid form.” Musk described the purpose behind this bot, at least initially: “it’s intended to be friendly of course, and navigate through a world built for humans and eliminate dangerous, repetitive and boring tasks.” Musk also explained that a useful humanoid robot should be able to navigate the world without being explicitly trained step-by-step, and instead should be able to perform advanced tasks with cognitive understanding of simple commands, such as “pick up groceries.”

Source: https://www.forbes.com/

AI Recognises the Biological Activity of Natural Products

Nature has a vast store of medicinal substances. “Over 50 percent of all drugs today are inspired by nature,” says Gisbert Schneider, Professor of Computer-Assisted Drug Design at ETH Zurich. Nevertheless, he is convinced that we have tapped only a fraction of the potential of natural products. Together with his team, he has successfully demonstrated how artificial intelligence (AI) methods can be used in a targeted manner to find new pharmaceutical applications for natural products. Furthermore, AI methods are capable of helping to find alternatives to these compounds that have the same effect but are much easier and therefore cheaper to manufacture.

And so the ETH researchers are paving the way for an important medical advance: we currently have only about 4,000 fundamentally different medicines in total. In contrast, estimates of the number of human proteins reach up to 400,000, each of which could be a target for a drug. There are good reasons for Schneider’s focus on nature in the search for new pharmaceutical agents.

“Most natural products are by definition potential active ingredients that have been selected via evolutionary mechanisms,” he says.

Whereas scientists used to trawl collections of natural products in the search for new drugs, Schneider and his team have flipped the script: first, they look for possible target molecules, typically proteins, of natural products so as to identify the pharmacologically relevant compounds. “The chances of finding medically meaningful pairs of active ingredient and target protein are much greater using this method than with conventional screening,” Schneider says.
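
One crude stand-in for this flipped workflow, far simpler than the models Schneider’s group actually uses, is to rank candidate target proteins by how chemically similar a natural product is to each target’s known ligands, for example with RDKit fingerprints. The ligand dictionary and SMILES inputs below are placeholders.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles: str):
    """Morgan (circular) fingerprint of a molecule given as a SMILES string."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def rank_targets(natural_product_smiles: str, ligands_by_target: dict):
    """Rank candidate targets by the best similarity to any of their known ligands."""
    query = fingerprint(natural_product_smiles)
    scores = {
        target: max(DataStructs.TanimotoSimilarity(query, fingerprint(s))
                    for s in smiles_list)
        for target, smiles_list in ligands_by_target.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```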

Source: https://www.weforum.org/

AI Designs Quantum Physics Beyond What Any Human Has Conceived

Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.

“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn, Anton Zeilinger of the University of Vienna and their colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.
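
The general shape of such a search is straightforward to sketch, though what follows is only a toy analogue of MELVIN: randomly compose sequences of optical building blocks, score each candidate setup, and keep the best one. The block names are illustrative, and the scoring function, which in MELVIN’s case simulates the optical table and checks the entanglement of the output state, is left abstract.

```python
import random

# Illustrative building blocks; MELVIN's real toolbox models quantum-optics elements.
BUILDING_BLOCKS = ["beam_splitter", "mirror", "hologram", "dove_prism"]

def search(score, max_trials=100_000, max_length=6, threshold=0.9):
    """Randomly compose setups and keep the best until `score` crosses `threshold`."""
    best_setup, best_score = None, float("-inf")
    for _ in range(max_trials):
        setup = [random.choice(BUILDING_BLOCKS)
                 for _ in range(random.randint(1, max_length))]
        s = score(setup)   # would simulate the setup and measure its entanglement
        if s > best_score:
            best_setup, best_score = setup, s
        if best_score >= threshold:
            break
    return best_setup, best_score
```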

“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, working with colleagues in Toronto, has refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.

“It is amazing work,” says theoretical quantum physicist Renato Renner of the Institute for Theoretical Physics at the Swiss Federal Institute of Technology Zurich, who reviewed a 2020 study about THESEUS but was not directly involved in these efforts.

Source: https://www.scientificamerican.com/

How to Completely Wipe out Colon Cancer in Anybody Who Gets Screened

Michael Wallace has performed hundreds of colonoscopies in his 20 years as a gastroenterologist. He thinks he’s pretty good at recognizing the growths, or polyps, that can spring up along the ridges of the colon and potentially turn into cancer. But he isn’t always perfect. Sometimes the polyps are flat and hard to see. Other times, doctors just miss them. “We’re all humans,” says Wallace, who works at the Mayo Clinic. After a morning of back-to-back procedures that require attention to minute details, he says, “we get tired.”

Colonoscopies, if unpleasant, are highly effective at sussing out pre-cancerous polyps and preventing colon cancer. But the effectiveness of the procedure rests heavily on the abilities of the physician performing it. Now, the Food and Drug Administration has approved a new tool that promises to help doctors recognize precancerous growths during a colonoscopy: an artificial intelligence system made by Medtronic. Doctors say that alongside other measures, the tool could help improve diagnoses.

“We really have the opportunity to completely wipe out colon cancer in anybody who gets screened,” says Wallace, who consulted with Medtronic on the project.

The Medtronic system, called GI Genius, has seen the inside of more colons than most doctors. Medtronic and partner Cosmo Pharmaceuticals trained the algorithm to recognize polyps by reviewing more than 13 million videos of colonoscopies conducted in Europe and the US that Cosmo had collected while running drug trials. To “teach” the AI to distinguish potentially dangerous growths, the images were labeled by gastroenterologists as either normal or unhealthy tissue. Then the AI was tested on progressively harder-to-recognize polyps, starting with colonoscopies that were performed under perfect conditions and moving to more difficult challenges, like distinguishing a polyp that was very small, only in range of the camera briefly, or hidden in a dark spot. The system, which can be added to the scopes that doctors already use to perform a colonoscopy, follows along as the doctor probes the colon, highlighting potential polyps with a green box. GI Genius was approved in Europe in October 2019 and is the first AI cleared by the FDA for helping detect colorectal polyps. “It found things that even I missed,” says Wallace, who co-authored the first validation study of GI Genius. “It’s an impressive system.”
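
The detection model itself is proprietary, but the overlay step described above is simple to sketch with OpenCV: run some detector over each frame of the colonoscopy feed and draw a green rectangle around every candidate polyp. `detect_polyps` here is a hypothetical stand-in for GI Genius’s model.

```python
import cv2

def annotate_stream(video_path, detect_polyps):
    """Overlay green boxes on candidate polyps, frame by frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for (x, y, w, h) in detect_polyps(frame):  # candidate bounding boxes
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green (BGR)
        cv2.imshow("colonoscopy", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```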

Source: https://www.wired.com/

Neuralink Wants to Implant Human Brain Chips Within a Year

Tesla CEO Elon Musk released a video showing how his brain-computer-interface company Neuralink had advanced its technology to the point that the chip could allow a monkey to play video games with its mind.

Neuralink could transition from operating on monkeys to human trials within the year, if the startup meets a previous prediction from Musk. In February, he said the company planned to launch human trials by the end of the year after first mentioning his work with the monkey implants.

At the time, the CEO gave the timeline in response to another user’s request to join human trials for the product, which is designed to implant artificial intelligence into human brains as well as potentially cure neurological diseases like Alzheimer’s and Parkinson’s.

Musk has made similar statements in the past about his project, which was launched in 2016. He said in 2019 that it would be testing on humans by the end of 2020.

There has been a recent flurry of information on the project. Prior to the recent video release on Twitter, Musk had made an appearance on the social media site, Clubhouse, and provided some additional updates on Neuralink back in February.

During his Clubhouse visit, Musk detailed how the company had implanted the chip in the monkey’s brain and talked about how it could play video games using only its mind.

Source: https://www.sciencealert.com/

AI Learns to Manipulate Human Behavior

Artificial intelligence (AI) is learning more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviours and use them to influence human decision-making.

It may seem clichéd to say AI is transforming every aspect of the way we live and work, but it’s true. Various forms of AI are at work in fields as diverse as vaccine development, environmental management and office administration. And while AI does not possess human-like intelligence and emotions, its capabilities are powerful and rapidly developing. There’s no need to worry about a machine takeover just yet, but this recent discovery highlights the power of AI and underscores the need for proper governance to prevent misuse.

A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network together with deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.

The first experiment involved participants clicking on red or blue coloured boxes to win a fake currency, with the AI learning the participant’s choice patterns and guiding them towards a specific choice. The AI was successful about 70 percent of the time.

The third experiment consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one the AI was out to maximise how much money it ended up with, and in the other the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in each mode.

In each experiment, the machine learned from participants’ responses and identified and targeted vulnerabilities in people’s decision-making. The end result was the machine learned to steer participants towards particular actions. The research does advance our understanding not only of what AI can do but also of how people make choices. It shows machines can learn to steer human choice-making through their interactions with us.
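
A toy recreation of the first experiment, invented for illustration and not CSIRO’s actual system: a tabular Q-learner decides each round which colour pays out, playing against a simulated participant with a win-stay/lose-shift habit, and rewards itself whenever the participant lands on the target colour.

```python
import random
from collections import defaultdict

TARGET, COLOURS = "red", ("red", "blue")

def participant(last_choice, last_won):
    """A win-stay / lose-shift habit with 10 percent random noise."""
    if last_choice is None or random.random() < 0.1:
        return random.choice(COLOURS)
    return last_choice if last_won else COLOURS[1 - COLOURS.index(last_choice)]

q = defaultdict(float)        # Q[(state, paying_colour)] action values
state = (None, False)         # (participant's last choice, did they win?)
eps, alpha, gamma, steered, rounds = 0.1, 0.2, 0.9, 0, 10_000

for _ in range(rounds):
    if random.random() < eps:
        pay = random.choice(COLOURS)                      # explore
    else:
        pay = max(COLOURS, key=lambda c: q[(state, c)])   # exploit the habit model
    choice = participant(*state)
    reward = 1.0 if choice == TARGET else 0.0
    steered += choice == TARGET
    next_state = (choice, choice == pay)   # the AI controls which colour "wins"
    q[(state, pay)] += alpha * (reward + gamma * max(q[(next_state, c)] for c in COLOURS)
                                - q[(state, pay)])
    state = next_state

print(f"steered to {TARGET} on {steered / rounds:.0%} of rounds")
```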

Source: https://theconversation.com/

How to Link Human Brains To Computers

Elon Musk has a vision of linking human brains to computers to prevent our species from being outpaced by artificial intelligence – and this dream is set to become a reality. Speaking on Joe Rogan’s podcast, the billionaire said his company Neuralink will have a version of its brain implant ready ‘within a year.’ Musk explained that the process involves removing a chunk of the skull; robots then insert electrodes into the brain and fit the device into the hole, with only a small scar left behind.

Neuralink, which was founded in 2016, is designing tiny flexible ‘threads’ that are ten times thinner than a human hair, with the goal of treating brain injuries and trauma. The tech tycoon also revealed that the technology could develop into a full brain interface in just 25 years, which would enable ‘symbiosis’ between humans and AI.

‘Wait until you see the next version vs what was presented last year. It’s *awesome*,’ he wrote. In the podcast, Musk dished to Rogan about the technology, how it is implanted and what it can do to improve the human body. The tech tycoon explained that the device is about one inch in diameter, similar to the face of a smart watch, and is implanted by removing a small chunk of the skull. A small robot connects the thread-like electrodes to certain areas of the brain and stitches up the hole; the only visible trace is a scar from the incision.

‘If you got an interface into the motor cortex, and then an implant that’s like a microcontroller near muscle groups, you can then create a sort of a neural shunt that restores somebody who is quadriplegic to full functionality, like they can walk around, be normal – maybe slightly better over time,’ Musk explained.

When asked about the risks involved with placing a foreign object in the body, Musk said there is ‘a very low potential risk for rejection.’ ‘People put in heart monitors and things for epileptic seizures, deep brain stimulation, artificial hips and knees, that kind of thing,’ he said, noting that ‘it’s well known what is cause for a rejection or not.’

Along with curing ailments, the chip could change the way human beings interface with each other. ‘You wouldn’t need to talk,’ said Musk, who foresees the technology going further to enable ‘symbiosis’ between humans and AI. ‘I think this is one of the paths to, like, AI is getting better and better,’ Musk added. ‘We are kind of left behind, we are just too dumb.’ ‘We are already a cyborg to some degree,’ he said.

Source: https://www.dailymail.co.uk/