Mind-controlled Robots

Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own. Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own.

“People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”

Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface Laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.

To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard. This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong – but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object? To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct.
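
A deliberately simplified sketch of that trial-and-error loop is given below. It is not the EPFL implementation: the detect_error_signal function is a hypothetical stand-in for the EEG error-potential detector, and the candidate obstacle clearances and the simulated patient preference are invented purely for illustration.

```python
import random

# Hypothetical stand-in for the EEG classifier: returns True when the patient's
# brain emits an "error" signal after watching the robot's last movement.
def detect_error_signal(patient_disliked_it: bool) -> bool:
    return patient_disliked_it

# Candidate behaviours the robot can try, e.g. how widely it clears an obstacle (cm).
candidate_clearances = [5, 10, 15, 20, 25]

def preferred_clearance(simulated_patient_preference: int = 15, max_trials: int = 5) -> int:
    """Trial-and-error loop: try clearances until no error signal is detected."""
    untried = list(candidate_clearances)
    for _ in range(max_trials):
        clearance = random.choice(untried)
        untried.remove(clearance)
        # In the real system the robot executes the movement and the EEG headcap is
        # read; here we simulate a patient who dislikes anything but a 15 cm clearance.
        error = detect_error_signal(clearance != simulated_patient_preference)
        if not error:
            return clearance  # no error signal: the patient is satisfied
    return clearance  # fall back to the last attempt

print(preferred_clearance())  # typically converges within a handful of trials
```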

The process goes pretty quickly – only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes. “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system – or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”

Source: https://actu.epfl.ch/

How to Predict Stress at Atomic Scale

The amount of stress a material can withstand before it cracks is critical information when designing aircraft, spacecraft, and other structures. Aerospace engineers at the University of Illinois Urbana-Champaign used machine learning for the first time to predict stress in copper at the atomic scale.

According to Huck Beng Chew and his doctoral student Yue Cui, materials such as copper are very different at these very small scales.

Left: Machine learning based on artificial neural networks as constitutive laws for atomic stress predictions. Right: Quantifying the local stress state of grain boundaries from atomic coordinate information

“Metals are typically polycrystalline in that they contain many grains,” Chew said. “Each grain is a single crystal structure where all the atoms are arranged neatly and very orderly. But the atomic structure of the boundary where these grains meet can be very complex and tends to have very high stresses.”

These grain boundary stresses are responsible for the fracture and fatigue properties of the metal, but until now, such detailed atomic-scale stress measurements were confined to molecular dynamics simulation models. Data-driven approaches based on machine learning enabled the study to quantify, for the first time, the grain boundary stresses in actual metal specimens imaged by electron microscopy.

“We used molecular dynamics simulations of copper grain boundaries to train our machine learning algorithm to recognize the arrangements of the atoms along the boundaries and identify patterns in the stress distributions within different grain boundary structures,” Cui said. Eventually, the algorithm was able to predict the grain boundary stresses very accurately, with atomic-level resolution, from both simulation and experimental image data.
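
The study’s actual descriptors and network architecture are not reproduced here, but the general idea, a neural network trained on molecular-dynamics data to map features of an atom’s local environment to its stress, can be sketched roughly as follows. The toy descriptors (neighbour distances) and the synthetic stress values are assumptions made for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for MD training data: each row describes one atom's local
# environment near a grain boundary (distances to its nearest neighbours),
# and the target is that atom's stress taken from the simulation.
n_atoms, n_neighbours = 2000, 12
neighbour_distances = rng.uniform(2.0, 3.5, size=(n_atoms, n_neighbours))  # angstroms
# Synthetic "stress": atoms with compressed neighbour shells carry higher stress.
stress = 10.0 * (2.55 - neighbour_distances.mean(axis=1)) + rng.normal(0, 0.1, n_atoms)

X_train, X_test, y_train, y_test = train_test_split(
    neighbour_distances, stress, test_size=0.2, random_state=0
)

# Small fully connected network standing in for the learned constitutive law.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```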

“We tested the accuracy of the machine learning algorithm with lots of different grain boundary structures until we were confident that the approach was reliable,” Cui explained. The task was more challenging than they imagined, and they had to include physics-based constraints in their algorithms to achieve accurate predictions with limited training data.

“When you train the machine learning algorithm on specific grain boundaries, you will get extremely high accuracy in the stress predictions of these same boundaries,” Chew said, “but the more important question is, can the algorithm then predict the stress state of a new boundary that it has never seen before?” For Chew, the answer is yes, and very well in fact.

Source: https://aerospace.illinois.edu/

Machine-learning Accelerates Discovery of Materials for 3D Printing

The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses. To cut down on the time it takes to discover these new materials, researchers at MIT have developed a data-driven process that uses machine learning to optimize new 3D printing materials with multiple characteristics, like toughness and compression strength.

By streamlining materials development, the system lowers costs and lessens the environmental impact by reducing the amount of chemical waste. The machine learning algorithm could also spur innovation by suggesting unique chemical formulations that human intuition might miss.

“Materials development is still very much a manual process. A chemist goes into a lab, mixes ingredients by hand, makes samples, tests them, and comes to a final formulation. But rather than having a chemist who can only do a couple of iterations over a span of days, our system can do hundreds of iterations over the same time span,” says Mike Foshey, a project manager in the Computational Design and Fabrication Group (CDFG) of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-lead author of the paper.
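
The MIT team’s optimization machinery is not described in enough detail here to reproduce, but the overall pattern, in which a surrogate model proposes the next formulation to mix and test while balancing several measured properties, can be sketched as below. The ingredient fractions, the simulated measurements, and the simple scalarized score are illustrative assumptions, not the published method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def run_experiment(formulation):
    """Stand-in for mixing and testing a sample: returns (toughness, strength).
    In practice these numbers come from physical measurements."""
    a, b, c = formulation  # hypothetical ingredient fractions
    toughness = 1.5 * a + 0.5 * b - (a - 0.4) ** 2 + rng.normal(0, 0.02)
    strength = 1.0 * c + 0.8 * b - (c - 0.6) ** 2 + rng.normal(0, 0.02)
    return toughness, strength

def score(toughness, strength):
    # Simple scalarization of the two objectives; real systems may track a Pareto front.
    return 0.5 * toughness + 0.5 * strength

# Start with a few random formulations (fractions of three ingredients summing to 1).
X = rng.dirichlet(np.ones(3), size=5)
y = np.array([score(*run_experiment(x)) for x in X])

gp = GaussianProcessRegressor(normalize_y=True)
for iteration in range(20):
    gp.fit(X, y)
    candidates = rng.dirichlet(np.ones(3), size=200)
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mean + std)]        # upper-confidence-bound pick
    y_new = score(*run_experiment(nxt))
    X, y = np.vstack([X, nxt]), np.append(y, y_new)

print("best formulation found:", X[np.argmax(y)].round(3), "score:", round(y.max(), 3))
```

The appeal of a loop like this is that each physical experiment is chosen to be as informative as possible, which is what allows hundreds of iterations to stand in for a chemist’s handful.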

Additional authors include co-lead author Timothy Erps, a technical associate in CDFG; Mina Konaković Luković, a CSAIL postdoc; Wan Shou, a former MIT postdoc who is now an assistant professor at the University of Arkansas; senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT; and Hanns Hagen Geotzke, Herve Dietsch, and Klaus Stoll of BASF. The research was published today in Science Advances.

Source: https://phys.org/

Google Launches a Dermatology AI App in EU

Billions of times each year, people turn to Google’s web search box for help figuring out what’s wrong with their skin. Now, Google is preparing to launch an app that uses image recognition algorithms to provide more expert and personalized help. A brief demo at the company’s developer conference last month showed the service suggesting several possible skin conditions based on uploaded photos.

Machines have matched or outperformed expert dermatologists in studies in which algorithms and doctors scrutinize images from past patients. But there’s little evidence from clinical trials deploying such technology, and no AI image analysis tools are approved for dermatologists to use in the US, says Roxana Daneshjou, a Stanford dermatologist and researcher in machine learning and health.

“Many don’t pan out in the real-world setting,” she says.

Google’s new app isn’t clinically validated yet either, but the company’s AI prowess and recent buildup of its health care division make its AI dermatology app notable. Still, the skin service will start small—and far from its home turf and largest market in the US. The service is not likely to analyze American skin blemishes any time soon.

At the developer conference, Google’s chief health officer, Karen DeSalvo, said the company aims to launch what it calls a dermatology assist tool in the European Union as soon as the end of this year. A video of the app suggesting that a mark on someone’s arm could be a mole featured a caption saying it was an approved medical device in the EU. The same note added a caveat: “Not available in the US.”

Google says its skin app has been “CE marked as a Class I medical device in the EU,” meaning it can be sold in the bloc and other countries recognizing that standard. The company would have faced relatively few hurdles to secure that clearance, says Hugh Harvey, managing director at Hardian Health, a digital health consultancy in the UK. “You essentially fill in a form and self-certify,” he says. Google’s conference last month took place a week before tighter EU rules took effect that Harvey says require many health apps, likely including Google’s, to show that an app is effective, among other things. Preexisting apps have until 2025 to comply with the new rules.

Source: https://www.wired.com/

What is the Human Cortex?

The cerebral cortex is the thin surface layer of the brain found in vertebrate animals that has evolved most recently, showing the greatest variation in size among different mammals (it is especially large in humans). Each part of the cerebral cortex is six-layered (e.g., L2), with different kinds of nerve cells (e.g., spiny stellate) in each layer. The cerebral cortex plays a crucial role in most higher-level cognitive functions, such as thinking, memory, planning, perception, language, and attention. Although there has been some progress in understanding the macroscopic organization of this very complicated tissue, its organization at the level of individual nerve cells and their interconnecting synapses is largely unknown.

Petabyte connectomic reconstruction of a volume of human neocortex. Left: Small subvolume of the dataset. Right: A subgraph of 5000 neurons and excitatory (green) and inhibitory (red) connections in the dataset. The full graph (connectome) would be far too dense to visualize.

Mapping the structure of the brain at the resolution of individual synapses requires high-resolution microscopy techniques that can image biochemically stabilized (fixed) tissue. We collaborated with brain surgeons at Massachusetts General Hospital in Boston (MGH) who sometimes remove pieces of normal human cerebral cortex when performing a surgery to cure epilepsy in order to gain access to a site in the deeper brain where an epileptic seizure is being initiated. Patients anonymously donated this tissue, which is normally discarded, to our colleagues in the Lichtman lab. The Harvard researchers cut the tissue into ~5300 individual 30 nanometer sections using an automated tape collecting ultra-microtome, mounted those sections onto silicon wafers, and then imaged the brain tissue at 4 nm resolution in a customized 61-beam parallelized scanning electron microscope for rapid image acquisition.

Imaging the ~5300 physical sections produced 225 million individual 2D images. The team then computationally stitched and aligned this data to produce a single 3D volume. While the quality of the data was generally excellent, these alignment pipelines had to robustly handle a number of challenges, including imaging artifacts, missing sections, variation in microscope parameters, and physical stretching and compression of the tissue. Once aligned, a multiscale flood-filling network pipeline was applied (using thousands of Google Cloud TPUs) to produce a 3D segmentation of each individual cell in the tissue. Additional machine learning pipelines were applied to identify and characterize 130 million synapses, classify each 3D fragment into various “subcompartments” (e.g., axon, dendrite, or cell body), and identify other structures of interest such as myelin and cilia. Automated reconstruction results were imperfect, so manual efforts were used to “proofread” roughly one hundred cells in the data. Over time, the scientists expect to add additional cells to this verified set through additional manual efforts and further advances in automation.
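
The production alignment and flood-filling pipelines are far more sophisticated than anything that fits here, but the core of a single pairwise alignment step, estimating the translational offset between two adjacent section images by phase correlation, can be sketched as follows (purely illustrative; not the pipeline the team used).

```python
import numpy as np

def estimate_shift(reference: np.ndarray, moving: np.ndarray) -> tuple:
    """Estimate the (row, col) offset by which `moving` is displaced relative to
    `reference`, using phase correlation (FFT-based cross-correlation)."""
    f_ref = np.fft.fft2(reference)
    f_mov = np.fft.fft2(moving)
    cross_power = np.conj(f_ref) * f_mov
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase difference
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Offsets larger than half the image wrap around to negative values.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return tuple(int(s) for s in shifts)

# Tiny demo: displace a random "section" by a known offset and recover it.
rng = np.random.default_rng(0)
section_a = rng.random((256, 256))
section_b = np.roll(section_a, shift=(7, -3), axis=(0, 1))
print(estimate_shift(section_a, section_b))  # expected: (7, -3)
```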

Source: https://ai.googleblog.com/

Machine Learning Predicts Heart Failure

Every year, roughly one out of eight U.S. deaths is caused at least in part by heart failure. One of acute heart failure’s most common warning signs is excess fluid in the lungs, a condition known as “pulmonary edema.” A patient’s exact level of excess fluid often dictates the doctor’s course of action, but making such determinations is difficult and requires clinicians to rely on subtle features in X-rays that sometimes lead to inconsistent diagnoses and treatment plans.

To better handle that kind of nuance, a group led by researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has developed a machine learning model that can look at an X-ray to quantify how severe the edema is, on a four-level scale ranging from 0 (healthy) to 3 (very, very bad). The system determined the right level more than half of the time, and correctly diagnosed level 3 cases 90 percent of the time.
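
Architectural details are not given in the article, but the basic shape of such a system, a convolutional network that maps a chest X-ray to one of the four severity levels, might look roughly like the sketch below; the layer sizes and input resolution are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class EdemaSeverityNet(nn.Module):
    """Toy 4-class classifier: maps a 1-channel chest X-ray to severity levels 0-3."""
    def __init__(self, num_levels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_levels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = EdemaSeverityNet()
dummy_xray = torch.randn(8, 1, 224, 224)          # batch of 8 grayscale images
logits = model(dummy_xray)                        # shape: (8, 4)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))  # 0 = healthy ... 3 = severe
loss.backward()
print(logits.shape, float(loss))
```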

Working with Beth Israel Deaconess Medical Center (BIDMC) and Philips, the team plans to integrate the model into BIDMC’s emergency-room workflow this fall.

“This project is meant to augment doctors’ workflow by providing additional information that can be used to inform their diagnoses as well as enable retrospective analyses,” says PhD student Ruizhi Liao, who was the co-lead author of a related paper with fellow PhD student Geeticka Chauhan and MIT professors Polina Golland and Peter Szolovits.

The team says that better edema diagnosis would help doctors manage not only acute heart issues, but other conditions like sepsis and kidney failure that are strongly associated with edema.

As part of a separate journal article, Liao and colleagues also took an existing public dataset of X-ray images and developed new annotations of severity labels that were agreed upon by a team of four radiologists. Liao’s hope is that these consensus labels can serve as a universal standard to benchmark future machine learning development.

An important aspect of the system is that it was trained not just on more than 300,000 X-ray images, but also on the corresponding text of reports about the X-rays that were written by radiologists. “By learning the association between images and their corresponding reports, the method has the potential for a new way of automatic report generation from the detection of image-driven findings,” says Tanveer Syeda-Mahmood, a researcher not involved in the project who serves as chief scientist for IBM’s Medical Sieve Radiology Grand Challenge. “Of course, further experiments would have to be done for this to be broadly applicable to other findings and their fine-grained descriptors.”

Chauhan, Golland, Liao and Szolovits co-wrote the paper with MIT Assistant Professor Jacob Andreas, Professor William Wells of Brigham and Women’s Hospital, Xin Wang of Philips, and Seth Berkowitz and Steven Horng of BIDMC.

Source: https://news.mit.edu/

Why RNA Is A Better Measure Of A Patient’s Current Health Than DNA

By harnessing the combined power of next-generation sequencing (NGS), machine learning, and the dynamic nature of RNA, we’re able to accurately measure the dynamic immune response and capture a more comprehensive picture of what’s happening at the site of a solid tumor. In the beginning, there was RNA – the first genetic molecule.

In the primordial soup of chemicals that represented the beginning of life, ribonucleic acid (RNA) had the early job of storing information, likely with the support of peptides. Today, RNA’s cousin – deoxyribonucleic acid – or DNA, has taken over most of the responsibilities of passing down genetic information from cell-to-cell, generation-to-generation. As a result, most early health technologies were developed to analyze DNA. But, RNA is a powerful force. And its role in storing information, while different from its early years, has no less of an impact on human health and is gaining more mindshare in our industry.

RNA is often considered a messenger molecule, taking the information coded in our DNA and transcribing it into cellular directives that result in downstream biological signals and protein-level changes. And for this reason, RNA is becoming known not only as a drug target but, perhaps more importantly, as a barometer of health.

How and why is RNA so useful? First, RNA is labile — changing in both sequence and abundance in response to genetic and epigenetic changes, but also external factors such as disease, therapy, exercise, and more. This is in contrast to DNA, which is generally static, changing little after conception.

Next, RNA is a more accurate snapshot of disease progression. When mutations do occur at the DNA level, these do not always result in downstream biological changes. Often, the body is able to compensate by repairing the mutation or overcome it by using redundancies in the pathway in which the gene resides. By instead evaluating RNA, we get one step closer to understanding the real impact disease is imparting on our body.

Finally, RNA is abundant. In most human cells, while only two copies of DNA are present, hundreds of thousands of mRNA molecules are present, representing more than 10,000 different species of RNA. Because even rare transcripts are present in multiple copies, biological signals can be confidently detected in RNA when the right technology is used.

Source: https://medcitynews.com/

The U.S. Wastes $161B Worth Of Food Every Year. A.I. Is Helping Us Fix That

“When you see pictures of food waste, it just blows you away,” said Stefan Kalb, a former food wholesaler. “I mean, shopping cart after shopping cart of food waste. What happens with the merchandisers when they walk through the store, and they’re pulling products that have expired, is that they’ll put it in a shopping cart and just roll it to the back. It’s almost one of those dystopian [movie] pictures … cartons of milk just piled up in a grocery cart. The ones that didn’t make it.”

In the United States, somewhere between 30% and 40% of the food that’s produced is wasted. That’s the equivalent of $161 billion every single year. The U.S. throws away twice as much food as any other developed country in the world. There are all sorts of reasons this is a problem, but A.I. could help solve it.

Kalb’s company is one of several startups — let’s call them the “Internet of Groceries” — using some impressively smart machine learning tools to help with this significant problem. Kalb is the co-founder of Shelf Engine, a company that uses analytics to help retailers examine the historical order and sales data on their products so as to make better decisions about what to order. This means reduced waste and bigger margins. The company also buys back unsold stock, thereby guaranteeing the sale for a retailer.
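
Shelf Engine’s models are proprietary, but the flavor of the underlying decision, forecasting demand from historical sales and ordering only what is likely to sell, can be illustrated with a deliberately simple example; the sales history, margins, and newsvendor-style rule below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily unit sales of one product over the past 8 weeks.
history = rng.poisson(lam=20, size=56)

unit_cost, unit_price = 1.00, 2.50   # illustrative economics
salvage_value = 0.0                  # unsold stock is wasted

# Newsvendor-style order quantity: stock up to the demand quantile implied by
# the cost of ordering too much vs. the cost of ordering too little.
underage = unit_price - unit_cost          # profit lost per unit of unmet demand
overage = unit_cost - salvage_value        # money wasted per unsold unit
critical_ratio = underage / (underage + overage)

order_quantity = int(np.quantile(history, critical_ratio))
expected_waste = max(0, order_quantity - history.mean())

print(f"order {order_quantity} units (critical ratio {critical_ratio:.2f}),")
print(f"expected leftover per day ~ {expected_waste:.1f} units")
```

In practice the forecasts presumably draw on far richer signals than a single product’s history, but the trade-off being automated, understocking versus waste, is the same one the store buyer weighs by hand.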

“We haven’t previously automated this micro-decision that is happening at the grocery store with the buyer,” said Kalb. “The buyer of the store is predicting how much to order — and of what. It’s a very hard decision, and they’re doing it for hundreds and thousands of items. You have these category buyers that just walk through the store to decide how they’re gonna change their bread order or their produce order or their milk order. They’re making these micro-decisions, and it’s costing them tremendous money. If we can automate that part, then we can really make a large impact in the world.”

Source: https://www.digitaltrends.com/

The Human Vs. Drone Dogfight

The U.S. Air Force will square off an AI-powered drone against a fighter jet flown by a real, live human being. The service wants to know if AI powered by machine learning can beat a human pilot with actual cockpit experience. The result will help the Air Force determine whether AI-powered fighters are a viable alternative to human-piloted ones, a finding that could have far-reaching consequences for aerial warfare.

Air Force Magazine reports that Lt. Gen. Jack Shanahan, head of the Pentagon’s Joint Artificial Intelligence Center, stated that the fly-off would take place in 2021. Shanahan was speaking at a virtual event of the Mitchell Institute for Aerospace Studies.

Earlier reports stated that the fly-off would likely involve an older plane such as the F-16 before progressing to more advanced jets like the F-35 and F-22. If that’s the case, the Air Force will probably fly an unmanned F-16 versus a manned F-16, to make sure the human pilot and AI have as level a playing field as possible. The Air Force has already developed a remote flying mechanism, converting F-16s to QF-16 target drones, and presumably the AI would control that mechanism.

The Pentagon has flown fighters against drones before. In 1971, spy drone manufacturer Teledyne Ryan modified their BFM-34 reconnaissance drone with the Maneuverability Augmentation System for Tactical Air Combat Simulation, or MASTACS. The result was the BFM-34E unmanned fighter jet. Fast, maneuverable, and with a low radar cross section, the BFM-34E was a difficult opponent for a human fighter pilot to fly against. A paper prepared for the U.S. Air Force’s Air University described the drone: “Both the USAF and USN used this UAV to train their best pilots in simulated air combat. At Tyndall Air Force Base in Florida, the BGM-34F was used as a target in the annual William Tell air combat competition. This UAV routinely outmaneuvered manned F-15 and F-16 aircraft; one named ‘Old Red’ survived eighty-two dogfights. The USN used the MASTACS as a ‘graduation exercise’ at their Top Gun Weapons School.”

Despite the BGM-34F’s unexpected success, the Navy and Air Force had no interest in an unmanned fighter, and the program was never pursued.

Source: https://www.popularmechanics.com/

10 Artificial Intelligence Trends

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.

2. More personalization will take place in real-time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real-time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Dominos will learn when we are most likely to want pizza, and make sure the “Order Now” button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our human skills – those which AI can’t quite manage yet, such as imagination, design, strategy, and communication – while augmenting them with super-fast analytics abilities fed by vast datasets that are updated in real-time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. The IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI.

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on the fly will increasingly become part of the technology we interact with day-to-day.

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
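
As a toy illustration of the kind of pattern-spotting described here (not any particular vendor’s system), an unsupervised anomaly detector can flag activity whose features deviate from the bulk of normal traffic; the features and the synthetic data below are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" activity: [transaction amount in $, hour of day, requests per minute].
normal = np.column_stack([
    rng.normal(60, 20, 1000),      # typical transaction amounts
    rng.normal(14, 3, 1000),       # mostly daytime activity
    rng.normal(5, 2, 1000),        # modest request rates
])

# A few suspicious events: large amounts, odd hours, bursty request rates.
suspicious = np.array([[950, 3, 40], [720, 2, 55], [880, 4, 38]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(np.vstack([normal[:5], suspicious]))  # -1 = anomaly, 1 = normal
print(flags)  # the first five (normal) rows should be 1, the last three likely -1
```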

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world.

Source: https://www.forbes.com/