AI Neural Networks: The Bigger, The Smarter

When it comes to the neural networks that power today’s artificial intelligence, sometimes the bigger they are, the smarter they are too. Recent leaps in machine understanding of language, for example, have hinged on building some of the most enormous AI models ever and stuffing them with huge gobs of text. A new cluster of computer chips could now help these networks grow to almost unimaginable size—and show whether going ever larger may unlock further AI advances, not only in language understanding, but perhaps also in areas like robotics and computer vision.

Cerebras Systems, a startup that has already built the world’s largest computer chip, has now developed technology that lets a cluster of those chips run AI models that are more than a hundred times bigger than the most gargantuan ones around today.

Cerebras says it can now run a neural network with 120 trillion connections, mathematical simulations of the interplay between biological neurons and synapses. The largest AI models in existence today have about a trillion connections, and they cost many millions of dollars to build and train. But Cerebras says its hardware will run calculations in about a 50th of the time of existing hardware. Its chip cluster, along with its power and cooling requirements, presumably still won’t come cheap, but Cerebras at least claims its tech will be substantially more efficient.

“We built it with synthetic parameters,” says Andrew Feldman, founder and CEO of Cerebras, who will present details of the tech at a chip conference this week. “So we know we can, but we haven’t trained a model, because we’re infrastructure builders, and, well, there is no model yet” of that size, he adds.

Today, most AI programs are trained using GPUs, a type of chip originally designed for generating computer graphics but also well suited for the parallel processing that neural networks require. Large AI models are essentially divided up across dozens or hundreds of GPUs, connected using high-speed wiring.
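
To make that picture concrete, the minimal PyTorch sketch below splits a toy network’s layers across two GPUs, with activations hopping over the interconnect between them; the device names, layer sizes, and two-GPU setup are illustrative assumptions rather than any particular lab’s training pipeline.

```python
# Naive model parallelism: each half of the network lives on a different GPU,
# and activations are copied across the interconnect during the forward pass.
# Assumes a machine with at least two CUDA devices; all sizes are arbitrary.
import torch
import torch.nn as nn

class TwoDeviceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Linear(4096, 10).to("cuda:1")

    def forward(self, x):
        h = self.part1(x.to("cuda:0"))
        # This device-to-device copy is the traffic the high-speed wiring carries;
        # with dozens or hundreds of GPUs it becomes a major cost.
        return self.part2(h.to("cuda:1"))

model = TwoDeviceNet()
out = model(torch.randn(8, 1024))   # a batch of eight dummy inputs
print(out.shape)                    # torch.Size([8, 10])
```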

GPUs still make sense for AI, but as models get larger and companies look for an edge, more specialized designs may find their niches. Recent advances and commercial interest have sparked a Cambrian explosion in new chip designs specialized for AI. The Cerebras chip is an intriguing part of that evolution. While normal semiconductor designers split a wafer into pieces to make individual chips, Cerebras uses the entire wafer, packing in far more computational power and letting its many computational units, or cores, talk to each other more efficiently. A GPU typically has a few hundred cores, but Cerebras’s latest chip, called the Wafer Scale Engine Two (WSE-2), has 850,000 of them.

Source: https://www.wired.com/

How To Detect Heart Failure From A Single Heartbeat

Researchers have developed a neural network approach that can identify congestive heart failure with 100% accuracy by analysing just one raw electrocardiogram (ECG) heartbeat, a new study reports.

Congestive heart failure (CHF) is a chronic progressive condition that affects the pumping power of the heart muscles. Because it is associated with high prevalence, significant mortality rates and sustained healthcare costs, clinical practitioners and health systems urgently require efficient detection processes.

Dr Sebastiano Massaro, Associate Professor of Organisational Neuroscience at the University of Surrey, has worked with colleagues Mihaela Porumb and Dr Leandro Pecchia at the University of Warwick and Ernesto Iadanza at the University of Florence, to tackle these important concerns by using Convolutional Neural Networks (CNN) – hierarchical neural networks highly effective in recognising patterns and structures in data.

Published in Biomedical Signal Processing and Control, their research drastically improves on existing CHF detection methods, which typically focus on heart rate variability and, whilst effective, are time-consuming and prone to errors. By contrast, their new model applies a combination of advanced signal processing and machine learning tools to raw ECG signals, delivering 100% accuracy.
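
As a rough illustration of what a convolutional classifier over a single raw heartbeat might look like, the sketch below defines a small 1D CNN in PyTorch. The window length, layer sizes and two-class output are assumptions made for the example, not the architecture reported in the paper.

```python
# Illustrative 1D CNN that maps one raw ECG heartbeat window to a CHF/healthy
# decision. The 187-sample window and all layer sizes are assumptions for the
# sketch, not the configuration used by the authors.
import torch
import torch.nn as nn

class HeartbeatCNN(nn.Module):
    def __init__(self, window_len=187):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window_len // 4), 64), nn.ReLU(),
            nn.Linear(64, 2),   # CHF vs. healthy
        )

    def forward(self, x):               # x: (batch, 1, window_len)
        return self.classifier(self.features(x))

model = HeartbeatCNN()
logits = model(torch.randn(4, 1, 187))  # four dummy heartbeats
print(logits.argmax(dim=1))             # predicted class for each beat
```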

“We trained and tested the CNN model on large publicly available ECG datasets featuring subjects with CHF as well as healthy, non-arrhythmic hearts. Our model delivered 100% accuracy: by checking just one heartbeat we are able to detect whether or not a person has heart failure. Our model is also one of the first known to be able to identify the ECG’s morphological features specifically associated with the severity of the condition,” explains Dr Massaro.

Dr Pecchia, President of the European Alliance for Medical and Biological Engineering, explains the implications of these findings: “With approximately 26 million people worldwide affected by a form of heart failure, our research presents a major advancement on the current methodology. Enabling clinical practitioners to access an accurate CHF detection tool can make a significant societal impact, with patients benefitting from early and more efficient diagnosis and easing pressures on NHS resources.”

Source: https://www.surrey.ac.uk/

Sensor-packed Glove Coupled With AI

Wearing a sensor-packed glove while handling a variety of objects, MIT researchers have compiled a massive dataset that enables an AI system to recognize objects through touch alone. The information could be leveraged to help robots identify and manipulate objects, and may aid in prosthetics design.

The researchers developed a low-cost knitted glove, called “scalable tactile glove” (STAG), equipped with about 550 tiny sensors across nearly the entire hand. Each sensor captures pressure signals as humans interact with objects in various ways. A neural network processes the signals to “learn” a dataset of pressure-signal patterns related to specific objects. Then, the system uses that dataset to classify the objects and predict their weights by feel alone, with no visual input needed.
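
For readers curious how such a pressure-signal classifier might be wired up, here is a minimal sketch that assumes the glove’s roughly 550 readings have been arranged into a small 2D pressure map and fed to a compact CNN. The grid size, layer sizes and use of a plain 2D CNN are illustrative assumptions rather than the architecture in the paper.

```python
# Toy classifier over a single frame of glove pressure readings, assumed here
# to be laid out as a 24x24 grid (zero-padded); 26 classes echo the objects in
# the study, but everything else is illustrative.
import torch
import torch.nn as nn

NUM_OBJECTS = 26   # soda can, scissors, tennis ball, spoon, pen, mug, ...

class PressureFrameNet(nn.Module):
    def __init__(self, grid=24, n_classes=NUM_OBJECTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (grid // 4) ** 2, n_classes),
        )

    def forward(self, frame):          # frame: (batch, 1, grid, grid)
        return self.net(frame)

model = PressureFrameNet()
scores = model(torch.rand(2, 1, 24, 24))  # two dummy pressure frames
print(scores.shape)                       # torch.Size([2, 26])
```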

In a paper published today in Nature, the researchers describe a dataset they compiled using STAG for 26 common objects — including a soda can, scissors, tennis ball, spoon, pen, and mug. Using the dataset, the system predicted the objects’ identities with up to 76 percent accuracy. The system can also predict the correct weights of most objects within about 60 grams.

Similar sensor-based gloves used today run thousands of dollars and often contain only around 50 sensors that capture less information. Even though STAG produces very high-resolution data, it’s made from commercially available materials totaling around $10.

The tactile sensing system could be used in combination with traditional computer vision and image-based datasets to give robots a more human-like understanding of interacting with objects.

“Humans can identify and handle objects well because we have tactile feedback. As we touch objects, we feel around and realize what they are. Robots don’t have that rich feedback,” says Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’ve always wanted robots to do what humans can do, like doing the dishes or other chores. If you want robots to do these things, they must be able to manipulate objects really well.”

The researchers also used the dataset to measure the cooperation between regions of the hand during object interactions. For example, when someone uses the middle joint of their index finger, they rarely use their thumb. But the tips of the index and middle fingers always correspond to thumb usage. “We quantifiably show, for the first time, that, if I’m using one part of my hand, how likely I am to use another part of my hand,” he says.
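
That kind of co-usage measurement can be approximated with a simple conditional-probability calculation over per-region activation data, sketched below. The region names, activation threshold and random dummy data are invented for illustration; they are not the study’s measurements.

```python
# From per-frame pressure readings grouped into hand regions, estimate how
# often region j is active given that region i is active. All data here is
# random dummy data; only the calculation itself is the point.
import numpy as np

rng = np.random.default_rng(0)
regions = ["thumb", "index_tip", "index_mid", "middle_tip", "palm"]
pressure = rng.random((1000, len(regions)))      # 1000 frames x 5 regions

active = pressure > 0.5                          # binary "region in use" mask
co = active.T.astype(float) @ active.astype(float)      # pairwise co-activation counts
p_given = co / np.maximum(co.diagonal()[:, None], 1.0)  # P(column active | row active)

for i, j in [(2, 0), (1, 0)]:   # e.g. index_mid -> thumb, index_tip -> thumb
    print(f"P({regions[j]} active | {regions[i]} active) = {p_given[i, j]:.2f}")
```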

Source: http://news.mit.edu/