Tag Archives: Artificial Intelligence

Artificial Synapses Made from Nanowires

Scientists from Jülich, together with colleagues from Aachen and Turin, have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both store and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be an ideal candidate for use in building bioinspired "neuromorphic" processors, able to take over the diverse functions of biological synapses and neurons.

Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes, which reduces the resistance.

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence, they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed in a highly parallel manner. Traditional computers, on the other hand, rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are however only suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.

Source: http://www.fz-juelich.de/

AI and Big Data To Fight Eye Diseases

“In future, it will be possible to diagnose diabetes from the eye using automatic digital retinal screening, without the assistance of an ophthalmologist”: these were the words of Ursula Schmidt-Erfurth, Head of MedUni Vienna’s Department of Ophthalmology and Optometrics, as she opened the press conference for the ART-2018 specialist meeting on new developments in retinal therapy. Automatic diabetes screening has recently been implemented at MedUni Vienna.
Patients flock to the Department to undergo this retinal examination, which detects any diabetic changes. It takes just a few minutes and is completely non-invasive.

Essentially, this technique can detect all stages of diabetic retinal disease: high-resolution digital retinal images with two million pixels are taken and analyzed within seconds. But Big Data offers even more potential: it is already possible to diagnose some 50 other diseases in this way. Diabetes is just the start, and MedUni Vienna is among the global leaders in this digital revolution.

The Division of Cardiology led by Christian Hengstenberg within the Department of Medicine II is working on how digital retinal analysis can also be used in future for the early diagnosis of cardiovascular diseases.

“This AI medicine is ‘super human’,” emphasizes Schmidt-Erfurth. “The algorithms are quicker and more accurate. They can analyze things that an expert cannot detect with the naked eye.” And yet the commitment to Big Data and Artificial Intelligence is not a plea for medicine without doctors, which some experts predict for the not-too-distant future. “What we want are ‘super doctors’, who are able to use the high-tech findings to make the correct, individualized therapeutic decision for their patients, in the spirit of precision medicine, rather than leaving patients on their own.”

However, it is not only in the diagnosis of diseases that Artificial Intelligence and Big Data, plus virtual reality, provide better results. “We are already performing digitized operations with support from Artificial Intelligence. This involves projecting a virtual and precise image of the area of the eye being operated on onto a huge screen. The surgeon then performs the operation with a perfect view ‘on screen’, as it were, while actually operating on the patient with a scalpel.”

Source: https://www.news-medical.net/

AI Robot Presents TV News In China

China’s state news agency Xinhua this week introduced the newest members of its newsroom: AI anchors who will report “tirelessly” all day, every day, from anywhere in the country. Chinese viewers were greeted with a digital version of a regular Xinhua news anchor named Qiu Hao. The anchor, wearing a red tie and pin-striped suit, nods his head in emphasis, blinking and raising his eyebrows slightly.


“Not only can I accompany you 24 hours a day, 365 days a year; I can be endlessly copied and be present at different scenes to bring you the news,” he says. Xinhua also presented an English-speaking AI, based on another presenter, who adds: “The development of the media industry calls for continuous innovation and deep integration with the international advanced technologies … I look forward to bringing you brand new news experiences.”

Developed by Xinhua together with the Chinese search engine Sogou, the anchors use machine learning to simulate the voice, facial movements, and gestures of real-life broadcasters, presenting “a lifelike image instead of a cold robot,” according to Xinhua.

Source: https://www.theguardian.com/

AI Lie Detectors Could Reach 85% Accuracy

It’s already nerve-wracking answering questions at the border, and some ports in the European Union are taking it to another, kinda worrying level. They’re installing an artificial intelligence-powered system called iBorderCtrl, which aims to speed up the processing of travellers, but also to determine if they’re lying. A six-month trial will take place at four border crossing points in Hungary, Greece and Latvia.

During pre-screening, users will upload their passport, visa, and proof of funds, then answer questions asked by a computer-generated border guard via a webcam. The system will analyse the user’s microexpressions to determine if they’re lying, and they’ll be flagged as either low or high risk. People will be asked questions like “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” Those who pass the test will receive a QR code that lets them pass through. If there’s additional concern, their biometric data will be taken and handed off to a human agent, who will assess the case.
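The decision flow described above can be sketched as a few lines of code. Everything here, the field names, the 0-to-1 deception score, and the threshold, is invented for illustration; this is not iBorderCtrl's actual logic.

```python
def pre_screen(documents_ok: bool, deception_score: float,
               high_risk_threshold: float = 0.5) -> str:
    """Return the traveller's next step after pre-screening.

    deception_score: 0.0 (no deception cues) to 1.0 (strong cues),
    imagined here as the output of the microexpression analyser.
    """
    if documents_ok and deception_score < high_risk_threshold:
        return "low risk: issue QR code for the fast lane"
    return "high risk: collect biometrics, refer to human agent"

print(pre_screen(documents_ok=True, deception_score=0.1))
print(pre_screen(documents_ok=True, deception_score=0.8))
```

The point of the sketch is that the AI only triages: a high-risk flag does not refuse entry, it routes the traveller to a human agent.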

“We’re employing existing and proven technologies, as well as novel ones, to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis told the European Commission. “iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”

Of course, there’s the question of how accurate a system like this can be. iBorderCtrl is still in its early stages; a team member said that early testing produced a 76 percent success rate, but believes this could be raised to 85 percent.

Source: https://mashable.com/

New Material For New Processor

Computers used to take up entire rooms. Today, a two-pound laptop can slide effortlessly into a backpack. But that wouldn’t have been possible without the creation of new, smaller processors — which are only possible with the innovation of new materials. But how do materials scientists actually invent new materials? Through experimentation, explains Sanket Deshmukh, an assistant professor in the chemical engineering department of Virginia Tech, whose team’s recently published computational research might vastly improve the efficiency and cost savings of the material design process.

Deshmukh’s lab, the Computational Design of Hybrid Materials lab, is devoted to understanding and simulating the ways molecules move and interact — crucial to creating a new material. In recent years, materials scientists have employed machine learning, a powerful subset of artificial intelligence, to accelerate the discovery of new materials through computer simulations. Deshmukh and his team have recently published research in the Journal of Physical Chemistry Letters demonstrating a novel machine learning framework that trains “on the fly,” meaning it instantaneously processes data and learns from it to accelerate the development of computational models. Traditionally, the development of computational models is “carried out manually via trial-and-error approach, which is very expensive and inefficient, and is a labor-intensive task,” Deshmukh explained.

“This novel framework not only uses machine learning in a unique fashion for the first time,” Deshmukh said, “but it also dramatically accelerates the development of accurate computational models of materials.” “We train the machine learning model in a ‘reverse’ fashion by using the properties of a model obtained from molecular dynamics simulations as an input for the machine learning model, and using the input parameters used in molecular dynamics simulations as an output for the machine learning model,” said Karteek Bejagam, a post-doctoral researcher in Deshmukh’s lab and one of the lead authors of the study.

This new framework allows researchers to optimize computational models at unusually high speed, until they reach the desired properties of a new material.
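The "reverse" training idea Bejagam describes can be illustrated with a toy sketch. Nothing below is the team's actual code: the simulate() function is a cheap stand-in for a molecular dynamics run, and the linear model is only there to show the direction of the mapping — properties go in as the input, simulation parameters come out as the output, so a desired property can be mapped straight to candidate parameters instead of hunting for them by trial and error.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params):
    """Toy stand-in for a molecular dynamics run: maps model
    parameters (e.g. interaction strengths) to observed material
    properties (e.g. density, modulus). Purely illustrative."""
    A = np.array([[2.0, -1.0], [0.5, 3.0]])
    return A @ params + 0.01 * rng.normal(size=2)

# Forward data: sample parameters, record the resulting properties.
params = rng.uniform(-1, 1, size=(200, 2))
props = np.array([simulate(p) for p in params])

# "Reverse" training: properties are the INPUT, parameters the OUTPUT.
X = np.column_stack([props, np.ones(len(props))])  # add a bias column
W, *_ = np.linalg.lstsq(X, params, rcond=None)

def suggest_params(target_props):
    """Map desired material properties directly to candidate
    simulation parameters, skipping manual trial-and-error."""
    return np.append(target_props, 1.0) @ W

target = np.array([1.0, 2.0])       # properties we want the material to have
candidate = suggest_params(target)  # parameters predicted to achieve them
print(np.round(simulate(candidate), 2))  # should land close to target
```

Running the suggested parameters back through the simulator recovers the target properties, which is the optimization shortcut the framework is after.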

Source: https://vtnews.vt.edu/

Amazon to datamine the stars

Amazon.com is in talks with Chile to house and mine massive amounts of data generated by the country’s giant telescopes, which could prove fertile ground for the company to develop new artificial intelligence tools. The talks are aimed at fuelling growth in Amazon.com Inc’s cloud computing business in Latin America and boosting its data processing capabilities.

President Sebastian Pinera’s center-right government, which is seeking to wean Chile’s $325 billion economy from reliance on copper mining, announced last week it plans to pool data from all its telescopes onto a virtual observatory stored in the cloud, without giving a timeframe. The government talked of the potential for astrodata innovation, but did not give details.

Amazon executives have been holding discussions with the Chilean government for two years about a possible data center to provide infrastructure for local firms and the government to store information on the cloud. The talks have included discussion of the possibility of Amazon Web Services (AWS) hosting astrodata.

Jeffrey Kratz, AWS’s General Manager for Public Sector for Latin America, has confirmed the company’s interest in astrodata but said Amazon had no announcements to make at present. “Chile is a very important country for AWS,” he said in an email to Reuters. “We kept being amazed about the incredible work on astronomy and the telescopes, as real proof points on innovation and technology working together.” “The Chilean telescopes can benefit from the cloud by eliminating the heavy lifting of managing IT,” Kratz added.

Source: https://www.reuters.com/

AI creates 3D ‘digital heart’ to aid patient diagnoses

Armed with a mouse and computer screen instead of a scalpel and operating theater, cardiologist Benjamin Meder carefully places the electrodes of a pacemaker in a beating, digital heart. Using this “digital twin” that mimics the electrical and physical properties of the cells in patient 7497’s heart, Meder runs simulations to see if the pacemaker can keep the congestive heart failure sufferer alive, before he has made a single incision.

A three-dimensional printout of a human heart is seen at the Heidelberg University Hospital (Universitaetsklinikum Heidelberg)

The digital heart twin, developed by the German company Siemens Healthineers, is one example of how medical device makers are using artificial intelligence (AI) to help doctors make more precise diagnoses as medicine enters an increasingly personalized age.

The challenge for Siemens Healthineers and rivals such as Philips and GE Healthcare is to keep an edge over tech giants from Alphabet’s Google to Alibaba that hope to use big data to grab a slice of healthcare spending.

With healthcare budgets under increasing pressure, AI tools such as the digital heart twin could save tens of thousands of dollars by predicting outcomes and avoiding unnecessary surgery.

Source: https://www.healthcare.siemens.com/

Teaching a car how to drive itself in 20 minutes

Researchers from Wayve, a company founded by a team from the Cambridge University engineering department, have developed a neural network sophisticated enough to learn how to drive a car in 15 to 20 minutes using nothing but a computer and a single camera. The company showed off its robust deep learning methods last week in a company blog post showcasing the no-frills approach to driverless car development. Where companies like Waymo and Uber are relying on a variety of sensors and custom-built hardware, Wayve is creating the world’s first autonomous vehicles based entirely on reinforcement learning.


The AI powering Wayve’s self-driving system is remarkable for its simplicity. It’s a four-layer convolutional neural network that performs all of its processing on a GPU inside the car. It doesn’t require any cloud connectivity or pre-loaded maps; Wayve’s vehicles are early-stage Level 5 autonomous. There’s a lot of work to be done before Wayve’s AI can drive any car under any circumstances, but the idea that driverless cars will require tens of thousands of dollars’ worth of extraneous hardware has taken a serious blow in the wake of the company’s deep learning techniques. According to Wayve, these algorithms are only going to get smarter.
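To give a sense of how small a four-layer convolutional network really is, here is a toy version in plain NumPy. The image size, kernel sizes, and output mapping are all invented (Wayve has not published its architecture); the sketch only shows the shape of the idea: a camera frame goes in, passes through four convolution-plus-activation layers, and a single bounded steering command comes out.

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Four small 3x3 kernels, one per layer (weights untrained, illustrative).
kernels = [rng.normal(scale=0.1, size=(3, 3)) for _ in range(4)]

def steering_angle(frame):
    """Camera frame in, steering command out: four conv+ReLU layers,
    then a global average squashed into [-1, 1] (full left to full right)."""
    x = frame
    for k in kernels:
        x = relu(conv2d(x, k))
    return float(np.tanh(x.mean()))

frame = rng.uniform(0, 1, size=(32, 32))  # stand-in for one camera image
angle = steering_angle(frame)
print(-1.0 <= angle <= 1.0)  # True: output is a bounded steering command
```

In a real system the kernels would be learned by reinforcement, rewarding the network for distance driven without a safety-driver intervention, but the entire forward pass is small enough to run comfortably on an in-car GPU.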

Source: https://wayve.ai/

Carlos Ghosn: “Driverless Cars Similar To Antibiotics”

Carlos Ghosn, CEO of the Renault-Nissan-Mitsubishi Alliance (ranked number one in the world among car makers), has detailed the impact of the driverless car on daily life in an interview on the French TV channel BFM. Between 1.3 million and 1.4 million people die on the world’s roads every year; according to Ghosn, the driverless car will eliminate 90% of fatal accidents.

“We are five years from safe, driverless cars for all,” adds Ghosn. “The driverless car’s impact will be similar to the discovery of antibiotics.”

Famously given the moniker “Le Cost Killer” for his work transforming two ailing brands into one profit-making success story, Carlos Ghosn has achieved celebrity status in the car industry — and was once even portrayed as a superhero in a Japanese comic book.

Today the auto industry is experiencing a paradigm shift with the growth of the global electric vehicle (EV) market, as well as the vast potential offered by disruptive new areas like the autonomous vehicle, which relies heavily on Artificial Intelligence. Despite the challenge of staying competitive and profitable in this changing environment, the Brazilian-born 64-year-old believes the brands under his watch are already in pole position, and plan to stay there. But he has to stay vigilant and is aware of the dangers, acknowledging that businesses are pushing hard for driverless vehicles. “Amazon, Alibaba, Uber: why are they interested in this? It’s very simple. The driver is the biggest cost they have. You make a quick calculation for a car running 24-7 for a month: the electricity bill is about $250 a month; the lease of the car is $300; plus three drivers, since you’re running 24 hours a day, are going to cost you $15,000 per month. So getting rid of the driver is a 90% reduction in costs.

That’s why Uber, DiDi, all of them want to be the first to have this … because if my competitor gets this before me, I’m dead.”
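The arithmetic behind that quote is easy to check. The script below uses the figures exactly as quoted; note that those numbers actually put the drivers' share of the monthly cost closer to 96 percent, so Ghosn's "90% reduction" is, if anything, a conservative rounding.

```python
# Monthly cost of running one ride-hailing car 24/7, figures as quoted.
electricity = 250     # USD per month
lease = 300           # USD per month
drivers = 15_000      # three drivers covering 24 hours a day

total = electricity + lease + drivers
driver_share = drivers / total

print(f"total: ${total}/month")             # total: $15550/month
print(f"driver share: {driver_share:.0%}")  # driver share: 96%
```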


MIT Artificial Intelligence System Detects 85 Percent Of Cyber Attacks

While the number of cyber attacks continues to increase, it is becoming ever more difficult to detect and mitigate them in order to avoid serious consequences. A group of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is working on an ambitious project: the development of a technology that can detect cyber attacks early. The experts, in collaboration with peers from the startup PatternEx, have designed an Artificial Intelligence system that is able to detect 85 percent of attacks by using data from more than 3.6 billion lines of log files each day.

The researchers have developed a system that combines an Artificial Intelligence engine with human input, which they call Analyst Intuition (AI) — hence the name AI2. The AI2 system first performs an automatic scan of the content with machine-learning techniques and then reports the results to human analysts, who have to single out the events linked to cyber attacks. According to the MIT experts, the approach implemented by AI2 is three times better than modern automated cyber attack detection systems.

“The team showed that AI2 can detect 85 percent of attacks, which is roughly three times better than previous benchmarks, while also reducing the number of false positives by a factor of 5. The system was tested on 3.6 billion pieces of data known as “log lines,” which were generated by millions of users over a period of three months,” states a description of AI2 published by MIT.

The more analyses the system carries out, the more accurate its subsequent estimates become, thanks to this feedback mechanism.

“You can think about the system as a virtual analyst,” says CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.”
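The loop described above — machine flags candidates, analysts confirm or reject them, and the model refines itself from that feedback — can be sketched roughly as follows. This is a toy version built around a single score threshold; the function names, scores, and update rule are invented for illustration and bear no relation to the real AI2 internals.

```python
def flag_outliers(log_scores, threshold):
    """Unsupervised first pass: flag the log lines whose anomaly
    score exceeds the current threshold."""
    return [i for i, s in enumerate(log_scores) if s > threshold]

def analyst_labels(flagged, true_attacks):
    """Stand-in for the human analyst: confirm or reject each flag."""
    return {i: (i in true_attacks) for i in flagged}

def refine_threshold(threshold, labels):
    """Feedback step: if most flags were false positives, raise the
    bar; if analysts confirmed most of them, lower it to catch more."""
    if not labels:
        return threshold
    precision = sum(labels.values()) / len(labels)
    return threshold * (1.1 if precision < 0.5 else 0.9)

# Toy data: anomaly scores per log line; lines 7 and 8 are real attacks.
scores = [0.1, 0.2, 0.15, 0.3, 0.25, 0.1, 0.2, 0.9, 0.95, 0.3]
attacks = {7, 8}
threshold = 0.28

for day in range(3):  # each pass mimics one day's analyst feedback
    flagged = flag_outliers(scores, threshold)
    labels = analyst_labels(flagged, attacks)
    threshold = refine_threshold(threshold, labels)

print(flag_outliers(scores, threshold))  # the real attacks stay flagged
```

The real system replaces the threshold with continuously retrained supervised models, but the division of labor is the same: the machine does the bulk triage over billions of log lines, and a handful of analyst judgments per day steer it.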

Source: http://ai2.appinventor.mit.edu/