Tag Archives: AI

Internet Of Thoughts

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed. Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI) that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas, Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain. Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims. “A human B/CI system mediated by neuralnanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought. “While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks. With the advance of neuralnanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast. Instead, transferring neural data to and from supercomputers in the cloud is likely to be the ultimate bottleneck in B/CI development. “This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud. “These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.” Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.

Source: https://www.frontiersin.org/

Artificial Intelligence Revolutionizes Farming

Researchers at MIT have used AI to improve the flavor of basil. It’s part of a trend that is seeing artificial intelligence revolutionize farming.
What makes basil so good? In some cases, it’s AI. Machine learning has been used to create basil plants that are extra-delicious. While we sadly cannot report firsthand on the herb’s taste, the effort reflects a broader trend that involves using data science and machine learning to improve agriculture.

The researchers behind the AI-optimized basil used machine learning to determine the growing conditions that would maximize the concentration of the volatile compounds responsible for basil’s flavor. The basil was grown in hydroponic units within modified shipping containers in Middleton, Massachusetts. Temperature, light, humidity, and other environmental factors inside the containers could be controlled automatically. The researchers tested the taste of the plants by looking for certain compounds using gas chromatography and mass spectrometry. And they fed the resulting data into machine-learning algorithms developed at MIT and a company called Cognizant.
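The loop the researchers describe – vary growing conditions, measure flavor compounds, fit a model, then search for the conditions the model predicts are best – can be sketched in a few lines. Everything below is a stand-in: the real study measured volatile compounds by gas chromatography and mass spectrometry and used algorithms from MIT and Cognizant, while this toy invents a flavor response and fits a simple quadratic surrogate.

```python
import numpy as np

# Stand-in "ground truth": flavor-compound concentration as a function of
# daily light hours and temperature. In the real study this response was
# measured chemically, not known in advance; this function is invented.
def measure_flavor(light_hours, temp_c):
    return -((light_hours - 24) ** 2) / 50 - ((temp_c - 22) ** 2) / 20 + 10

# 1. "Grow" batches under varied conditions and record noisy measurements.
rng = np.random.default_rng(0)
light = rng.uniform(8, 24, 40)
temp = rng.uniform(15, 30, 40)
flavor = measure_flavor(light, temp) + rng.normal(0, 0.1, 40)

# 2. Fit a simple quadratic surrogate model to the measurements.
X = np.column_stack([light, temp, light**2, temp**2, np.ones_like(light)])
coef, *_ = np.linalg.lstsq(X, flavor, rcond=None)

# 3. Search the surrogate over a grid of candidate growing conditions.
cand_l, cand_t = np.meshgrid(np.linspace(8, 24, 50), np.linspace(15, 30, 50))
cl, ct = cand_l.ravel(), cand_t.ravel()
pred = np.column_stack([cl, ct, cl**2, ct**2, np.ones_like(cl)]) @ coef
best = pred.argmax()  # index of the best predicted growing conditions
```

In this toy the surrogate correctly steers the search toward the longest light exposure – the same direction as the study’s counterintuitive 24-hour finding, though here only because the stand-in response was built that way.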

The research showed, counterintuitively, that exposing plants to light 24 hours a day generated the best taste. The research group plans to study how the technology might improve the disease-fighting capabilities of plants as well as how different flora may respond to the effects of climate change.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” said Caleb Harper, head of the MIT Media Lab’s OpenAg group, in a press release. His lab worked with colleagues from the University of Texas at Austin on the paper.

The idea of using machine learning to optimize plant yield and properties is rapidly taking off in agriculture. Last year, Wageningen University in the Netherlands organized an “Autonomous Greenhouse” contest, in which different teams competed to develop algorithms that increased the yield of cucumber plants while minimizing the resources required. They worked with greenhouses where a variety of factors are controlled by computer systems.

The study has appeared in the journal PLOS One.

Source: https://www.technologyreview.com/

This Person Does Not Exist

With the help of artificial intelligence, you can manipulate video of public figures to say whatever you like — or now, create images of people’s faces that don’t even exist. You can see this in action on a website called thispersondoesnotexist.com. It uses an algorithm to spit out a single image of a person’s face, and for the most part, they look frighteningly real. Hit refresh in your browser, and the algorithm will generate a new face. Again, these people do not exist.

The website is the creation of software engineer Phillip Wang, and uses a new AI algorithm called StyleGAN, which was developed by researchers at Nvidia. A GAN, or Generative Adversarial Network, is a concept within machine learning that aims to generate images indistinguishable from real ones. You can train GANs on human faces, as well as bedrooms, cars, and cats, and, of course, generate new images of them.
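A GAN is two models locked in a contest: a generator that turns random noise into samples, and a discriminator that tries to tell those samples apart from real data. The sketch below is a deliberately tiny stand-in for the idea – not StyleGAN, which uses deep convolutional networks and style-based layers – where a linear generator learns to match the mean of a one-dimensional “real” distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: samples from N(4, 1). (Faces, in StyleGAN's case.)
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator G(z) = a*z + b
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(n)
    x_fake = a * rng.normal(size=n) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    g_logit = np.concatenate([d_real - 1.0, d_fake])  # dBCE/dlogit = D - label
    xs = np.concatenate([x_real, x_fake])
    w -= lr * np.mean(g_logit * xs)
    c -= lr * np.mean(g_logit)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(size=n)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    g_x = (d_fake - 1.0) * w   # d(-log D(x_fake))/dx_fake
    a -= lr * np.mean(g_x * z)
    b -= lr * np.mean(g_x)

# After training, the generator's offset b should have drifted toward the
# real mean of 4; this linear toy captures only the mean, not the shape.
```

Scaling this same adversarial objective to megapixel faces is what StyleGAN does, with deep networks and a learned “style” space in place of the two linear models here.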

Wang explained that he created the site to raise awareness of the algorithm, and chose faces “because our brains are sensitive to that kind of image.” He added that it costs $150 a month to rent the server, as he needs a good amount of graphical power to run the website.


“It also started off as a personal agenda mainly because none of my friends seem to believe this AI phenomenon, and I wanted to convince them,” Wang said. “This was the most shocking presentation I could send them. I then posted it on Facebook and it went viral from there.”

“I think eventually, given enough data, a big enough neural [network] can be teased into dreaming up many different kinds of scenarios,” Wang added.

Sources: https://thispersondoesnotexist.com/ and https://mashable.com/

Facial Recognition And AI Identify 90% Of Rare Genetic Disorders

A facial recognition scan could become part of a standard medical checkup in the not-too-distant future. Researchers have shown how algorithms can help identify facial characteristics linked to genetic disorders, potentially speeding up clinical diagnoses.

In a study published this month in the journal Nature Medicine, US company FDNA published new tests of their software, DeepGestalt. Just like regular facial recognition software, the company trained their algorithms by analyzing a dataset of faces. FDNA collected more than 17,000 images covering 200 different syndromes using a smartphone app it developed named Face2Gene.

Rare genetic disorders are collectively common, affecting 8 percent of the population.

In the first two tests, DeepGestalt was used to look for specific disorders: Cornelia de Lange syndrome and Angelman syndrome. Both of these are complex conditions that affect intellectual development and mobility. They also have distinct facial traits, like arched eyebrows that meet in the middle for Cornelia de Lange syndrome, and unusually fair skin and hair for Angelman syndrome.

When tasked with distinguishing between pictures of patients with one of these syndromes and patients with a different, random syndrome, DeepGestalt was more than 90 percent accurate, beating expert clinicians, who were around 70 percent accurate on similar tests. When tested on 502 images showing individuals with 92 different syndromes, DeepGestalt included the target condition in its list of 10 possible diagnoses more than 90 percent of the time.
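That “list of 10” figure is a standard top-k accuracy measurement, which applies to any classifier that produces a score per class. A minimal sketch with made-up scores (DeepGestalt’s actual outputs are not public):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=10):
    """Fraction of cases where the true label is among the k highest-scored classes."""
    top_k = np.argsort(scores, axis=1)[:, ::-1][:, :k]
    return float(np.mean([label in row for label, row in zip(labels, top_k)]))

# Toy data: 5 "patients", 92 candidate syndromes, random stand-in scores.
rng = np.random.default_rng(0)
scores = rng.random((5, 92))
# Pretend the true syndrome is each row's 4th-highest-scored class,
# so by construction it always falls inside the top 10.
labels = np.array([row.argsort()[::-1][3] for row in scores])

acc = top_k_accuracy(scores, labels, k=10)  # 1.0 by construction here
```

Reporting top-10 rather than top-1 accuracy fits the clinical use case: the system narrows 200-plus syndromes down to a shortlist for a geneticist to confirm.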

Source: https://www.theverge.com

Artificial Synapses Made from Nanowires

Scientists from Jülich together with colleagues from Aachen and Turin have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.

Electron-microscope image of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. The blue bubbles dispersed over the nanowire are made up of silver ions; they form a bridge between the electrodes which lowers the resistance.

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers, on the other hand, rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are, however, only suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.

Source: http://www.fz-juelich.de/

AI and Big Data To Fight Eye Diseases

“In future, it will be possible to diagnose diabetes from the eye using automatic digital retinal screening, without the assistance of an ophthalmologist”: these were the words used by Ursula Schmidt-Erfurth, Head of MedUni Vienna’s Department of Ophthalmology and Optometrics, as she opened the press conference for the ART-2018 Specialist Meeting on new developments in retinal therapy. Automatic diabetes screening has recently been implemented at MedUni Vienna.
Patients flock to the Department to undergo this retinal examination to detect any diabetic changes. It takes just a few minutes and is completely non-invasive.

Essentially this technique can detect all stages of diabetic retinal disease – high-resolution digital retinal images with two million pixels are taken and analyzed within seconds – but Big Data offers even more potential: nowadays it is already possible to diagnose an additional 50 other diseases in this way. Diabetes is just the start. And MedUni Vienna is among the global leaders in this digital revolution.

The Division of Cardiology led by Christian Hengstenberg within the Department of Medicine II is working on how digital retinal analysis can also be used in future for the early diagnosis of cardiovascular diseases.

“This AI medicine is ‘super human’,” emphasizes Schmidt-Erfurth. “The algorithms are quicker and more accurate. They can analyze things that an expert cannot detect with the naked eye.” And yet the commitment to Big Data and Artificial Intelligence is not a plea for medicine without doctors, which some experts predict for the not-too-distant future. “What we want are ‘super doctors’, who are able to use the high-tech findings to make the correct, individualized therapeutic decision for their patients, in the spirit of precision medicine, rather than leaving patients on their own.”

However, it is not only in the diagnosis of diseases that Artificial Intelligence and Big Data, plus virtual reality, provide better results. “We are already performing digitized operations with support from Artificial Intelligence. This involves projecting a virtual and precise image of the area of the eye being operated on onto a huge screen – and the surgeon then performs the operation with a perfect view ‘on screen’, as it were, while actually operating on the patient with a scalpel.”

Source: https://www.news-medical.net/

Want to Sound Like Barack Obama?

For your hair, there are wigs and hairstylists; for your skin, there are permanent and removable tattoos; for your eyes, there are contact lenses that disguise the shape of your pupils. In short, there’s a plethora of tools people can use if they want to give themselves a makeover—except for one of their signature features: their voice.

Sure, a Darth Vader voice-changing mask would do something about it, but people who want to sound like a celebrity or a person of the opposite sex need look no further than Boston-based startup Modulate.


Founded in August 2017 by two MIT grads, this self-funded startup is using machine learning to change your voice as you speak. This could be a celebrity’s voice (like Barack Obama’s), the voice of a game character or even a totally custom voice. With potential applications in the gaming and movie industries, Modulate has launched with a free online demo that allows users to play with the service.

The cool thing about Modulate is that the software doesn’t simply disguise your voice; it does something far more radical: it maps your speech onto somebody else’s vocal cords, changing the very identity of your voice while keeping cadence and word choice intact. As a result, you sound like yourself, but with someone else’s voice.

Source: https://www.americaninno.com/

AI Lie Detectors Could Reach 85% Accuracy

It’s already nerve-wracking answering questions at the border, and some ports in the European Union are taking it to another, kinda worrying level. They’re installing an artificial intelligence-powered system called iBorderCtrl, which aims to speed up the processing of travellers, but also to determine if they’re lying. A six-month trial will take place at four border crossing points in Hungary, Greece and Latvia.

During pre-screening, users will upload their passport, visa, and proof of funds, then answer questions asked by a computer-generated border guard to a webcam. The system will analyse the user’s microexpressions to determine if they’re lying, and they’ll be flagged as either low or high risk. People will be asked questions like “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” For those who pass the test, they’ll receive a QR code that will let them pass through. If there’s additional concern, their biometric data will be taken, and be handed off to a human agent who will assess the case.
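The triage described above – score the interview, then route the traveller – amounts to a simple thresholded decision. A toy sketch of that flow (the score, the 0.5 threshold, and the labels are all invented for illustration; iBorderCtrl’s actual model is not public):

```python
# The deception score would come from iBorderCtrl's microexpression analyser;
# here it is just a number in [0, 1], and the threshold is made up.
def triage(deception_score, threshold=0.5):
    if deception_score < threshold:
        return "low risk: QR code issued for crossing"
    return "high risk: collect biometrics and refer to a human agent"

print(triage(0.2))  # low-risk path
print(triage(0.8))  # high-risk path
```

The interesting (and contested) part is, of course, not this routing logic but whether a microexpression score can carry 85 percent accuracy in the first place.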

“We’re employing existing and proven technologies — as well as novel ones — to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis told the European Commission. “iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”

Of course, there’s the question of how accurate a system like this could be. iBorderCtrl is still in its early stages, and a team member said that early testing achieved a 76 percent success rate, but believes this could be raised to 85 percent.

Source: https://mashable.com/

New Material For New Processor

Computers used to take up entire rooms. Today, a two-pound laptop can slide effortlessly into a backpack. But that wouldn’t have been possible without the creation of new, smaller processors — which are only possible with the innovation of new materials. But how do materials scientists actually invent new materials? Through experimentation, explains Sanket Deshmukh, an assistant professor in the chemical engineering department of Virginia Tech, whose team’s recently published computational research might vastly improve the efficiency and cost savings of the material design process.

Deshmukh’s lab, the Computational Design of Hybrid Materials lab, is devoted to understanding and simulating the ways molecules move and interact — crucial to creating a new material. In recent years, materials scientists have employed machine learning, a powerful subset of artificial intelligence, to accelerate the discovery of new materials through computer simulations. Deshmukh and his team have recently published research in the Journal of Physical Chemistry Letters demonstrating a novel machine learning framework that trains “on the fly,” meaning it instantaneously processes data and learns from it to accelerate the development of computational models. Traditionally, the development of computational models is “carried out manually via trial-and-error approach, which is very expensive and inefficient, and is a labor-intensive task,” Deshmukh explained.

“This novel framework not only uses the machine learning in a unique fashion for the first time,” Deshmukh said, “but it also dramatically accelerates the development of accurate computational models of materials.” “We train the machine learning model in a ‘reverse’ fashion by using the properties of a model obtained from molecular dynamics simulations as an input for the machine learning model, and using the input parameters used in molecular dynamics simulations as an output for the machine learning model,” said Karteek Bejagam, a post-doctoral researcher in Deshmukh’s lab and one of the lead authors of the study.
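The “reverse” setup Bejagam describes can be sketched in a few lines: run the expensive simulation forward many times, then fit a model from the resulting properties back to the parameters that produced them. Everything below is a stand-in — an invertible linear toy “simulation” and a least-squares fit in place of the paper’s molecular dynamics runs and machine learning model:

```python
import numpy as np

# Stand-in for an expensive molecular dynamics simulation: maps model
# parameters (a, b) to observed material properties. This linear toy is
# invented purely to illustrate the data flow.
def simulate(params):
    a, b = params
    return np.array([2 * a + b, a - 3 * b])

# 1. Run the forward simulation for many parameter sets to build training data.
rng = np.random.default_rng(1)
params = rng.uniform(-1.0, 1.0, size=(200, 2))
props = np.array([simulate(p) for p in params])

# 2. Train "in reverse": the simulated properties are the model's INPUT and
#    the simulation parameters its OUTPUT (a least-squares fit stands in
#    for the neural network used in the study).
X = np.hstack([props, np.ones((len(props), 1))])  # bias column
W, *_ = np.linalg.lstsq(X, params, rcond=None)

# 3. Given target properties, the inverse model proposes parameters directly,
#    skipping the manual trial-and-error loop.
target = simulate(np.array([0.5, -0.2]))  # properties we want to reproduce
guess = np.append(target, 1.0) @ W        # parameters the model recommends
```

Because the toy mapping is linear and invertible, the fit recovers the original parameters almost exactly; with a real simulation the inverse model is only approximate, which is why the paper still iterates until the desired properties are reached.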

This new framework allows researchers to optimize computational models at unusually high speed, until the models reach the desired properties of a new material.

Source: https://vtnews.vt.edu/

Amazon To Datamine The Stars

Amazon.com is in talks with Chile to house and mine massive amounts of data generated by the country’s giant telescopes, which could prove fertile ground for the company to develop new artificial intelligence tools. The talks are aimed at fuelling growth in Amazon.com Inc’s cloud computing business in Latin America and boosting its data processing capabilities.

President Sebastian Pinera’s center-right government, which is seeking to wean Chile’s $325 billion economy from reliance on copper mining, announced last week it plans to pool data from all its telescopes onto a virtual observatory stored in the cloud, without giving a timeframe. The government talked of the potential for astrodata innovation, but did not give details.

Amazon executives have been holding discussions with the Chilean government for two years about a possible data center to provide infrastructure for local firms and the government to store information on the cloud. The talks have included discussion about the possibility of Amazon Web Services (AWS) hosting astrodata.

Jeffrey Kratz, AWS’s General Manager for Public Sector for Latin America, has confirmed the company’s interest in astrodata but said Amazon had no announcements to make at present. “Chile is a very important country for AWS,” he said in an email to Reuters. “We kept being amazed about the incredible work on astronomy and the telescopes, as real proof points on innovation and technology working together.” “The Chilean telescopes can benefit from the cloud by eliminating the heavy lifting of managing IT,” Kratz added.

Source: https://www.reuters.com/