Tag Archives: computer

Teaching a car how to drive itself in 20 minutes

Researchers from Wayve, a company founded by a team from the Cambridge University engineering department, have developed a neural network sophisticated enough to learn how to drive a car in 15 to 20 minutes using nothing but a computer and a single camera. The company showed off its robust deep learning methods last week in a company blog post showcasing the no-frills approach to driverless car development. Where companies like Waymo and Uber are relying on a variety of sensors and custom-built hardware, Wayve is creating the world’s first autonomous vehicles based entirely on reinforcement learning.


The AI powering Wayve’s self-driving system is remarkable for its simplicity. It’s a four-layer convolutional neural network that performs all of its processing on a GPU inside the car; it doesn’t require any cloud connectivity or use pre-loaded maps. Wayve’s vehicles are early-stage level-five autonomous: there’s a lot of work to be done before Wayve’s AI can drive any car under any circumstances. But the idea that driverless cars will require tens of thousands of dollars’ worth of extra hardware is taking a serious blow in the wake of the company’s deep learning techniques. According to Wayve, these algorithms are only going to get smarter.
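Wayve has not published its exact architecture, but the idea of a small four-layer convolutional network mapping a single camera frame to a steering command can be sketched in plain NumPy. Everything below (layer sizes, kernel shapes, the tanh steering head) is an illustrative assumption, not Wayve’s actual model:

```python
import numpy as np

def conv2d(x, w, stride=2):
    # x: (H, W, C_in), w: (k, k, C_in, C_out); "valid" convolution with stride,
    # followed by a ReLU nonlinearity.
    k = w.shape[0]
    H = (x.shape[0] - k) // stride + 1
    W = (x.shape[1] - k) // stride + 1
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3))            # one (hypothetical) camera frame
weights = [rng.standard_normal((5, 5, c_in, c_out)) * 0.1
           for c_in, c_out in [(3, 8), (8, 16), (16, 32), (32, 32)]]

h = frame
for w in weights:                          # four convolutional layers
    h = conv2d(h, w)                       # 64 -> 30 -> 13 -> 5 -> 1

features = h.mean(axis=(0, 1))             # global average pooling -> (32,)
w_out = rng.standard_normal(32) * 0.1
steering = np.tanh(features @ w_out)       # steering command squashed into [-1, 1]
print("steering:", float(steering))
```

In a reinforcement-learning setup like the one Wayve describes, the weights of such a network would be updated from driving experience (e.g. rewarding distance travelled before a safety-driver intervention) rather than from labelled data.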

Sources: https://wayve.ai/ and https://thenextweb.com/

Human Internal Verbalizations Understood Instantly By Computers

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words. The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.
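MIT’s actual system uses a neural network over the electrode signals, but the core idea — correlating a signal pattern with a word — can be illustrated with a toy nearest-centroid classifier. All of the data, features, and vocabulary below are invented for the sketch:

```python
import numpy as np

# Toy training data: 16-dimensional feature vectors standing in for processed
# electrode signals, labelled with the word that was internally verbalized.
rng = np.random.default_rng(1)
words = ["up", "down", "left", "right"]
signatures = rng.standard_normal((4, 16))         # one "signal signature" per word
train_X = np.vstack([s + 0.1 * rng.standard_normal((20, 16)) for s in signatures])
train_y = np.repeat(np.arange(4), 20)

# Nearest-centroid classifier: average the training signals for each word,
# then match a new signal to the closest word centroid.
means = np.array([train_X[train_y == k].mean(axis=0) for k in range(4)])

def transcribe(signal):
    dists = np.linalg.norm(means - signal, axis=1)
    return words[int(np.argmin(dists))]

probe = signatures[2] + 0.1 * rng.standard_normal(16)   # a noisy "left" signal
print(transcribe(probe))
```

Real neuromuscular signals are far noisier and higher-dimensional than this, which is why the researchers train a learned model rather than a simple distance rule — but the input/output contract (signal features in, word out) is the same.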

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” “We basically can’t live without our cellphones, our digital devices,” adds Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

Source: http://news.mit.edu/