Tag Archives: algorithms

Supercomputer Finds Oil 10 Times Faster

Energy major Total said its new supercomputer – which has propelled it to a world ranking as the most powerful computer in the sector – will enable its geologists to find oil faster, more cheaply and with a better success rate. The Pangea III computer, built by IBM, will help process complex seismic data in the search for hydrocarbons 10 times faster than before, Total said on Tuesday. The computing power of the Pangea III has been increased to 31.7 so-called ‘petaflops’ from 6.7 petaflops in 2016, and from 2.3 petaflops in 2013, Total said, adding that it was the equivalent of around 170,000 laptops combined. The computer ranks as number 1 among supercomputers in the oil and gas sector, and number 11 globally, according to the TOP500 table (www.top500.org), which ranks supercomputers twice a year. Total’s European peer Eni’s HPC4 supercomputer is ranked number 17 in the global top 500 list.
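As a quick sanity check on those numbers: a petaflop is 10^15 floating-point operations per second, so the quoted figures imply a per-laptop throughput of roughly 186 gigaflops, which is plausible for a modern laptop. A minimal sketch of the arithmetic, using only the article’s own numbers:

```python
# Rough sanity check of Total's "170,000 laptops" comparison.
PANGEA_III_PFLOPS = 31.7      # quoted peak performance, petaflops
LAPTOP_COUNT = 170_000        # Total's stated equivalence

flops_total = PANGEA_III_PFLOPS * 1e15               # petaflops -> flops/s
per_laptop_gflops = flops_total / LAPTOP_COUNT / 1e9  # flops/s -> gigaflops

print(f"Implied per-laptop throughput: {per_laptop_gflops:.0f} GFLOPS")
# -> roughly 186 GFLOPS, in line with a typical modern laptop
```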

Oil and gas companies, along with other industrial groups, are increasingly relying on powerful computers to process complex data faster. This enables them to cut costs while boosting productivity and the success rate of projects. Total did not say how much it had invested in the new supercomputer. The company’s senior vice president for exploration, Kevin McLachlan, told Reuters that 80% of the Pangea III’s time would be dedicated to seismic imaging.

“We can do things much faster,” he said. “We are developing advanced imaging algorithms to give us much better images of the sub-surface in these complex domains, and Pangea III will let us do it 10 times faster than we could before.” Total said the new algorithms can process huge amounts of data more accurately and at a higher resolution. They would also help locate hydrocarbons below ground more reliably, which is useful in complex environments where the company is exploring for oil trapped under salt, such as Brazil, the Gulf of Mexico, Angola and the Eastern Mediterranean. McLachlan expected the increased computing power to improve Total’s success rate in exploration, because of the better imaging, as well as in oil well appraisals, development and drilling.

“What used to take a week now takes us a day to process,” he said, adding that tens of millions of dollars would be saved on oil wells as a direct result of obtaining better images.

Source: https://www.reuters.com/

Artificial Intelligence Revolutionizes Farming

Researchers at MIT have used AI to improve the flavor of basil. It’s part of a trend that is seeing artificial intelligence revolutionize farming.
What makes basil so good? In some cases, it’s AI. Machine learning has been used to create basil plants that are extra-delicious. While we sadly cannot report firsthand on the herb’s taste, the effort reflects a broader trend of using data science and machine learning to improve agriculture.

The researchers behind the AI-optimized basil used machine learning to determine the growing conditions that would maximize the concentration of the volatile compounds responsible for basil’s flavor. The basil was grown in hydroponic units within modified shipping containers in Middleton, Massachusetts. Temperature, light, humidity, and other environmental factors inside the containers could be controlled automatically. The researchers tested the taste of the plants by looking for certain compounds using gas chromatography and mass spectrometry. And they fed the resulting data into machine-learning algorithms developed at MIT and a company called Cognizant.
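The pipeline described above boils down to “learn a model mapping growing conditions to flavor, then search that model for the best recipe.” The sketch below illustrates that loop with a generic surrogate model; the feature set, model choice, and data are illustrative assumptions, not the MIT/Cognizant code:

```python
# Hypothetical "learn, then search" loop: fit a surrogate model from
# growing conditions to a flavor-compound score, then grid-search the
# condition space for the predicted optimum. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in experimental log: [light hours/day, temperature C, humidity %]
X = rng.uniform([12, 18, 40], [24, 30, 80], size=(200, 3))
# Stand-in flavor score that peaks near continuous light (per the finding)
y = -0.5 * (X[:, 0] - 24) ** 2 - 0.2 * (X[:, 1] - 24) ** 2 + rng.normal(0, 1, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Grid-search the surrogate for the most promising growing recipe
grid = np.array(np.meshgrid(
    np.linspace(12, 24, 13),   # light hours/day
    np.linspace(18, 30, 13),   # temperature
    np.linspace(40, 80, 9),    # humidity
)).reshape(3, -1).T
best = grid[model.predict(grid).argmax()]
print(f"Predicted best recipe: light={best[0]:.0f} h, "
      f"T={best[1]:.0f} C, RH={best[2]:.0f} %")
```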

The research showed, counterintuitively, that exposing plants to light 24 hours a day generated the best taste. The research group plans to study how the technology might improve the disease-fighting capabilities of plants as well as how different flora may respond to the effects of climate change.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” said Caleb Harper, head of the MIT Media Lab’s OpenAg group, in a press release. His lab worked with colleagues from the University of Texas at Austin on the paper.

The idea of using machine learning to optimize plant yield and properties is rapidly taking off in agriculture. Last year, Wageningen University in the Netherlands organized an “Autonomous Greenhouse” contest, in which different teams competed to develop algorithms that increased the yield of cucumber plants while minimizing the resources required. They worked with greenhouses in which a variety of factors are controlled by computer systems.
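For a sense of what such contest algorithms optimize, here is a hypothetical scoring function trading crop revenue against resource costs; all prices and names are made-up illustrations, not the contest’s actual rules:

```python
# Illustrative objective of the kind such a contest scores: crop value
# minus the cost of resources consumed. All numbers are assumptions.
def greenhouse_score(yield_kg: float, kwh_light: float,
                     m3_water: float, kg_co2: float) -> float:
    """Net score: produce revenue minus resource costs (EUR)."""
    PRICE_PER_KG = 1.2                                  # assumed cucumber price
    COSTS = {"kwh": 0.08, "water": 1.0, "co2": 0.15}    # assumed unit costs
    revenue = PRICE_PER_KG * yield_kg
    cost = (COSTS["kwh"] * kwh_light + COSTS["water"] * m3_water
            + COSTS["co2"] * kg_co2)
    return revenue - cost

# A control policy that burns more light may raise yield yet lower the score:
print(greenhouse_score(yield_kg=50_000, kwh_light=200_000,
                       m3_water=1_000, kg_co2=5_000))
```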

The study has appeared in the journal PLOS One.

Source: https://www.technologyreview.com/

Ai-Da The Artist Robot

A British arts engineering company says it has created the world’s first AI robot capable of drawing people who pose for it. The humanoid, called Ai-Da, can sketch subjects using a microchip in her eye and a pencil in her robotic hand, coordinated by AI processes and algorithms. Ai-Da’s ability as a life-like robot to draw and paint ultra-realistic portraits from sight has never been achieved before, according to the designers in Cornwall. She is the brainchild of art impresario and gallerist Aidan Meller.

Named after Ada Lovelace, the first female computer programmer in the world, Ai-Da the robot has been designed and built by Cornish robotics company Engineered Arts, which makes robots for communication and entertainment.

In April 2018, Engineered Arts created an ultra-realistic robot to promote the Westworld TV show.

“Pioneering a new AI art movement, we are excited to present Ai-Da, the first professional humanoid artist, who creates her own art, as well as being a performance artist,” explains Aidan Meller. “As an AI robot, her artwork uses AI processes and algorithms. The work engages us to think about AI and technological uses and abuses in the world today.”

Professors and postdoctoral researchers at Oxford University and Goldsmiths are providing Ai-Da with the programming and creative design for her artwork, while students at Leeds University are custom-designing and programming a bionic arm to create it.

Ai-Da has a “RoboThespian” body, featuring an expressive range of movements, and she has the ability to talk and respond to questions. The robot also has a “Mesmer” head, featuring realistic silicone skin, 3D-printed teeth and gums, integrated eye cameras, as well as hair.

Source: http://fortune.com/
AND
https://www.mirror.co.uk/

Facial Recognition And AI Identify 90% Of Rare Genetic Disorders

A facial recognition scan could become part of a standard medical checkup in the not-too-distant future. Researchers have shown how algorithms can help identify facial characteristics linked to genetic disorders, potentially speeding up clinical diagnoses.

In a study published this month in the journal Nature Medicine, US company FDNA presented new tests of its software, DeepGestalt. Just like regular facial recognition software, the company trained its algorithms by analyzing a dataset of faces: FDNA collected more than 17,000 images covering 200 different syndromes using a smartphone app it developed named Face2Gene.
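DeepGestalt’s internals are proprietary, but the training setup the article describes, a classifier mapping face images to one of a couple of hundred syndrome labels, can be sketched generically. Everything below (architecture, sizes, data) is a hypothetical stand-in, not FDNA’s model:

```python
# Generic face-image classifier sketch (hypothetical; not FDNA's model).
import torch
import torch.nn as nn

NUM_SYNDROMES = 200  # the article cites ~200 syndromes in the training set

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 25 * 25, NUM_SYNDROMES),  # 100x100 input -> class logits
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: in reality, cropped and aligned patient face photos.
images = torch.randn(8, 3, 100, 100)
labels = torch.randint(0, NUM_SYNDROMES, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss={loss.item():.3f}")
```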

Rare genetic disorders are collectively common, affecting 8 percent of the population.

In two first tests, DeepGestalt was used to look for specific disorders: Cornelia de Lange syndrome and Angelman syndrome. Both of these are complex conditions that affect intellectual development and mobility. They also have distinct facial traits, like arched eyebrows that meet in the middle for Cornelia de Lange syndrome, and unusually fair skin and hair for Angelman syndrome.

When tasked with distinguishing between pictures of patients with one of these syndromes and pictures of patients with a different, randomly chosen syndrome, DeepGestalt was more than 90 percent accurate, beating expert clinicians, who were around 70 percent accurate on similar tests. When tested on 502 images showing individuals with 92 different syndromes, DeepGestalt included the target condition in its list of 10 possible diagnoses more than 90 percent of the time.
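That last figure is a top-10 accuracy: the model proposes a ranked list of ten candidate diagnoses and is scored as correct if the true syndrome appears anywhere in it. A minimal sketch of the metric, with made-up stand-ins for the model’s scores and labels:

```python
# Top-k accuracy: credit a prediction if the true label is among the
# k highest-scoring candidates. Scores and labels below are illustrative.
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 10) -> float:
    """scores: (n_images, n_syndromes) model scores; labels: (n_images,)."""
    top_k = np.argsort(scores, axis=1)[:, -k:]      # k best guesses per image
    hits = (top_k == labels[:, None]).any(axis=1)   # true label among them?
    return hits.mean()

rng = np.random.default_rng(0)
scores = rng.random((502, 92))                      # 502 images, 92 syndromes
labels = rng.integers(0, 92, size=502)
print(f"top-10 accuracy on random scores: {top_k_accuracy(scores, labels):.2f}")
# random guessing scores about 10/92 ~= 0.11, far below DeepGestalt's 90%+
```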

Source: https://www.theverge.com

Teaching a car how to drive itself in 20 minutes

Researchers from Wayve, a company founded by a team from the Cambridge University engineering department, have developed a neural network sophisticated enough to learn how to drive a car in 15 to 20 minutes using nothing but a computer and a single camera. The company showed off its deep learning methods last week in a blog post showcasing its no-frills approach to driverless car development. Where companies like Waymo and Uber rely on a variety of sensors and custom-built hardware, Wayve is creating the world’s first autonomous vehicles based entirely on reinforcement learning.

The AI powering Wayve’s self-driving system is remarkable for its simplicity. It’s a four-layer convolutional neural network that performs all of its processing on a GPU inside the car; it doesn’t require any cloud connectivity or pre-loaded maps. Wayve’s vehicles are early-stage level-five autonomous, and there’s a lot of work to be done before the AI can drive any car under any circumstances. But the idea that driverless cars will require tens of thousands of dollars’ worth of extraneous hardware is taking a serious blow in the wake of the company’s deep learning techniques. According to Wayve, these algorithms are only going to get smarter.
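A generic sketch consistent with that description: a compact convolutional network mapping a single camera frame to driving commands, the sort of policy that could be trained with reinforcement learning. Layer sizes, input resolution, and outputs here are assumptions for illustration, not Wayve’s code:

```python
# Hypothetical compact driving policy: a four-layer convolutional network
# mapping one camera frame to steering and throttle. Not Wayve's model.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # the four conv layers
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh(),          # [steering, throttle] in [-1, 1]
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frame))

policy = DrivingPolicy()
frame = torch.randn(1, 3, 120, 160)               # single forward-facing camera
steering, throttle = policy(frame)[0]
print(f"steering={steering:+.2f}, throttle={throttle:+.2f}")
```

The sketch covers only the network; Wayve’s own write-up describes training such a policy with reinforcement learning, rewarding roughly the distance driven before a safety driver has to intervene, a training loop omitted here.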

Source: https://wayve.ai/
AND
https://thenextweb.com/