Tag Archives: Artificial Intelligence

China: Explosive Growth In The Digital Economy

China has over 110 million 5G users and is expected to have more than 600,000 5G base stations by the end of this year, covering all cities at prefecture level and above, according to the 5G Innovation and Development Forum held on Sept 15 during the Smart China Expo Online in southwest China’s Chongqing municipality.

In the year and more since 5G licenses for commercial use were issued, the country has made steady progress in the construction of its 5G network infrastructure, said Han Xia, director of the telecom department at the Ministry of Industry and Information Technology, adding that Chinese telecommunications companies have already built over 500,000 5G base stations with over 100 million 5G internet terminals.

So far, 5G has been deployed in sectors and fields including ports, machinery, automobiles, steel, mining and energy, while 5G application has been accelerated in key areas such as industrial internet, Internet of Vehicles, medical care, and education, Han noted.

The value of the country’s industrial internet hit 2.13 trillion yuan last year, Yin Hao, an academician from the Chinese Academy of Sciences, said at the forum, adding that the figure is expected to exceed 5 trillion yuan in 2025.

The integrated development of “5G plus industrial internet” can create new products, generate new models and new forms of business, reduce enterprises’ operating costs, improve their production efficiency, and optimize their resource allocation, Yin noted.

According to Chen Shanzhi, vice president of the China Information and Communication Technologies Group Corporation (CICT), the combination of 5G and other emerging information technologies, including artificial intelligence, cloud computing and big data, will help accelerate the integrated development and innovation of other sectors and bring about explosive growth in the digital economy.

http://global.chinadaily.com.cn/

Robot Dogs Attack

It looked like a scene from science fiction. Emerging from United States Air Force planes, four-legged robot dogs scampered onto an airfield in the Mojave Desert, offering a possible preview into the future of warfare. But the exercise conducted last week, one of the US military‘s largest ever high-tech experiments, wasn’t a movie set.
Flying into a possibly hostile airstrip aboard an Air Force C-130, the robot dogs were sent outside the aircraft to scout for threats before the humans inside would be exposed to them, according to an Air Force news release dated September 3.


Tech. Sgt. John Rodiguez, 321st Contingency Response Squadron security team, patrols with a Ghost Robotics Vision 60 prototype at a simulated austere base during the Advanced Battle Management System exercise on Nellis Air Force Base, Nev., Sept. 3, 2020. The ABMS is an interconnected battle network – the digital architecture or foundation – which collects, processes and shares data relevant to warfighters in order to make better decisions faster in the kill chain.

Achieving all-domain superiority requires that individual military activities not simply be de-conflicted, but integrated: activities in one domain must enhance the effectiveness of those in another. The electronic canines are just one link in what the US military calls the Advanced Battle Management System (ABMS), which uses artificial intelligence and rapid data analytics to detect and counter threats to US military assets in space and possible attacks on the US homeland with missiles or other means.

Source: https://edition.cnn.com

AI Fighter Jet Obliterates Human Air Force Pilot

The never-ending saga of machines outperforming humans has a new chapter. An AI algorithm has again beaten a human fighter pilot in a virtual dogfight. The contest was the finale of the U.S. military’s AlphaDogfight challenge, an effort to “demonstrate the feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight.”

Last August, the Defense Advanced Research Projects Agency, or DARPA, selected eight teams ranging from large, traditional defense contractors like Lockheed Martin to small groups like Heron Systems to compete in a series of trials in November and January. In the final, on Thursday, Heron Systems emerged as the victor against the seven other teams after two days of old-school dogfights, going after each other using nose-aimed guns only. Heron then faced off against a human fighter pilot sitting in a simulator and wearing a virtual reality helmet, and won five rounds to zero.

The other winner in Thursday’s event was deep reinforcement learning, wherein artificial intelligence algorithms get to try out a task in a virtual environment over and over again, sometimes very quickly, until they develop something like understanding. Deep reinforcement learning played a key role in Heron Systems’ agent, as well as in Lockheed Martin’s, the runner-up.
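
As a rough illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch in Python, the classical, non-deep precursor of the technique described above. The toy environment, rewards and hyperparameters are hypothetical and are not Heron Systems’ or DARPA’s actual setup.

```python
# Minimal sketch of the reinforcement-learning loop described above: an agent
# repeatedly tries a task in a simulated environment and updates its value
# estimates from the rewards it receives. Purely illustrative, hypothetical
# environment -- NOT the AlphaDogfight agents.
import random

N_STATES, N_ACTIONS = 16, 4          # toy "pursuit" grid, 4 manoeuvres
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: returns (next_state, reward, done)."""
    next_state = (state + action + 1) % N_STATES
    reward = 1.0 if next_state == N_STATES - 1 else -0.01
    return next_state, reward, next_state == N_STATES - 1

for episode in range(5000):              # try the task over and over again
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # temporal-difference update toward reward + discounted future value
        best_next = max(Q[next_state])
        Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
        state = next_state

print("learned policy:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```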

https://www.defenseone.com/

The U.S. Wastes $161B Worth Of Food Every Year. A.I. Is Helping Us Fix That

“When you see pictures of food waste, it just blows you away,” said Stefan Kalb, a former food wholesaler. “I mean, shopping cart after shopping cart of food waste. What happens with the merchandisers when they walk through the store, and they’re pulling products that have expired, is that they’ll put it in a shopping cart and just roll it to the back. It’s almost one of those dystopian [movie] pictures … cartons of milk just piled up in a grocery cart. The ones that didn’t make it.”

In the United States, somewhere between 30% and 40% of the food that’s produced is wasted. That’s the equivalent of $161 billion every single year. The U.S. throws away twice as much food as any other developed country in the world. There are all sorts of reasons this is a problem, but A.I. could help solve it.

Kalb’s company is one of several startups — let’s call them the “Internet of Groceries” — using some impressively smart machine learning tools to help with this significant problem. Kalb is the co-founder of Shelf Engine, a company that uses analytics to help retailers mine the historical order and sales data on their products and make better decisions about what to order. This means reduced waste and bigger margins. The company also buys back unsold stock, thereby guaranteeing the sale for a retailer.

“We haven’t previously automated this micro-decision that is happening at the grocery store with the buyer,” said Kalb. “The buyer of the store is predicting how much to order — and of what. It’s a very hard decision, and they’re doing it for hundreds and thousands of items. You have these category buyers that just walk through the store to decide how they’re gonna change their bread order or their produce order or their milk order. They’re making these micro-decisions, and it’s costing them tremendous money. If we can automate that part, then we can really make a large impact in the world.”
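
As a sketch of what automating that ordering micro-decision can look like, here is a small, hypothetical Python example that turns a product’s recent sales history into a suggested order quantity (a forecast plus a safety buffer). It is illustrative only and is not Shelf Engine’s actual model; the function name, service level and numbers are assumptions.

```python
# Hypothetical sketch of automating the ordering "micro-decision" from
# historical sales data: order roughly the expected demand plus a buffer.
from statistics import mean, stdev

def recommend_order(daily_sales, service_z=1.28, on_hand=0):
    """Suggest an order quantity from recent daily sales history.

    daily_sales : units sold per day for one product
    service_z   : z-score for the desired service level (~90% here)
    on_hand     : units already in stock
    """
    forecast = mean(daily_sales)               # expected demand tomorrow
    safety = service_z * stdev(daily_sales)    # buffer for demand swings
    return max(0, round(forecast + safety - on_hand))

# Example: a week of milk sales, 3 cartons still on the shelf
print(recommend_order([42, 38, 51, 45, 40, 47, 44], on_hand=3))
```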

Source: https://www.digitaltrends.com/

AI Detects Visual Signs Of Covid-19

Zhongnan Hospital of Wuhan University in Wuhan, China, is at the heart of the outbreak of Covid-19, the disease caused by the new coronavirus SARS-CoV-2 that has shut down cities in China, South Korea, Iran, and Italy. That’s forced the hospital to become a testbed for how quickly a modern medical center can adapt to a new infectious disease epidemic.

One experiment is underway in Zhongnan’s radiology department, where staff are using artificial intelligence software to detect visual signs of the pneumonia associated with Covid-19 on lung CT scan images. Haibo Xu, professor and chair of radiology at Zhongnan Hospital, says the software helps overworked staff screen patients and prioritize those most likely to have Covid-19 for further examination and testing.

Detecting pneumonia on a scan doesn’t alone confirm a person has the disease, but Xu says doing so helps staff diagnose, isolate, and treat patients more quickly. The software “can identify typical signs or partial signs of Covid-19 pneumonia,” he wrote. Doctors can then follow up with other examinations and lab tests to confirm a diagnosis of the disease. Xu says his department was quickly overwhelmed as the virus spread through Wuhan in January.
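
A rough sketch of the screening-and-prioritization step Xu describes might look like the Python below: each incoming CT study gets a suspicion score from a model, and the reading queue is sorted so the most suspicious cases are examined first. The scoring function and data fields are hypothetical stand-ins, not Infervision’s actual software.

```python
# Hypothetical triage sketch: rank CT studies by a model's suspicion score so
# the most likely Covid-19 pneumonia cases are read first.
import heapq

def suspicion_score(ct_study):
    """Stand-in for a trained classifier returning P(pneumonia signs)."""
    # A real system would run a CNN over the CT slices here.
    return ct_study["opacity_fraction"]          # hypothetical feature

def triage(studies):
    """Return patient IDs ordered from most to least suspicious."""
    scored = [(-suspicion_score(s), s["patient_id"]) for s in studies]
    heapq.heapify(scored)
    return [heapq.heappop(scored)[1] for _ in range(len(scored))]

studies = [
    {"patient_id": "A", "opacity_fraction": 0.02},
    {"patient_id": "B", "opacity_fraction": 0.31},
    {"patient_id": "C", "opacity_fraction": 0.12},
]
print(triage(studies))   # ['B', 'C', 'A'] -- B is examined and tested first
```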

The software in use at Zhongnan was created by Beijing startup Infervision, which says its Covid-19 tool has been deployed at 34 hospitals in China and used to review more than 32,000 cases. The startup, founded in 2015 with funding from investors including early Google backer Sequoia Capital, is an example of how China has embraced applying artificial intelligence to medicine.

China’s government has urged development of AI tools for healthcare as part of sweeping national investments in artificial intelligence. China’s relatively lax rules on privacy allow companies such as Infervision to gather medical data to train machine learning algorithms in tasks like reading scans more easily than US or European rivals.

Infervision created its main product, software that flags possible lung problems on CT scans, using hundreds of thousands of lung images collected from major Chinese hospitals. The software is in use at hospitals in China and is being evaluated by clinics in Europe and the US, primarily to detect potentially cancerous lung nodules.

Infervision began work on its Covid-19 detector early in the outbreak after noticing a sudden shift in how existing customers were using its lung-scan-reading software. In mid-January, not long after the US Centers for Disease Control advised against travel to Wuhan due to the new disease, hospitals in Hubei Province began employing a previously little-used feature of Infervision’s software that looks for evidence of pneumonia, says CEO Kuan Chen. “We realized it was coming from the outbreak,” he says.

Source: https://www.wired.com/

AI Predicts Heart Attacks

In a study published Feb. 14 in Circulation, researchers in the U.K. and the U.S. report that an AI program can reliably predict heart attacks and strokes. Kristopher Knott, a research fellow at the British Heart Foundation, and his team conducted the largest study yet involving cardiovascular magnetic resonance imaging (CMR) and AI. CMR is a scan that measures blood flow to the heart by detecting how much of a special contrast agent the heart muscle picks up; the stronger the blood flow, the less likely there will be blockages in the heart vessels. Reading the scans, however, is time-consuming and laborious; it is also more qualitative than quantitative, says Knott, subject to the vagaries of the human eyes and brain. To try to develop a more quantitative tool, Knott and his colleagues trained an AI model to read scans and learn to detect signs of compromised blood flow.

When they tested the technology on the scans of more than 1,000 people who needed CMR because they were either at risk of developing heart disease or had already been diagnosed with it, they found the AI model worked pretty well at picking out which people were more likely to go on to have a heart attack or stroke, or die from one. The study compared the AI-based analyses to health outcomes from the patients, who were followed for about 20 months on average. The researchers discovered that for every 1 ml/g/min decrease in blood flow to the heart, the risk of dying from a heart event nearly doubled, and the risk of having a heart attack, stroke or other event more than doubled.
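
To make the blood-flow finding concrete, the sketch below treats the reported “nearly doubled” risk as a hazard ratio of roughly 2 per 1 ml/g/min and shows how risk scales with the size of the decrease. The exact ratio of 2.0 is an illustrative assumption, not the paper’s precise estimate.

```python
# Illustrative sketch: if each 1 ml/g/min drop in myocardial blood flow
# roughly doubles risk (hazard ratio ~2), risk scales multiplicatively with
# the size of the drop. The HR of exactly 2.0 is an assumption.
def relative_risk(flow_decrease_ml_g_min, hazard_ratio_per_unit=2.0):
    """Relative risk multiplier for a given decrease in blood flow."""
    return hazard_ratio_per_unit ** flow_decrease_ml_g_min

for drop in (0.5, 1.0, 1.5):
    print(f"{drop} ml/g/min lower flow -> ~{relative_risk(drop):.1f}x the risk")
# 0.5 ml/g/min -> ~1.4x, 1.0 -> ~2.0x, 1.5 -> ~2.8x
```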

“Rather than a qualitative view of blood flow to the heart muscle, we get a quantitative number,” he says. “And from that number, we’ve shown that we can predict which people are at higher risk of adverse events.”

The study confirmed that CMR is a strong marker for risk of heart problems, but did not prove that the scans could actually be used to guide doctors’ decisions about which people are at higher risk. For that, more studies are needed to document whether treating the poor blood flow predicted by the AI model, with available medication or procedures, can reduce or eliminate heart attacks and strokes.

Source: https://time.com/

How To Take Delivery Door To Door By Droid

As an automotive supplier specialized in developing electric, autonomous and connected vehicle technologies, Valeo is presenting its autonomous, electric delivery droid prototype, Valeo eDeliver4U, at CES 2020 in Las Vegas. Valeo developed the technology in partnership with Meituan Dianping, China’s leading e-commerce platform for services, which operates popular food delivery service Meituan Waimai. The two groups signed a strategic cooperation agreement at last year’s CES to develop a last-mile autonomous delivery solution.

At 2.80m long, 1.20m wide and 1.70m tall, the droid can deliver up to 17 meals per trip, autonomously negotiating dense and complex urban environments at about 12 km/h without generating any pollutant emissions. With a range of around 100km, this prototype gives us a glimpse of what home delivery could look like in the near future, especially in the ever‑growing number of zero-emissions zones that are being created around the world. Meituan Dianping’s connected delivery locker allows for safe delivery to the end customer, who can book through a smartphone application.

The droid’s autonomy and electric power are delivered by Valeo technologies that are already series-produced and aligned with automotive industry standards, thereby guaranteeing a high level of safety. The droid operates autonomously using perception systems including algorithms and sensors. It is equipped with four Valeo SCALA® laser scanners (the only automotive LiDAR already fitted to vehicles in series production), a front camera, four fisheye cameras, four radar devices and twelve ultrasonic sensors, coupled with software and artificial intelligence. The electrified chassis features a Valeo 48V motor and a Valeo 48V inverter, which acts as the system’s “brain” and controls the power, a speed reducer, a 48V battery, a DC/DC converter and a Valeo 48V battery charger, as well as electric power steering and braking systems.
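
Valeo does not detail how those sensor feeds are combined, but a simple late-fusion step can be sketched as below: merge the obstacle distances reported by the different sensors and decide whether to keep cruising at roughly 12 km/h or brake. The sensor names echo the list above; the thresholds and logic are purely illustrative assumptions, not Valeo’s software.

```python
# Hypothetical late-fusion sketch: combine per-sensor obstacle ranges and
# decide whether the droid should brake or keep cruising.
STOP_DISTANCE_M = 3.0   # assumed safety margin

def nearest_obstacle(detections):
    """detections: {sensor_name: [distances_in_metres, ...]}"""
    all_ranges = [d for ranges in detections.values() for d in ranges]
    return min(all_ranges) if all_ranges else float("inf")

def drive_command(detections, cruise_kmh=12):
    distance = nearest_obstacle(detections)
    if distance < STOP_DISTANCE_M:
        return 0.0                       # brake for a close obstacle
    return cruise_kmh                    # otherwise keep the ~12 km/h cruise

frame = {
    "lidar_front": [7.2, 11.5],
    "radar_front": [7.4],
    "ultrasonic": [2.6],                 # e.g. a pedestrian stepping close
    "fisheye_cams": [],
}
print(drive_command(frame))              # 0.0 -> stop
```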


“This delivery droid illustrates Valeo’s ability to embrace new forms of mobility using its technological platforms. The modularity of the platforms means our technologies can just as easily be fitted to cars, autonomous shuttles, robotaxis and even droids,” said Jacques Aschenbroich, Chairman and Chief Executive Officer of Valeo. “These new markets will allow us to further consolidate our leadership around the world in vehicle electrification, driver assistance systems and autonomous driving.”

Source: https://www.valeo.com/

Toyota To Build A Smart City Powered By Hydrogen

 

Japanese carmaker Toyota has announced plans to create a 175-acre smart city in Japan where it will test driverless cars and artificial intelligence. The project, announced at the Consumer Electronics Show in Las Vegas, will break ground at the base of Mount Fuji in 2021. Woven City will initially be home to 2,000 people who will test technologies including robots and smart homes. Toyota said in a press release that only driverless and electric vehicles will be allowed on the main streets of Woven City. Streets will be split into three types of thoroughfare: roads for fast vehicles, lanes shared by personal vehicles and pedestrians, and pedestrian footpaths.

Danish architect Bjarke Ingels has been commissioned to design the new city. His business previously worked on projects including Google’s London and US headquarters. Toyota said the city will be powered by hydrogen fuel cells and solar panels fitted to the roofs of houses. Buildings in Woven City will mostly be made of wood and assembled using “robotised production methods,” Toyota said.

 “Building a complete city from the ground up, even on a small scale like this, is a unique opportunity to develop future technologies, including a digital operating system for the infrastructure.
“With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test connected AI technology, in both the virtual and physical realms, maximising its potential,” said Akio Toyoda, Toyota’s president.

Google has also experimented with the creation of its own smart city through its Sidewalk Labs division. The company is hoping to transform a 12-acre plot in Toronto’s waterfront district into a smart city, with the first homes due to appear in 2023.

Source: https://www.telegraph.co.uk/

Artificial Intelligence Outperforms Humans In Prediction Of Breast Cancer

An artificial intelligence (AI) system can reduce false positives and false negatives in prediction of breast cancer and outperforms human readers, according to a study published online Jan. 1 in Nature.

Scott Mayer McKinney, from Google Health in Palo Alto, California, and colleagues examined the performance of an AI system for breast cancer prediction in a clinical setting. Data were curated from a large representative dataset from the United Kingdom and a large enriched dataset from the United States.

The researchers observed an absolute reduction of 5.7 and 1.2 percent in false positives in the U.S. and U.K. datasets, respectively, and 9.4 and 2.7 percent, respectively, in false negatives. The system was also able to generalize from the United Kingdom to the United States. The AI system outperformed six human readers in an independent study involving radiologists; the area under the receiver operating characteristic curve was greater for the AI system than the average radiologist (absolute margin, 11.5 percent). The AI system maintained noninferior performance in a simulation in which the AI system participated in the double-reading process that is used in the United Kingdom and reduced the workload of the second reader by 88 percent.
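
The 88 percent figure refers to the UK’s double-reading workflow, in which every mammogram is normally read by two radiologists. The hypothetical Python sketch below illustrates the idea: when the AI’s call agrees with the first reader, the second human read is skipped, and only disagreements go to a person. The agreement rule, threshold and case mix are illustrative assumptions, not the study’s exact protocol.

```python
# Hedged sketch of how an AI second read can cut double-reading workload:
# skip the human second read when the AI agrees with the first reader.
def second_reader_workload(cases, agreement_threshold=0.5):
    """cases: list of (first_reader_recall: bool, ai_score: float)."""
    sent_to_second_reader = 0
    for first_recall, ai_score in cases:
        ai_recall = ai_score >= agreement_threshold
        if ai_recall != first_recall:        # disagreement -> human arbitration
            sent_to_second_reader += 1
    return sent_to_second_reader / len(cases)

# Illustrative case mix: 100 exams, 12 of which produce a disagreement
cases = ([(False, 0.1)] * 80 + [(True, 0.9)] * 8 +
         [(False, 0.7)] * 7 + [(True, 0.2)] * 5)
print(f"second reads needed for {second_reader_workload(cases):.0%} of cases")
# -> 12% of cases, i.e. an 88% reduction in second-reader workload
```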

“These analyses highlight the potential of this technology to deliver screening results in a sustainable manner despite workforce shortages in countries such as the United Kingdom,” the authors write.

Several authors disclosed financial ties to technology companies, including Google, which funded the study.

Source: https://www.nature.com/
AND
https://www.physiciansweekly.com/

AI Classifies Chest X-Rays With Human-Level Accuracy

Analyzing chest X-ray images with machine learning algorithms is easier said than done. That’s because typically, the clinical labels required to train those algorithms are obtained with rule-based natural language processing or human annotation, both of which tend to introduce inconsistencies and errors. Additionally, it’s challenging to assemble data sets that represent an adequately diverse spectrum of cases, and to establish clinically meaningful and consistent labels given only images.

In an effort to move the goalposts in X-ray image classification, researchers at Google devised AI models to spot four findings on human chest X-rays: pneumothorax (collapsed lungs), nodules and masses, fractures, and airspace opacities (filling of the pulmonary tree with material). In a paper published in the journal Nature, the team claims the model family, which was evaluated using thousands of images across data sets with high-quality labels, demonstrated “radiologist-level” performance in an independent review conducted by human experts.
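
For context on what such a model family looks like in code, here is a minimal, hypothetical multi-label setup in PyTorch for the four findings above: one independent logit per finding, trained with a binary cross-entropy loss so findings can co-occur on the same image. The tiny architecture, dummy data and hyperparameters are assumptions for illustration, not Google’s actual model or training procedure.

```python
# Hypothetical multi-label chest X-ray classifier sketch (NOT Google's model).
import torch
import torch.nn as nn

FINDINGS = ["pneumothorax", "nodule_or_mass", "fracture", "airspace_opacity"]

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, len(FINDINGS)),        # one independent logit per finding
)

loss_fn = nn.BCEWithLogitsLoss()         # multi-label: findings can co-occur
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 8 single-channel X-rays
images = torch.randn(8, 1, 224, 224)
labels = torch.randint(0, 2, (8, len(FINDINGS))).float()

logits = model(images)
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

probabilities = torch.sigmoid(logits)    # per-finding probabilities in [0, 1]
print(dict(zip(FINDINGS, probabilities[0].tolist())))
```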

The study’s publication comes months after Google AI and Northwestern Medicine scientists created a model capable of detecting lung cancer from screening tests better than human radiologists with an average of eight years’ experience, and roughly a year after New York University used Google’s Inception v3 machine learning model to detect lung cancer. AI also underpins the tech giant’s advances in diabetic retinopathy diagnosis through eye scans, as well as Alphabet subsidiary DeepMind’s AI that can recommend the proper line of treatment for 50 eye diseases with 94% accuracy.

This newer work tapped over 600,000 images sourced from two de-identified data sets, the first of which was developed in collaboration with Apollo Hospitals and which consists of X-rays collected over years from multiple locations. As for the second corpus, it’s the publicly available ChestX-ray14 image set released by the National Institutes of Health, which has historically served as a resource for AI efforts but which suffers shortcomings in accuracy.

The researchers developed a text-based system to extract labels using radiology reports associated with each X-ray, which they then applied to provide labels for over 560,000 images from the Apollo Hospitals data set. To reduce errors introduced by the text-based label extraction and provide the relevant labels for a number of ChestX-ray14 images, they recruited radiologists to review approximately 37,000 images across the two corpora.
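
The report-mining step can be sketched with a simple rule-based extractor like the hypothetical Python below: regular expressions flag mentions of each finding, and a crude negation rule discards phrases such as “no fracture”. The patterns and negation handling are deliberately simplified illustrations, not Google’s actual text-based system.

```python
# Hypothetical rule-based label extraction from radiology report text.
import re

FINDING_PATTERNS = {
    "pneumothorax": r"\bpneumothorax\b",
    "fracture": r"\bfracture(s)?\b",
    "nodule_or_mass": r"\b(nodules?|mass(es)?)\b",
    "airspace_opacity": r"\b(opacity|opacities|consolidation)\b",
}
NEGATION = r"\b(no|without|negative for)\b[^.]*"

def extract_labels(report_text):
    """Return {finding: True/False} mined from free-text report sentences."""
    text = report_text.lower()
    labels = {}
    for finding, pattern in FINDING_PATTERNS.items():
        mentioned = re.search(pattern, text) is not None
        negated = re.search(NEGATION + pattern, text) is not None
        labels[finding] = mentioned and not negated
    return labels

report = "Small right apical pneumothorax. No focal consolidation. No acute fracture."
print(extract_labels(report))
# {'pneumothorax': True, 'fracture': False,
#  'nodule_or_mass': False, 'airspace_opacity': False}
```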

Google notes that while the models achieved expert-level accuracy overall, performance varied across corpora. For example, the sensitivity for detecting pneumothorax among radiologists was approximately 79% for the ChestX-ray14 images, but was only 52% for the same radiologists on the other data set.

Chest X-ray depicting a pneumothorax identified by Google’s AI model and the panel of radiologists, but missed by individual radiologists. On the left is the original image, and on the right is the same image with the most important regions for the model’s prediction highlighted in orange.

“The performance differences between datasets … emphasize the need for standardized evaluation image sets with accurate reference standards in order to allow comparison across studies,” wrote Google research scientist Dr. David Steiner and Google Health technical lead Shravya Shetty in a blog post, both of whom contributed to the paper. “[Models] often identified findings that were consistently missed by radiologists, and vice versa. As such, strategies that combine the unique ‘skills’ of both the [AI] systems and human experts are likely to hold the most promise for realizing the potential of AI applications in medical image interpretation.”

The research team hopes to lay the groundwork for superior methods with a corpus of the adjudicated labels for the ChestX-ray14 data set, which they’ve made available in open source. It contains 2,412 training and validation set images and 1,962 test set images, or 4,374 images in total.

“We hope that these labels will facilitate future machine learning efforts and enable better apples-to-apples comparisons between machine learning models for chest X-ray interpretation,” wrote Steiner and Shetty.

Source: https://venturebeat.com/