Tag Archives: machine-learning

Machine Learning Predicts Heart Failure

Every year, roughly one out of eight U.S. deaths is caused at least in part by heart failure. One of acute heart failure’s most common warning signs is excess fluid in the lungs, a condition known as “pulmonary edema.” A patient’s exact level of excess fluid often dictates the doctor’s course of action, but making such determinations is difficult and requires clinicians to rely on subtle features in X-rays that sometimes lead to inconsistent diagnoses and treatment plans.

To better handle that kind of nuance, a group led by researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) has developed a machine learning model that can look at an X-ray to quantify how severe the edema is, on a four-level scale ranging from 0 (healthy) to 3 (very, very bad). The system determined the right level more than half of the time, and correctly diagnosed level 3 cases 90 percent of the time.
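
For readers curious about the mechanics, the grading task can be framed as ordinary four-class image classification. The sketch below only illustrates that framing; the backbone, loss, and data pipeline are assumptions rather than the CSAIL team's published architecture.

```python
# Hedged sketch: a 4-class chest X-ray severity classifier (levels 0-3).
# The backbone, loss, and dummy data are illustrative assumptions,
# not the architecture described in the MIT/CSAIL paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_LEVELS = 4  # 0 = healthy ... 3 = severe edema

model = models.resnet18(weights=None)              # generic CNN backbone
model.fc = nn.Linear(model.fc.in_features, NUM_LEVELS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of (X-ray, severity-level) pairs."""
    optimizer.zero_grad()
    logits = model(images)                         # shape: (batch, 4)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in for real, labeled X-rays in this sketch:
dummy_images = torch.randn(8, 3, 224, 224)
dummy_labels = torch.randint(0, NUM_LEVELS, (8,))
print(train_step(dummy_images, dummy_labels))
```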

Working with Beth Israel Deaconess Medical Center (BIDMC) and Philips, the team plans to integrate the model into BIDMC’s emergency-room workflow this fall.

“This project is meant to augment doctors’ workflow by providing additional information that can be used to inform their diagnoses as well as enable retrospective analyses,” says PhD student Ruizhi Liao, who was the co-lead author of a related paper with fellow PhD student Geeticka Chauhan and MIT professors Polina Golland and Peter Szolovits.

The team says that better edema diagnosis would help doctors manage not only acute heart issues, but other conditions like sepsis and kidney failure that are strongly associated with edema.

As part of a separate journal article, Liao and colleagues also took an existing public dataset of X-ray images and developed new annotations of severity labels that were agreed upon by a team of four radiologists. Liao’s hope is that these consensus labels can serve as a universal standard to benchmark future machine learning development.

An important aspect of the system is that it was trained not just on more than 300,000 X-ray images, but also on the corresponding text of reports about the X-rays that were written by radiologists. “By learning the association between images and their corresponding reports, the method has the potential for a new way of automatic report generation from the detection of image-driven findings,” says Tanveer Syeda-Mahmood, a researcher not involved in the project who serves as chief scientist for IBM’s Medical Sieve Radiology Grand Challenge. “Of course, further experiments would have to be done for this to be broadly applicable to other findings and their fine-grained descriptors.”
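
The image-report pairing that Syeda-Mahmood describes is often implemented with a contrastive objective that pulls matching image and text embeddings together. The snippet below is a minimal sketch of that idea, assuming placeholder encoders and embedding sizes; it is not the loss used in the MIT paper.

```python
# Hedged sketch: contrastive alignment of X-ray embeddings with report embeddings.
# The embeddings below come from hypothetical encoders; the paper's objective may differ.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """InfoNCE-style loss that pulls matched (image, report) pairs together."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))            # i-th image matches i-th report
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Stand-in embeddings from hypothetical image and text encoders:
img = torch.randn(16, 256)
txt = torch.randn(16, 256)
print(contrastive_loss(img, txt).item())
```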

Chauhan, Golland, Liao and Szolovits co-wrote the paper with MIT Assistant Professor Jacob Andreas, Professor William Wells of Brigham and Women’s Hospital, Xin Wang of Philips, and Seth Berkowitz and Steven Horng of BIDMC.

Source: https://news.mit.edu/

Why RNA Is A Better Measure Of A Patient’s Current Health Than DNA

By harnessing the combined power of next-generation sequencing (NGS), machine learning, and the dynamic nature of RNA, we can accurately measure the immune response and capture a more comprehensive picture of what is happening at the site of a solid tumor.

In the beginning, there was RNA – the first genetic molecule.

In the primordial soup of chemicals that represented the beginning of life, ribonucleic acid (RNA) had the early job of storing information, likely with the support of peptides. Today, RNA’s cousin – deoxyribonucleic acid – or DNA, has taken over most of the responsibilities of passing down genetic information from cell-to-cell, generation-to-generation. As a result, most early health technologies were developed to analyze DNA. But, RNA is a powerful force. And its role in storing information, while different from its early years, has no less of an impact on human health and is gaining more mindshare in our industry.

RNA is often considered a messenger molecule, taking the information coded in our DNA and transcribing it into cellular directives that result in downstream biological signals and protein-level changes. For this reason, RNA is becoming known not only as a drug target but, perhaps more importantly, as a barometer of health.

How and why is RNA so useful? First, RNA is labile — changing in both sequence and abundance in response not only to genetic and epigenetic changes but also to external factors such as disease, therapy, exercise, and more. This is in contrast to DNA, which is generally static, changing little after conception.

Next, RNA is a more accurate snapshot of disease progression. When mutations do occur at the DNA level, they do not always result in downstream biological changes. Often, the body is able to compensate by repairing the mutation or to overcome it by using redundancies in the pathway in which the gene resides. By instead evaluating RNA, we get one step closer to understanding the real impact a disease is having on the body.

Finally, RNA is abundant. While most human cells contain only two copies of DNA, they contain hundreds of thousands of mRNA molecules, representing more than 10,000 different RNA species. Because even rare transcripts are present in multiple copies, biological signals can be confidently detected in RNA when the right technology is used.

Source: https://medcitynews.com/

The U.S. Wastes $161B Worth Of Food Every Year. A.I. Is Helping Us Fix That

“When you see pictures of food waste, it just blows you away,” said Stefan Kalb, a former food wholesaler. “I mean, shopping cart after shopping cart of food waste. What happens with the merchandisers when they walk through the store, and they’re pulling products that have expired, is that they’ll put it in a shopping cart and just roll it to the back. It’s almost one of those dystopian [movie] pictures … cartons of milk just piled up in a grocery cart. The ones that didn’t make it.”

In the United States, somewhere between 30% and 40% of the food that’s produced is wasted. That’s the equivalent of $161 billion every single year. The U.S. throws away twice as much food as any other developed country in the world. There are all sorts of reasons this is a problem, but A.I. could help solve it.

Kalb’s company is one of several startups — let’s call them the “Internet of Groceries” — using some impressively smart machine learning tools to help with this significant problem. Kalb is the co-founder of Shelf Engine, a company that uses analytics to help retailers better examine the historical order and sales data on their products so as to make better decisions about what to order. This means reduced waste and bigger margins. The company also buys back unsold stock, thereby guaranteeing the sale for a retailer.

“We haven’t previously automated this micro-decision that is happening at the grocery store with the buyer,” said Kalb. “The buyer of the store is predicting how much to order — and of what. It’s a very hard decision, and they’re doing it for hundreds and thousands of items. You have these category buyers that just walk through the store to decide how they’re gonna change their bread order or their produce order or their milk order. They’re making these micro-decisions, and it’s costing them tremendous money. If we can automate that part, then we can really make a large impact in the world.”
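
For a concrete sense of the micro-decision Kalb describes, one classic (and heavily simplified) way to pick an order quantity from historical sales is the newsvendor rule, which orders the demand quantile set by the ratio of lost margin to waste cost. The numbers below are invented, and Shelf Engine's actual models are surely more sophisticated.

```python
# Hedged sketch: a newsvendor-style order quantity from historical daily sales.
# Costs and demand history are made-up numbers for illustration only.
import numpy as np

def order_quantity(daily_sales, unit_cost, unit_price, salvage_value=0.0):
    """Pick the demand quantile that balances waste against lost sales."""
    underage = unit_price - unit_cost          # profit lost per unit short
    overage = unit_cost - salvage_value        # cost of each unsold unit
    critical_ratio = underage / (underage + overage)
    return float(np.quantile(daily_sales, critical_ratio))

history = [42, 38, 55, 47, 61, 39, 44, 52, 48, 41]   # units sold per day
print(order_quantity(history, unit_cost=1.20, unit_price=2.00))
```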

Source: https://www.digitaltrends.com/

The Human Vs. Drone Dogfight

The U.S. Air Force will square off an AI-powered drone against a fighter jet flown by a real, live human being. The service wants to know if AI powered by machine learning can beat a human pilot with actual cockpit experience. The outcome will help the Air Force determine whether AI-powered fighters are a viable alternative to human-piloted fighters, a question with far-reaching consequences for aerial warfare.

Air Force Magazine reports that Lt. Gen. Jack Shanahan, head of the Pentagon’s Joint Artificial Intelligence Center, stated that the fly-off would take place in 2021. Shanahan was speaking at a virtual event hosted by the Mitchell Institute for Aerospace Studies.

Earlier reports stated that the fly-off would likely involve an older plane such as the F-16 before progressing to more advanced jets like the F-35 and F-22. If that’s the case, the Air Force will probably fly an unmanned F-16 versus a manned F-16, to make sure the human pilot and the AI have as level a playing field as possible. The Air Force has already developed a remote flying mechanism, converting F-16s into QF-16 target drones, and presumably the AI would control that mechanism.

The Pentagon has flown fighters against drones before. In 1971, spy drone manufacturer Teledyne Ryan modified its BQM-34 reconnaissance drone with the Maneuverability Augmentation System for Tactical Air Combat Simulation, or MASTACS. The result was the BQM-34E unmanned fighter jet. Fast, maneuverable, and with a low radar cross section, the BQM-34E was a difficult opponent for a human fighter pilot to fly against. A paper prepared for the U.S. Air Force’s Air University described the drone: “Both the USAF and USN used this UAV to train their best pilots in simulated air combat. At Tyndall Air Force Base in Florida, the BQM-34F was used as a target in the annual William Tell air combat competition. This UAV routinely outmaneuvered manned F-15 and F-16 aircraft; one named ‘Old Red’ survived eighty-two dogfights. The USN used the MASTACS as a ‘graduation exercise’ at their Top Gun Weapons School.”

Despite the BQM-34F’s unexpected success, the Navy and Air Force had no interest in an unmanned fighter, and the program was never pursued.

Source: https://www.popularmechanics.com/

10 Artificial Intelligence Trends

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.

2. More and more personalization will take place in real time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Domino’s will learn when we are most likely to want pizza, and make sure the “Order Now” button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our uniquely human skills – those which AI can’t quite manage yet, such as imagination, design, strategy, and communication – while augmenting them with super-fast analytics fed by vast datasets that are updated in real time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. The IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI.

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on the fly will increasingly become part of the technology we interact with day to day.

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.
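
As a toy illustration of the dynamic difficulty adjustment mentioned above (and not how any particular game actually does it), a controller can simply nudge difficulty toward a target player win rate:

```python
# Hedged sketch: nudge game difficulty toward a target win rate.
# The update rule and constants are illustrative, not from a shipped game.
def adjust_difficulty(difficulty, recent_results, target_win_rate=0.5, step=0.05):
    """recent_results is a list of 1 (player won) / 0 (player lost)."""
    if not recent_results:
        return difficulty
    win_rate = sum(recent_results) / len(recent_results)
    if win_rate > target_win_rate:      # player winning too often: make it harder
        difficulty += step
    elif win_rate < target_win_rate:    # player losing too often: make it easier
        difficulty -= step
    return min(max(difficulty, 0.0), 1.0)

print(adjust_difficulty(0.5, [1, 1, 1, 0, 1]))  # -> 0.55
```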

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
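
One common building block for this kind of detection is unsupervised anomaly scoring. The sketch below uses scikit-learn's IsolationForest on made-up transaction features; production systems rely on far richer signals and models.

```python
# Hedged sketch: flag unusual transactions with an isolation forest.
# The features and data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 2], scale=[10, 1], size=(500, 2))   # amount, hour bucket
suspicious = np.array([[900, 3], [5, 23]])                       # obvious outliers
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)          # -1 = anomaly, 1 = normal
print(np.where(flags == -1)[0])      # indices flagged for review
```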

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language-powered chatbots for customer service, most of us can still recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and of techniques such as reinforcement learning and semi-supervised learning, the algorithms that attempt to match our speech patterns and infer meaning from human language will become more and more able to fool us into thinking there is a human on the other end of the conversation.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world.

Source: https://www.forbes.com/

How To Detect Heart Failure From A Single Heartbeat

Researchers have developed a neural network approach that can identify congestive heart failure with 100% accuracy by analyzing just one raw electrocardiogram (ECG) heartbeat, a new study reports.

Congestive heart failure (CHF) is a chronic progressive condition that affects the pumping power of the heart muscles. Because it is associated with high prevalence, significant mortality rates, and sustained healthcare costs, clinical practitioners and health systems urgently require efficient detection processes.

Dr Sebastiano Massaro, Associate Professor of Organisational Neuroscience at the University of Surrey, has worked with colleagues Mihaela Porumb and Dr Leandro Pecchia at the University of Warwick and Ernesto Iadanza at the University of Florence, to tackle these important concerns by using Convolutional Neural Networks (CNN) – hierarchical neural networks highly effective in recognising patterns and structures in data.

Published in the journal Biomedical Signal Processing and Control, their research drastically improves on existing CHF detection methods, which typically focus on heart rate variability and, whilst effective, are time-consuming and prone to error. Their new model instead applies a combination of advanced signal processing and machine learning tools to raw ECG signals, delivering 100% accuracy.

“We trained and tested the CNN model on large publicly available ECG datasets featuring subjects with CHF as well as healthy, non-arrhythmic hearts. Our model delivered 100% accuracy: by checking just one heartbeat we are able to detect whether or not a person has heart failure. Our model is also one of the first known to be able to identify the ECG’s morphological features specifically associated with the severity of the condition,” explains Dr Massaro. Dr Pecchia, President of the European Alliance for Medical and Biological Engineering, explains the implications of these findings: “With approximately 26 million people worldwide affected by a form of heart failure, our research presents a major advancement on the current methodology. Enabling clinical practitioners to access an accurate CHF detection tool can make a significant societal impact, with patients benefitting from early and more efficient diagnosis and easing pressures on NHS resources.”
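
To make the approach concrete, here is a minimal one-dimensional CNN in the spirit of what the team describes; the layer sizes, the 187-sample heartbeat length, and the dummy data are assumptions for illustration, not the published architecture.

```python
# Hedged sketch: binary CHF classifier over a single raw ECG heartbeat.
# The layer sizes and 187-sample heartbeat length are illustrative assumptions.
import torch
import torch.nn as nn

class HeartbeatCNN(nn.Module):
    def __init__(self, beat_len=187):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (beat_len // 4), 2)  # CHF vs. healthy

    def forward(self, x):               # x: (batch, 1, beat_len)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = HeartbeatCNN()
dummy_beats = torch.randn(4, 1, 187)    # four fake single-heartbeat segments
print(model(dummy_beats).shape)         # torch.Size([4, 2])
```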

Source: https://www.surrey.ac.uk/

Neural Text-to-Speech Machine

Thanks to modern machine learning techniques, text-to-speech engines have made massive strides over the last few years. It used to be incredibly easy to know that it was a computer that was reading a text and not a human being. But that’s changing quickly. Amazon’s AWS cloud computing arm today launched a number of new neural text-to-speech models, as well as a new newscaster style that is meant to mimic the way… you guessed it… newscasters sound.

“Speech quality is certainly important, but more can be done to make a synthetic voice sound even more realistic and engaging,” the company notes in today’s announcement. “What about style? For sure, human ears can tell the difference between a newscast, a sportscast, a university class and so on; indeed, most humans adopt the right style of speech for the right context, and this certainly helps in getting their message across.”

The new newscaster style is now available in two U.S. voices (Joanna and Matthew), and Amazon is already working with USA Today and Canada’s The Globe and Mail, among a number of other companies, to help them adopt it. Amazon Polly Newscaster, as the new service is officially called, is the result of years of research on text-to-speech, which AWS is also now making available through its Neural Text-to-Speech engine. This new engine, which is similar in spirit to neural engines like Google’s WaveNet, currently features 11 voices: three for U.K. English and eight for U.S. English.
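
For developers, the newscaster style is applied through SSML when calling the neural engine. The snippet below is a hedged sketch using boto3; it assumes AWS credentials are configured, and the exact voices and SSML tags supported should be checked against the current Polly documentation.

```python
# Hedged sketch: synthesize newscaster-style speech with Amazon Polly's neural engine.
# Assumes AWS credentials and a region are already configured for boto3.
import boto3

polly = boto3.client("polly")

ssml = (
    '<speak><amazon:domain name="news">'
    "Machine learning continues to reshape text-to-speech systems."
    "</amazon:domain></speak>"
)

response = polly.synthesize_speech(
    Text=ssml,
    TextType="ssml",
    VoiceId="Joanna",        # one of the voices that supports the newscaster style
    Engine="neural",
    OutputFormat="mp3",
)

with open("newscast.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```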

Source: https://aws.amazon.com/

Deciphering Breast Cancer

Breast cancer is one of the most common cancers and one of the leading causes of death in women globally. It is a disease in which cells located in the breast grow out of control. Although a majority of breast cancers are discovered in women aged 50 years or older, the disease can affect anyone, including men and younger women, according to the Centers for Disease Control and Prevention (CDC). Last year there were 18.1 million new cancer cases and 9.6 million cancer deaths globally, according to the latest report from the International Agency for Research on Cancer (IARC), released in September 2018.

In 2019 alone, the U.S. National Cancer Institute estimates that there will be 268,600 new female breast cancer cases and 41,760 deaths. Earlier this month, researchers based in Switzerland published a study in Cell that used artificial intelligence (AI) and machine learning to create a comprehensive tumor and immune atlas of breast cancer ecosystems, laying the foundation for innovative precision medicine and immunotherapy.

The study was led by professor Bernd Bodenmiller, Ph.D. at the Institute of Molecular Life Sciences at the University of Zurich in Switzerland. Bodenmiller is a recipient of the 2019 Friedrich Miescher Award, Switzerland’s highest distinction for outstanding achievements in biochemistry. His team worked in collaboration with the Systems Biology Group at IBM Research in Zurich led by María Rodríguez Martínez, Ph.D. with the shared goal to produce a foundation for more targeted breast cancer treatment through precision medicine.

Source: https://www.ibm.com/

Artificial Intelligence Revolutionizes Farming

Researchers at MIT have used AI to improve the flavor of basil. It’s part of a trend that is seeing artificial intelligence revolutionize farming.

What makes basil so good? In some cases, it’s AI. Machine learning has been used to create basil plants that are extra-delicious. While we sadly cannot report firsthand on the herb’s taste, the effort reflects a broader trend that involves using data science and machine learning to improve agriculture.

The researchers behind the AI-optimized basil used machine learning to determine the growing conditions that would maximize the concentration of the volatile compounds responsible for basil’s flavor. The basil was grown in hydroponic units within modified shipping containers in Middleton, Massachusetts. Temperature, light, humidity, and other environmental factors inside the containers could be controlled automatically. The researchers tested the taste of the plants by looking for certain compounds using gas chromatography and mass spectrometry. And they fed the resulting data into machine-learning algorithms developed at MIT and a company called Cognizant.
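
A toy version of that loop, not the MIT/Cognizant pipeline itself, is to fit a surrogate model that maps growing conditions to a measured flavor score and then grow whichever candidate recipe the model rates highest. The features, data, and regressor choice below are illustrative assumptions.

```python
# Hedged sketch: pick the growing "recipe" a surrogate model predicts is tastiest.
# Features, data, and the regressor choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Columns: light hours/day, temperature (C), relative humidity (%)
recipes_tried = np.array([
    [12, 22, 60], [16, 24, 55], [18, 21, 65], [20, 23, 50], [24, 22, 60],
])
flavor_score = np.array([0.41, 0.55, 0.58, 0.66, 0.81])   # e.g., total measured volatiles

surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(recipes_tried, flavor_score)

# Score a grid of candidate recipes and grow the most promising one next.
candidates = np.array([[h, t, rh] for h in (12, 16, 20, 24)
                                  for t in (20, 22, 24)
                                  for rh in (50, 60, 70)])
best = candidates[np.argmax(surrogate.predict(candidates))]
print("Next recipe to try:", best)
```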

The research showed, counterintuitively, that exposing plants to light 24 hours a day generated the best taste. The research group plans to study how the technology might improve the disease-fighting capabilities of plants as well as how different flora may respond to the effects of climate change.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” said Caleb Harper, head of the MIT Media Lab’s OpenAg group, in a press release. His lab worked with colleagues from the University of Texas at Austin on the paper.

The idea of using machine learning to optimize plant yield and properties is rapidly taking off in agriculture. Last year, Wageningen University in the Netherlands organized an “Autonomous Greenhouse” contest, in which different teams competed to develop algorithms that increased the yield of cucumber plants while minimizing the resources required. They worked with greenhouses where a variety of factors are controlled by computer systems.

The study appeared in the journal PLOS One.

Source: https://www.technologyreview.com/

This Person Does Not Exist

With the help of artificial intelligence, you can manipulate video of public figures to say whatever you like — or now, create images of people’s faces that don’t even exist. You can see this in action on a website called thispersondoesnotexist.com. It uses an algorithm to spit out a single image of a person’s face, and for the most part, the results look frighteningly real. Hit refresh in your browser, and the algorithm will generate a new face. Again, these people do not exist.

The website is the creation of software engineer Phillip Wang and uses a new AI algorithm called StyleGAN, which was developed by researchers at Nvidia. A GAN, or generative adversarial network, is a machine learning approach that aims to generate images that are indistinguishable from real ones. You can train GANs on human faces, as well as bedrooms, cars, and cats, and, of course, generate new images of them.
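
StyleGAN itself is a very large model, but the adversarial idea behind it can be sketched in a few lines: a generator maps random noise to images while a discriminator learns to tell them apart from real ones, and the two are trained against each other. Everything in the sketch below, from the network sizes to the placeholder “real” images, is an illustrative assumption.

```python
# Hedged sketch: one adversarial training step for a tiny GAN on 28x28 images.
# StyleGAN is vastly larger; the "real" images here are random placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())          # noise -> fake image
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                           # image -> real/fake logit

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real_images = torch.rand(32, img_dim) * 2 - 1   # placeholder for a real dataset
noise = torch.randn(32, latent_dim)

# Discriminator step: push real images toward "1" and generated ones toward "0".
fake_images = G(noise).detach()
loss_d = bce(D(real_images), torch.ones(32, 1)) + bce(D(fake_images), torch.zeros(32, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to fool the discriminator into predicting "real".
loss_g = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

print(f"D loss {loss_d.item():.3f}, G loss {loss_g.item():.3f}")
```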

Wang explained that he created the site to raise awareness of the algorithm, and chose faces “because our brains are sensitive to that kind of image.” He added that it costs $150 a month to rent the server, as he needs a good amount of graphical power to run the website.

“It also started off as a personal agenda, mainly because none of my friends seem to believe this AI phenomenon, and I wanted to convince them,” Wang said. “This was the most shocking presentation I could send them. I then posted it on Facebook and it went viral from there.”

“I think eventually, given enough data, a big enough neural [network] can be teased into dreaming up many different kinds of scenarios,” Wang added.

Source: https://thispersondoesnotexist.com/
AND
https://mashable.com/