GPT-3 Could Make Google Search Engine Obsolete

According to The Economist, improved algorithms, powerful computers, and an increase in digitized data have fueled a revolution in machine learning, with new techniques in the 2010s resulting in "rapid improvements in tasks" including manipulating language. Software models are trained to learn by using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain". One architecture widely used in natural language processing (NLP) is the Transformer, a deep learning neural network first introduced in 2017; GPT-n models are built on this architecture. A number of NLP systems are capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.

On June 11, 2018, OpenAI researchers and engineers posted their original paper on generative language models: artificial intelligence systems that could be pre-trained on an enormous and diverse corpus of text, in a process they called generative pre-training (GP). The authors described how language understanding performance in natural language processing (NLP) was improved in GPT-n through a process of "generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task." This eliminated the need for human supervision and for time-intensive hand-labeling.
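
The GPT-n line that grew out of this work is now accessible through open-source tooling. As a minimal, hedged sketch – assuming the Hugging Face transformers library and the small publicly released "gpt2" checkpoint, neither of which is named in the article – the snippet below loads a generatively pre-trained model and produces a text continuation:

```python
# Minimal sketch: loading a publicly available GPT-style model and generating text.
# The "transformers" library, the "gpt2" checkpoint, and the prompt are
# illustrative assumptions, not part of the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Machine learning has transformed natural language processing by"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation; the pre-trained weights come from generative
# pre-training on a large unlabeled text corpus.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Fine-tuning the same pre-trained weights on a labeled, task-specific dataset corresponds to the "discriminative fine-tuning" step the paper describes.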

In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was claimed to be the "largest language model ever published at 17 billion parameters." It performed better than any other language model at a variety of tasks which included summarizing texts and answering questions.

Eye Exam Could Predict a Heart Attack

Soon, retinal scans may be able to predict heart attacks. New research has found that decreased complexity in the blood vessels at the back of the retina in the human eye is an early biomarker for myocardial infarction.

“For decades, I’ve always lectured that the eye is not just the window to the soul, but the window to the brain and the window to the body as well,” said ophthalmologist Dr. Howard R. Krauss.

Cardiologist Dr. Rigved Tadwalkar, who was not involved in the research, said that the findings were interesting. “[A]lthough we have known that examination of retinal vasculature can produce insights on cardiovascular health, this study contributes to the evidence base that characteristics of the retinal vasculature can be used for individual risk prediction for myocardial infarction,” he said.

“The greatest appeal,” underlined Dr. Krauss, who was also not involved in the study, “is that the photography station may be remote to the clinician, and perhaps, someday, even accessible via a smartphone.”

According to a press release, the project utilized data from the UK Biobank, which contains demographic, epidemiological, clinical, and genotyping data, as well as retinal images, for more than 500,000 individuals. Under demographic data, the data included individuals’ age, sex, smoking habits, systolic blood pressure, and body-mass index (BMI). The researchers identified about 38,000 white-British participants, whose retinas had been scanned and who later had heart attacks. The biobank provided retinal fundus images and genotyping information for these individuals.

At the back of the retina, on either side where it connects to the optic nerve, are two large systems of blood vessels, or vasculature. In a healthy individual, each resembles a tree branch, with similarly complex fractal geometry. For some people, however, this complexity is largely absent, and branching is greatly simplified. In this research, an artificial intelligence (AI) and deep learning model revealed a connection between low retinal vascular complexity and coronary artery disease.
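
The researchers’ actual deep learning pipeline is not detailed here, but vascular “complexity” of the kind described is often quantified with fractal measures. The following is an illustrative box-counting estimate of fractal dimension on a binary vessel mask – a sketch under stated assumptions (synthetic mask, hypothetical helper functions), not the study’s method:

```python
# Illustrative sketch only: estimating the fractal dimension of a binary
# retinal-vessel mask with box counting. This is NOT the study's method;
# the synthetic mask and helper names are assumptions for demonstration.
import numpy as np

def box_count(mask: np.ndarray, box_size: int) -> int:
    """Count boxes of side `box_size` that contain at least one vessel pixel."""
    h, w = mask.shape
    count = 0
    for y in range(0, h, box_size):
        for x in range(0, w, box_size):
            if mask[y:y + box_size, x:x + box_size].any():
                count += 1
    return count

def fractal_dimension(mask: np.ndarray) -> float:
    """Slope of log(count) vs log(1/box_size) approximates the fractal dimension."""
    sizes = [2, 4, 8, 16, 32, 64]
    counts = [box_count(mask, s) for s in sizes]
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Synthetic "vessel" pattern standing in for a segmented fundus image.
rng = np.random.default_rng(0)
mask = rng.random((256, 256)) > 0.97
print(f"Estimated fractal dimension: {fractal_dimension(mask):.2f}")
```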

The research was presented on June 12 at the annual conference of the European Society of Human Genetics.

Source: https://www.medicalnewstoday.com/

Eye Scan Predicts Mortality Risk

Using deep learning to predict “retinal age” from images of the internal surface of the back of the eye, an international team of scientists has found that the difference between the biological age of an individual’s retina and that person’s real, chronological age is linked to their risk of death. This ‘retinal age gap’ could be used as a screening tool, the investigators suggest.

Reporting on the development of their deep learning model and research results in the British Journal of Ophthalmology, first author Zhuoting Zhu, PhD, of the Guangdong Academy of Medical Sciences, together with colleagues at the Centre for Eye Research Australia, Sun Yat-Sen University, and other institutions in China, Australia, and Germany, concluded that, in combination with previous research, their study results add weight to the hypothesis that “… the retina plays an important role in the aging process and is sensitive to the cumulative damages of aging which increase the mortality risk.”
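
The paper’s network itself is not reproduced here. As a rough sketch of the underlying idea – a model regresses age from a fundus photograph, and the “retinal age gap” is predicted age minus chronological age – the toy PyTorch example below uses an assumed miniature architecture and a random placeholder image:

```python
# Illustrative sketch of the "retinal age gap" idea: a CNN regresses age from a
# fundus image, and the gap is predicted age minus chronological age. The tiny
# architecture and the random input below are assumptions, not the paper's model.
import torch
import torch.nn as nn

class AgeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single output: predicted age in years

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = AgeRegressor()
fundus_image = torch.rand(1, 3, 224, 224)   # placeholder for a real fundus photo
chronological_age = 62.0                    # placeholder value

with torch.no_grad():
    predicted_retinal_age = model(fundus_image).item()

retinal_age_gap = predicted_retinal_age - chronological_age
print(f"Retinal age gap: {retinal_age_gap:+.1f} years")
```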

Estimates suggest that the global population aged 60 years and over will reach 2.1 billion in 2050, the authors noted.

Aging populations place tremendous pressure on healthcare systems.

But while the risks of illness and death increase with age, these risks vary considerably between different people of the same age, implying that ‘biological aging’ is unique to the individual and may be a better indicator of current and future health. As the authors pointed out, “Chronological age is a major risk factor for frailty, age-related morbidity and mortality. However, there is great variability in health outcomes among individuals with the same chronological age, implying that the rate of aging at an individual level is heterogeneous. Biological age rather than chronological age can better represent health status and the aging process.”

Several tissue, cell, chemical, and imaging-based indicators have been developed to pick up biological aging that is out of step with chronological aging. But these techniques are fraught with ethical/privacy issues as well as often being invasive, expensive, and time consuming, the researchers noted.

Source: https://www.genengnews.com/

10 Artificial Intelligence Trends

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.

2. More personalization will take place in real time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Domino’s will learn when we are most likely to want pizza, and make sure the “Order Now” button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our human skills – imagination, design, strategy, and communication, which AI can’t quite manage yet – while augmenting them with super-fast analytics fed by vast datasets that are updated in real time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. The IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI.

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on the fly will increasingly become part of the technology we interact with day-to-day.
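
As one hedged illustration of what “edge” deployment can involve, the sketch below shrinks a toy PyTorch model with dynamic quantization so that inference could plausibly run on a low-power device rather than in a remote data center; the model and the input sizes are assumptions, not a description of any specific product:

```python
# Minimal sketch of one "edge AI" technique: shrinking a model with dynamic
# quantization so inference can run on a low-power device instead of the cloud.
# The toy model and input sizes are assumptions for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Convert Linear layers to int8 weights; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

sensor_reading = torch.rand(1, 128)      # placeholder for locally gathered data
with torch.no_grad():
    scores = quantized(sensor_reading)   # runs entirely on the local device
print(scores.shape)  # torch.Size([1, 10])
```

In practice such a model would typically also be exported (for example via TorchScript or ONNX) before being shipped to the device.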

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
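
As a simple, hedged illustration of this kind of pattern spotting – not any particular vendor’s method – the sketch below trains a scikit-learn Isolation Forest on synthetic “normal” activity and flags events that deviate from it:

```python
# Illustrative sketch: flagging unusual activity with an Isolation Forest
# (scikit-learn). The synthetic "transaction" features and contamination rate
# are assumptions; real systems use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal traffic: e.g. [bytes transferred, requests per minute] around typical values.
normal = rng.normal(loc=[500, 20], scale=[50, 5], size=(1000, 2))
# A few suspicious events with very different patterns.
suspicious = rng.normal(loc=[5000, 300], scale=[200, 20], size=(5, 2))

events = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = detector.predict(events)          # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(events)} events as anomalous")
```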

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world.

Source: https://www.forbes.com/

AI and Big Data To Fight Eye Diseases

‘In future, it will be possible to diagnose diabetes from the eye using automatic digital retinal screening, without the assistance of an ophthalmologist’: these were the words of Ursula Schmidt-Erfurth, Head of MedUni Vienna’s Department of Ophthalmology and Optometrics, as she opened the press conference for the ART-2018 Specialist Meeting on new developments in retinal therapy. Automatic diabetes screening has recently been implemented at MedUni Vienna.
Patients flock to the Department to undergo this retinal examination to detect any diabetic changes. It takes just a few minutes and is completely non-invasive.

Essentially, this technique can detect all stages of diabetic retinal disease – high-resolution digital retinal images with two million pixels are taken and analyzed within seconds – but Big Data offers even more potential: it is already possible to diagnose some 50 other diseases in this way. Diabetes is just the start. And MedUni Vienna is among the global leaders in this digital revolution.
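
MedUni Vienna’s screening system is not public, but the general recipe of grading fundus photographs with a pre-trained image classifier can be sketched roughly as follows; the ResNet backbone, the five-grade scale, and the random input are illustrative assumptions only:

```python
# Minimal sketch of the general approach (not MedUni Vienna's system): reusing a
# pre-trained image classifier to grade fundus photographs. The 5-grade scale,
# the ResNet backbone, and the random input are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 5  # e.g. no DR, mild, moderate, severe, proliferative (assumed labels)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_GRADES)  # new classification head

fundus_batch = torch.rand(4, 3, 224, 224)  # placeholder for preprocessed fundus images

backbone.eval()
with torch.no_grad():
    logits = backbone(fundus_batch)
    predicted_grade = logits.argmax(dim=1)

print(predicted_grade)  # one predicted severity grade per image
```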

The Division of Cardiology led by Christian Hengstenberg within the Department of Medicine II is working on how digital retinal analysis can also be used in future for the early diagnosis of cardiovascular diseases.

“This AI medicine is ‘superhuman’,” emphasizes Schmidt-Erfurth. “The algorithms are quicker and more accurate. They can analyze things that an expert cannot detect with the naked eye.” And yet the commitment to Big Data and Artificial Intelligence is not a plea for medicine without doctors, which some experts predict for the not-too-distant future. “What we want are ‘super doctors’ who are able to use the high-tech findings to make the correct, individualized therapeutic decision for their patients, in the spirit of precision medicine, rather than leaving patients on their own.”

However, it is not only in the diagnosis of diseases that Artificial Intelligence and Big Data, plus virtual reality, provide better results. “We are already performing digitized operations with support from Artificial Intelligence. This involves projecting a virtual and precise image of the area of the eye being operated on onto a huge screen – and the surgeon then performs the operation with a perfect view, ‘on screen’ as it were, while actually operating on the patient with a scalpel.”

Source: https://www.news-medical.net/

Want to Sound Like Barack Obama?

For your hair, there are wigs and hairstylists; for your skin, there are permanent and removable tattoos; for your eyes, there are contact lenses that disguise the shape of your pupils. In short, there’s a plethora of tools people can use if they want to give themselves a makeover—except for one of their signature features: their voice.

Sure, a Darth Vader voice-changing mask would do something about it, but people who want to sound like a celebrity or a person of the opposite sex need look no further than Boston-based startup Modulate.

Founded in August 2017 by two MIT grads, this self-funded startup is using machine learning to change your voice as you speak. This could be a celebrity’s voice (like Barack Obama’s), the voice of a game character or even a totally custom voice. With potential applications in the gaming and movie industries, Modulate has launched with a free online demo that allows users to play with the service.

The cool thing about Modulate is that the software doesn’t simply disguise your voice; it does something far more radical: it converts a person’s speech into someone else’s vocal cords, changing the very identity of the speech while keeping cadence and word choice intact. As a result, you still speak like you, but with someone else’s voice.

Source: https://www.americaninno.com/

Teaching a car how to drive itself in 20 minutes

Researchers from Wayve, a company founded by a team from the Cambridge University engineering department, have developed a neural network sophisticated enough to learn how to drive a car in 15 to 20 minutes using nothing but a computer and a single camera. The company showed off its robust deep learning methods last week in a company blog post showcasing the no-frills approach to driverless car development. Where companies like Waymo and Uber are relying on a variety of sensors and custom-built hardware, Wayve is creating the world’s first autonomous vehicles based entirely on reinforcement learning.

The AI powering Wayve’s self-driving system is remarkable for its simplicity. It’s a four-layer convolutional neural network that performs all of its processing on a GPU inside the car. It doesn’t require any cloud connectivity or use pre-loaded maps – Wayve’s vehicles are early-stage level-five autonomous. There’s a lot of work to be done before Wayve’s AI can drive any car under any circumstances. But the idea that driverless cars will require tens of thousands of dollars worth of extraneous hardware is taking a serious blow in the wake of the company’s amazing deep learning techniques. According to Wayve, these algorithms are only going to get smarter.
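
Wayve has not published its exact network, but the description above – a four-layer convolutional network mapping a single camera frame to driving commands, running on an in-car GPU – can be sketched roughly as follows; every layer size and the two-value output are assumptions:

```python
# Sketch of the idea described above: a small four-convolutional-layer network
# mapping a single camera frame to a steering command. Wayve's actual
# architecture is not public; every layer size here is an assumption.
import torch
import torch.nn as nn

class SteeringPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # assumed outputs: steering angle, speed target

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(frame).flatten(1))

policy = SteeringPolicy()
camera_frame = torch.rand(1, 3, 120, 160)  # placeholder for a single camera image
with torch.no_grad():
    action = policy(camera_frame)
print(action)  # e.g. tensor([[steering, speed]])
```

In a reinforcement learning setup such a policy would be improved from trial-and-error driving feedback rather than from labeled examples, which is consistent with the approach the article attributes to Wayve.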

Sources: https://wayve.ai/ and https://thenextweb.com/