GPT-3 Could Make Google Search Engine Obsolete

According to The Economist, improved algorithms, powerful computers, and an increase in digitized data have fueled a revolution in machine learning, with new techniques in the 2010s resulting in "rapid improvements in tasks" including manipulating language. Software models are trained to learn by using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain". One architecture used in natural language processing (NLP) is the Transformer, a deep learning neural network first introduced in 2017; GPT-n models are based on this Transformer architecture. A number of NLP systems are capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.

On June 11, 2018, OpenAI researchers and engineers posted their original paper on generative language models, artificial intelligence systems that could be pre-trained on an enormous and diverse corpus of text, in a process they called generative pre-training (GP). The authors described how language understanding performance in natural language processing (NLP) was improved in GPT-n through a process of "generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task." This eliminated the need for human supervision and for time-intensive hand-labeling.
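The core of generative pre-training is next-token prediction on raw, unlabeled text. As a minimal illustration (a toy bigram counter, not the Transformer-based GPT objective itself), the sketch below "pre-trains" on unlabeled text simply by counting which token tends to follow which:

```python
from collections import Counter, defaultdict

# Toy illustration: "generative pre-training" means learning to
# predict the next token from raw, unlabeled text. Here that is
# reduced to bigram counts; GPT uses a Transformer instead.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigram transitions observed in the unlabeled corpus.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next token under the bigram counts."""
    counts = transitions[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" -- the continuation seen most often
```

No labels were needed at any point: the training signal comes entirely from the text itself, which is what removes the hand-labeling step the paragraph above describes.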

In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was claimed to be the "largest language model ever published at 17 billion parameters." It performed better than any other language model on a variety of tasks, including summarizing texts and answering questions.


AI-generated Science Ready In Minutes

No matter how many months or years authors take to produce a scientific paper, Sabine Louët needs only a few seconds to generate a coherent 300-word summary of it. But she leaves the thinking to an artificial intelligence (AI) algorithm that statistically analyses the text, identifies meaningful words and phrases, and pieces it all together into a crisp, readable chunk.
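The statistical approach described above can be sketched as a simple extractive summarizer: score each sentence by the frequency of its content words, then keep the top-scoring sentences in their original order. This is an illustrative baseline only, not SciencePOD's actual pipeline:

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Frequency-based extractive summarization sketch: score each
    sentence by how often its content words occur in the whole text,
    then return the top sentences in original document order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}
    freq = Counter(w for w in words if w not in stop)
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(scored[:n_sentences])  # restore original order
    return " ".join(sentences[i] for i in keep)
```

Sentences that repeat the document's dominant vocabulary score highest, which is why the output tends to read as a "crisp, readable chunk" of the original's main thread.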

“We’re trying to tell a story, and we want to make it as digestible as possible,” says Louët, chief executive of SciencePOD, a Dublin-based science communication company.

As the volume of research continues to grow, natural-language processing programs that can rapidly sort and summarize scientific papers have become an increasingly important tool for scientific publishers and researchers alike, says Markus Kaindl, senior manager for data development at Springer Nature, which publishes Nature Index. (Nature Index is editorially independent of its publisher.)

He points to the roughly 2,000 papers published on COVID-19 each week, enough to overwhelm anyone trying to stay on top of the field. “It’s like an ocean of content, and it feels like our users are close to drowning,” he says. “We need to help them surf that wave instead.”

AI can help identify the papers most suited to a particular user’s needs. For example, Semantic Scholar, developed by the Allen Institute for Artificial Intelligence in Seattle, Washington, goes beyond keywords to rank the most relevant papers for any query. “It’s a brilliant platform because it really tries to understand what the publications are about,” Kaindl says. Springer Nature expects to go further by offering personalized summaries and search results. “If you are a senior career researcher, a postdoc or a principal investigator, your needs from a paper or a chapter may be very different from someone at an earlier career stage,” he says.
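For contrast with the semantic approach Kaindl praises, the sketch below shows what a purely keyword-based ranker looks like: bag-of-words cosine similarity between a query and each abstract. Systems like Semantic Scholar go beyond this overlap measure (for instance with learned embeddings); the example is a hedged baseline, not its actual method:

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words counts for a lowercase tokenization of the text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, abstracts):
    """Order abstracts by keyword overlap with the query, best first."""
    q = bow(query)
    return sorted(abstracts, key=lambda t: -cosine(q, bow(t)))
```

A keyword ranker like this misses paraphrases entirely (a query for "language models" would not match an abstract about "text generation"), which is exactly the gap semantic ranking is meant to close.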

The company has engaged SciencePOD and others to explore the use of AI to enhance content appeal and accessibility. “AI can really help us as science publishers, by summarizing information, translating it for wider audiences and increasing the impact,” says Kaindl.


Driven by a Desire to Build Smarter Robots

Founded in 2015, CloudMinds built a Cloud Robot Service Platform consisting of Human Augmented Robotics Intelligence with Extreme Reality (HARIX), a secure Virtual Backbone Network (VBN over 4G/5G), and the Robot Control Unit (RCU). HARIX is a highly scalable "cloud brain" that can operate millions of cloud robots of different types and service roles. It features a highly efficient multimedia switching engine, an MMO gaming engine, and a powerful AI cloud that seamlessly integrates best-of-breed AI technologies developed by CloudMinds and others, such as face and object recognition, voice recognition and NLP, navigation, and motion control (vision-controlled robotic grasping and movement), as well as third-party AI services. With a broad ecosystem of partners, CloudMinds' cloud robotic services power customer engagements in retail, hospitality, real estate, smart cities and a wide range of other vertical applications.

How do you make robots smarter? CloudMinds connects robots and devices to Cloud AI over secure Virtual Backbone Networks (VBN). The Human Augmented Robotics Intelligence with Extreme Reality (HARIX) platform is an ever-evolving "cloud brain" capable of operating millions of cloud robots performing different tasks. It also empowers robots and devices with Cloud AI capabilities such as Natural Language Processing (NLP), Computer Vision (CV), navigation, and vision-controlled manipulation.
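The split described above, a lightweight on-robot control unit that offloads heavy AI to a cloud brain, can be sketched as follows. All names here are hypothetical stand-ins, not CloudMinds' actual API; the network transport is mocked as a plain function call:

```python
def cloud_brain(request):
    """Stand-in for HARIX-style cloud inference: route each task
    type to the matching (trivially mocked) AI service."""
    services = {
        "speech": lambda data: {"text": f"transcript of {data}"},
        "vision": lambda data: {"objects": ["cup", "table"]},
    }
    handler = services.get(request["task"])
    return handler(request["data"]) if handler else {"error": "unknown task"}

class RobotControlUnit:
    """On-robot client: keeps no AI models locally, only forwards
    sensor payloads over the network and acts on the response."""
    def __init__(self, send):
        self.send = send  # network transport (mocked here as a function)

    def perceive(self, task, data):
        return self.send({"task": task, "data": data})

rcu = RobotControlUnit(send=cloud_brain)
print(rcu.perceive("vision", b"jpeg-bytes"))  # {'objects': ['cup', 'table']}
```

The design choice this illustrates is that upgrading the cloud-side services immediately upgrades every connected robot, without touching the hardware in the field.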

In the vision of CloudMinds' founders, helpful humanoid robots will be affordable for the average household by 2025.