GPT-3 Could Make Google Search Engine Obsolete

According to The Economist, improved algorithms, more powerful computers, and an increase in digitized data have fueled a revolution in machine learning, with new techniques in the 2010s resulting in "rapid improvements in tasks" including manipulating language. Software models are trained to learn by using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain". One architecture widely used in natural language processing (NLP) is the Transformer, a deep learning model first introduced in 2017. GPT-n models are based on this Transformer architecture. There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.
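
The core operation of the Transformer is scaled dot-product self-attention, in which every position in a sequence gathers information from every other position. The following NumPy sketch is purely illustrative (it is not the GPT-n implementation, and the dimensions and random inputs are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head self-attention: each position attends to every position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ V                                # weighted mix of value vectors

# Toy example: a sequence of 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one updated representation per token
```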

On June 11, 2018, OpenAI researchers and engineers posted their original paper on generative language models: artificial intelligence systems that can be pre-trained on an enormous and diverse corpus of text, in a process they called generative pre-training (GP). The authors described how language understanding performance in natural language processing (NLP) was improved in GPT-n through a process of "generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task." This eliminated the need for human supervision and for time-intensive hand-labeling.
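
As a rough sketch of the two-stage recipe quoted above (a toy skeleton, not OpenAI's code; the model, data and hyperparameters are placeholders), pre-training optimizes a next-token prediction loss on unlabeled text, and fine-tuning then trains a small discriminative head on labeled examples:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 64, 32

class TinyLM(nn.Module):
    """Stand-in for a Transformer language model: embeddings plus a next-token head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):              # tokens: (batch, seq)
        h = self.embed(tokens)              # (batch, seq, embed_dim)
        return self.lm_head(h), h

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: generative pre-training on unlabeled token streams (random placeholders here).
tokens = torch.randint(0, vocab_size, (8, 16))
logits, _ = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()

# Stage 2: discriminative fine-tuning of a small task head (here a 2-class classifier).
classifier = nn.Linear(embed_dim, 2)
task_opt = torch.optim.Adam(list(model.parameters()) + list(classifier.parameters()), lr=1e-4)
labeled, labels = torch.randint(0, vocab_size, (8, 16)), torch.randint(0, 2, (8,))
_, h = model(labeled)
task_loss = nn.functional.cross_entropy(classifier(h.mean(dim=1)), labels)
task_loss.backward(); task_opt.step(); task_opt.zero_grad()
```

No hand-labeled data is needed for the first stage; only the much smaller fine-tuning stage uses labels.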

In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was claimed to be the "largest language model ever published at 17 billion parameters." It performed better than any other language model at a variety of tasks which included summarizing texts and answering questions.

Smart Bandage

Millions of people with diseases and suppressed immune systems are often forced to cope with chronic wounds: often minor injuries that nonetheless take much longer to heal because of compromised health. In addition to vastly varying degrees of recovery, issues like diabetic ulcers are also incredibly expensive, with treatment for a single incident costing as much as $50,000. Overall, chronic injuries cost Americans $25 billion a year, but a remarkable new device could soon offer a much more effective and cost-efficient way to not only help patients heal, but do so better than ever before.

Photonic Chip Transmits All of the Internet’s Traffic Every Second

An international group of researchers from the Technical University of Denmark (DTU) and Chalmers University of Technology in Gothenburg, Sweden, has achieved dizzying data transmission speeds: they are the first in the world to transmit more than 1 petabit per second (Pbit/s) using only a single laser and a single optical chip. One petabit corresponds to 1 million gigabits.

In the experiment, the researchers succeeded in transmitting 1.8 Pbit/s, which corresponds to twice the total global Internet traffic, carried only by the light from one optical source. The light source is a custom-designed optical chip, which can use the light from a single infrared laser to create a rainbow spectrum of many colours, i.e. many frequencies. Thus, the one frequency (colour) of a single laser can be multiplied into hundreds of frequencies (colours) in a single chip.

All the colours are fixed at a specific frequency distance from each other – just like the teeth on a comb – which is why it is called a frequency comb. Each colour (or frequency) can then be isolated and used to imprint data. The frequencies can then be reassembled and sent over an optical fibre, thus transmitting data – even huge volumes of it, as the researchers have demonstrated.
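
A back-of-envelope check of the headline figure: if each comb colour is imprinted with data and sent through many parallel spatial channels of the fibre, the per-channel rates stay modest. The channel counts below are illustrative assumptions, not figures from this article:

```python
comb_lines     = 223        # assumed number of data-carrying comb colours (frequencies)
fiber_cores    = 37         # assumed number of parallel spatial channels in the fibre
total_rate_bps = 1.8e15     # 1.8 Pbit/s, as reported

per_channel_bps = total_rate_bps / (comb_lines * fiber_cores)
print(f"per-channel rate: {per_channel_bps / 1e9:.0f} Gbit/s")            # ~218 Gbit/s
print(f"total: {total_rate_bps / 1e9:,.0f} Gbit/s (1 Pbit = 1 million Gbit)")
```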

The experimental demonstration showed that a single chip could easily carry 1.8 Pbit/s, which—with contemporary state-of-the-art commercial equipment—would otherwise require more than 1,000 lasers. Victor Torres Company, professor at Chalmers University of Technology, is head of the research group that has developed and manufactured the chip.

“What is special about this chip is that it produces a frequency comb with ideal characteristics for fiber-optical communications – it has high optical power and covers a broad bandwidth within the spectral region that is interesting for advanced optical communications,” says Victor Torres Company.

Interestingly enough, the chip was not optimized for this particular application. “In fact, some of the characteristic parameters were achieved by coincidence and not by design,” adds Victor Torres Company. “However, with efforts in my team, we are now capable of reverse engineering the process and achieving, with high reproducibility, microcombs for target applications in telecommunications.”

In addition, the researchers created a computational model to examine theoretically the fundamental potential for data transmission with a single chip identical to the one used in the experiment. The calculations showed enormous potential for scaling up the solution.

Professor Leif Katsuo Oxenløwe, Head of the Centre of Excellence for Silicon Photonics for Optical Communications (SPOC) at DTU, explains:

“Our calculations show that—with the single chip made by Chalmers University of Technology, and a single laser—we will be able to transmit up to 100 Pbit/s. The reason for this is that our solution is scalable—both in terms of creating many frequencies and in terms of splitting the frequency comb into many spatial copies and then optically amplifying them, and using them as parallel sources with which we can transmit data. Although the comb copies must be amplified, we do not lose the qualities of the comb, which we utilize for spectrally efficient data transmission.”
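
As a rough sanity check on that figure, and assuming the gain comes purely from making amplified spatial copies of the comb (the quote also mentions creating more frequencies), the required number of copies is modest:

```python
demonstrated_pbit = 1.8     # what the experiment achieved
target_pbit = 100.0         # the calculated potential
print(round(target_pbit / demonstrated_pbit))   # ~56 amplified comb copies in parallel
```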

Source: https://www.dtu.dk/

Researchers Build AI That Builds AI

Artificial intelligence is largely a numbers game. When deep neural networks, a form of AI that learns to discern patterns in data, began surpassing traditional algorithms 10 years ago, it was because we finally had enough data and processing power to make full use of them.

Today’s neural networks are even hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons. The goal is to find nearly ideal values for them, a process known as optimization, but training the networks to reach this point isn’t easy.
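
For readers unfamiliar with the term, "optimization" here simply means iteratively nudging parameters down the gradient of a loss until they are nearly ideal. A minimal sketch with just two parameters (a toy linear model, not a deep network):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)   # data generated with w = 3, b = 1

w, b, lr = 0.0, 0.0, 0.1
for step in range(200):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)          # d(loss)/dw for mean squared error
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # ≈ 3.0 and 1.0: the "nearly ideal" values
```

Deep networks repeat exactly this loop, but over millions or billions of parameters and far more expensive gradients, which is why training takes so long.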

“Training could take days, weeks or even months,” said Petar Veličković, a staff research scientist at DeepMind in London.

That may soon change. Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a “hypernetwork” — a kind of overlord of other neural networks — that could speed up the training process. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in fractions of a second, and in theory could make training unnecessary. Because the hypernetwork learns the extremely complex patterns in the designs of deep neural networks, the work may also have deeper theoretical implications.
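
The following toy sketch illustrates the hypernetwork idea only: one network emits, in a single forward pass, all the parameters of another, untrained network. It is a simplification for intuition, not the graph hypernetwork used by Knyazev and colleagues, which reads the target architecture's computation graph:

```python
import torch
import torch.nn as nn

in_dim, hidden, out_dim = 8, 16, 2
target_param_count = in_dim * hidden + hidden + hidden * out_dim + out_dim

# The hypernetwork maps a description of the target architecture (here just its
# layer sizes) to one flat vector holding every parameter of the target network.
hypernet = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, target_param_count))
arch_descriptor = torch.tensor([[float(in_dim), float(hidden), float(out_dim)]])
flat = hypernet(arch_descriptor).squeeze(0)

# Slice the flat vector into the target MLP's weights and biases.
i = 0
def take(n, shape):
    global i
    chunk = flat[i:i + n].reshape(shape)
    i += n
    return chunk

w1, b1 = take(in_dim * hidden, (hidden, in_dim)), take(hidden, (hidden,))
w2, b2 = take(hidden * out_dim, (out_dim, hidden)), take(out_dim, (out_dim,))

def target_forward(x):
    """The 'new, untrained' network, run with the predicted parameters."""
    return torch.relu(x @ w1.T + b1) @ w2.T + b2

print(target_forward(torch.randn(4, in_dim)).shape)   # (4, 2): usable without training the target
```

In the real system the hypernetwork itself is trained across many sampled architectures, so that the parameters it predicts already give reasonable accuracy.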

For now, the hypernetwork performs surprisingly well in certain settings, but there’s still room for it to grow — which is only natural given the magnitude of the problem. If they can solve it, “this will be pretty impactful across the board for machine learning,” said Veličković.

Source: https://www.quantamagazine.org

All of Human Knowledge in a Tablespoon of DNA

A tiny strand of DNA has an amazing ability to store some pretty big data. In fact, researchers estimate that all of the world’s digital data could fit into just 20 grams (a little more than a tablespoon) of DNA.

Every time we do anything on the internet, such as upload a photo or send an email, we generate data — which means we generate a lot of data. Most of our data is stored in the “cloud” — which really just means massive data centers around the world.

However, these centers take up a lot of space, are costly to maintain, and account for nearly 2% of U.S. energy consumption. Plus, we’re going to keep generating more and more data, and existing centers won’t be able to store it all. We’ll either need to start discarding a bunch of potentially useful information, or we’ll need to think smaller and figure out how to store more information in less space.

Information in DNA can be sequenced, synthesized, copied, and stored… just like the information on your phone or computer’s hard drive. However, when it comes to storage, DNA has some considerable advantages. Aside from being extremely dense, it’s also extremely durable: it can survive passively for millennia. We were able to decode information from the DNA found in the 5,300-year-old corpse of Ötzi the Iceman.
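
The analogy to a hard drive can be made concrete: two bits map onto one of DNA's four bases, so any file can be written as a strand of A, C, G and T. The sketch below shows only that bare mapping; real systems add error correction, addressing, and constraints that avoid hard-to-synthesize sequences:

```python
BASES = "ACGT"   # 2 bits of information per base: 00->A, 01->C, 10->G, 11->T

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(f"{BASES.index(base):02b}" for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hello")
print(strand)           # 20 bases: 4 bases per byte
print(decode(strand))   # b'Hello'
```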

That’s clearly not the case with silicon microchips or magnetic tape, which tend to degrade and need replacing every few years. Last year, researchers announced that they had stored the English-language version of Wikipedia (yes, all of it!) in a few strands of DNA. This year, DNA has been used to store The Wizard of Oz and an episode of Biohackers (a Netflix series). Other works have been saved too: War and Peace, Shakespeare’s sonnets, and an OK Go music video.

While you and I won’t be buying synthetic DNA hard drives anytime soon, considerable efforts are underway to make this a viable storage system for those who need to store a bunch of important, but rarely accessed, archival data (e.g., governments and big corporations). Researchers expect that more consumer-friendly options are at least a decade away.

In order to retrieve the information, you’ll need a DNA sequencing machine (the same machines used for sequencing human genomes), and that’s one of the hurdles to making this a household technology. As CNET points out, one of the machines “would fit easily in your house if you first got rid of your refrigerator, oven and some counter space.”

Another hurdle is the time it takes to retrieve the information. Last year, it took 21 hours to retrieve the word “Hello” (roughly 5 bytes of data) from DNA. If we’re going to use DNA as a storage system, we’ll have to make it many orders of magnitude faster.
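
The size of that gap is easy to put in numbers; the comparison rate below (a modest hard-drive read speed) is an assumption chosen only for scale:

```python
import math

dna_rate_Bps    = 5 / (21 * 3600)   # 5 bytes in 21 hours ≈ 6.6e-5 bytes per second
target_rate_Bps = 100e6             # 100 MB/s, an ordinary disk read, for comparison

gap = target_rate_Bps / dna_rate_Bps
print(f"{gap:.1e}x, i.e. about {math.log10(gap):.0f} orders of magnitude")   # ~1.5e12x, 12 orders
```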

Source: https://www.freethink.com/

Photonic Chips

Emitting light from silicon has been the ‘Holy Grail’ of the microelectronics industry for decades. Solving this puzzle would revolutionize computing, as chips would become faster than ever. Researchers from Eindhoven University of Technology (TU/e) have now succeeded: they have developed a silicon alloy that can emit light. The team will now start creating a silicon laser that can be integrated into current chips.

Every year we use and produce significantly more data. But our current technology, based on electronic chips, is reaching its ceiling. The limiting factor is heat, resulting from the resistance that electrons experience when traveling through the copper lines connecting the many transistors on a chip. If we want to continue transferring more and more data every year, we need a new technique that does not produce heat. Enter photonics, which uses photons (light particles) to transfer data. In contrast to electrons, photons experience no resistance: since they have no mass or charge, they scatter less within the material they travel through, so no heat is produced and energy consumption is reduced.

Moreover, by replacing the electrical communication within a chip with optical communication, the speed of on-chip and chip-to-chip communication can be increased by a factor of 1,000. Data centers would benefit most, with faster data transfer and less energy spent on cooling. But photonic chips will also bring new applications within reach. Think of laser-based radar for self-driving cars and chemical sensors for medical diagnosis or for measuring air and food quality.

To use light in chips, you will need a light source; an integrated laser. The main semiconductor material that computer chips are made of is silicon. But bulk silicon is extremely inefficient at emitting light, and so was long thought to play no role in photonics. Thus, scientists turned to more complex semiconductors, such as gallium arsenide and indium phosphide. These are good at emitting light but are more expensive than silicon and are hard to integrate into existing silicon microchips.

To create a silicon-compatible laser, scientists needed to produce a form of silicon that can emit light. That is exactly what researchers from Eindhoven University of Technology (TU/e) have now succeeded in doing. Together with researchers from the universities of Jena, Linz and Munich, they combined silicon and germanium in a hexagonal structure that is able to emit light – a breakthrough after 50 years of work.

Nanowires with hexagonal silicon-germanium shells

“The crux is in the nature of the so-called band gap of a semiconductor,” says lead researcher Erik Bakkers from TU/e. “If an electron ‘drops’ from the conduction band to the valence band, a semiconductor emits a photon: light.” But if the conduction band and valence band are displaced with respect to each other, which is called an indirect band gap, no photons can be emitted – as is the case in silicon. “A 50-year-old theory showed, however, that silicon alloyed with germanium and shaped in a hexagonal structure does have a direct band gap, and therefore potentially could emit light,” explains Bakkers.
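
The band gap also sets the colour of the emitted light, via the photon energy relation λ = h·c / E. The 0.7 eV band gap used below is an illustrative assumption for a hexagonal silicon-germanium alloy, not a figure from the article:

```python
h  = 6.626e-34    # Planck constant, J·s
c  = 2.998e8      # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

band_gap_eV = 0.7                              # assumed direct band gap of hexagonal SiGe
wavelength_m = h * c / (band_gap_eV * eV)
print(f"{wavelength_m * 1e6:.2f} µm")          # ≈ 1.77 µm: infrared light, useful for optical links
```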

Shaping silicon into a hexagonal structure, however, is not easy. Because Bakkers and his team master the technique of growing nanowires, they were able to create hexagonal silicon in 2015. They realized pure hexagonal silicon by first growing nanowires made from another material with a hexagonal crystal structure, and then growing a silicon-germanium shell on this template. Elham Fadaly, shared first author of the study: “We were able to do this in such a way that the silicon atoms are built on the hexagonal template, which forced the silicon atoms to grow in the hexagonal structure.” But they could not yet make the wires emit light – until now. Bakkers’ team managed to increase the quality of the hexagonal silicon-germanium shells by reducing the number of impurities and crystal defects. When exciting the nanowires with a laser, they could measure the efficiency of the new material. Alain Dijkstra, also shared first author of the study and responsible for measuring the light emission: “Our experiments showed that the material has the right structure and that it is free of defects. It emits light very efficiently.”

The findings have been published in the journal Nature.

Source: https://www.tue.nl/

AI Detects Visual Signs Of Covid-19

Zhongnan Hospital of Wuhan University in Wuhan, China, is at the heart of the outbreak of Covid-19, the disease caused by the new coronavirus SARS-CoV-2 that has shut down cities in China, South Korea, Iran, and Italy. That’s forced the hospital to become a testbed for how quickly a modern medical center can adapt to a new infectious disease epidemic.

One experiment is underway in Zhongnan’s radiology department, where staff are using artificial intelligence software to detect visual signs of the pneumonia associated with Covid-19 on lung CT scan images. Haibo Xu, professor and chair of radiology at Zhongnan Hospital, says the software helps overworked staff screen patients and prioritize those most likely to have Covid-19 for further examination and testing.

Detecting pneumonia on a scan doesn’t alone confirm a person has the disease, but Xu says doing so helps staff diagnose, isolate, and treat patients more quickly. The software “can identify typical signs or partial signs of Covid-19 pneumonia,” he wrote. Doctors can then follow up with other examinations and lab tests to confirm a diagnosis of the disease. Xu says his department was quickly overwhelmed as the virus spread through Wuhan in January.
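
Infervision's software is proprietary, but the general shape of such a screening tool is a convolutional classifier over CT images that flags suspicious slices for radiologist review. The sketch below is a hypothetical illustration only; the model, preprocessing and threshold are assumptions, not details from the article:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained ResNet-18 backbone with a 2-class head ("no findings" vs "signs of
# viral pneumonia"); in practice the head would be fine-tuned on labeled CT slices.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def flag_slice(path: str, threshold: float = 0.5) -> bool:
    """Return True if the slice should be prioritized for radiologist review."""
    img = Image.open(path).convert("RGB")       # CT slice exported as an image
    x = preprocess(img).unsqueeze(0)            # shape: (1, 3, 224, 224)
    with torch.no_grad():
        prob = torch.softmax(model(x), dim=1)[0, 1].item()
    return prob >= threshold
```

Such a tool does not replace diagnosis; as the article notes, flagged cases still go to doctors for further examinations and lab tests.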

The software in use at Zhongnan was created by Beijing startup Infervision, which says its Covid-19 tool has been deployed at 34 hospitals in China and used to review more than 32,000 cases. The startup, founded in 2015 with funding from investors including early Google backer Sequoia Capital, is an example of how China has embraced applying artificial intelligence to medicine.

China’s government has urged development of AI tools for healthcare as part of sweeping national investments in artificial intelligence. China’s relatively lax rules on privacy allow companies such as Infervision to gather medical data to train machine learning algorithms in tasks like reading scans more easily than US or European rivals.

Infervision created its main product, software that flags possible lung problems on CT scans, using hundreds of thousands of lung images collected from major Chinese hospitals. The software is in use at hospitals in China and is being evaluated by clinics in Europe and the US, primarily to detect potentially cancerous lung nodules.

Infervision began work on its Covid-19 detector early in the outbreak after noticing a sudden shift in how existing customers were using its lung-scan-reading software. In mid-January, not long after the US Centers for Disease Control advised against travel to Wuhan due to the new disease, hospitals in Hubei Province began employing a previously little-used feature of Infervision’s software that looks for evidence of pneumonia, says CEO Kuan Chen. “We realized it was coming from the outbreak,” he says.

Source: https://www.wired.com/

5G Technology, 22 Times More Powerful Than 4G

Researchers at the universities of Lund (Sweden) and Bristol (UK) have conducted a number of experiments using a form of 5G technology called Massive MIMO (multiple input, multiple output), and set not one but two world records in so-called spectrum efficiency for wireless communication. Spectrum efficiency measures how much data can successfully be packed into a radio signal transmitted from an antenna.
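
Spectral (or spectrum) efficiency is simply the data rate delivered divided by the radio bandwidth occupied. Massive MIMO raises it by serving many users on the same band at the same time. The figures below are illustrative assumptions, not the recorded values:

```python
def spectral_efficiency(sum_rate_bps: float, bandwidth_hz: float) -> float:
    return sum_rate_bps / bandwidth_hz      # bit/s per Hz

# One 4G-like user stream: tens of Mbit/s in a 20 MHz channel.
print(spectral_efficiency(30e6, 20e6), "bit/s/Hz")          # 1.5

# Massive MIMO serving 22 such users simultaneously on the same 20 MHz:
print(spectral_efficiency(22 * 30e6, 20e6), "bit/s/Hz")     # 33.0: 22 times the data in the same band
```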

This 5G technology developed by the researchers is extremely efficient – in fact, the most efficient technology ever when it comes to managing many simultaneous users. The latest world record was set when researchers from Lund and Bristol attained more than 20 times the total data speed of today’s 4G technology, thereby almost doubling their previous record, in which, using the same technology, they achieved a twelve-fold improvement.

“Setting a new world record was a significant event, as it demonstrated that it is possible to transmit 22 times more data compared to current wireless systems. Although the goal is for 5G to increase the total transmission capacity by a factor of 1,000, this is still a big step,” says Steffen Malkowsky, researcher in Electrical and Information Technology at the Lund University Faculty of Engineering.

Source: https://www.lunduniversity.lu.se/

A stamp-sized nanofilm stores more data than 200 DVDs

Ninety percent of the world’s data has been created in the last two years, with a massive 2.5 quintillion bytes generated every single day. As you might suspect, this causes some challenges when it comes to storage. While one option is to gradually turn every square inch of free land into giant data centers, researchers from the Center for Advanced Optoelectronic Functional Material Research at Northeast Normal University (China) may have come up with a more elegant solution. In a potential breakthrough, they have developed a new nanofilm – 80 times thinner than a human hair – that is able to store large amounts of data holographically. A single 10-by-10 cm piece of this film could archive more than 1,000 times the amount of data found on a DVD. By our count, that means around 8.5 TB of data. This data can also be retrieved incredibly quickly, at speeds of up to 1 GB per second – the equivalent of 20 times the reading speed of modern flash memory.
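
The arithmetic behind "by our count": 1,000 times the capacity of a dual-layer DVD (8.5 GB) gives 8.5 TB; a single-layer disc (4.7 GB) would imply about 4.7 TB instead. The stated 1 GB/s read speed then sets how long dumping the whole film would take:

```python
dvd_dual_layer_gb = 8.5
capacity_tb = 1000 * dvd_dual_layer_gb / 1000          # 8.5 TB per 10-by-10 cm film
read_rate_gb_per_s = 1.0                               # 1 GB/s, as stated

print(capacity_tb, "TB")
print(capacity_tb * 1000 / read_rate_gb_per_s / 3600, "hours to read the full film")   # ≈ 2.4
```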

In the journal Optical Materials Express, the researchers detail the fabrication process of the new film. This involves using a laser to write information onto silver nanoparticles on a titanium dioxide (titania) semiconductor film. This stores the data in the form of 3D holograms, thereby allowing it to be compressed into smaller spaces than regular optical systems.

That’s exciting enough, but what really makes the work promising is the fact that the data is stored in a way that is stable. Previous attempts at creating films for holographic data storage have proven less resilient than alternate storage methods since they can be wiped by exposure to ultraviolet light. That makes them less-than-viable options for long-term information storage. The creators of this new film, however, have shown that it has a high stability even in the presence of such light. This environmental stability means that the device could be used outside — or even conceivably in harsher radiation conditions like outer space.

Going forward, the researchers aim to test their new film by putting it through its paces outdoors. Should all go according to plan, it won’t be too long before this is available on the market. We might be willing to throw down a few bucks on Kickstarter for a piece!

Sources: https://www.osapublishing.org and https://www.digitaltrends.com/