What is generative AI?

Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.

Generative AI systems fall under the broad category of machine learning, and here’s how one such system—ChatGPT—describes what it can do:

“Ready to take your creativity to the next level? Look no further than generative AI! This nifty form of machine learning allows computers to generate all sorts of new and exciting content, from music and art to entire virtual worlds. And it’s not just for fun: generative AI has plenty of practical uses too, like creating new product designs and optimizing business processes. So why wait? Unleash the power of generative AI and see what amazing creations you can come up with!”

Did anything in that paragraph seem off to you? Maybe not. The grammar is perfect, the tone works, and the narrative flows. That’s why ChatGPT—the GPT stands for generative pretrained transformer—is receiving so much attention right now. It’s a free chatbot that can generate an answer to almost any question it’s asked. Developed by OpenAI, and released for testing to the general public in November 2022, it’s already considered the best AI chatbot ever. And it’s popular too: over a million people signed up to use it in just five days. Starry-eyed fans posted examples of the chatbot producing computer code, college-level essays, poems, and even halfway-decent jokes.
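For readers who want to see the basic prompt-in, text-out pattern behind such systems, here is a minimal sketch using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for much larger systems; it is an illustration only, not how ChatGPT itself is built or accessed.

```python
# A minimal sketch of prompting an open-source generative language model.
# This is NOT how ChatGPT is accessed; it uses the small GPT-2 model from
# the Hugging Face `transformers` library as a stand-in to show the basic
# prompt-in, text-out pattern behind generative AI tools.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can be used to"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```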


Making Computer Chips With Human Cells

By 2030, smartphones could contain a super-powerful processor performing a quintillion operations per second, a thousand times faster than the models of 2020. This huge performance gain would be possible thanks to a new kind of biological chip that uses lab-grown human neurons. Unlike silicon chips, these biological chips can change their internal structure, adapting to a user’s usage pattern and leading to huge gains in efficiency.

In December 2021, Melbourne-based Cortical Labs grew groups of neurons (brain cells) that were incorporated into a computer chip. The resulting hybrid chip works because both brains and computer chips share a common language: electricity.

In silicon computers, electrical signals travel along metal wires that link different components together. In brains, neurons communicate with each other using electric signals across synapses (junctions between nerve cells). In Cortical Labs’ Dishbrain system, neurons are grown on silicon chips. These neurons act like the wires in the system, connecting different components. The major advantage of this approach is that the neurons can change their shape, grow, replicate, or die in response to the demands of the system.

Dishbrain could learn to play the arcade game Pong faster than conventional AI systems. The developers of Dishbrain said: “Nothing like this has ever existed before … It is an entirely new mode of being. A fusion of silicon and neuron.”
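The article does not describe the conventional AI systems used in that comparison. Purely for contrast, the following is a toy sketch of how a conventional reinforcement-learning agent might learn a Pong-like paddle-tracking task with tabular Q-learning; every detail of this miniature environment is invented for illustration.

```python
# A toy, purely conventional reinforcement-learning baseline for a Pong-like
# paddle-tracking task (tabular Q-learning). All details of this environment
# are invented; it is not the system Dishbrain was compared against.
import random

WIDTH = 8             # number of horizontal positions
ACTIONS = [-1, 0, 1]  # move paddle left, stay, move right
Q = {}                # Q[(ball, paddle)] -> list of action values

def q(state):
    return Q.setdefault(state, [0.0, 0.0, 0.0])

alpha, gamma, eps = 0.2, 0.9, 0.1

for episode in range(5000):
    ball = random.randrange(WIDTH)
    paddle = random.randrange(WIDTH)
    for step in range(WIDTH):          # a few moves before the ball "arrives"
        state = (ball, paddle)
        a = random.randrange(3) if random.random() < eps else max(range(3), key=lambda i: q(state)[i])
        paddle = min(WIDTH - 1, max(0, paddle + ACTIONS[a]))
        done = step == WIDTH - 1
        reward = (1.0 if paddle == ball else -1.0) if done else 0.0
        target = reward if done else gamma * max(q((ball, paddle)))
        q(state)[a] += alpha * (target - q(state)[a])

# After training, the greedy policy moves the paddle toward the ball.
test = (0, WIDTH - 1)
print("best action from", test, "->", ACTIONS[max(range(3), key=lambda i: q(test)[i])])
```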

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes its technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development.

While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, such brains can hold more data than an average iPad and can use this information a million times faster. The human brain, with its trillions of neural connections, is capable of performing 15 quintillion operations per second.

Source: https://theconversation.com/

Home-grown Semiconductors Ideal for Quantum Computing

Growing electronic components directly onto a semiconductor block avoids messy, noisy oxidation scattering that slows and impedes electronic operation. A UNSW (Australia) study out this month shows that the resulting high-mobility components are ideal candidates for high-frequency, ultra-small electronic devices, quantum dots, and for qubit applications in quantum computing.

Making computers faster requires ever-smaller transistors, with these electronic components now only a handful of nanometres in size. (There are around 12 billion transistors in the postage-stamp sized central chip of modern smartphones.)

However, in even smaller devices, the channel that the electrons flow through has to be very close to the interface between the semiconductor and the metallic gate used to turn the transistor on and off.  Unavoidable surface oxidation and other surface contaminants cause unwanted scattering of electrons flowing through the channel, and also lead to instabilities and noise that are particularly problematic for quantum devices.
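To see why less scattering matters, here is a rough back-of-envelope sketch assuming the textbook Drude relation μ = eτ/m* and a GaAs-like effective mass; the scattering times below are hypothetical and are not taken from the UNSW study.

```python
# Back-of-envelope: the Drude relation mu = e * tau / m* links electron
# mobility to the mean time between scattering events. The numbers below are
# hypothetical, chosen only to show the trend; they are not from the UNSW work.
E = 1.602e-19        # electron charge (C)
M_E = 9.109e-31      # free electron mass (kg)
m_eff = 0.067 * M_E  # approximate effective mass in a GaAs-like semiconductor

def mobility_cm2_per_Vs(tau_seconds):
    mu_si = E * tau_seconds / m_eff   # mobility in m^2 / (V s)
    return mu_si * 1e4                # convert to cm^2 / (V s)

for tau in (1e-13, 1e-12, 1e-11):     # longer tau = less scattering
    print(f"tau = {tau:.0e} s  ->  mobility ~ {mobility_cm2_per_Vs(tau):,.0f} cm^2/Vs")
```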

“In the new work we create transistors in which an ultra-thin metal gate is grown as part of the semiconductor crystal, preventing problems associated with oxidation of the semiconductor surface,” says lead author Yonatan Ashlea Alava.

“We have demonstrated that this new design dramatically reduces unwanted effects from surface imperfections, and show that nanoscale quantum point contacts exhibit significantly lower noise than devices fabricated using conventional approaches,” says Yonatan, who is a FLEET PhD student.

“This new all single-crystal design will be ideal for making ultra-small electronic devices, quantum dots, and for qubit applications,” comments group leader Prof Alex Hamilton at UNSW.

Collaborating with wafer growers at Cambridge University, the team at UNSW Sydney showed that the problem associated with surface charge can be eliminated by growing an epitaxial aluminium gate before removing the wafer from the growth chamber.

“We confirmed the performance improvement via characterisation measurements in the lab at UNSW,” says co-author Dr Daisy Wang.

The high conductivity in ultra-shallow wafers, and the compatibility of the structure with reproducible nano-device fabrication, suggests that MBE-grown aluminium gated wafers are ideal candidates for making ultra-small electronic devices, quantum dots, and for qubit applications.

Source: https://www.fleet.org.au/

Elon Musk Promises Demo Of A Working Neuralink Device Today

Neuralink, the secretive firm, has been relatively quiet since its first public event in July 2019, when Musk and his team explained how the firm plans to use chips to link human brains up to computers. On Tuesday, Musk revealed more details about the event via his Twitter page. The event is set to feature a “live webcast of a working Neuralink device,” giving the public its first glimpse of the device in action. The stream is scheduled to take place on Friday, August 28, at 3 p.m. Pacific time (6 p.m. Eastern time).

The motto hints at one of Musk’s biggest goals with Neuralink. While it’s currently focused on creating chips that could help medical patients, Musk has spoken before about his fear that artificial intelligence could one day outsmart humanity. Neuralink, Musk reasons, could help humans more effectively communicate with these smarter systems, and develop a symbiotic relationship with machines.

“It’s important that Neuralink solves this problem sooner rather than later, because the point at which we have digital superintelligence, that’s when we pass the singularity and things become just very uncertain,” Musk said in a November 2019 interview.

It’s an ambitious goal, but Musk has hinted that Friday’s event will be a more grounded affair. That doesn’t mean there won’t be surprises in store, however, and Musk’s comments suggest it could offer something spectacular.

Source: https://www.inverse.com/

10 Artificial Intelligence Trends

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.
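As a minimal flavour of that kind of repetitive work, the sketch below turns a raw data export into a formatted summary with no human in the loop; the column names and figures are hypothetical, and real RPA platforms are far more capable.

```python
# A trivially small taste of "robotic process automation": turning a raw CSV
# export into a formatted summary report with no human in the loop. The CSV
# content below stands in for a file an RPA bot would pick up automatically.
import csv
import io
from collections import defaultdict

raw_export = io.StringIO(
    "region,amount\n"
    "North,1200.50\n"
    "South,980.00\n"
    "North,310.25\n"
)

totals = defaultdict(float)
for row in csv.DictReader(raw_export):
    totals[row["region"]] += float(row["amount"])

report_lines = ["Monthly sales summary", "====================="]
for region, amount in sorted(totals.items()):
    report_lines.append(f"{region:<10} {amount:>12,.2f}")

print("\n".join(report_lines))   # a bot could instead email or file this report
```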

2. More personalization will take place in real-time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Domino’s will learn when we are most likely to want pizza, and make sure the “Order Now” button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.
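A deliberately tiny sketch of the idea behind such recommendations follows: score items for a user based on what similar users interacted with. The data is invented, and production recommenders at the companies named above are vastly more sophisticated.

```python
# A toy user-based recommendation: suggest items that similar users liked.
# The interaction data is invented; production recommenders are far richer.
purchases = {
    "ana":  {"pizza", "cola", "garlic bread"},
    "ben":  {"pizza", "cola", "ice cream"},
    "carl": {"salad", "water"},
}

def recommend(user):
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)        # simple similarity: shared items
        if overlap == 0:
            continue
        for item in theirs - mine:
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana"))   # ['ice cream'], liked by the most similar user
```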

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.
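As a hedged illustration of gathering training data from simulation rather than from the road, the toy sketch below uses a simple stopping-distance model to generate labelled examples; real driving simulators are incomparably more detailed.

```python
# A toy illustration of generating labelled training data from simulation
# instead of real-world trials. The "physics" here is a simple stopping-
# distance model; real driving simulators are vastly more detailed.
import random

def simulate_stopping_distance(speed_mps, friction):
    # reaction distance (1 s reaction time) + braking distance v^2 / (2*mu*g)
    g = 9.81
    return speed_mps * 1.0 + speed_mps ** 2 / (2 * friction * g)

dataset = []
for _ in range(1000):                    # thousands of "trials", no test track needed
    speed = random.uniform(5, 40)        # m/s
    friction = random.uniform(0.3, 0.9)  # dry vs. wet road
    dataset.append((speed, friction, simulate_stopping_distance(speed, friction)))

print(f"generated {len(dataset)} labelled examples, e.g. {dataset[0]}")
```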

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our human skills – those which AI can’t quite manage yet – such as imaginative, design, strategic, and communication skills, while augmenting them with super-fast analytics capabilities fed by vast datasets that are updated in real time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. The IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI.

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on the fly will increasingly become part of the technology we interact with day-to-day.
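Here is a minimal sketch of on-device (“edge”) inference, assuming a small TensorFlow Lite model file is already present on the device; the model file name is hypothetical.

```python
# A minimal sketch of running a compact model directly on an edge device with
# TensorFlow Lite, so raw data never has to leave the device. The model file
# name is hypothetical; any small quantized .tflite model would do.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="small_model.tflite")  # hypothetical file
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a locally gathered sample (dummy zeros here) and run inference on-device.
sample = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], sample)
interpreter.invoke()

print("on-device prediction:", interpreter.get_tensor(out["index"]))
```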

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.
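As a toy sketch of the dynamic-difficulty idea, the snippet below nudges an opponent’s skill level so a player’s recent win rate stays near a target; all constants are arbitrary and not drawn from any particular game.

```python
# A toy dynamic-difficulty controller: nudge the AI opponent's skill so the
# player's recent win rate stays near a target. All constants are arbitrary.
from collections import deque

TARGET_WIN_RATE = 0.5
recent = deque(maxlen=10)     # last 10 match results (1 = player won)
difficulty = 0.5              # 0 = trivial opponent, 1 = maximum skill

def record_match(player_won):
    global difficulty
    recent.append(1 if player_won else 0)
    win_rate = sum(recent) / len(recent)
    # Winning too often raises difficulty; losing too often lowers it.
    difficulty = min(1.0, max(0.0, difficulty + 0.05 * (win_rate - TARGET_WIN_RATE)))
    return difficulty

for result in [True, True, True, False, True]:
    print(f"player_won={result}  new difficulty={record_match(result):.2f}")
```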

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
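A minimal sketch of that idea follows, using scikit-learn’s IsolationForest to flag activity that deviates from normal patterns; the “login activity” numbers are invented, and real security products combine many more signals.

```python
# A minimal anomaly-detection sketch with scikit-learn's IsolationForest:
# flag activity whose pattern deviates from the bulk of normal behaviour.
# The "login activity" numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal behaviour: [logins per hour, distinct countries seen]
normal = np.column_stack([rng.normal(5, 1, 500), rng.normal(1, 0.2, 500)])
suspicious = np.array([[400.0, 7.0]])      # burst of logins from many countries

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print("normal sample:", detector.predict(normal[:1]))   # 1 = looks normal
print("suspicious:   ", detector.predict(suspicious))   # -1 = flagged as anomaly
```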

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world.

Source: https://www.forbes.com/

How To Merge Your Brain With A.I.

Elon Musk said startup Neuralink, which aims to build a scalable implant to connect human brains with computers, has already implanted chips in rats and plans to test its brain-machine interface in humans within two years, with a long-term goal of people “merging with AI.” Brain-machine interfaces have been around for a while. Some of the earliest successes with the technology include Brown University’s BrainGate, which first enabled a paralyzed person to control a computer cursor in 2006. Since then a variety of research groups and companies, including the University of Pittsburgh Medical Center and DARPA-backed Synchron, have been working on similar devices. There are two basic approaches: you can do it invasively, creating an interface with an implant that directly touches the brain, or you can do it non-invasively, usually by electrodes placed near the skin. (The latter is the approach used by startup CTRL-Labs, for example.)

Neuralink, says Musk, is going to go the invasive route. It has developed a chip containing an array of up to 96 small polymer threads, each with up to 32 electrodes (up to 3,072 electrodes in total), that can be implanted into the brain by a robot through a 2-millimeter incision. The threads are small (less than 6 micrometers), a point Musk stressed in remarks delivered Tuesday night and webcast. Once implanted, according to Musk, the chip would connect wirelessly to devices. “It basically Bluetooths to your phone,” he said. “We’ll have to watch the App Store updates to that one,” he added (the audience laughed).

Musk cofounded Neuralink in 2017 and serves as the company’s CEO, though it’s unclear how much involvement he has, given that he’s also serving as CEO of SpaceX and Tesla. Company cofounder and president Max Hodak has a biomedical engineering degree from Duke and has cofounded two other companies, MyFit and Transcriptic. Neuralink has raised $66.27 million in venture funding so far, according to Pitchbook, which estimates the startup’s valuation at $509.3 million. Both Musk and Hodak spoke about the potential for the company’s neural implants to improve the lives of people with brain damage and other brain disabilities. Its first goal, based on its discussions with such patients, is the ability to control a mobile device.

The company’s long-term goal is a bit more fantastical, and relates to Musk’s oft-repeated concerns over the dangers of advanced artificial intelligence. That goal is to use the company’s chips to create a “tertiary level” of the brain that would be linked to artificial intelligence. “We can effectively have the option of merging with AI,” he said. “After solving a bunch of brain-related diseases there is the mitigation of the existential threat of AI,” he continued.


In terms of progress, the company says that it has built a chip and a robot to implant it, and that the chip has been implanted into rats. According to the whitepaper the company has published (which has not yet undergone any peer review), it was able to record rat brain activity from its chips, and with many more channels than exist on current systems in use with humans. The first human clinical trials are expected next year, though Hodak mentioned that the company has not yet begun the FDA processes needed to conduct those tests.

Source: https://www.forbes.com/

AI Closer To The Efficiency Of The Brain

Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?

Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects.

“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”

The stochastic switching behavior is representative of a sigmoid switching behavior of a neuron. Such magnetic tunnel junctions can also be used to store synaptic weights. Roy presented the technology during the annual German Physical Sciences Conference earlier this month in Germany. The work also appeared in Frontiers in Neuroscience.

The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons. Magnetic tunnel junction devices show switching behavior, which is stochastic in nature. The Purdue group proposed a new stochastic training algorithm for synapses using spike timing dependent plasticity (STDP), termed Stochastic-STDP, which has been experimentally observed in the rat’s hippocampus. The inherent stochastic behavior of the magnet was used to switch the magnetization states stochastically based on the proposed algorithm for learning different object representations. “The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence. “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”
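The release names the algorithm but not its details, so the following is only a toy sketch of the general idea under stated assumptions: a binary “magnetic” synapse that switches state with a sigmoid probability depending on spike timing. It is not the Purdue group’s actual Stochastic-STDP algorithm.

```python
# A toy sketch of a stochastic, STDP-flavoured update for a binary synapse,
# loosely inspired by the ideas described above. The sigmoid probabilities
# and constants are assumptions for illustration, not the Purdue algorithm.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def stochastic_stdp_update(weight, dt, tau=10.0):
    """weight: 0 or 1 (binary magnetic state).
    dt: t_post - t_pre in ms. Pre-before-post (dt > 0) favours potentiation,
    post-before-pre (dt < 0) favours depression; both happen only with a
    sigmoid-shaped probability, mimicking stochastic magnet switching."""
    if dt > 0 and weight == 0:
        p_switch = sigmoid(4.0 * (1.0 - dt / tau))   # more likely for small dt
        if random.random() < p_switch:
            return 1
    elif dt < 0 and weight == 1:
        p_switch = sigmoid(4.0 * (1.0 + dt / tau))   # more likely for small |dt|
        if random.random() < p_switch:
            return 0
    return weight

w = 0
for dt in (2.0, 15.0, -3.0):
    w = stochastic_stdp_update(w, dt)
    print(f"dt={dt:+.1f} ms -> weight={w}")
```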

Source: https://www.purdue.edu/

First Woman To Win Mathematics’ Prestigious Abel Prize

This year, another glass ceiling broke when Karen Uhlenbeck became the first woman to win mathematics’ prestigious Abel Prize. Her achievement no doubt inspired legions of young girls already passionate about STEM—science, technology, engineering, and mathematics—and served as a salute to the women mathematicians who came before.

One such woman is NASA mathematician Katherine Johnson, who once said of her love of math, “I counted everything. I counted the steps to the road, the steps up to church, the number of dishes and silverware I washed … anything that could be counted, I did.”

Here, for National Women’s History Month, we honor some coding pioneers whose careful calculations led to many of the world’s greatest technological advances, from programming the first computers to successfully putting humans on the moon.

Source: https://www.nationalgeographic.com/science/

Artificial Synapses Made from Nanowires

Scientists from Jülich together with colleagues from Aachen and Turin have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.

Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes, which increases the conductivity.

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers on the other hand rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are, however, only suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.
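As a hedged toy model of the memristive behaviour described here (not a physical model of the Jülich devices), the sketch below treats the device’s conductance as a stored weight that grows with positive voltage pulses and shrinks with negative ones.

```python
# A toy model of a memristive synapse: positive voltage pulses strengthen a
# conductive filament (higher conductance), negative pulses weaken it. The
# constants are arbitrary and this is not a physical model of the Jülich device.
class ToyMemristor:
    def __init__(self, g_min=0.01, g_max=1.0):
        self.g = g_min                 # conductance acts as the stored "weight"
        self.g_min, self.g_max = g_min, g_max

    def apply_pulse(self, voltage):
        # SET pulses grow the filament, RESET pulses shrink it.
        self.g += 0.1 * voltage * (self.g_max - self.g if voltage > 0 else self.g - self.g_min)
        self.g = min(self.g_max, max(self.g_min, self.g))

    def current(self, read_voltage=0.1):
        return self.g * read_voltage   # Ohm's law: stored state modulates the signal

m = ToyMemristor()
for v in (1.0, 1.0, 1.0, -1.0):        # three SET pulses, one RESET pulse
    m.apply_pulse(v)
    print(f"pulse {v:+.1f} V -> conductance {m.g:.3f}, read current {m.current():.4f}")
```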

Source: http://www.fz-juelich.de/