Dancing Robots

Boston Dynamics’ Atlas and Spot robots can do a lot of things: sprint, run gymnastic routines, do parkour, pull off backflips, open doors to let in an army of their friends, wash dishes, and (poorly) hold down actual jobs. But the company’s latest video adds another impressive trick to our future robotic overlords’ repertoire: busting sick dance moves.

The video sees Boston Dynamics’ entire lineup of robots — the humanoid Atlas, the dog-shaped Spot, and the box-juggling Handle — all come together in a bopping, coordinated dance routine set to The Contours’ “Do You Love Me.”

It’s not the first time Boston Dynamics has shown off its robots’ dancing skills: the company showcased a video of its Spot robot doing the Running Man to “Uptown Funk” in 2018. But the new video takes things to another level, with the Atlas robot tearing it up on the dance floor: smoothly running, jumping, shuffling, and twirling through different moves.

Things get even more incredible as more robots file out, prancing around in the kind of coordinated dance routine that puts my own, admittedly awful human dancing to shame. Compared to the jerky movements of the 2016 iteration of Atlas, the new model almost looks like a CGI creation.

Boston Dynamics was recently purchased by Hyundai, which bought the robotics firm from SoftBank in a $1.1 billion deal. The company was originally founded in 1992 as a spin-off from the Massachusetts Institute of Technology, where it became known for its dog-like quadrupedal robots (most notably the DARPA-funded BigDog, a precursor to the company’s first commercial robot, Spot). It was bought by Alphabet’s X division in 2013, and then by SoftBank in 2017.

While the Atlas and Handle robots featured here are still just research prototypes, Boston Dynamics has recently started selling the Spot model to any company for the considerable price of $74,500. But can you really put a price on creating your own personal legion of boogieing robot minions?

Source: https://www.bostondynamics.com/
AND
https://www.theverge.com/

NanoRobots Injected Into Human Bodies

In 1959, former Cornell physicist Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom,” in which he described the opportunity for shrinking technology, from machines to computer chips, to incredibly small sizes. Well, the bottom just got more crowded. A Cornell-led collaboration has created the first microscopic robots that incorporate semiconductor components, allowing them to be controlled – and made to walk – with standard electronic signals. These robots, roughly the size of a paramecium, provide a template for building even more complex versions that utilize silicon-based intelligence, can be mass produced, and may someday travel through human tissue and blood.

The collaboration is led by Itai Cohen, professor of physics, Paul McEuen, the John A. Newman Professor of Physical Science – both in the College of Arts and Sciences – and their former postdoctoral researcher Marc Miskin, who is now an assistant professor at the University of Pennsylvania.

The walking robots are the latest iteration, and in many ways an evolution, of Cohen and McEuen’s previous nanoscale creations, from microscopic sensors to graphene-based origami machines. The new robots are about 5 microns thick (a micron is one-millionth of a meter), 40 microns wide and range from 40 to 70 microns in length. Each bot consists of a simple circuit made from silicon photovoltaics – which essentially functions as the torso and brain – and four electrochemical actuators that function as legs. As basic as the tiny machines may seem, creating the legs was an enormous feat.

“In the context of the robot’s brains, there’s a sense in which we’re just taking existing semiconductor technology and making it small and releasable,” said McEuen, who co-chairs the Nanoscale Science and Microsystems Engineering (NEXT Nano) Task Force, part of the provost’s Radical Collaboration initiative, and directs the Kavli Institute at Cornell for Nanoscale Science.

“But the legs did not exist before,” McEuen said. “There were no small, electrically activatable actuators that you could use. So we had to invent those and then combine them with the electronics.”
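The Cornell paper’s control code is not shown here, but a toy sketch can make the leg-sequencing idea concrete. The assumption below, that the four actuator legs are flexed as alternating front and back pairs to drag the body forward, is a simplification for illustration only; the names, stride length, and gait are hypothetical.

```python
# Toy sketch (not the Cornell control code): step a four-legged microrobot
# forward by alternately actuating its front and back leg pairs. The stride
# length and two-beat gait are invented for illustration.
FRONT, BACK = ("front_left", "front_right"), ("back_left", "back_right")

def step(position, phase):
    """Advance the robot by one half-cycle of a simple two-beat gait."""
    legs = FRONT if phase % 2 == 0 else BACK
    stride = 0.5                        # arbitrary units per half-cycle
    print(f"actuate {legs}, body advances {stride}")
    return position + stride

if __name__ == "__main__":
    pos = 0.0
    for phase in range(6):              # three full gait cycles
        pos = step(pos, phase)
    print(f"net displacement: {pos} units")
```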

The team’s paper, “Electronically Integrated, Mass-Manufactured, Microscopic Robots,” has been published in Nature.

Source: https://news.cornell.edu/
AND
https://thenextweb.com/

Stem Cells Used To Create First Living Robots

Be warned. If the rise of the robots comes to pass, the apocalypse may be a more squelchy affair than science fiction writers have prepared us for. Researchers from the University of Vermont and Tufts University have created the first living machines by assembling cells from African clawed frogs into tiny robots that move around under their own steam.

One of the most successful creations has two stumpy legs that propel it along on its “chest”. Another has a hole in the middle that researchers turned into a pouch so it could shimmy around with miniature payloads.

“These are entirely new lifeforms. They have never before existed on Earth,” said Michael Levin, the director of the Allen Discovery Center at Tufts University in Medford, Massachusetts. “They are living, programmable organisms.”

Roboticists tend to favour metal and plastic for their strength and durability, but Levin and his colleagues see benefits in making robots from biological tissues. When damaged, living robots can heal their wounds, and once their task is done they fall apart, just as natural organisms decay when they die.

Source: https://www.uvm.edu/
AND
https://www.theguardian.com/

Toyota To Build A Smart City Powered By Hydrogen

Japanese carmaker Toyota has announced plans to create a 175-acre smart city in Japan where it will test driverless cars and artificial intelligence. The project, announced at the Consumer Electronics Show in Las Vegas, will break ground at the base of Mount Fuji in 2021. Woven City will initially be home to 2,000 people who will test technologies including robots and smart homes. Toyota said in a press release that only driverless and electric vehicles will be allowed on the main streets of Woven City. Streets will be split into three types of thoroughfare: roads for fast vehicles, lanes shared by personal vehicles and pedestrians, and pedestrian-only footpaths.

Danish architect Bjarke Ingels has been commissioned to design the new city. His firm previously worked on projects including Google’s London and US headquarters. Toyota said the city will be powered by hydrogen fuel cells and solar panels fitted to the roofs of houses. Buildings in Woven City will mostly be made of wood and assembled using “robotised production methods,” Toyota said.

 “Building a complete city from the ground up, even on a small scale like this, is a unique opportunity to develop future technologies, including a digital operating system for the infrastructure.
“With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test connected AI technology, in both the virtual and physical realms, maximising its potential,” said Akio Toyoda, Toyota’s president.

Google has also experimented with the creation of its own smart city through its Sidewalk Labs division. The company is hoping to transform a 12-acre plot in Toronto’s waterfront district into a smart city, with the first homes due to appear in 2023.

Source: https://www.telegraph.co.uk/

10 Artificial Intelligence Trends

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.
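As a rough illustration of what this kind of “software robot” boils down to, here is a minimal sketch of an automated reporting task: reading records from a spreadsheet export and filling a document template for each one. The file name and fields are hypothetical, and real RPA platforms layer UI recording, scheduling, and error handling on top of loops like this.

```python
# Minimal RPA-style sketch: fill a report template from a CSV export.
# "team_stats.csv" and its columns are hypothetical examples.
import csv
from string import Template

REPORT = Template("Weekly report for $name: $tickets tickets closed, "
                  "$hours hours logged.")

def generate_reports(csv_path):
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield REPORT.substitute(row)

if __name__ == "__main__":
    for line in generate_reports("team_stats.csv"):
        print(line)
```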

2. More and more personalization will take place in real-time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Domino’s will learn when we are most likely to want pizza, and make sure the “Order Now” button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.
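The “right time” decision behind that Order Now button can be pictured as a small propensity model. The sketch below trains a classifier on synthetic ordering data and only shows the prompt when the predicted probability clears a threshold; the data, features, and threshold are all invented for illustration.

```python
# Illustrative propensity model on synthetic data (not any company's system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=500)
# Synthetic label: orders cluster around 19:00.
ordered = ((np.abs(hours - 19) <= 1) & (rng.random(500) < 0.7)).astype(int)

# Encode hour-of-day cyclically so 23:00 and 00:00 are neighbours.
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
model = LogisticRegression().fit(X, ordered)

def order_probability(hour):
    x = [[np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)]]
    return model.predict_proba(x)[0, 1]

for h in (12, 19, 23):
    action = "show prompt" if order_probability(h) > 0.3 else "hold back"
    print(f"{h}:00 -> {action}")
```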

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.
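To make the simulation point concrete, here is a toy stand-in: a simple kinematic model that produces a synthetic driving log (time, position, steering) without any vehicle leaving the lab. Real autonomous-driving simulators model sensors, traffic, and physics in far more detail; the wheelbase, speed, and manoeuvre below are arbitrary.

```python
# Toy driving-data generator: a kinematic bicycle-style model logging a
# gentle weaving manoeuvre. All parameters are arbitrary illustrations.
import math

def simulate(steps=100, dt=0.1, speed=10.0, wheelbase=2.5):
    x = y = heading = 0.0
    log = []
    for t in range(steps):
        steer = 0.2 * math.sin(t * dt)                  # steering angle (rad)
        heading += speed / wheelbase * math.tan(steer) * dt
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        log.append((round(t * dt, 1), round(x, 2), round(y, 2), round(steer, 3)))
    return log

if __name__ == "__main__":
    for row in simulate(steps=5):
        print(row)
```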

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our distinctly human skills – the ones AI can’t quite manage yet, such as imagination, design, strategy, and communication – while augmenting them with super-fast analytics fed by vast datasets that are updated in real time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. The IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI.

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on the fly will increasingly become part of the technology we interact with day-to-day.
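A sketch of that edge-versus-cloud split: a small model answers on-device when it is confident and defers to a remote service otherwise. The weights, threshold, and fallback here are placeholders rather than a real deployment.

```python
# Edge-vs-cloud sketch: confident predictions are served locally, uncertain
# ones are handed to a (stubbed) cloud service. Weights are made up.
import numpy as np

W = np.array([[0.8, -0.4, 0.1],
              [-0.2, 0.6, 0.3]])        # pretend pre-trained weights
b = np.array([0.05, -0.05])

def edge_predict(features):
    logits = W @ features + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()               # softmax over two classes

def classify(features, cloud_fallback):
    probs = edge_predict(np.asarray(features, dtype=float))
    if probs.max() > 0.8:                        # confident: answer on-device
        return int(probs.argmax()), "edge"
    return cloud_fallback(features), "cloud"     # otherwise defer to the cloud

if __name__ == "__main__":
    print(classify([1.0, 0.2, 0.4], cloud_fallback=lambda f: 1))
```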

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state of the art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go before their output is as enjoyable as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.
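The difficulty-adjustment idea can be reduced to a small feedback loop: track the player’s recent win rate and nudge the opponent’s skill toward a target. The window size, target, and update step below are invented for illustration, not taken from any shipping game.

```python
# Dynamic difficulty adjustment sketch: keep the player's win rate near 50%.
from collections import deque

class DifficultyTuner:
    def __init__(self, target_win_rate=0.5, window=10):
        self.results = deque(maxlen=window)
        self.target = target_win_rate
        self.ai_skill = 0.5              # 0 = pushover, 1 = maximum difficulty

    def record(self, player_won):
        self.results.append(1 if player_won else 0)
        win_rate = sum(self.results) / len(self.results)
        # Winning too often raises difficulty; losing too often eases it off.
        self.ai_skill = min(1.0, max(0.0,
                            self.ai_skill + 0.1 * (win_rate - self.target)))
        return self.ai_skill

if __name__ == "__main__":
    tuner = DifficultyTuner()
    for outcome in (True, True, True, False, True):
        print(f"player won={outcome}, AI skill now {tuner.record(outcome):.2f}")
```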

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.
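The “spot giveaway signs” approach described above usually comes down to anomaly detection: learn what normal activity looks like and flag what deviates. The sketch below uses an off-the-shelf isolation forest on synthetic login features; the features, numbers, and contamination setting are invented.

```python
# Anomaly-detection sketch on synthetic "login" data (hour of day, MB moved).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Normal traffic: logins around office hours, moving a few MB each.
normal = np.column_stack([rng.normal(13, 2, 500), rng.normal(5, 1, 500)])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[3.0, 40.0]])           # 3 a.m. login moving 40 MB
print(detector.predict(np.vstack([normal[:2], suspicious])))  # -1 flags anomalies
```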

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world.

Source: https://www.forbes.com/

AI-driven Robots Improve Solar Cells

In July 2018, Curtis Berlinguette, a materials scientist at the University of British Columbia in Vancouver, Canada, realized he was wasting his graduate student’s time and talent. He had asked the student to refine a key material in solar cells to boost its electrical conductivity. But the number of potential tweaks was overwhelming, from spiking the recipe with traces of metals and other additives to varying the heating and drying times.

Ada, an AI-driven robot, searches for new solar cell designs at the University of British Columbia

“There are so many things you can go change, you can quickly go through 10 million [designs] you can test,” Berlinguette says.

So he and colleagues outsourced the effort to a single-armed robot overseen by an artificial intelligence (AI) algorithm. Dubbed Ada, the robot mixed different solutions, cast them in films, performed heat treatments and other processing steps, tested the films’ conductivity, evaluated their microstructure, and logged the results. The AI interpreted each experiment and determined what to synthesize next. At a meeting of the Materials Research Society (MRS) last week, Berlinguette reported that the system quickly homed in on a recipe and heating conditions that created defect-free films ideal for solar cells. “What used to take us 9 months now takes us 5 days,” Berlinguette says.
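The loop Berlinguette describes can be pictured schematically: a surrogate model proposes the next processing condition, a “measurement” comes back, and the model is refit before the next proposal. The sketch below is not Ada’s code; it optimizes a single made-up “annealing temperature” against a hidden toy response standing in for the robot’s conductivity test.

```python
# Schematic closed-loop optimization: propose, "measure", refit, repeat.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(temperature):
    # Stand-in for the robot: hidden optimum at 450 (arbitrary units) + noise.
    return -((temperature - 450.0) / 100.0) ** 2 + np.random.normal(0, 0.02)

candidates = np.linspace(300, 600, 61).reshape(-1, 1)
X = [[300.0], [600.0]]
y = [run_experiment(300.0), run_experiment(600.0)]

for _ in range(8):
    surrogate = GaussianProcessRegressor().fit(X, y)
    mean, std = surrogate.predict(candidates, return_std=True)
    nxt = float(candidates[np.argmax(mean + std)][0])    # explore/exploit pick
    X.append([nxt])
    y.append(run_experiment(nxt))

print(f"best temperature found: {X[int(np.argmax(y))][0]:.0f}")
```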

Other materials scientists also reported successes with such “closed loop” systems, which combine the latest advances in automation with AI that directs how the experiments should proceed on the fly. Drug developers, geneticists, and investigators in other fields had already melded AIs and robots to design and do experiments, but materials scientists had lagged behind. DNA synthesizers can be programmed to assemble any combination of DNA letters, but there’s no single way to synthesize, process, or characterize materials, making it exponentially more complicated to develop an automated system that can be guided by an AI. Materials scientists are finally bringing such systems online. “It’s a superexciting area,” says Benji Maruyama, a materials scientist with the U.S. Air Force Research Laboratory east of Dayton, Ohio. “The closed loop is what is really going to make progress in materials research go orders of magnitude faster.”

With more than 100 elements in the periodic table and the ability to combine them in virtually limitless ways, the number of possible materials is daunting. “The good news is there are millions to billions of undiscovered materials out there,” says Apurva Mehta, a materials physicist at the Stanford Synchrotron Radiation Lightsource in Menlo Park, California. The bad news, he says, is that most are unremarkable, making the challenge of finding gems a needle-in-the-haystack problem. Robots have already helped. They are now commonly used to mix dozens of slightly different recipes for a material, deposit them on single wafers or other platforms, and then process and test them simultaneously. But simply plodding through recipe after recipe is a slow route to a breakthrough, Maruyama says. “High throughput is a way to do lots of experiments, but not a lot of innovation.”

To speed the process, many teams have added in computer modeling to predict the formula of likely gems. “We’re seeing an avalanche of exciting materials coming from prediction,” says Kristin Persson of Lawrence Berkeley National Laboratory (LBNL) in California, who runs a large-scale prediction enterprise known as the Materials Project. But those systems still typically rely on graduate students or experienced scientists to evaluate the results of experiments and determine how to proceed. Yet, “People still need to do things like sleep and eat,” says Keith Brown, a mechanical engineer at Boston University (BU). So, like Berlinguette, Brown and his colleagues built an AI-driven robotics system. Their goal was to find the toughest possible 3D-printed structures. Toughness comes from a blend of high strength and ductility, and it varies depending on the details of a structure, even if the material itself doesn’t change. Predicting which shape will be toughest isn’t feasible, Brown says. “You have to do the experiment.”

Source: https://www.sciencemag.org/

Tiny Robots Build Huge Structures

Today’s commercial aircraft are typically manufactured in sections, often in different locations — wings at one factory, fuselage sections at another, tail components somewhere else — and then flown to a central plant in huge cargo planes for final assembly. But what if the final assembly was the only assembly, with the whole plane built out of a large array of tiny identical pieces, all put together by an army of tiny robots? That’s the vision that graduate student Benjamin Jenett, working with Professor Neil Gershenfeld in MIT’s Center for Bits and Atoms (CBA), has been pursuing as his doctoral thesis work. The work has now reached the point where prototype versions of such robots can assemble small structures and even work together as a team to build up larger assemblies. The new work appears in the October issue of IEEE Robotics and Automation Letters, in a paper by Jenett, Gershenfeld, fellow graduate student Amira Abdel-Rahman, and CBA alumnus Kenneth Cheung SM ’07, PhD ’12, who is now at NASA’s Ames Research Center, where he leads the ARMADAS project to design a lunar base that could be built with robotic assembly.

“This paper is a treat,” says Aaron Becker, an associate professor of electrical and computer engineering at the University of Houston, who was not associated with this work. “It combines top-notch mechanical design with jaw-dropping demonstrations, new robotic hardware, and a simulation suite with over 100,000 elements,” he says. “What’s at the heart of this is a new kind of robotics, that we call relative robots,” Gershenfeld says. Historically, he explains, there have been two broad categories of robotics — ones made out of expensive custom components that are carefully optimized for particular applications such as factory assembly, and ones made from inexpensive mass-produced modules with much lower performance. The new robots, however, are an alternative to both. They’re much simpler than the former, while much more capable than the latter, and they have the potential to revolutionize the production of large-scale systems, from airplanes to bridges to entire buildings.
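A toy simulation can convey the core idea of identical assemblers building from identical parts, though it leaves out everything that makes the real hardware hard. The lattice, robot count, and placement rule below are invented for illustration.

```python
# Toy lattice-assembly sketch (not the MIT/CBA code): identical robots add
# identical voxels one at a time to grow a simple beam.
def build_beam(length, robots=2):
    placed = {(0, 0)}                          # seed voxel the robots start on
    frontier = 1
    while frontier < length:
        for r in range(robots):
            if frontier >= length:
                break
            placed.add((frontier, 0))          # each robot adds the next voxel
            print(f"robot {r} placed voxel at x={frontier}")
            frontier += 1
    return placed

if __name__ == "__main__":
    structure = build_beam(length=6)
    print(f"{len(structure)} voxels assembled")
```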

Source: http://news.mit.edu/
AND
https://www.popularmechanics.com/

Mimicking Mosquito Eyes To Create Artificial Lens

Anyone who’s tried to swat a pesky mosquito knows how quickly the insects can evade a hand or fly swatter. The pests’ compound eyes, which provide a wide field of view, are largely responsible for these lightning-fast actions. Now, researchers reporting in ACS Applied Materials & Interfaces have developed compound lenses inspired by the mosquito eye that could someday find applications in autonomous vehicles, robots or medical devices.

Compound eyes, found in most arthropods, consist of many microscopic lenses organized on a curved array. Each tiny lens captures an individual image, and the mosquito’s brain integrates all of the images to achieve peripheral vision without head or eye movement. The simplicity and multifunctionality of compound eyes make them good candidates for miniaturized vision systems, which could be used by drones or robots to rapidly image their surroundings. Joelle Frechette and colleagues from Johns Hopkins University wanted to develop a liquid manufacturing process to make compound lenses with most of the features of the mosquito eye.

To make each microlens, the researchers used a capillary microfluidic device to produce oil droplets surrounded by silica nanoparticles. Then, they organized many of these microlenses into a closely packed array around a larger oil droplet. They polymerized the structure with ultraviolet light to yield a compound lens with a viewing angle of 149 degrees, similar to that of the mosquito eye. The silica nanoparticles coating each microlens had antifogging properties, reminiscent of the nanostructures on mosquito eyes that allow the insects’ eyes to function in humid environments. The researchers could move, deform and relocate the fluid lenses, allowing them to create arrays of compound lenses with even greater viewing capabilities.
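A back-of-the-envelope estimate (not from the paper) shows why packing microlenses on a curved droplet widens the view: the overall viewing angle is roughly the angular extent of the curved array plus the acceptance angle of a single microlens. The numbers below are hypothetical and chosen only to land near the reported figure.

```python
# Rough viewing-angle estimate for a curved microlens array. Hypothetical
# numbers; the approximation simply adds the array's angular extent to one
# microlens's own field of view.
def total_viewing_angle(array_extent_deg, per_lens_fov_deg):
    return array_extent_deg + per_lens_fov_deg

# e.g. lenses covering a 120-degree cap, each accepting about 30 degrees:
print(total_viewing_angle(120, 30), "degrees")   # ~150, near the reported 149
```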

Source: https://www.acs.org/

Artificial Skin Recreates The Human Sense Of Pain

Prosthetic technology has taken huge strides in the last decade, but accurately simulating human-like sensation is a difficult task. New “electronic skin” technology developed at the Daegu Gyeongbuk Institute of Science and Technology (DGIST) in Korea could help replicate advanced “pain” sensations in prosthetics, and enable robots to understand tactile feedback, like the feeling of being pricked, or that of heat on skin.

Trying to recreate the human senses has been a driver of technologies throughout the 20th century, from TV to audio playback. Mimicry of tactile sensing has been a focus of several different research groups in the last few years, but advances have mainly improved the feeling of pressure and strength in prosthetics. Human sensation, however, can detect much more subtle cues. The DGIST researchers, led by Department of Information and Communication Engineering Professor Jae Eun Jang, needed to bring together expertise from several different fields to begin the arduous task of replicating these more complex sensations in their electronic skin, working with colleagues in DGIST’s Robotics and Brain Sciences departments.

“We have developed a core base technology that can effectively detect pain, which is necessary for developing future types of tactile sensor. As an achievement of convergence research by experts in nano engineering, electronic engineering, robotics engineering, and brain sciences, it will be widely applied to electronic skin that feels various senses, as well as to new human-machine interactions,” Jang explained.

The DGIST team effort has created a more efficient sensor technology, able to simultaneously detect pressure and heat. They also developed a signal processing system that adjusted pain responses depending on pressure, area, and temperature.
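As a hypothetical illustration of that kind of signal processing, the function below maps raw pressure, contact area, and temperature readings to a graded “pain” level: concentrated pressure and noxious temperatures push the score up. The thresholds and weights are invented, not the DGIST team’s values.

```python
# Hypothetical pain-signal mapping from (pressure, area, temperature).
def pain_level(pressure_kpa, area_mm2, temp_c):
    score = 0.0
    if pressure_kpa > 50:                        # sharp, concentrated pressure
        score += (pressure_kpa - 50) / (area_mm2 + 1)
    if temp_c > 45 or temp_c < 5:                # noxious heat or cold
        score += abs(temp_c - 25) / 10
    return min(10.0, score)                      # clamp to a 0-10 scale

print(pain_level(pressure_kpa=80, area_mm2=2, temp_c=25))    # pinprick-like
print(pain_level(pressure_kpa=10, area_mm2=100, temp_c=60))  # hot surface
```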

Source: https://www.dgist.ac.kr/
AND
https://www.technologynetworks.com/

Huawei is launching “Hongmeng” to replace Android

China’s Huawei is in the process of potentially launching its “Hongmeng” operating system (OS) to replace the U.S. Android OS, an executive said on Thursday, after Reuters reported that the company has applied to trademark the OS in various countries.

Huawei, the world’s biggest maker of telecoms network gear, has filed for a Hongmeng trademark in countries such as Cambodia, Canada, South Korea and New Zealand, data from the U.N. World Intellectual Property Organization (WIPO) shows. It also filed an application in Peru on May 27, according to the country’s anti-trust agency Indecopi. Data from a U.N. body showed that Huawei Technologies Co Ltd is aiming to trademark the OS in at least nine countries and Europe, in a sign it may be deploying a back-up plan in key markets as U.S. sanctions threaten its business model.

President Donald Trump’s administration last month put Huawei on a blacklist that barred it from doing business with U.S. tech companies such as Alphabet Inc, whose Android OS is used in Huawei’s phones. Andrew Williamson, vice president of Huawei’s public affairs and communications, said Hongmeng was moving forward.

“Huawei is in the process of potentially launching a replacement,” Williamson said in an interview in Mexico City. “Presumably we’ll be trying to put trademarks.”

Huawei has a back-up OS in case it is cut off from U.S.-made software, Richard Yu, chief executive of the company’s consumer division, told German newspaper Die Welt in an interview earlier this year. A U.S. official, meeting with officials in Europe to warn against buying Huawei equipment for next-generation mobile networks, said only time would tell if Huawei could diversify.

“It is a fair question to ask if one decides to go with Huawei and Huawei continues to be on our entity list, will Huawei be able to actually deliver what it promises any particular client,” Jonathan Fritz, the U.S. State Department’s director for international communications policy, told reporters in Brussels.

The company, also the world’s second-largest maker of smartphones, has not yet revealed details about its OS. The applications to trademark the OS show that Huawei wants to use Hongmeng for gadgets ranging from smartphones and portable computers to robots and car televisions.

Source: http://www.reuters.com