Researchers found that preschoolers prefer learning from what they perceive as a competent robot over an incompetent human. This study is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.
Engineers are harnessing artificial intelligence (AI) and wireless technology to unobtrusively monitor elderly people in their living spaces and provide early detection of emerging health problems.
Researchers have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot's movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.
Scientists have developed fully biodegradable, high-performance artificial muscles. Their research project marks another step towards green technology becoming a lasting trend in the field of soft robotics.
New research aims to increase autonomy for individuals with such motor impairments by introducing a head-worn device that will help them control a mobile manipulator. Teleoperated mobile manipulators can aid individuals in completing daily activities, but many existing technologies like hand-operated joysticks or web interfaces require a user to have substantial fine motor skills to […]
Synecoculture, a new farming method, involves growing mixed plant species together in high density. However, it requires complex operations, since species with different growing seasons and growth rates are planted on the same land. To address this challenge, researchers have developed a robot that can sow, prune, and harvest plants in dense vegetation growth. […]
Researchers have developed resilient artificial muscles that can enable insect-scale aerial robots to effectively recover flight performance after suffering severe damage.
The Walking Oligomeric Robotic Mobility System, or WORMS, is a reconfigurable, modular, multiagent robotics architecture for extreme lunar terrain mobility. The system could be used to assemble autonomous worm-like parts into larger biomimetic robots that could explore lava tubes, steep slopes, and the moon's permanently shadowed regions.
Robots can be useful as mental wellbeing coaches in the workplace -- but perception of their effectiveness depends in large part on what the robot looks like.
A researcher has solved a nearly 60-year-old game theory dilemma called the wall pursuit game, with implications for better reasoning about autonomous systems such as driverless vehicles.
Most animals can quickly transition from walking to jumping to crawling to swimming if needed without reconfiguring or making major adjustments. Most robots cannot. But researchers have now created soft robots that can seamlessly shift from walking to swimming, for example, or crawling to rolling using a bistable actuator made of 3D-printed soft rubber containing […]
Imagine for a moment, that we are on a safari watching a giraffe graze. After looking away for a second, we then see the animal lower its head and sit down. But, we wonder, what happened in the meantime? Computer scientists have found a way to encode an animal's pose and appearance in order to […]
A 'biocomputer' powered by human brain cells could be developed within our lifetime, according to researchers who expect such technology to exponentially expand the capabilities of modern computing and create novel fields of study.
A tiny robot that could one day help doctors perform surgery was inspired by the incredible gripping ability of geckos and the efficient locomotion of inchworms.
While apprehensions about employment and schools dominate headlines, the truth is that the effects of large-scale language models such as ChatGPT will touch virtually every corner of our lives. These new tools raise society-wide concerns about artificial intelligence's role in reinforcing social biases, committing fraud and identity theft, generating fake news, spreading misinformation and more. […]
Could an app tell if a first date is just not that into you? Engineers say the technology might not be far off. They trained a computer to identify the type of conversation two people were having based on their physiological responses alone.
By John P. Desmond, AI Trends Editor The AI stack defined by Carnegie Mellon University is fundamental to the approach being taken by the US Army for its AI development platform efforts, according to Isaac Faber, Chief Data Scientist at the US Army AI Integration Center, speaking at the AI World Government event held in-person and virtually […]
By John P. Desmond, AI Trends Editor Advancing trustworthy AI and machine learning to mitigate agency risk is a priority for the US Department of Energy (DOE), and identifying best practices for implementing AI at scale is a priority for the US General Services Administration (GSA). That’s what attendees learned in two sessions at the AI […]
By AI Trends Staff While AI in hiring is now widely used for writing job descriptions, screening candidates, and automating interviews, it poses a risk of wide discrimination if not implemented carefully. That was the message from Keith Sonderling, Commissioner with the US Equal Employment Opportunity Commission, speaking at the AI World Government event held live and virtually in […]
By John P. Desmond, AI Trends Editor More companies are successfully exploiting predictive maintenance systems that combine AI and IoT sensors to collect data, anticipate breakdowns, and recommend preventive action before machines break or fail, in a demonstration of an AI use case with proven value. This growth is reflected in optimistic market forecasts. […]
By Lance Eliot, the AI Trends Insider We already expect humans to exhibit flashes of brilliance. It might not happen all the time, but the act itself is welcomed and not altogether disturbing when it occurs. What about when Artificial Intelligence (AI) seems to display an act of novelty? Any such instance is bound to get our attention; […]
By John P. Desmond, AI Trends Editor Engineers tend to see things in unambiguous terms, which some may call black-and-white terms, such as a choice between right and wrong or good and bad. The consideration of ethics in AI is highly nuanced, with vast gray areas, making it challenging for AI software engineers to […]
By John P. Desmond, AI Trends Editor AI is more accessible to young people in the workforce who grew up as ‘digital natives’ with Alexa and self-driving cars as part of the landscape, giving them expectations grounded in their experience of what is possible. That idea set the foundation for a panel discussion at AI World […]
By John P. Desmond, AI Trends Editor Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event held virtually and in-person this week in Alexandria, Va. Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency […]
By AI Trends Staff Advances in the AI behind speech recognition are driving growth in the market, attracting venture capital and funding startups, posing challenges to established players. The growing acceptance and use of speech recognition devices are driving the market, which according to an estimate by Meticulous Research is expected to reach $26.8 billion […]
By Lance Eliot, the AI Trends Insider Are there things that we must not know? This is an age-old question. Some assert that there is the potential for knowledge that ought to not be known. In other words, there are ideas, concepts, or mental formulations that should we become aware of that knowledge it could be […]
NASA and the Defense Advanced Research Projects Agency (DARPA) announced Tuesday a collaboration to demonstrate a nuclear thermal rocket engine in space, an enabling capability for NASA crewed missions to Mars. NASA and DARPA will partner on the Demonstration Rocket for Agile Cislunar Operations, or DRACO, program.
“NASA will work with our long-term partner, DARPA, to develop and demonstrate advanced nuclear thermal propulsion technology as soon as 2027. With the help of this new technology, astronauts could journey to and from deep space faster than ever – a major capability to prepare for crewed missions to Mars,” said NASA Administrator Bill Nelson.
Using a nuclear thermal rocket allows for faster transit time, reducing risk for astronauts. Reducing transit time is a key component for human missions to Mars, as longer trips require more supplies and more robust systems. Maturing faster, more efficient transportation technology will help NASA meet its Moon to Mars Objectives.
Other benefits to space travel include increased science payload capacity and higher power for instrumentation and communication. In a nuclear thermal rocket engine, a fission reactor is used to generate extremely high temperatures. The engine transfers the heat produced by the reactor to a liquid propellant, which is expanded and exhausted through a nozzle to propel the spacecraft. Nuclear thermal rockets can be three or more times more efficient than conventional chemical propulsion.
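That efficiency comparison comes down to specific impulse. As a rough, back-of-the-envelope sketch (the figures here are illustrative assumptions, not numbers from NASA or DARPA), the ideal rocket equation shows how a higher specific impulse translates into more delta-v from the same propellant load:

```python
import math

def delta_v(isp_s: float, mass_initial_kg: float, mass_final_kg: float) -> float:
    """Ideal (Tsiolkovsky) rocket equation: delta-v in m/s."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(mass_initial_kg / mass_final_kg)

# Illustrative values only: a chemical stage near 450 s of specific impulse
# versus a nuclear thermal stage near 900 s, each burning a 100 t stack down to 40 t.
chemical = delta_v(450, 100_000, 40_000)
nuclear_thermal = delta_v(900, 100_000, 40_000)
print(f"chemical: {chemical / 1000:.1f} km/s, nuclear thermal: {nuclear_thermal / 1000:.1f} km/s")
```

More delta-v from the same propellant is what buys shorter transit times, or smaller supply loads, for a Mars trip.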
A Rutgers-led team of researchers has developed a microchip that can measure stress hormones in real time from a drop of blood.
Cortisol and other stress hormones regulate many aspects of our physical and mental health, including sleep quality. High levels of cortisol can result in poor sleep, which increases stress that can contribute to panic attacks, heart attacks and other ailments.
Currently, measuring cortisol requires costly and cumbersome laboratory setups, so the Rutgers-led team looked for a way to monitor its natural fluctuations in daily life and provide patients with feedback that allows them to receive the right treatment at the right time.
The researchers used the same technologies used to fabricate computer chips to build sensors thinner than a human hair that can detect biomolecules at low levels. They validated the miniaturized device’s performance on 65 blood samples from patients with rheumatoid arthritis.
“The use of nanosensors allowed us to detect cortisol molecules directly without the need for any other molecules or particles to act as labels,” said lead author Reza Mahmoodi, a postdoctoral scholar in the Department of Electrical and Computer Engineering at Rutgers University-New Brunswick.
With technologies like the team’s new microchip, patients can monitor their hormone levels and better manage chronic inflammation, stress and other conditions at a lower cost, said senior author Mehdi Javanmard, an associate professor in Rutgers’ Department of Electrical and Computer Engineering.
“Our new sensor produces an accurate and reliable response that allows a continuous readout of cortisol levels for real-time analysis,” he added. “It has great potential to be adapted to non-invasive cortisol measurement in other fluids such as saliva and urine. The fact that molecular labels are not required eliminates the need for large bulky instruments like optical microscopes and plate readers, making the readout instrumentation something you can measure ultimately in a small pocket-sized box or even fit onto a wristband one day.”
The study included Rutgers co-author Pengfei Xie, a Ph.D. student, and researchers from the University of Minnesota and University of Pennsylvania. The research was funded by the DARPA ElectRX program.
The study appears in the journal Science Advances.
An “unhackable” computer chip lived up to its name in its first bug bounty competition, foiling over 500 cybersecurity researchers who were offered tens of thousands of dollars to analyze it and three other secure processor technologies for vulnerabilities. MORPHEUS, developed by computer science researchers at the University of Michigan, weathered the three-month virtual program DARPA dubbed the Finding Exploits to Thwart Tampering—or FETT—Bug Bounty without a single successful attack. In bug bounty programs, organizations or software developers offer compensation or other incentives to individuals who can find and report bugs or vulnerabilities.
DARPA, the Defense Advanced Research Projects Agency, partnered with the Department of Defense’s Defense Digital Service and Synack, a crowdsourced security platform, to conduct FETT, which ran from June through August 2020. The program also tested technologies from MIT, Cambridge University, Lockheed Martin and nonprofit tech institute SRI International. The U-M team achieved its results by abandoning a cornerstone of traditional computer security—finding and eliminating software bugs, says team leader Todd Austin, the S. Jack Hu Collegiate Professor of Computer Science and Engineering. MORPHEUS works by reconfiguring key bits of its code and data about 20 times per second, turning any vulnerabilities into dead ends for hackers.
MORPHEUS blocks potential attacks by encrypting and randomly reshuffling key bits of its own code and data twenty times per second.
“Imagine trying to solve a Rubik’s Cube that rearranges itself every time you blink,” Austin said. “That’s what hackers are up against with MORPHEUS. It makes the computer an unsolvable puzzle.”
MORPHEUS has previously proven itself in the lab, but the FETT Bug Bounty marks the first time that it was exposed to a group of skilled cybersecurity researchers from around the globe. Austin says its success is further proof that computer security needs to move away from its traditional bugs-and-patches paradigm. “Today’s approach of eliminating security bugs one by one is a losing game,” he said. “Developers are constantly writing code, and as long as there is new code, there will be new bugs and security vulnerabilities. With MORPHEUS, even if a hacker finds a bug, the information needed to exploit it vanishes within milliseconds. It’s perhaps the closest thing to a future-proof secure system.”
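The article does not spell out MORPHEUS’s hardware internals, but the “puzzle that rearranges itself” idea can be illustrated with a toy moving-target sketch: sensitive values are kept encrypted under a key that is re-randomized on a short timer, so anything an attacker manages to leak goes stale before it can be exploited. This is a conceptual illustration only (the class name and 50 ms churn interval are assumptions for the example), not the MORPHEUS architecture itself.

```python
import secrets

class ChurningPointerTable:
    """Toy moving-target defense: leaked ciphertexts go stale after every churn."""

    def __init__(self) -> None:
        self._key = secrets.randbits(64)
        self._table: dict[str, int] = {}   # name -> encrypted address

    def store(self, name: str, address: int) -> None:
        self._table[name] = address ^ self._key

    def load(self, name: str) -> int:
        return self._table[name] ^ self._key

    def churn(self) -> None:
        """Re-encrypt everything under a fresh key; old leaks no longer match anything."""
        new_key = secrets.randbits(64)
        self._table = {n: (v ^ self._key) ^ new_key for n, v in self._table.items()}
        self._key = new_key

table = ChurningPointerTable()
table.store("handler", 0x7FFE_DEAD_BEEF)
leaked = table._table["handler"]   # what an attacker might exfiltrate via a bug
table.churn()                      # imagine this firing every 50 milliseconds
print(hex(leaked), hex(table._table["handler"]), hex(table.load("handler")))
```

A real processor has to do this churn in hardware, transparently and fast enough that legitimate code never notices, which is the hard part the Michigan team claims to have solved.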
For FETT, the MORPHEUS architecture was built into a computer system that housed a mock medical database. Computer experts were invited to try to breach it remotely. MORPHEUS was the second-most popular target of the seven processors evaluated.
Boston Dynamics’ Atlas and Spot robots can do a lot of things: sprinting, gymnastics routines, parkour, backflips, opening doors to let in an army of their friends, washing dishes, and (poorly) getting actual jobs. But the company’s latest video adds another impressive trick to our future robotic overlords’ repertoire: busting sick dance moves.
The video sees Boston Dynamics’ entire lineup of robots — the humanoid Atlas, the dog-shaped Spot, and the box-juggling Handle — all come together in a bopping, coordinated dance routine set to The Contours’ “Do You Love Me.”
It’s not the first time Boston Dynamics has shown off its robots’ dancing skills: the company showcased a video of its Spot robot doing the Running Man to “Uptown Funk” in 2018. But the new video takes things to another level, with the Atlas robot tearing it up on the dance floor: smoothly running, jumping, shuffling, and twirling through different moves.
Things get even more incredible as more robots file out, prancing around in the kind of coordinated dance routine that puts my own, admittedly awful human dancing to shame. Compared to the jerky movements of the 2016 iteration of Atlas, the new model almost looks like a CGI creation.
Boston Dynamics was recently purchased by Hyundai, which bought the robotics firm from SoftBank in a $1.1 billion deal. The company was originally founded in 1992 as a spin-off from the Massachusetts Institute of Technology, where it became known for its dog-like quadrupedal robots (most notably, the DARPA-funded BigDog, a precursor to the company’s first commercial robot, Spot.) It was bought by Alphabet’s X division in 2013, and then by Softbank in 2017.
While the Atlas and Handle robots featured here are still just research prototypes, Boston Dynamics has recently started selling the Spot model to any company for the considerable price of $74,500. But can you really put a price on creating your own personal legion of boogieing robot minions?
The never-ending saga of machines outperforming humans has a new chapter. An AI algorithm has again beaten a human fighter pilot in a virtual dogfight. The contest was the finale of the U.S. military’s AlphaDogfight challenge, an effort to “demonstrate the feasibility of developing effective, intelligent autonomous agents capable of defeating adversary aircraft in a dogfight.”
Last August, the Defense Advanced Research Projects Agency, or DARPA, selected eight teams ranging from large, traditional defense contractors like Lockheed Martin to small groups like Heron Systems to compete in a series of trials in November and January. In the final, on Thursday, Heron Systems emerged as the victor against the seven other teams after two days of old-school dogfights, going after each other using nose-aimed guns only. Heron then faced off against a human fighter pilot sitting in a simulator and wearing a virtual reality helmet, and won five rounds to zero.
The other winner in Thursday’s event was deep reinforcement learning, wherein artificial intelligence algorithms get to try out a task in a virtual environment over and over again, sometimes very quickly, until they develop something like understanding. Deep reinforcement learning played a key role in Heron Systems’ agent, as well as Lockheed Martin’s, the runner-up.
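As a rough illustration of that trial-and-error loop (and not the AlphaDogfight agents themselves, whose internals are not described here), a minimal tabular Q-learning sketch shows the basic cycle of act, observe reward, and update value estimates; the toy "angle-off-target" environment below is entirely made up for the example:

```python
import random

ACTIONS = [-1, 0, +1]        # turn left, hold heading, turn right
STATES = list(range(-5, 6))  # discretized angle off the target's nose

def step(state: int, action: int) -> tuple[int, float]:
    """Toy pursuit dynamics: move toward or away from the target, with noise."""
    new_state = max(-5, min(5, state + action + random.choice([-1, 0, 1])))
    return new_state, -abs(new_state)   # reward is highest with the nose on target

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for _episode in range(5000):                 # thousands of fast simulated trials
    state = random.choice(STATES)
    for _ in range(20):
        if random.random() < epsilon:        # occasionally explore
            action = random.choice(ACTIONS)
        else:                                # otherwise exploit current knowledge
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        new_state, reward = step(state, action)
        best_next = max(q[(new_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = new_state

print("preferred action when the target is 3 units to the right:",
      max(ACTIONS, key=lambda a: q[(3, a)]))
```

Deep reinforcement learning swaps the lookup table for a neural network so the same loop scales to continuous flight states, but the underlying learn-by-repetition idea is the same.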
A carbine that can call in an airstrike. A computer-aided scope on a machine gun that can turn just about anyone into a marksman. Even firearms that measure and record every movement, from the angle of the barrel to the precise moment of each shot fired, which could provide law enforcement with a digital record of police shootings. The application of information technology to firearms has long been resisted in the United States by gun owners and law-enforcement officials who worry such weapons could be hacked, fail at the wrong moment, or invite government control.
But with the U.S. Army soliciting bids for high-tech battlefield solutions to create the soldier’s rifle of the future, those concerns may quickly become irrelevant. The Army is moving forward regardless. One company seeking an Army contract is working on an operating system that could be embedded into the gun, which could have law-enforcement and civilian applications that may reshape the U.S. debate about gun safety.
“You could accomplish some of the functionality by duct-taping an iPhone to your gun. However what we offer is the world’s first truly embedded operating system,” said Melvic Smith, 41, principal owner of Dimensional Weapons Systems, which bills itself as the first patented blockchain-based firearms company.
That system could eventually add any number of applications, Smith said, including “smart gun” technology that would only allow the weapon to be fired by a designated shooter’s hand. Smart guns in theory could prevent children from accidentally firing guns at home, or render stolen guns useless.
“Our team is composed of veterans, law enforcement officers, people that are pro-Second Amendment to begin with,” Smith said, referring to the amendment in the U.S. Constitution that grants American citizens the right to bear arms. “But we also have engaged with people in the weapons manufacturing industry. They actually love the technology. They’re worried about political backlash.”
Elon Musk said startup Neuralink, which aims to build a scalable implant to connect human brains with computers, has already implanted chips in rats and plans to test its brain-machine interface in humans within two years, with a long-term goal of people “merging with AI.” Brain-machine interfaces have been around for a while. Some of the earliest successes with the technology include Brown University’s BrainGate, which first enabled a paralyzed person to control a computer cursor in 2006. Since then a variety of research groups and companies, including the University of Pittsburgh Medical Center and DARPA-backed Synchron, have been working on similar devices. There are two basic approaches: You can do it invasively, creating an interface with an implant that directly touches the brain, or you can do it non-invasively, usually by electrodes placed near the skin. (The latter is the approach used by startup CTRL-Labs, for example.)
Neuralink, says Musk, is going to go the invasive route. It has developed a chip containing an array of up to 96 small polymer threads, each with up to 32 electrodes, that can be implanted into the brain by a robot through a 2-millimeter incision. The threads are small — less than 6 micrometers thick — as Musk noted in remarks delivered Tuesday night and webcast. Once implanted, according to Musk, the chip would connect wirelessly to devices. “It basically Bluetooths to your phone,” he said. “We’ll have to watch the App Store updates to that one,” he added (the audience laughed).
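Taking the stated figures at face value, the total channel count is just a multiplication (the variable names below are only for illustration):

```python
threads_per_array = 96      # polymer threads per implanted array, as stated above
electrodes_per_thread = 32  # electrodes on each thread, as stated above
print(threads_per_array * electrodes_per_thread, "electrodes per array")  # 3072
```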
Musk cofounded Neuralink in 2017 and serves as the company’s CEO, though it’s unclear how much involvement he has, given that he is also CEO of SpaceX and Tesla. Company cofounder and president Max Hodak has a biomedical engineering degree from Duke and has cofounded two other companies, MyFit and Transcriptic. Neuralink has raised $66.27 million in venture funding so far, according to Pitchbook, which estimates the startup’s valuation at $509.3 million. Both Musk and Hodak spoke about the potential for the company’s neural implants to improve the lives of people with brain damage and other brain disabilities. Its first goal, based on its discussions with such patients, is to enable them to control a mobile device.
The company’s long-term goal is a bit more fantastical, and relates to Musk’s oft-repeated concerns over the dangers of advanced artificial intelligence. That goal is to use the company’s chips to create a “tertiary level” of the brain that would be linked to artificial intelligence. “We can effectively have the option of merging with AI,” he said. “After solving a bunch of brain related diseases there is the mitigation of the existential threat of AI,” he continued.
This pair of images from Neuralink shows its electrodes on the surface of a rat cortex (left) and one of its chips as implanted in a rat (right).
“If you stick something in your brain, don’t want it to be giant, you want it to be tiny,” says Musk.
In terms of progress, the company says that it has built a chip and a robot to implant it, and that the chip has been implanted in rats. According to the white paper the company has published (which has not yet undergone any peer review), it was able to record rat brain activity from its chips, and with many more channels than exist on current systems in use with humans. The first human clinical trials are expected next year, though Hodak mentioned that the company has not yet begun the FDA processes needed to conduct those tests.
The Department of Defense’s research and development wing, DARPA, is working on technology to read and write to the human brain. The focus isn’t on mind control but rather machine control, allowing the human brain to directly send instructions to machines. The goal of the process is to streamline thought control of machines to the point where humans could control them with a simple helmet or head-mounted device, making operating such systems easier.
The brain makes physical events happen by turning thoughts into action, sending instructions through the nervous system to organs, limbs, and other parts of the body. It effortlessly sends out a constant stream of commands to do everything from driving a car to making breakfast. To operate today’s machines, human beings need a middleman of sorts, a physical control system manipulated by hands, fingers, and feet.
What if human beings could cut out the middleman, operating a machine simply by thinking at it? That is why DARPA is funding the Next-Generation Nonsurgical Neurotechnology (N3) initiative. N3’s goal is to create a control system for machines—including weapons—that can directly interact with the human brain. According to IEEE Spectrum, DARPA is experimenting with “magnetic fields, electric fields, acoustic fields (ultrasound) and light” as a means of controlling machines.
The implications of such a technology are huge. Instead of designing complicated controls and control systems for every machine or weapon devised, engineers could simply create a thought-operated control system. Wearable technology becomes easier to operate as it doesn’t require a separate control system. This could also apply to notifications and data: as IEEE Spectrum points out, network administrators could feel intrusions into computer networks. DARPA is, of course, an arm of the Pentagon, and a neurotechnological interface would almost certainly find its way into weapons.
DARPA has awarded development contracts to six groups for amounts of up to $19.48 million each. Each group has one year to prove their ability to read and write to brain tissue with an 18-month animal testing period to follow.
DARPA’s Extreme Accuracy Tasked Ordnance (EXACTO) program, which developed a self-steering bullet to increase hit rates for difficult, long-distance shots, completed in February its most successful round of live-fire tests to date. An experienced shooter using the technology demonstration system repeatedly hit moving and evading targets. Additionally, a novice shooter using the system for the first time hit a moving target.
This video shows EXACTO rounds maneuvering in flight to hit targets that are moving and accelerating. EXACTO’s specially designed ammunition and real-time optical guidance system help track and direct projectiles to their targets by compensating for weather, wind, target movement and other factors that can impede successful hits.
“True to DARPA’s mission, EXACTO has demonstrated what was once thought impossible: the continuous guidance of a small-caliber bullet to target,” said Jerome Dunn, DARPA program manager. “This live-fire demonstration from a standard rifle showed that EXACTO is able to hit moving and evading targets with extreme accuracy at sniper ranges unachievable with traditional rounds. Fitting EXACTO’s guidance capabilities into a small .50-caliber size is a major breakthrough and opens the door to what could be possible in future guided projectiles across all calibers.”
The EXACTO program developed new approaches and advanced capabilities to improve the range and accuracy of sniper systems beyond the current state of the art. The program sought to improve sniper effectiveness and enhance troop safety by allowing greater shooter standoff range and reduction in target engagement timelines.
China is developing a satellite with a powerful laser for anti-submarine warfare that researchers hope will be able to pinpoint a target as far as 500 metres below the surface. It is the latest addition to the country’s expanding deep-sea surveillance programme, and aside from targeting submarines – most operate at a depth of less than 500 metres – it could also be used to collect data on the world’s oceans. Project Guanlan, meaning “watching the big waves”, was officially launched in May at the Pilot National Laboratory for Marine Science and Technology in Qingdao, Shandong. It aims to strengthen China’s surveillance activities in the world’s oceans, according to the laboratory’s website.
Scientists are working on the satellite’s design at the laboratory, but its key components are being developed by more than 20 research institutes and universities across the country. Song Xiaoquan, a researcher involved in the project, said if the team can develop the satellite as planned, it will make the upper layer of the sea “more or less transparent”. “It will change almost everything,” Song said.
While light dims 1,000 times faster in water than in the air, and the sun can penetrate no more than 200 metres below the ocean surface, a powerful artificial laser beam can be 1 billion times brighter than the sun. But this project is ambitious – naval researchers have tried for more than half a century to develop a laser spotlight for hunting submarines using technology known as light detection and ranging (lidar). In theory, it works like this – when a laser beam hits a submarine, some pulses bounce back. They are then picked up by sensors and analysed by computer to determine the target’s location, speed and three-dimensional shape.
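The round-trip timing behind that picture is simple to sketch. The figures below are illustrative assumptions rather than project numbers, and the calculation ignores the leg of the path through the atmosphere:

```python
C_VACUUM = 299_792_458.0            # speed of light in vacuum, m/s
N_SEAWATER = 1.34                   # approximate refractive index of seawater
c_water = C_VACUUM / N_SEAWATER     # roughly 2.2e8 m/s in water

def depth_from_delay(round_trip_s: float) -> float:
    """Reflector depth, assuming a straight vertical path entirely in water."""
    return c_water * round_trip_s / 2.0

# A target 500 m down returns the in-water portion of the pulse after ~4.5 microseconds.
delay = 2 * 500.0 / c_water
print(f"round trip in water: {delay * 1e6:.2f} us, recovered depth: {depth_from_delay(delay):.1f} m")
```

In practice the beam also crosses the atmosphere and is bent and scattered at the air-water boundary, which is exactly the set of complications described next.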
But in real life, lidar technology can be affected by the device’s power limitations, as well as cloud, fog, murky water – and even marine life such as fish and whales. Added to that, the laser beam deflects and scatters as it travels from one body of water to another, making it more of a challenge to get a precise calculation. Experiments carried out by the United States and former Soviet Union achieved maximum detection depths of less than 100 metres, according to openly available information. That range has been extended in recent years by the US in research funded by Nasa and the Defence Advanced Research Projects Agency (DARPA).
The neocortex is the seat of human intellect. New data suggests that mammals created it with new types of cells only after their evolutionary split from reptiles.
Privately run genealogy databases have become a crucial tool for police investigators. Now a nonprofit is collecting data to help crack more cold cases.
Type Ia supernovas are astronomers’ best tools for measuring cosmic distances. In a first, researchers recreated one on a supercomputer to learn how they form.
Startups and investors are rushing to build networks of charging stations. But it’s still unclear when—or if—battery-powered big rigs will rule the roads.
The GOP-fueled far right differs from similar movements around the globe, thanks to the country’s politics, electoral system, and changing demographics.
In last week's hearing, lawmakers kept focusing on the harms TikTok inflicts on kids. Until they take steps to solve these problems, that's a distraction.
Kids today face problems far larger than their social media usage. Restrictions feed into a moral panic without addressing the root cause of their anxiety.