Augmented Reality (AR) Revolutionizes Surgery

Dr Stephen Quinn, a gynaecologist at Imperial College Healthcare NHS Trust hospitals, appeared on a TV show to help a patient, Hilda, with a condition causing a swollen abdomen. After taking careful scans of Hilda’s body, the team were able to show her the growths, called fibroids, that were behind her pain.


“I’ve spent a lot of my career looking at MRI scans of pelvises, and having these images is extremely helpful in clinic,” said Quinn. “But using augmented reality just took that to a whole different level. It was fantastic being able to fully visualise exactly what was going on in the pelvis ahead of the surgery to remove the fibroids.”

Unfortunately, the technology is a way off being available on the NHS, but Quinn said AR’s use could be commonplace within the next decade.

For the show, radiologists at Imperial hospitals provided artists with in-depth scans of each patient. Dr Dimitri Amiras, a musculoskeletal consultant radiologist at Imperial, also worked on the experiment.

First, patients would undergo routine scans. “In order to define what the organ is and where the pathology is, that’s all done by radiologists. We are the ones who identify it, look at the imaging techniques and work out what is good tissue and what’s bad tissue,” said Amiras. “Then, once we’ve got those images with the relevant bits identified, digital artists may draw around them or even use artificial intelligence to make all the pretty pictures and the shiny stuff.”

Once finished, the patients and doctors would wear an AR device to ‘see’ the body part in front of them. Each model was 3D and could be zoomed in or out, rotated, and compared with the same area in a healthy individual.


Augmented Reality A Hundred Times Less Expensive

Zombies flashing right before your eyes, or the dizzying feeling of standing on the edge of a cliff, delivered through virtual and augmented reality (VR and AR), are no longer exclusive to the games or media industries. These technologies allow us to conduct virtual conferences, share presentations and videos, and communicate in real time in virtual space. But because of the high cost and bulkiness of VR and AR devices, the virtual world is not currently within easy reach.

Recently, a South Korean research team developed moldable nanomaterials and a printing technology that could allow the commercialization of inexpensive, thin VR and AR devices.

Professor Junsuk Rho of the departments of mechanical engineering and chemical engineering and doctoral student in mechanical engineering Gwanho Yoon at POSTECH, together with Professor Heon Lee and researcher Kwan Kim of the department of material science at Korea University, have jointly developed a new nanomaterial and large-scale nanoprinting technology for the commercialization of metamaterials. The research findings, which solve the issues of device size and high production cost that hampered previous research, were recently published in Nature Communications.

Metamaterials are substances made from artificial atoms that do not exist in nature and that can freely control the properties of light. An invisibility cloak that creates the illusion of disappearance by adjusting the refraction or diffraction of light, or metaholograms that produce different hologram images depending on the direction from which light enters, both use metamaterials. Based on this principle, ultrathin metalens technology, which can replace conventional optical systems at extreme thinness, was selected by the World Economic Forum last year as one of the top 10 emerging technologies to change the world.

In order to make metamaterials, artificial atoms smaller than the wavelengths of light must be meticulously constructed and arranged. Until now, metamaterials have been produced through a method called electron beam lithography (EBL). However, EBL has hindered the commercialization and production of sizable metamaterials due to its slow process speed and high cost. To overcome these limitations, the joint research team developed a new nanoparticle-composite material that can be molded freely while having optical characteristics suitable for fabricating metamaterials. The team also developed a one-step printing technique that shapes the material in a single pass.


Augmented Reality Headset Protects Doctors From Coronavirus

One of the largest NHS trusts in England is using Microsoft HoloLens on its Covid-19 wards to keep doctors safe as they help patients with the virus. Staff at Imperial College Healthcare NHS Trust are wearing the mixed-reality headset as they work on the frontline in the most high-risk areas of some of London’s busiest hospitals. HoloLens with Dynamics 365 Remote Assist uses Microsoft Teams to send a secure live video feed to a computer screen in a nearby room, allowing healthcare teams to see everything the doctor treating Covid-19 patients can see, while remaining at a safe distance.

Imperial College Healthcare NHS Trust, which includes Charing Cross Hospital, Hammersmith Hospital and St Mary’s Hospital, says using HoloLens has cut the time staff spend in high-risk areas by up to 83%. It is also significantly reducing the amount of personal protective equipment (PPE) being used, as only the doctor wearing the headset has to dress in PPE. Early estimates suggest that using HoloLens is saving up to 700 items of PPE per ward, per week.

“In March, we had a hospital full of Covid-19 patients. Doctors, nurses and allied healthcare professionals providing ward care had a high risk of exposure to the virus and many became ill. Protecting staff was a major motivating factor for this work, but so was protecting patients. If our staff are ill, they can transmit disease and they are unable to provide expert medical care to those who need it most,” said James Kinross, a consultant surgeon at Imperial College Healthcare. Kinross has been using HoloLens at the hospital for many years.

“In one week our hospital trust switched from being a place that delivered acute, elective care and planned treatment into a giant intensive care unit. We weren’t just trying to restructure an entire building, we were trying to redeploy and retrain our staff, while at the same time we had to cope with an ever-growing number of very sick people.

“We needed an innovative solution. I’ve used HoloLens before in surgery and we quickly realised it had a unique role to play because we could take advantage of its hands-free telemedicine capabilities. Most importantly, it could be used while wearing PPE. It solved a major problem for us during a crisis by allowing us to keep treating very ill patients while limiting our exposure to a deadly virus. Not only that, it reduced our PPE consumption and significantly improved the efficiency of our ward rounds.”

Rather than put users in a fully computer-generated world, as virtual reality does, HoloLens allows users to place 3D digital models in the room alongside them and interact with them using gestures, gaze and voice.


Hologram Operates In Forward And Backward Directions

Hologram techniques are already part of everyday life. Hologram stickers that deter currency counterfeiting, augmented-reality navigation projected onto a car’s windscreen to guide directions, and virtual-reality games that immerse players in a lifelike virtual world are just a few examples. Recently, a thinner and lighter meta-hologram that operates in both forward and backward directions has been developed. In the film Black Panther, people from the kingdom of Wakanda communicate with each other through holograms; that movie scene may soon become reality, letting us exchange different information with people in different locations.

Junsuk Rho, a professor in the departments of mechanical engineering and chemical engineering at POSTECH in Korea, together with his student Inki Kim, developed a multifunctional meta-hologram: a monolayer meta-holographic optical device that creates different hologram images depending on the direction of the light incident on it. Their work was featured as the cover story of the January 2020 issue of Nanoscale Horizons.

Televisions and beam projectors can transmit only the intensity of light, but holographic techniques record both light intensity and phase information, allowing movies to be played back in three-dimensional space. When metamaterials are used, the nanostructures’ sizes and shapes can be tailored as desired, controlling light intensity and phase at the same time. A meta-hologram has pixels as small as 300 to 400 nanometres, yet it can display very high-resolution holographic images with a larger field of view than existing hologram projectors such as spatial light modulators.
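To get a rough sense of why a smaller pixel pitch widens the field of view (the wavelength here, λ ≈ 633 nm, is an illustrative assumption, not a figure from the paper), the maximum diffraction half-angle of a hologram with pixel pitch p follows from the sampling condition:

```latex
\sin\theta_{\max} = \frac{\lambda}{2p}
\qquad
\begin{aligned}
p &= 350\ \text{nm}: & \sin\theta_{\max} &\approx \tfrac{633}{700} \approx 0.90, & \theta_{\max} &\approx 65^\circ,\\[2pt]
p &= 8\ \mu\text{m (typical SLM)}: & \sin\theta_{\max} &\approx \tfrac{633}{16000} \approx 0.04, & \theta_{\max} &\approx 2.3^\circ.
\end{aligned}
```

A 300–400 nm pixel pitch therefore deflects light over tens of degrees, where a conventional spatial light modulator manages only a few.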

However, conventional meta-holograms can display an image only when the incident light travels in one direction; when light comes from the opposite direction, no image appears.

To solve this problem, the research team used two different types of metasurface. One metasurface was designed to impart the required phase information when light is incident in the forward direction, and the other to operate when light travels in the backward direction. As a result, they confirmed that the device could display different images in real time depending on the direction of the light.

In addition, the team applied dual magnetic resonances and antiferromagnetic resonances, phenomena occurring in silicon nanopillars, to the nanostructure design to overcome the low efficiency of conventional meta-holograms. The new meta-hologram demonstrated diffraction efficiency higher than 60% (over 70% in simulation), and high-quality, clear images were observed. Furthermore, because it is made of silicon, it can be manufactured readily with conventional semiconductor processes. The meta-hologram operating in both forward and backward directions is expected to establish a new hologram platform that can transmit various information to multiple users in different locations, overcoming the limits of conventional devices, which could only transmit one image to one location.
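The principle of direction multiplexing can be sketched numerically. This is a toy scalar-diffraction model, not the authors’ design code: the two phase profiles below are made-up stand-ins for the forward and backward metasurface responses, and the far field is modelled as a discrete Fourier transform of the transmitted field.

```python
import cmath

N = 16  # number of metasurface pixels in this 1-D toy model

# Hypothetical phase profiles: one seen by forward-travelling light,
# a different one (a blazed grating) seen by backward-travelling light.
phi_forward = [2 * cmath.pi * (k * k % N) / N for k in range(N)]
phi_backward = [2 * cmath.pi * (3 * k % N) / N for k in range(N)]

def far_field(phase):
    """Far-field intensity of a unit-amplitude field, via a direct DFT."""
    field = [cmath.exp(1j * p) for p in phase]
    intensities = []
    for m in range(N):
        s = sum(field[k] * cmath.exp(-2j * cmath.pi * m * k / N)
                for k in range(N))
        intensities.append(abs(s) ** 2)
    return intensities

img_f = far_field(phi_forward)
img_b = far_field(phi_backward)

# The blazed grating concentrates backward light into diffraction order 3;
# the quadratic profile spreads forward light across many orders.
print(max(range(N), key=lambda m: img_b[m]))  # → 3
```

One physical device imprinting two independent phase maps is what lets a single metasurface project two unrelated images, one per illumination direction.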

“Microscopic, ultrathin, ultralightweight flat optical devices based on metasurfaces are an impressive technology with great potential, as they can not only perform the functions of conventional optical devices but also provide multiple functions depending on how the metasurface is designed. In particular, we developed a meta-hologram optical device that operates in forward and backward directions and can transmit different visual information to multiple users in different locations simultaneously. We anticipate that this new development can be employed in applications such as holograms for performances, entertainment, exhibitions, automobiles and more,” said Junsuk Rho, who is leading the research on metamaterials.


10 Artificial Intelligence Trends

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.
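The report-generation chore described above is easy to picture in code. The sketch below is purely illustrative (the records and format are invented, and real robotic process automation products are far more capable): it turns raw rows of data into the kind of formatted summary a person would otherwise assemble by hand.

```python
# Toy sketch of automating a recurring reporting task.
# The data and layout here are illustrative assumptions.

records = [
    {"region": "North", "sales": 120_000},
    {"region": "South", "sales": 95_500},
    {"region": "East", "sales": 143_250},
]

def build_report(rows):
    """Render sales rows into a plain-text summary with a total line."""
    lines = ["Quarterly Sales Report", "-" * 22]
    total = 0
    for row in rows:
        lines.append(f"{row['region']:<8} {row['sales']:>10,.2f}")
        total += row["sales"]
    lines.append("-" * 22)
    lines.append(f"{'Total':<8} {total:>10,.2f}")
    return "\n".join(lines)

print(build_report(records))
```

Scheduled to run unattended, a script like this stands in for the “software robot” doing repetitive computer work so people can focus on higher-value tasks.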

2. Increasing amounts of personalization will take place in real time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately project a 360-degree view of customers in real time as they interact through online portals and mobile apps, quickly learning how their predictions can fit our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Dominos will learn when we are most likely to want pizza, and make sure the ‘Order Now’ button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves.

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that allow us to make the most of our human skills – those which AI can’t quite manage yet, such as imagination, design, strategy, and communication – while augmenting them with super-fast analytics fed by vast datasets that are updated in real time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI.

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber-optic and mobile networks are available. Custom processors designed to carry out real-time analytics on the fly will increasingly become part of the technology we interact with day to day.

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
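The core idea of flagging out-of-pattern activity can be shown with a minimal statistical sketch. Real AI-driven security products use far richer models over many signals; the z-score threshold and transaction data below are illustrative assumptions only.

```python
# Minimal sketch of anomaly detection: flag transactions whose amounts
# deviate sharply from an account's historical pattern.
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, z_threshold=3.0):
    """Return transactions more than z_threshold std devs from the mean."""
    mu = mean(history)
    sigma = stdev(history)
    return [t for t in new_transactions if abs(t - mu) / sigma > z_threshold]

# Typical card activity, followed by one wildly out-of-pattern transfer.
history = [12.5, 40.0, 22.3, 35.1, 18.9, 27.4, 31.0, 25.6]
incoming = [29.99, 24.50, 5000.00]

print(flag_anomalies(history, incoming))  # → [5000.0]
```

A production system would learn per-user baselines over many features (time, location, merchant, device) rather than a single amount, but the raise-an-alarm-before-the-breach pattern is the same.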

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world.


How The Army Uses Microsoft’s HoloLens On The Battlefield


The headset is impressive — better than any augmented reality experience, including Magic Leap, which also tried to win the Army contract. The project is also a showcase for the Army’s plans to work more closely with America’s tech companies to speed innovation in the military. The military calls its special version of the HoloLens 2 “IVAS,” which stands for Integrated Visual Augmentation System. It’s an augmented-reality headset, which means it places digital objects, such as maps or video displays, on top of the real world in front of you. Several companies are betting big on AR as the future of computing, since it will allow us to do much of what we can on a computer but while looking through glasses instead of down at a phone or at a computer screen. Apple, Google and Magic Leap are all building AR-capable software and hardware.

Put the headset on and pull it down so that your eyes are peering through a glass visor. That visor can display 3D images, information, your location and more. IVAS isn’t nearly finished.