AI Designs Quantum Physics Beyond What Any Human Has Conceived

Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.
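The mix-and-match strategy described above can be sketched in miniature. This is a toy illustration only, not Krenn's actual code: the toolbox names and the scoring rule are invented stand-ins, but the loop shows the general shape of such a search, which randomly assembles building blocks, simulates the result, and keeps any configuration that reaches a target.

```python
# Toy illustration only (invented toolbox and scoring, not Krenn's code):
# a MELVIN-style search mixes and matches experimental building blocks and
# keeps any configuration whose simulated outcome reaches a target score.
import random

TOOLBOX = ["beam_splitter", "hologram", "mirror", "dove_prism"]

def simulate(setup):
    # Stand-in for a real quantum-optics simulation: score a setup by a
    # made-up rule so the search loop is runnable end to end.
    return setup.count("hologram") + setup.count("beam_splitter")

def random_search(target_score, max_tries=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        setup = [rng.choice(TOOLBOX) for _ in range(rng.randint(2, 6))]
        if simulate(setup) >= target_score:
            return setup  # a candidate experiment for humans to interpret
    return None

found = random_search(target_score=4)
print(found)
```

In practice the hard parts are the simulate step, which must be a genuine quantum-state calculation, and, as the article goes on to describe, making sense of the configurations the search returns.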

“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn, Anton Zeilinger of the University of Vienna and their colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.

“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile, Krenn, working with colleagues in Toronto, has refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.

“It is amazing work,” says theoretical quantum physicist Renato Renner of the Institute for Theoretical Physics at the Swiss Federal Institute of Technology Zurich, who reviewed a 2020 study about THESEUS but was not directly involved in these efforts.


AI Detects Alzheimer’s Six Years In Advance

Using a common type of brain scan, researchers trained a machine-learning algorithm to detect early-stage Alzheimer’s disease about six years before a clinical diagnosis is made – potentially giving doctors a chance to intervene with treatment. No cure exists for Alzheimer’s disease, but promising drugs have emerged in recent years that can help stem the condition’s progression. These treatments, however, must be administered early in the course of the disease to do any good. This race against the clock has inspired scientists to search for ways to diagnose the condition earlier.

A PET scan of the brain of a person with Alzheimer’s disease

“One of the difficulties with Alzheimer’s disease is that by the time all the clinical symptoms manifest and we can make a definitive diagnosis, too many neurons have died, making it essentially irreversible,” says Jae Ho Sohn, MD, MS, a resident in the Department of Radiology and Biomedical Imaging at UC San Francisco.


Positron emission tomography (PET) scans, which measure the levels of specific molecules, like glucose, in the brain, have been investigated as one tool to help diagnose Alzheimer’s disease before the symptoms become severe. Glucose is the primary source of fuel for brain cells, and the more active a cell is, the more glucose it uses. As brain cells become diseased and die, they use less and, eventually, no glucose.

Other types of PET scans look for proteins specifically related to Alzheimer’s disease, but glucose PET scans are much more common and cheaper, especially in smaller health care facilities and developing countries, because they’re also used for cancer staging.

Radiologists have used these scans to try to detect Alzheimer’s by looking for reduced glucose levels across the brain, especially in the frontal and parietal lobes. But because the disease is a slowly progressive disorder, the changes in glucose are subtle and therefore difficult to spot with the naked eye. To address this problem, Sohn applied a machine-learning algorithm to PET scans to help diagnose early-stage Alzheimer’s disease more reliably.

“This is an ideal application of deep learning because it is particularly strong at finding very subtle but diffuse processes. Human radiologists are really strong at identifying tiny focal findings like a brain tumor, but we struggle at detecting slower, global changes,” says Sohn. “Given the strength of deep learning in this type of application, especially compared to humans, it seemed like a natural application.”

To train the algorithm, Sohn fed it images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a massive public dataset of PET scans from patients who were eventually diagnosed with either Alzheimer’s disease, mild cognitive impairment or no disorder. Over time, the algorithm learned on its own which features are important for predicting an Alzheimer’s diagnosis and which are not.

Once the algorithm was trained on 1,921 scans, the scientists tested it on two novel datasets to evaluate its performance. The first was a set of 188 images from the same ADNI database that had not yet been shown to the algorithm. The second was an entirely novel set of scans from 40 patients who had presented to the UCSF Memory and Aging Center with possible cognitive impairment.
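The evaluation protocol here (fit on training scans, then measure performance on held-out scans the model has never seen) can be illustrated with a deliberately trivial stand-in model, since the study's actual deep-learning network is not reproduced here. Every number and the threshold rule below are invented for the sketch.

```python
# Toy illustration only: the study's deep-learning model is not public, so a
# trivial threshold rule stands in for it here. The point is the protocol:
# fit on training scans, then evaluate sensitivity on held-out scans.
# All numbers are invented.

def fit_threshold(train):
    """train: list of (mean_glucose_uptake, has_alzheimers) pairs.
    Returns the midpoint between the two class means as a decision threshold."""
    pos = [x for x, y in train if y]
    neg = [x for x, y in train if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def sensitivity(threshold, test):
    """Fraction of true Alzheimer's cases flagged (uptake below threshold)."""
    cases = [x for x, y in test if y]
    return sum(1 for x in cases if x < threshold) / len(cases)

train_set = [(0.40, True), (0.50, True), (0.90, False), (1.00, False)]
held_out  = [(0.45, True), (0.55, True), (0.95, False)]

t = fit_threshold(train_set)       # midpoint between class means, about 0.7
print(sensitivity(t, held_out))    # both held-out cases fall below t -> 1.0
```

The key design point, mirrored in the study, is that the held-out sets play no role in fitting: performance numbers like the 92 and 98 percent figures below are only meaningful on data the model never trained on.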

The algorithm passed with flying colors. It correctly identified 92 percent of patients who developed Alzheimer’s disease in the first test set and 98 percent in the second. What’s more, it made these correct predictions an average of 75.8 months – a little more than six years – before the patients received their final diagnosis.