‘Dancing Molecules’ Successfully Repair Severe Spinal Cord Injuries

Northwestern University researchers have developed a new injectable therapy that harnesses “dancing molecules” to reverse paralysis and repair tissue after severe spinal cord injuries. In a new study, researchers administered a single injection to tissues surrounding the spinal cords of paralyzed mice. Just four weeks later, the animals regained the ability to walk.

By sending bioactive signals to trigger cells to repair and regenerate, the breakthrough therapy dramatically improved severely injured spinal cords in five key ways:

1. The severed extensions of neurons, called axons, regenerated.
2. Scar tissue, which can create a physical barrier to regeneration and repair, significantly diminished.
3. Myelin, the insulating layer of axons that is important in transmitting electrical signals efficiently, re-formed around cells.
4. Functional blood vessels formed to deliver nutrients to cells at the injury site.
5. More motor neurons survived.
After the therapy performs its function, the materials biodegrade into nutrients for the cells within 12 weeks and then completely disappear from the body without noticeable side effects. This is the first study in which researchers controlled the collective motion of molecules through changes in chemical structure to increase a therapeutic’s efficacy.

“Our research aims to find a therapy that can prevent individuals from becoming paralyzed after major trauma or disease,” said Northwestern’s Samuel I. Stupp, who led the study. “For decades, this has remained a major challenge for scientists because our body’s central nervous system, which includes the brain and spinal cord, does not have any significant capacity to repair itself after injury or after the onset of a degenerative disease. We are going straight to the FDA to start the process of getting this new therapy approved for use in human patients, who currently have very few treatment options.”

Stupp is Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering at Northwestern, where he is founding director of the Simpson Querrey Institute for BioNanotechnology (SQI) and its affiliated research center, the Center for Regenerative Nanomedicine.

Source: https://news.northwestern.edu/

AI Classifies Chest X-Rays With Human-Level Accuracy

Analyzing chest X-ray images with machine learning algorithms is easier said than done. That’s because the clinical labels required to train those algorithms are typically obtained with rule-based natural language processing or human annotation, both of which tend to introduce inconsistencies and errors. Additionally, it’s challenging to assemble data sets that represent an adequately diverse spectrum of cases, and to establish clinically meaningful and consistent labels given only images.

In an effort to move the goalposts forward in X-ray image classification, researchers at Google devised AI models to spot four findings on human chest X-rays: pneumothorax (collapsed lungs), nodules and masses, fractures, and airspace opacities (filling of the pulmonary tree with material). In a paper published in the journal Nature, the team claims the model family, which was evaluated using thousands of images across data sets with high-quality labels, demonstrated “radiologist-level” performance in an independent review conducted by human experts.

The study’s publication comes months after Google AI and Northwestern Medicine scientists created a model capable of detecting lung cancer from screening tests better than human radiologists with an average of eight years’ experience, and roughly a year after New York University used Google’s Inception v3 machine learning model to detect lung cancer. AI also underpins the tech giant’s advances in diabetic retinopathy diagnosis through eye scans, as well as Alphabet subsidiary DeepMind’s AI that can recommend the proper line of treatment for 50 eye diseases with 94% accuracy.

This newer work tapped over 600,000 images sourced from two de-identified data sets, the first of which was developed in collaboration with Apollo Hospitals and which consists of X-rays collected over years from multiple locations. As for the second corpus, it’s the publicly available ChestX-ray14 image set released by the National Institutes of Health, which has historically served as a resource for AI efforts but which suffers from shortcomings in label accuracy.

The researchers developed a text-based system to extract labels using radiology reports associated with each X-ray, which they then applied to provide labels for over 560,000 images from the Apollo Hospitals data set. To reduce errors introduced by the text-based label extraction and provide the relevant labels for a number of ChestX-ray14 images, they recruited radiologists to review approximately 37,000 images across the two corpora.
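
Neither the team’s exact extraction rules nor its code are reproduced in this article, so the following Python sketch is only an illustration of how a rule-based label extractor of this kind typically works; the keyword patterns, negation cues, and function names here are hypothetical.

```python
import re

# Hypothetical keyword patterns per finding; the actual text-based system
# described in the paper is not public in this article, so these rules are
# illustrative only.
FINDING_PATTERNS = {
    "pneumothorax": r"\bpneumothorax\b",
    "nodule_or_mass": r"\b(nodule|mass)\b",
    "fracture": r"\bfracture\b",
    "airspace_opacity": r"\b(opacity|opacities|consolidation)\b",
}

def extract_labels(report_text: str) -> dict:
    """Assign a binary label per finding from a free-text radiology report."""
    text = report_text.lower()
    labels = {}
    for finding, pattern in FINDING_PATTERNS.items():
        match = re.search(pattern, text)
        if match is None:
            labels[finding] = 0
            continue
        # Crude negation check: look for a negation cue earlier in the same
        # sentence (e.g. "no pneumothorax", "without fracture").
        sentence_start = text.rfind(".", 0, match.start()) + 1
        preceding = text[sentence_start:match.start()]
        labels[finding] = 0 if re.search(r"\b(no|without|negative for)\b", preceding) else 1
    return labels

print(extract_labels("No pneumothorax. A 2 cm nodule is seen in the right lower lobe."))
# {'pneumothorax': 0, 'nodule_or_mass': 1, 'fracture': 0, 'airspace_opacity': 0}
```

The brittleness of rules like these (missed synonyms, mishandled negation) is exactly why the researchers had radiologists adjudicate a subset of the labels by hand.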

Google notes that while the models achieved expert-level accuracy overall, performance varied across corpora. For example, the sensitivity for detecting pneumothorax among radiologists was approximately 79% for the ChestX-ray14 images, but was only 52% for the same radiologists on the other data set.
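
For readers unfamiliar with the metric: sensitivity (also called recall) is the fraction of truly positive cases that a model or reader flags as positive. A minimal illustration in Python:

```python
def sensitivity(y_true, y_pred):
    """Fraction of truly positive cases flagged positive (a.k.a. recall)."""
    true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return true_positives / sum(y_true)

# Toy example: of 100 X-rays that truly show a pneumothorax, a reader
# flags 79 -> sensitivity of 0.79, matching the 79% figure above.
y_true = [1] * 100
y_pred = [1] * 79 + [0] * 21
print(sensitivity(y_true, y_pred))  # 0.79
```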

Chest X-ray depicting a pneumothorax identified by Google’s AI model and the panel of radiologists, but missed by individual radiologists. On the left is the original image, and on the right is the same image with the most important regions for the model prediction highlighted in orange.

“The performance differences between datasets … emphasize the need for standardized evaluation image sets with accurate reference standards in order to allow comparison across studies,” wrote Google research scientist Dr. David Steiner and Google Health technical lead Shravya Shetty in a blog post, both of whom contributed to the paper. “[Models] often identified findings that were consistently missed by radiologists, and vice versa. As such, strategies that combine the unique ‘skills’ of both the [AI] systems and human experts are likely to hold the most promise for realizing the potential of AI applications in medical image interpretation.”

The research team hopes to lay the groundwork for superior methods with a corpus of adjudicated labels for the ChestX-ray14 data set, which it has made available in open source. The corpus contains 2,412 training and validation set images and 1,962 test set images, or 4,374 images in total.
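
As a loose usage sketch only: assuming the released labels ship as a CSV with one row per image, a split column, and one binary column per finding (the actual file name and schema may differ; check the released files before relying on this), loading them for evaluation could look like this:

```python
import pandas as pd

# Hypothetical file name and column layout; verify against the real release.
labels = pd.read_csv("chestxray14_adjudicated_labels.csv")

train_val = labels[labels["split"] != "test"]  # 2,412 training/validation images
test = labels[labels["split"] == "test"]       # 1,962 test images

# Per-finding prevalence in the test split, e.g. for pneumothorax.
print(test["pneumothorax"].mean())
```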

“We hope that these labels will facilitate future machine learning efforts and enable better apples-to-apples comparisons between machine learning models for chest X-ray interpretation,” wrote Steiner and Shetty.

Source: https://venturebeat.com/