Artificial intelligence algorithms may soon bring the diagnostic know-how of an eye doctor to primary care offices and walk-in clinics, speeding up the detection of health problems and the start of treatment, especially in areas where specialized doctors are scarce. The first such program — trained to spot symptoms of diabetes-related vision loss in eye images — is pending approval by the U.S. Food and Drug Administration.
While other already approved AI programs help doctors examine medical images, there’s “not a specialist looking over the shoulder of [this] algorithm,” says Michael Abràmoff, who founded and heads a company that developed the system under FDA review, dubbed IDx-DR. “It makes the clinical decision on its own.”

IDx-DR and similar AI programs, which are learning to predict everything from age-related sight loss to heart problems just by looking at eye images, don’t follow preprogrammed guidelines for how to diagnose a disease. They’re machine-learning algorithms that researchers teach to recognize the symptoms of a particular condition, using example images that are each labeled with whether or not the patient had the condition.

IDx-DR studied over 1 million eye images to learn how to recognize symptoms of diabetic retinopathy, a condition that develops when high blood sugar damages retinal blood vessels (SN Online: 6/29/10). Between 12,000 and 24,000 people in the United States lose their vision to diabetic retinopathy each year, but the condition can be treated if caught early.

Researchers compared how well IDx-DR detected diabetic retinopathy in more than 800 U.S. patients with diagnoses made by three human specialists. Of the patients identified by IDx-DR as having at least moderate diabetic retinopathy, more than 85 percent actually did. And of the patients IDx-DR ruled as having mild or no diabetic retinopathy, more than 82.5 percent actually did, researchers reported February 22 at the annual meeting of the Macula Society in Beverly Hills, Calif.
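Those two figures are, in effect, the positive and negative predictive values of a screening test. Here is a rough Python sketch of how such values are tallied from labeled screening results; the counts below are invented for illustration and are not the trial’s data.

```python
# Sketch of how screening-test predictive values, like those reported
# for IDx-DR, are tallied. The counts below are invented, not trial data.

def predictive_values(results):
    """results: iterable of (ai_flagged, truly_diseased) boolean pairs."""
    tp = sum(1 for flagged, sick in results if flagged and sick)
    fp = sum(1 for flagged, sick in results if flagged and not sick)
    tn = sum(1 for flagged, sick in results if not flagged and not sick)
    fn = sum(1 for flagged, sick in results if not flagged and sick)
    ppv = tp / (tp + fp)  # of patients the AI flags, the share truly diseased
    npv = tn / (tn + fn)  # of patients the AI clears, the share truly disease-free
    return ppv, npv

# A hypothetical cohort of 800 screenings, where "diseased" means the
# specialists found at least moderate diabetic retinopathy:
cohort = ([(True, True)] * 170 + [(True, False)] * 28 +
          [(False, False)] * 500 + [(False, True)] * 102)
ppv, npv = predictive_values(cohort)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV: 85.9%, NPV: 83.1%
```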
IDx-DR is on the fast track to FDA clearance, and a decision is expected within a few months, says Abràmoff, a retinal specialist at the University of Iowa in Iowa City. If approved, it would become the first autonomous AI to be used in primary care offices and clinics.
AI algorithms to diagnose other eye diseases are in the works, too. An AI described February 22 in Cell studied over 100,000 eye images to learn the signs of several eye conditions. These included age-related macular degeneration, or AMD — a leading cause of vision loss in adults over 50 — and diabetic macular edema, a condition that develops from diabetic retinopathy.
This AI was designed to flag advanced AMD or diabetic macular edema for urgent treatment, and to refer less severe cases for routine checkups. In tests, the algorithm was 96.6 percent accurate in diagnosing eye conditions from 1,000 pictures. Six ophthalmologists made similar referrals based on the same eye images.
Researchers still need to test how this algorithm fares in the real world where the quality of images may vary from clinic to clinic, says Aaron Lee, an ophthalmologist at the University of Washington in Seattle. But this kind of AI could be especially useful in rural and developing regions where medical resources and specialists are scarce and people otherwise wouldn’t have easy access to in-person eye exams.
AI might also be able to use eye pictures to identify other kinds of health problems. One algorithm that studied retinal images from over 284,000 patients could predict cardiovascular health risk factors such as high blood pressure.
The algorithm was 71 percent accurate in distinguishing the eye images of smokers from those of nonsmokers, according to a report February 19 in Nature Biomedical Engineering. And 70 percent of the time, it correctly predicted which patients would have a major cardiovascular event, such as a heart attack, within the next five years.
With AI getting more adept at screening for a growing list of conditions, “some people might be concerned that this is machines taking over” health care, says Caroline Baumal, an ophthalmologist at Tufts University in Boston. But diagnostic AI can’t replace the human touch. “Doctors will still need to be there to see patients and treat patients and talk to patients,” Baumal says. AI will just help people who need treatment get it faster.
The seeds for Martian clouds may come from the dusty tails of comets.
Charged particles, or ions, of magnesium from the cosmic dust can trigger the formation of tiny ice crystals that help form clouds, a new analysis of Mars’ atmosphere suggests.
For more than a decade, rovers and orbiters have captured images of Martian skies with wispy clouds made of carbon dioxide ice. But “it hasn’t been easy to explain where they come from,” says chemist John Plane of the University of Leeds in England. The cloud-bearing layer of the atmosphere is between –120° and –140° Celsius — too warm for carbon dioxide clouds to form on their own, which can happen at about –220° C.

Then in 2017, NASA’s MAVEN orbiter detected a layer of magnesium ions hovering about 90 kilometers above the Martian surface (SN: 4/29/17, p. 20). Scientists think the magnesium, and possibly other metals not yet detected, comes from cosmic dust left by passing comets. The dust vaporizes as it hits the atmosphere, leaving a sprinkling of metals suspended in the air. Earth has a similar layer of atmospheric metals, but none had been observed elsewhere in the solar system before.
According to the new calculations, the bits of magnesium clump with carbon dioxide gas — which makes up about 95 percent of Mars’ atmosphere — to produce magnesium carbonate molecules. These larger, charged molecules could attract the atmosphere’s sparse water, creating what Plane calls “dirty” ice crystals.
At the temperatures seen in Mars’ cloud layer, pure carbon dioxide ice crystals are too small to gather clouds around them. But clouds could form around dirty ice at temperatures as high as –123° C, Plane and colleagues report online March 6 in the Journal of Geophysical Research: Planets.
Earthquake warning systems face a tough trade-off: To give enough time to take cover or shut down emergency systems, alerts may need to go out before it’s clear how strong the quake will be. And that raises the risk of false alarms, undermining confidence in any warning system.
A new study aims to quantify the best-case scenario for warning time from a hypothetical earthquake early warning system. The result? There is no magic formula for deciding when to issue an alert, the researchers report online March 21 in Science Advances. “We have a choice when issuing earthquake warnings,” says study leader Sarah Minson, a seismologist at the U.S. Geological Survey, or USGS, in Menlo Park, Calif. “You have to think about your relative risk appetite: What is the cost of taking action versus the cost of the damage you’re trying to prevent?”
For locations far from a large quake’s origin, waiting for clear signs of risk before sending an alert may mean waiting too long for people to be able to take protective action. But for those tasked with managing critical infrastructure, such as airports, trains or nuclear power plants, an early warning even if false may be preferable to an alert coming too late (SN: 4/19/14, p. 16).
Alerts issued by earthquake early warning systems, called EEWs, are based on several parameters: the depth and location of the quake’s origin, its estimated magnitude and the ground properties, such as the types of soil and rock that seismic waves would travel through.
“The trick to earthquake early warning systems is that it’s a misnomer,” Minson says. Such systems don’t warn that a quake is imminent. Instead, they alert people that a quake has already happened, giving them precious seconds — perhaps a minute or two — to prepare for imminent ground shaking.

Estimating magnitude turns out to be a sticking point. It is impossible to distinguish a powerful earthquake in its earliest stages from a small, weak quake, according to a 2016 study by a team of researchers that included Men-Andrin Meier, a seismologist at Caltech who is also a coauthor of the new study. Estimating magnitude for larger quakes also takes more time, because the rupture of the fault lasts perhaps several seconds longer, a significant chunk of time when it comes to EEW. And there is a trade-off in terms of distance: For locations farther away, there is less certainty that the shaking will reach that far.

In the new study, Minson, Meier and colleagues used standard ground-motion prediction equations to calculate the minimum quake magnitude that would produce a given level of shaking at a given distance. Then, they calculated how quickly an EEW could estimate whether a quake would exceed that minimum magnitude and qualify for an alert. Finally, the team estimated how long it would take for the shaking to reach a given location.

Ultimately, they determined, EEW holds the greatest benefit for users who are willing to take action early, even at the risk of false alarms. The team hopes its paper provides a framework to help emergency response managers make those decisions.
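As a back-of-the-envelope illustration, the warning a site receives is roughly the shaking’s travel time minus the seconds the system spends detecting the quake and pushing the alert. The Python sketch below uses assumed round numbers, a typical crustal S-wave speed and a hypothetical 10-second alert latency, not parameters from the study.

```python
# Toy model of the EEW timing trade-off described above.
# The S-wave speed and alert latency are assumed round numbers,
# not parameters from the Minson et al. study.

S_WAVE_KM_S = 3.5       # typical shear-wave speed in crustal rock, km/s
ALERT_LATENCY_S = 10.0  # hypothetical seconds to detect a quake and issue an alert

def warning_time(distance_km, latency_s=ALERT_LATENCY_S):
    """Seconds of warning before strong shaking arrives at a site.
    Negative values mean the shaking beats the alert."""
    return distance_km / S_WAVE_KM_S - latency_s

def should_alert(p_damaging_shaking, cost_of_action, cost_of_damage):
    """One way to formalize the 'risk appetite' question quoted earlier:
    act when expected avoided damage outweighs the cost of acting
    on a possible false alarm."""
    return p_damaging_shaking * cost_of_damage > cost_of_action

for d in (20, 50, 100, 200):
    print(f"{d:3d} km from the quake: {warning_time(d):6.1f} s of warning")
```

With these toy numbers, any site closer than about 35 kilometers gets no warning at all, which is the nub of the trade-off: alerts that wait for certainty arrive too late for the places shaken hardest.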
EEWs are already in operation around the world, from Mexico to Japan. USGS, in collaboration with researchers and universities, has been developing the ShakeAlert system for the earthquake-prone U.S. West Coast. It is expected to be rolled out this year, although plans for future expansion may be in jeopardy: President Trump’s proposed 2019 budget cuts the USGS program’s $8.2 million in funding. It’s unclear whether Congress will spare those funds.
The value of any alert system will ultimately depend on whether it fulfills its objective — getting people to take cover swiftly in order to save lives. “More than half of injuries from past earthquakes are associated with things falling on people,” says Richard Allen, a seismologist at the University of California, Berkeley, who was not involved in the new study. “A few seconds of warning can more than halve the number of injuries.”
But the researchers acknowledge there is a danger in issuing too many false alarms. People may become complacent and ignore future warnings. “We are playing a precautionary game,” Minson says. “It’s a warning system, not a guarantee.”
The Neil Armstrong biopic, opening October 12, follows about eight years of the life of the first man on the moon, and spends about eight minutes depicting the lunar surface. Instead of the triumphant ticker tape parades that characterize many movies about the space race, First Man focuses on the terror, grief and heartache that led to that one small step.
“It’s a very different movie and storyline than people expect,” says James Hansen, author of the 2005 biography of Armstrong that shares the film’s name and a consultant on the film.

The story opens shortly before Armstrong’s 2-year-old daughter, Karen, died of a brain tumor in January 1962. That loss hangs over the rest of the film, setting the movie’s surprisingly somber emotional tone. The cinematography is darker than most space movies. Colors are muted. Music is ominous or absent — a lot of scenes include only ambient sound, like a pen scratching on paper, a glass breaking or a phone clicking into the receiver.

Karen’s death also seems to motivate the rest of Armstrong’s journey. Getting a fresh start may have been part of the reason why the grieving Armstrong (portrayed by Ryan Gosling) applied to the NASA Gemini astronaut program, although he never explicitly says so. And without giving too much away, a private moment Armstrong takes at the edge of Little West crater on the moon recalls his enduring bond with his daughter.
Hansen’s book also makes the case that Karen’s death motivated Armstrong’s astronaut career. Armstrong’s oldest son, Rick, who was 12 when his father landed on the moon, agrees that it’s plausible. “But it’s not something that he ever really definitively talked about,” Rick Armstrong says.
Armstrong’s reticence about Karen — and almost everything else — is true to life. That’s not all the film got right. Gosling captured Armstrong’s gravitas as well as his humor, and Claire Foy as his wife, Janet Armstrong, “is just amazing,” Rick Armstrong says.
Beyond the performances, the filmmakers, including director Damien Chazelle and screenwriter Josh Singer, went to great lengths to make the technical aspects of spaceflight historically accurate. The Gemini and Apollo cockpits Gosling sits in are replicas of the real spacecraft, and he flipped switches and hit buttons that would have controlled real flight. Much of the dialog during space scenes was taken verbatim from NASA’s control room logs, Hansen says.
The result is a visceral sense of how frightening and risky those early flights were. The spacecraft rattled and creaked like they were about to fall apart. The scene of Armstrong’s flight on the 1966 Gemini 8 mission, which ended early when the spacecraft started spinning out of control and almost killed its passengers, is terrifying. The 1967 fire inside the Apollo 1 spacecraft, which killed astronauts Ed White, Gus Grissom and Roger Chaffee, is gruesome.
“We wanted to treat that one with extreme care and love and get it exactly right,” Hansen says. “What we have in that scene, none of it’s made up.”
Even when the filmmakers took poetic license, they did it in a historical way. For instance, a vomit-inducing gyroscope that Gosling rides during Gemini astronaut training was, in real life, used for the earlier Mercury astronauts, but not for Gemini. Since the Mercury astronauts never experienced the kind of dizzying rotation that the gyroscope mimicked, NASA dismantled it before the next group of astronauts arrived.
“They probably shouldn’t have dismantled it,” Hansen says — it did simulate what ended up happening in the Gemini 8 accident. So the filmmakers used the gyroscope experience as foreshadowing.
Meanwhile, present-day astronauts are not immune to harrowing brushes with death: A Russian Soyuz rocket carrying two astronauts malfunctioned during launch on October 11, and the crew had to return to Earth in an alarming “ballistic descent.” NASA is currently talking about when and how to send astronauts back to the moon from American soil. The first commercial crew astronauts, who will test spacecraft built by Boeing and SpaceX, were announced in August.
First Man is a timely and sobering reminder of the risks involved in taking these giant leaps.
SAN DIEGO — Mice yanked out of their community and held in solitary isolation show signs of brain damage.
After a month of being alone, the mice had smaller nerve cells in certain parts of the brain. Other brain changes followed, scientists reported at a news briefing November 4 at the annual meeting of the Society for Neuroscience.
It’s not known whether similar damage happens in the brains of isolated humans. If so, the results have implications for the health of people who spend much of their time alone, including the estimated tens of thousands of inmates in solitary confinement in the United States and elderly people in institutionalized care facilities.
The new results, along with other recent brain studies, clearly show that for social species, isolation is damaging, says neurobiologist Huda Akil of the University of Michigan in Ann Arbor. “There is no question that this is changing the basic architecture of the brain,” Akil says.

Neurobiologist Richard Smeyne of Thomas Jefferson University in Philadelphia and his colleagues raised communities of multiple generations of mice in large enclosures packed with toys, mazes and things to climb. When some of the animals reached adulthood, they were taken out and put individually into “a typical shoebox cage,” Smeyne said.
This abrupt switch from a complex society to isolation induced changes in the brain, Smeyne and his colleagues later found. The overall size of nerve cells, or neurons, shrank by about 20 percent after a month of isolation. That shrinkage held roughly steady over three months as mice remained in isolation.

To the researchers’ surprise, after a month of isolation, the mice’s neurons had a higher density of spines — structures for making neural connections — on message-receiving dendrites. An increase in spines is a change that usually signals something positive. “It’s almost as though the brain is trying to save itself,” Smeyne said.
But by three months, the density of dendritic spines had decreased back to baseline levels, perhaps a sign that the brain couldn’t save itself when faced with continued isolation. “It’s tried to recover, it can’t, and we start to see these problems,” Smeyne said.
The researchers uncovered other worrisome signals, too, including reductions in a protein called BDNF, which spurs neural growth. Levels of the stress hormone cortisol changed, too. Compared with mice housed in groups, isolated mice also had more broken DNA in their neurons.
The researchers studied neurons in the sensory cortex, a brain area involved in taking in information, and the motor cortex, which helps control movement. It’s not known whether similar effects happen in other brain areas, Smeyne says.
It’s also not known how the neural changes relate to mice’s behavior. In people, long-term isolation can lead to depression, anxiety and psychosis. Brainpower is affected, too. Isolated people develop problems reasoning, remembering and navigating.
Smeyne is conducting longer-term studies aimed at figuring out the effects of neuron shrinkage on thinking skills and behavior. He and his colleagues also plan to return isolated mice to their groups to see if the brain changes can be reversed. Those types of studies get at an important issue, Akil says. “The question is, ‘When is it too far gone?’”
Locust: The Opera finds a novel way to doom a soprano: species extinction.
The libretto, written by entomologist Jeff Lockwood of the University of Wyoming in Laramie, features a scientist, a rancher and a dead insect. The scientist tenor agonizes over why the Rocky Mountain locust went extinct at the dawn of the 20th century. He comes up with hypotheses, three of which unravel to music and frustration.
The project hatched in 2014. “Jeff got in his head, ‘Oh, opera is a good way to tell science stories,’ which takes a creative mind to think that,” says Anne Guzzo, who composed the music. Guzzo teaches music theory and composition at the University of Wyoming.

The Rocky Mountain locust brought famine and ruin to farms across the western United States. “This was a devastating pest that caused enormous human suffering,” Lockwood says. Epic swarms would suddenly descend on and eat vast swaths of cropland. “On the other hand, it was an iconic species that defined and shaped the continent.”

Lockwood had written about the locust’s mysterious and sudden extinction in the 2004 book Locust, but the topic “begged in my mind for the grandeur of opera.” He spent several years mulling how to create a one-hour opera for three singers about the swarming grasshopper species. Then the ghost of Hamlet’s father, in the opera “Amleto,” based on Shakespeare’s play, inspired a breakthrough. Lockwood imagined a spectral soprano locust, who haunted a scientist until he figured out what killed her kind.
To make one locust soprano represent trillions, Guzzo challenged her music theory class to find ways of evoking the sound of a swarm. They tried snapping fingers, rattling cardstock and crinkling cellophane. But “the simplest answer was the most elegant,” Guzzo says — tasking the audience with shivering sheets of tissue paper in sequence, so that a great wave of rustling swept through the auditorium.
For the libretto, Lockwood took an unusually data-driven approach. After surveying opera lengths and word counts, he paced his work at 25 to 30 words per minute, policing himself sternly. If a scene was long by two words, he’d find two to cut. He wrote the dialogue not in verse, but as conversation, some of it a bit professorial. Guzzo asked for a few line changes. “I just couldn’t get ‘manic expressions of fecundity’ to fit where I wanted it to,” she says.

Eventually, the scientist solves the mystery, but takes no joy in telling the beautiful locust ghost that humans had unwittingly doomed her kind by destroying vital locust habitat. For tragedy, Lockwood says, “there has to be a loss tinged with a kind of remorse.”
The opera, performed twice in Jackson, Wyo., will next be staged in March in Agadir, Morocco.
Martha Carlin married the love of her life in 1995. She and John Carlin had dated briefly in college in Kentucky, then lost touch until a chance meeting years later at a Dallas pub. They wed soon after and had two children. John worked as an entrepreneur and stay-at-home dad. In his free time, he ran marathons.
Almost eight years into their marriage, the pinky finger on John’s right hand began to quiver. So did his tongue. Most disturbing for Martha was how he looked at her. For as long as she’d known him, he’d had a joy in his eyes. But then, she says, he had a stony stare, “like he was looking through me.” In November 2002, a doctor diagnosed John with Parkinson’s disease. He was 44 years old.
Carlin made it her mission to understand how her seemingly fit husband had developed such a debilitating disease. “The minute we got home from the neurologist, I was on the internet looking for answers,” she recalls. She began consuming all of the medical literature she could find.
With her training in accounting and corporate consulting, Carlin was used to thinking about how the many parts of large companies came together as a whole. That kind of wide-angle perspective made her skeptical that Parkinson’s, which affects half a million people in the United States, was just a malfunction in the brain. “I had an initial hunch that food and food quality was part of the issue,” she says. If something in the environment triggered Parkinson’s, as some theories suggest, it made sense to her that the disease would involve the digestive system. Every time we eat and drink, our insides encounter the outside world.
John’s disease progressed slowly and Carlin kept up her research. In 2015, she found a paper titled, “Gut microbiota are related to Parkinson’s disease and clinical phenotype.” The study, by neurologist Filip Scheperjans of the University of Helsinki, asked two simple questions: Are the microorganisms that populate the guts of Parkinson’s patients different than those of healthy people? And if so, does that difference correlate with the stooped posture and difficulty walking that people with the disorder experience? Scheperjans’ answer to both questions was yes.
Carlin had picked up on a thread from one of the newest areas of Parkinson’s research: the relationship between Parkinson’s and the gut.

Other than a small fraction of cases that are inherited, the cause of Parkinson’s disease is unknown. What is known is that something kills certain nerve cells, or neurons, in the brain. Abnormally misfolded and clumped proteins are the prime suspect. Some theories suggest a possible role for head trauma or exposure to heavy metals, pesticides or air pollution.

People with Parkinson’s often have digestive issues, such as constipation, long before the disease appears. Since the early 2000s, scientists have been gathering evidence that the malformed proteins in the brains of Parkinson’s patients might actually first appear in the gut or nose (people with Parkinson’s also commonly lose their sense of smell). From there, the theory goes, these proteins work their way into the nervous system. Scientists don’t know exactly where in the gut the misfolded proteins come from, or why they form, but some early evidence points to the body’s internal microbial ecosystem.

In the latest salvo, scientists from Sweden reported in October that people who had their appendix removed had a lower risk of Parkinson’s years later (SN: 11/24/18, p. 7). The job of the appendix, which is attached to the colon, is a bit of a mystery. But the organ may play an important role in intestinal health.
If the gut connection theory proves true — still a big if — it could open up new avenues to one day treat or at least slow the disease.
“It really changes the concept of what we consider Parkinson’s,” Scheperjans says. Maybe Parkinson’s isn’t a brain disease that affects the gut. Perhaps, for many people, it’s a gut disease that affects the brain.
Gut feeling

London physician James Parkinson wrote “An essay on the shaking palsy” in 1817, describing six patients with unexplained tremors. Some also had digestive problems. (“Action of the bowels had been very much retarded,” he reported of one man.) He treated two people with calomel — a toxic, mercury-based laxative of the time — and noted that their tremors subsided.
But the digestive idiosyncrasies of the disease that later bore Parkinson’s name largely faded into the background for the next two centuries, until neuroanatomists Heiko Braak and Kelly Del Tredici, now at the University of Ulm in Germany, proposed that Parkinson’s disease might arise from the intestine. Writing in Neurobiology of Aging in 2003, they and their colleagues based their theory on autopsies of Parkinson’s patients. The researchers were looking for Lewy bodies, which contain clumps of a protein called alpha-synuclein. The presence of Lewy bodies in the brain is a hallmark of Parkinson’s, though their exact role in the disease is still under investigation.
Lewy bodies form when alpha-synuclein, which is produced by neurons and other cells, starts curdling into unusual strands. The body encapsulates the abnormal alpha-synuclein and other proteins into the round Lewy body bundles. In the brain, Lewy bodies collect in the cells of the substantia nigra, a structure that helps orchestrate movement. By the time symptoms appear, much of the substantia nigra is already damaged.
Substantia nigra cells produce the chemical dopamine, which is important for movement. Levodopa, the main drug prescribed for Parkinson’s, is a synthetic replacement for dopamine. The drug has been around for a half-century, and while it can alleviate symptoms for a while, it does not slow the destruction of brain cells.
In patient autopsies, Braak and his team tested for the presence of Lewy bodies, as well as abnormal alpha-synuclein that had not yet become bundled together. Based on comparisons with people without Parkinson’s, the researchers found signs that Lewy bodies start to form in the nasal passages and intestine before they show up in the brain. Braak’s group proposed that Parkinson’s disease develops in stages, migrating from the gut and nose into the nerves to reach the brain.
Neural highway

Today, the idea that Parkinson’s might arise from the intestine, not the brain, “is one of the most exciting things in Parkinson’s disease,” says Heinz Reichmann, a neurologist at the University of Dresden in Germany. The Braak theory couldn’t explain how the Lewy bodies reach the brain, but Braak speculated that some sort of pathogen, perhaps a virus, might travel along the body’s nervous system, leaving a trail of Lewy bodies.
There is no shortage of passageways: The intestine contains so many nerves that it’s sometimes called the body’s second brain. And the vagus nerve offers a direct connection between those nerves in the gut and the brain (SN: 11/28/15, p. 18).
In mice, alpha-synuclein can indeed migrate from the intestine to the brain, using the vagus nerve like a kind of intercontinental highway, as Caltech researchers demonstrated in 2016 (SN: 12/10/16, p. 12). And Reichmann’s experiments have shown that mice that eat the pesticide rotenone develop symptoms of Parkinson’s. Other teams have shown similar reactions in mice that inhale the chemical. “What you sniff, you swallow,” he says.
To look at this idea another way, researchers have examined what happens to Parkinson’s risk when people have a weak or missing vagus nerve connection. There was a time when doctors thought that an overly eager vagus nerve had something to do with stomach ulcers. Starting around the 1970s, many patients had the nerve clipped as an experimental means of treatment, a procedure called a vagotomy.

In one of the latest studies on vagotomy and Parkinson’s, researchers examined more than 9,000 patients with vagotomies, using data from a nationwide patient registry in Sweden. Among people who had the nerve cut down low, just above the stomach, the risk of Parkinson’s began dropping five years after surgery, eventually reaching a difference of about 50 percent compared with people who hadn’t had a vagotomy, the researchers reported in 2017 in Neurology.

The studies are suggestive, but by no means definitive. And the vagus nerve may not be the only possible link the gut and brain share. The body’s immune system might also connect the two, as one study published in January in Science Translational Medicine found. Study leader Inga Peter, a genetic epidemiologist at the Icahn School of Medicine at Mount Sinai in New York City, was looking for genetic contributors to Crohn’s disease, an inflammatory bowel condition that affects close to 1 million people in the United States.
She and a worldwide team studied about 2,000 people from an Ashkenazi Jewish population, which has an elevated risk of Crohn’s, and compared them with people without the disease. The research led Peter and colleagues to suspect the role of a gene called LRRK2. That gene is involved in the immune system — which mistakenly attacks the intestine in people who have Crohn’s. So it made sense for a variant of that gene to be involved in inflammatory disease. The researchers were thrown, however, when they discovered that versions of the gene also appeared to increase the risk for Parkinson’s disease.
“We refused to believe it,” Peter says. The finding, although just a correlation, suggested that whatever the gene was doing to the intestine might have something to do with Parkinson’s. So the team investigated the link further, reporting results in the August JAMA Neurology.
In their analysis of a large database of health insurance claims and prescriptions, the scientists found more evidence of inflammation’s role. People with inflammatory bowel disease were about 30 percent more likely to develop Parkinson’s than people without it. But among those who had filled prescriptions for anti-inflammatory medications known as anti-tumor necrosis factor drugs, which the researchers used as a marker for reduced inflammation, Parkinson’s risk was 78 percent lower than in people who had not filled prescriptions for the drugs.
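Those percentages express relative risk: the rate of disease in one group divided by the rate in a comparison group. A toy Python calculation, with invented counts rather than the study’s data, shows how the arithmetic works.

```python
# Toy relative-risk arithmetic to unpack the percentages above.
# Counts are invented for illustration, not from the JAMA Neurology study.

def relative_risk(cases_a, n_a, cases_b, n_b):
    """Risk in group A divided by risk in group B."""
    return (cases_a / n_a) / (cases_b / n_b)

# 130 Parkinson's cases per 100,000 people with IBD vs. 100 per 100,000 without
rr = relative_risk(130, 100_000, 100, 100_000)
print(f"{rr - 1:.0%} more likely")    # -> 30% more likely

# 22 cases per 100,000 anti-TNF users vs. 100 per 100,000 nonusers
rr_tnf = relative_risk(22, 100_000, 100, 100_000)
print(f"{1 - rr_tnf:.0%} lower risk")  # -> 78% lower risk
```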
Belly bacteria

Like Inga Peter, microbiologist Sarkis Mazmanian of Caltech came upon Parkinson’s disease almost by accident. He had long studied how the body’s internal bacteria interact with the immune system. At lunch one day with a colleague who was studying autism using a mouse version of the disease, Mazmanian asked if he could take a look at the animals’ intestines. Because of the high density of nerves in the intestine, he wanted to see if the brain and gut were connected in autism.
Neurons in the gut “are literally one cell layer away from the microbes,” he says. “That made me feel that at least the physical path or conduit was there.” He began to study autism, but wanted to switch to a brain disease with more obvious physical symptoms. When he learned that people with Parkinson’s disease often have a long history of digestive problems, he had his subject.
Mazmanian’s group examined mice that were genetically engineered to overproduce alpha-synuclein. He wanted to know whether the presence or absence of gut bacteria influenced symptoms that developed in the mice.
The results, reported in Cell in 2016, showed that when the mice were raised germ free — meaning their insides had no microorganisms — they showed no signs of Parkinson’s. The animals had no telltale gait or balance problems and no constipation, even though their bodies made alpha-synuclein (SN: 12/24/16 & 1/7/17, p. 10). “All the features of Parkinson’s in the animals were gone when the animals had no microbiome,” he says.
However, when gut microbes from people diagnosed with Parkinson’s were transplanted into the germ-free mice, the mice developed symptoms of the disease — symptoms that were much more severe than those in mice transplanted with microbes from healthy people.
Mazmanian suspects that something in the microbiome triggers the misfolding of alpha-synuclein. But this has not been tested in humans, and he is quick to say that this is just one possible explanation for the disease. “There’s likely no one smoking gun,” he says.
Microbial forces

If the microbiome is involved, what exactly is it doing to promote Parkinson’s? Microbiologist Matthew Chapman of the University of Michigan in Ann Arbor thinks it may have something to do with chemical signals that bacteria send to the body. Chapman studies biofilms, which occur when bacteria form resilient colonies. (Think of the slime on the inside of a drain pipe.)
Part of what makes biofilms so hard to break apart is that fibers called amyloids run through them. Amyloids are tight stacks of proteins, like columns of Legos. Scientists have long suspected that amyloids are involved in degenerative diseases of the brain, including Alzheimer’s. In Parkinson’s, amyloid forms of alpha-synuclein are found in Lewy bodies.
Despite amyloids’ bad reputation, the fibers themselves aren’t always undesirable, Chapman says. Sometimes they may provide a good way of storing proteins for future use, to be snapped off brick by brick as needed. Perhaps it’s only when amyloids form in the wrong place, like the brain, that they contribute to disease. Chapman’s lab group has found that E. coli bacteria, part of the body’s normal microbial population, produce amyloid forms of some proteins when they are under stress.
When gut bacteria produce amyloids, the body’s own cells could also be affected, wrote Chapman in 2017 in PLOS Pathogens with an unlikely partner: neurologist Robert Friedland of the University of Louisville School of Medicine in Kentucky. “This is a difficult field to study because it’s on the border of several fields,” Friedland says. “I’m a neurologist who has little experience in gastroenterology. When I talked about this to my colleagues who are gastroenterologists, they’ve never heard that bacteria make amyloid.”

Friedland and collaborators reported in 2016 in Scientific Reports that when E. coli in the intestines of rats started to produce amyloid, alpha-synuclein in the rats’ brains also congealed into the amyloid form. In their 2017 paper, Chapman and Friedland suggested that the immune system’s reaction to the amyloid in the gut might have something to do with triggering amyloid formation in the brain.
In other words, when gut bacteria get stressed and start to produce their own amyloids, those microbes may be sending cues to nearby neurons in the intestine to follow suit. “The question is, and it’s still an outstanding question, what is it that these bacteria are producing that is, at least in animals, causing alpha-synuclein to form amyloids?” Chapman says.
Head for a cure

There is, in fact, a long list of questions about the microbiome, says Scheperjans, the neurologist whose paper Martha Carlin first spotted. So far, studies of the microbiomes of human patients are largely limited to simple observations like his, and the potential for a microbiome connection has yet to reach deeply into the neurology community. But in October, for the second year in a row, Scheperjans says, the International Congress of Parkinson’s Disease and Movement Disorders held a panel discussing connections to the microbiome.
“I got interested in the gastrointestinal aspects because the patients complained so much about it,” he says. While his study found definite differences in the bacteria of people with Parkinson’s, it’s still too early to know how that might matter. But Scheperjans hopes that one day doctors may be able to test for microbiome changes that put people at higher risk for Parkinson’s, and restore a healthy microbe population through diet or some other means to delay or prevent the disease.

One way to slow the disease might be shutting down the mobility of misfolded alpha-synuclein before it has even reached the brain. In Science in 2016, neuroscientist Valina Dawson and colleagues at Johns Hopkins University School of Medicine and elsewhere described using an antibody to halt the spread of bad alpha-synuclein from cell to cell. The researchers are working now to develop a drug that could do the same thing.
The goal is to one day test for the early development of Parkinson’s and then be able to tell a patient, “Take this drug and we’re going to try to slow and prevent progression of disease,” she says.
For her part, Carlin is doing what she can to speed research into connections between the microbiome and Parkinson’s. She quit her job, sold her house and drained her retirement account to pour money into the cause. She donated to the University of Chicago to study her husband’s microbiome. And she founded a company called the BioCollective to aid in microbiome research, providing free collection kits to people with Parkinson’s. The 15,000 microbiome samples she has collected so far are available to researchers.
Carlin admits that the possibility of a gut connection to Parkinson’s can be a hard sell. “It’s a difficult concept for people to wrap their head around when you are taking a broad view,” she says. As she searches for answers, her husband, John, keeps going. “He drives, he runs biking programs in Denver for people with Parkinson’s,” she says. Anything to keep the wheels turning toward the future.
Race should no longer be used to describe populations in most genetics studies, a panel of experts says.
Using race and ethnicity to describe study participants gives the mistaken impression that humans can be divided into distinct groups. Such labels have been used to stigmatize groups of people, but do not explain biological and genetic diversity, the panel convened by the U.S. National Academies of Sciences, Engineering and Medicine said in a report on March 14.

In particular, the term Caucasian should no longer be used, the committee recommends. The term, coined in the 18th century by German scientist Johann Friedrich Blumenbach to describe what he determined was the most beautiful skull in his collection, carries the false notion of white superiority, the panel says.
Worse, the moniker “has also acquired today the connotation of being an objective scientific term, and that’s what really led the committee to take objection with it,” says Ann Morning, a sociologist at New York University and a member of the committee that wrote the report. “It tends to reinforce this erroneous belief that racial categories are somehow objective and natural characterizations of human biological difference. We felt that it was a term that … should go into the dustbin of history.”
Similarly, the term “black race” shouldn’t be used because it implies that Black people are a distinct group, or race, that can be objectively defined, the panel says.
Racial definitions are problematic “because not only are they stigmatizing, they are historically wrong,” says Ambroise Wonkam, a medical geneticist at Johns Hopkins University and president of the African Society of Human Genetics. Race is often used as a proxy for genetic diversity. But “race cannot be used to capture diversity at all. Race doesn’t exist. There is only one race, the human race,” says Wonkam, who was not involved with the National Academies’ panel.
Race might be used in some studies to determine how genetic and social factors contribute to health disparities (SN: 4/5/22), but beyond that race has no real value in genetic research, Wonkam adds.
Researchers could use other identifiers, including geographical ancestry, to define groups of people in the study, Wonkam says. But those definitions need to be precise.
For instance, some researchers group Africans by language groups. But a Bantu-speaking person from Tanzania or Nigeria, where malaria is endemic, would have a much higher genetic risk of sickle cell disease than a Bantu-speaking person whose ancestors are from South Africa, where malaria has not existed for at least 1,000 years. (Changes in genes that make hemoglobin can protect against malaria (SN: 5/2/11), but cause life-threatening sickle cell disease.)

Genetic studies also have to account for movements of people and mixture between multiple groups, Wonkam says. And labeling must be consistent for all groups in the study, he says. Current studies sometimes compare continent-wide racial groups, such as Asian, with national groups, such as French or Finnish, and ethnic groups, such as Hispanic.
An argument for keeping race in rare cases

Removing race as a descriptor may be helpful for some groups, such as people of African descent, says Joseph Yracheta, a health disparities researcher and the executive director of the Native BioData Consortium, headquartered on the Cheyenne River Sioux reservation in South Dakota. “I understand why they want to get rid of race science for themselves, because in their case it’s been used to deny them services,” he says.
But Native Americans’ story is different, says Yracheta, who was not part of the panel. Native Americans’ unique evolutionary history has made them a valuable resource for genetics research. A small starting population and many thousands of years of isolation from humans outside the Americas have given Native Americans and Indigenous people in Polynesia and Australia some genetic features that may make it easier for researchers to find variants that contribute to health or disease, he says. “We’re the Rosetta stone for the rest of the planet.”
Native Americans “need to be protected, because not only are our numbers small, but we keep having things taken away from us since 1492. We don’t want this to be another casualty of colonialism.” Removing the label of Indigenous or Native American may erode tribal sovereignty and control over genetic data, he says.
The panel does recommend that genetic researchers clearly state why they used a particular descriptor, and that they involve study populations in making decisions about which labels to use.
That community input is essential, Yracheta says. The recommendations have no legal or regulatory weight. So he worries that this lack of teeth may allow researchers to ignore the wishes of study participants without fear of penalty.
Still seeking diversity in research participants

Genetics research has suffered from a lack of diversity of participants (SN: 3/4/21). To counteract the disparities, U.S. government regulations require researchers funded by the National Institutes of Health to collect data on the race and ethnicity of study participants. But because those racial categories are too broad and don’t consider the social and environmental conditions that may affect health, the labels are not helpful in most genetic analyses, the panel concluded.
Removing racial labels won’t hamper diversity efforts, as researchers will still seek out people from different backgrounds to participate in studies, says Brendan Lee, who is president of the American Society of Human Genetics. But taking race out of the equation should encourage researchers to think more carefully about the type of data they are collecting and how it might be used to support or refute racism, says Lee, a medical geneticist at Baylor College of Medicine in Houston, who was not part of the panel.
The report offers decision-making tools for determining what descriptors are appropriate for particular types of studies. But “while it is a framework, it is not a recipe where in every study we do A, B and C,” Lee says.
Researchers probably won’t instantly adopt the new practices, Lee says. “It is a process that will take time. I don’t think it is something we can expect in one week or one evening that we’ll all change over to this, but it is a very important first step.”
Just a few powerful storms in Antarctica can have an outsized effect on how much snow parts of the southernmost continent get. Those ephemeral storms, preserved in ice cores, might give a skewed view of how quickly the continent’s ice sheet has grown or shrunk over time.
Relatively rare extreme precipitation events are responsible for more than 40 percent of the total annual snowfall across most of the continent — and in some places, as much as 60 percent, researchers report March 22 in Geophysical Research Letters.

Climatologist John Turner of the British Antarctic Survey in Cambridge and his colleagues used regional climate simulations to estimate daily precipitation across the continent from 1979 to 2016. Then, the team zoomed in on 10 locations — representing different climates from the dry interior desert to the often snowy coasts and the open ocean — to determine regional differences in snowfall.
While snowfall amounts vary greatly by location, extreme events packed the biggest wallop along Antarctica’s coasts, especially on the floating ice shelves, the researchers found. For instance, the Amery ice shelf in East Antarctica gets roughly half of its annual precipitation — which typically totals about half a meter of snow — in just 10 days, on average. In 1994, the ice shelf got 44 percent of its entire annual precipitation on a single day in September.
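The statistic behind results like these is simple to compute from a daily precipitation series: rank the days and ask what fraction of the annual total the few largest contribute. Here is a minimal Python sketch, with a synthetic year of daily values standing in for the team’s model output.

```python
import random

# Sketch of the paper's core statistic: what share of a year's precipitation
# falls on the N biggest days. The synthetic series below just mimics a
# skewed precipitation record; it is not the study's model output.

random.seed(42)
# 365 days: mostly trace amounts, plus a handful of large storm totals
daily_mm = ([random.expovariate(1.0) for _ in range(355)] +
            [random.uniform(20, 60) for _ in range(10)])

def top_n_share(series, n):
    """Fraction of the annual total contributed by the n wettest days."""
    return sum(sorted(series, reverse=True)[:n]) / sum(series)

print(f"Top 10 days: {top_n_share(daily_mm, 10):.0%} of annual precipitation")
```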
Ice cores aren’t just a window into the past; they are also used to predict the continent’s future in a warming world. So characterizing these coastal regions is crucial for understanding Antarctica’s ice sheet — and its potential future contribution to sea level rise.

Editor’s note: This story was updated April 5, 2019, to correct that the results were reported March 22 (not March 25).
We live in a sea of neutrinos. Every second, trillions of them pass through our bodies. They come from the sun, nuclear reactors, collisions of cosmic rays hitting Earth’s atmosphere, even the Big Bang. Among fundamental particles, only photons are more numerous. Yet because neutrinos barely interact with matter, they are notoriously difficult to detect.
The existence of the neutrino was first proposed in the 1930s and then verified in the 1950s (SN: 2/13/54). Decades later, much about the neutrino — named in part because it has no electric charge — remains a mystery, including how many varieties of neutrinos exist, how much mass they have, where that mass comes from and whether they have any magnetic properties.

These mysteries are at the heart of Ghost Particle by physicist Alan Chodos and science journalist James Riordon. The book is an informative, easy-to-follow introduction to the perplexing particle. Chodos and Riordon guide readers through how the neutrino was discovered, what we know — and don’t know — about it, and the ongoing and future experiments that (fingers crossed) will provide the answers.
It’s not just neutrino physicists who await those answers. Neutrinos, Riordon says, “are incredibly important both for understanding the universe and our existence in it.” Unmasking the neutrino could be key to unlocking the nature of dark matter, for instance. Or it could clear up the universe’s matter conundrum: The Big Bang should have produced equal amounts of matter and antimatter, the oppositely charged counterparts of electrons, protons and so on. When matter and antimatter come into contact, they annihilate each other. So in theory, the universe today should be empty — yet it’s not (SN: 9/22/22). It’s filled with matter and, for some reason, very little antimatter.
Science News spoke with Riordon, a frequent contributor to the magazine, about these puzzles and how neutrinos could act as a tool to observe the cosmos or even see into our own planet. The following conversation has been edited for length and clarity.
SN: In the first chapter, you list eight unanswered questions about neutrinos. Which is the most pressing to answer?
Riordon: Whether they’re their own antiparticles is probably one of the grandest. The proposal that neutrinos are their own antiparticles is an elegant solution to all sorts of problems, including the existence of this residue of matter we live in. Another one is figuring out how neutrinos fit in the standard model [of particle physics]. It’s one of the most successful theories there is, but it can’t explain the fact that neutrinos have mass.

SN: Why is now a good time to write a book about neutrinos?
Riordon: All of these questions about neutrinos are sort of coming to a head right now — the hints that neutrinos may be their own antiparticles, the issues of neutrinos not quite fitting the standard model, whether there are sterile neutrinos [a hypothetical neutrino that is a candidate for dark matter]. In the next few years, a decade or so, there will be a lot of experiments that will [help answer these questions,] and the resolution either way will be exciting.
SN: Neutrinos could also be used to help scientists observe a range of phenomena. What are some of the most interesting questions neutrinos could help with?
Riordon: There are some observations that simply have to be done with neutrinos, that there are no other technological alternatives for. There’s a problem with using light-based telescopes to look back in history. We have this really amazing James Webb Space Telescope that can see really far back in history. But at some point, when you go far enough back, the universe is basically opaque to light; you can’t see into it. Once we narrow down how to detect and how to measure the cosmic neutrino background [neutrinos that formed less than a second after the Big Bang], it will be a way to look back at the very beginning. Other than with gravitational waves, you can’t see back that far with anything else. So it’ll give us sort of a telescope back to the beginning of the universe.
The other thing is, when a supernova happens, all kinds of really cool stuff happens inside, and you can see it with neutrinos because neutrinos come out immediately in a burst. We call it the “cosmic neutrino bomb,” but you can track the supernova as it’s going along. With light, it takes a while for it to get out [of the stellar explosion]. We’re due for a [nearby] supernova. We haven’t had one since 1987. It was the last visible supernova in the sky and was a boon for research. Now that we have neutrino detectors around the world, this next one is going to be even better [for research], even more exciting.
And if we develop better instrumentation, we could use neutrinos to understand what’s going on in the center of the Earth. There’s no other way that you could probe the center of the Earth. We use seismic waves, but the resolution is really low. So we could resolve a lot of questions about what the planet is made of with neutrinos.
SN: Do you have a favorite “character” in the story of neutrinos?
Riordon: I’m certainly very fond of my grandfather Clyde Cowan [he and Frederick Reines were the first physicists to detect neutrinos]. But Reines is a riveting character. He was poetic. He was a singer. He really was this creative force. I mentioned [in the book] that they put this “SNEWS” sign on their detector for “supernova early warning system,” which sort of echoed the ballistic missile early warning systems at the time [during the Cold War]. That’s so ripe.