The holiday onslaught is upon us. For some families with children, the crush of holiday gifts — while wonderful and thoughtful in many ways — can become nearly unmanageable, cluttering both rooms and minds.
This year, I’m striving for simplicity as I pick a few key presents for my girls. I will probably fail. But it’s a good goal, and one that has some new science to back it. Toddlers play longer and more creatively with toys when there are fewer toys around, researchers report November 27 in Infant Behavior and Development. Researchers led by occupational therapist Alexia Metz at the University of Toledo in Ohio were curious about whether the number of toys would affect how the children played, including how many toys they played with and how long they spent with each toy. The researchers also wondered about children’s creativity, such as the ability to imagine a bucket as a drum or a hat.
In the experiment, 36 children ages 18 to 30 months visited a laboratory playroom twice while cameras caught how they played. On one visit, the room held four toys. On the other visit, the room held 16 toys.
When in the playroom with 16 toys, children played with more toys and spent less time with each one over a 15-minute session, the researchers found. When the same kids were in a room with four toys, they stuck with each toy longer, exploring other toys less over the 15 minutes.
What’s more, the quality of the children’s play seemed to be better when fewer toys were available. The researchers noted more creative uses of the toys when only four were present versus 16. Metz and colleagues noticed that initial attempts to play with a toy were often superficial and simple. But if a kid’s interest stuck, those early pokes and bangs turned into more sophisticated ways of playing. This type of sustained engagement might help children learn to focus their attention, a skill Metz likened to a “muscle that they have to exercise.” This attentional workout might not happen if kids are perpetually exposed to lots of distracting toys.
The toys used in the study didn’t include electronic devices such as tablets. Only one of the four toys and only four of the 16 toys used batteries. Noisy toys may pose problems of their own. They can cut down on parent-child conversations, scientists have found. It’s possible that electronics such as televisions or tablets would have even greater allure than other toys.
The researchers also don’t know what would happen if the study were done in kids’ homes with their own toys. It’s possible that the novelty of the new place and the new toys influenced the toddlers’ behavior. (As everyone knows, the toys at a friend’s house are way better than the toys a kid has at home, even when they are literally the exact same toy.)
The results don’t pinpoint the optimal number of toys for child development, Metz says. “It’s a little preliminary to say this is good and that is bad,” she says. But she points out that many kids are not in danger of having too few toys. In fact, the average number of toys the kids in the study had was 87. Five families didn’t even provide toy counts, instead answering “a lot.”
“Because of the sheer abundance of toys, there’s no harm in bringing out a few at a time,” Metz says.
That’s an idea that I’ve seen floating around, and I like it. I’ve already started packing some of my kids’ toys out of sight, with the idea to switch the selection every so often (or more likely, never). Another recommendation I’ve seen is to immediately hide away some of the new presents, which aren’t likely to be missed in the holiday pandemonium, and break them out months later when the kids need a thrill.
If more nerve cells mean more smarts, then dogs beat cats, paws down, a new study on carnivores shows. That harsh reality may shock some friends of felines, but scientists say the real surprises are inside the brains of less popular carnivores. Raccoon brains are packed with nerve cells, for instance, while brown bear brains are sorely lacking.
By comparing the numbers of nerve cells, or neurons, among eight species of carnivores (ferret, banded mongoose, raccoon, cat, dog, hyena, lion and brown bear), researchers now have a better understanding of how different-sized brains are built. This neural accounting, described in an upcoming Frontiers in Neuroanatomy paper, may ultimately help reveal how brain features relate to intelligence. For now, the multispecies tally raises more questions than it answers, says zoologist Sarah Benson-Amram of the University of Wyoming in Laramie. “It shows us that there’s a lot more out there that we need to study to really be able to understand the evolution of brain size and how it relates to cognition,” she says.
Neuroscientist Suzana Herculano-Houzel of Vanderbilt University in Nashville and colleagues gathered brains from the different species of carnivores. For each animal, the researchers whipped up batches of “brain soup,” tissue dissolved in a detergent. Using a molecule that attaches selectively to neurons in this slurry, researchers could count the number of neurons in each bit of brain real estate.
For most animals, the team found the expected numbers of neurons, given a certain brain size. Those expectations came in part from work on other mammals’ brains. That research showed that with the exception of primates (which pack in lots of neurons without growing bigger brains), there’s a predictable relationship between the size of the cerebral cortex — the wrinkly outer layer of the brain that’s involved in thinking, learning and remembering — and the number of neurons contained inside it.
Feeling brainy (interactive graphic): Comparing brain size and number of nerve cells in the cerebral cortex among several animal species revealed some surprises. Golden retrievers, for example, have many more nerve cells than cats, and brown bears have an unexpectedly low number of nerve cells given the relatively large size of their brain. Raccoons have a surprising number of nerve cells considering their small noggin. It’s too early, however, to say how neuron number relates to animal intelligence.
But some of the larger carnivores with correspondingly larger cortices had surprisingly few neurons. In fact, a golden retriever — with 623 million neurons packed into its doggy cortex — topped both lions and bears, the team found. (For scale, humans have roughly 16.3 billion neurons in the cortex.)
The brown bear is especially lacking. Despite being about 10 times bigger than a cat’s cortex, the bear’s cortex contained roughly the same number of neurons, about 250 million. “It’s just flat out missing 80 percent of the neurons that you would expect,” Herculano-Houzel says. She suspects that there’s a limit to how much food a big predator can catch and eat, especially one that hibernates. That caloric limit might also cap the number of energetically expensive neurons.
Another exception — but in the opposite direction — was the raccoon, which has a cat-sized brain but a doglike neuron number, a finding that fits the nocturnal mammal’s reputation as a clever problem-solver. Benson-Amram cautions that it’s not clear how these neuron numbers relate to potential intelligence. Raccoons are very dexterous, she says, and it’s possible that a beefed-up brain region that handles touch, part of the cortex, could account for the neuron number.
Herculano-Houzel expected large predators such as lions to have lots of neurons. “We went into this study with the expectation that being a predator would require smarts,” she says. But in many cases, a predator didn’t seem to have more neurons than its prey. A lion, for instance, has about 545 million neurons in its cerebral cortex, while a blesbok antelope, which has a slightly smaller cortex, has about 571 million, the researchers previously found.
It’s too early to say how neuron number relates to animal intelligence. By counting neurons, “we’ve figured out one side of that equation,” Herculano-Houzel says. Those counts still need to be linked to animals’ thinking abilities.
Some studies, including one by Benson-Amram, have found correlations between brain size, neuron number and problem-solving skills across species. But finding ways to measure intelligence across different species is challenging, she says. “I find it to be a really fun puzzle, but it’s a big challenge to think, ‘Are we asking the right questions?’”
The hardy souls who manage to push shorts season into December might feel some kinship with the thirteen-lined ground squirrel.
The critter hibernates all winter, but even when awake, it’s less sensitive to cold than its nonhibernating relatives, a new study finds. That cold tolerance is linked to changes in a specific cold-sensing protein in the sensory nerve cells of the ground squirrels and another hibernator, the Syrian hamster, researchers report in the Dec. 19 Cell Reports. The altered protein may be an adaptation that helps the animals drift into hibernation. In experiments, mice, which don’t hibernate, strongly preferred to hang out on a hot plate that was 30° Celsius versus one that was cooler. Syrian hamsters (Mesocricetus auratus) and the ground squirrels (Ictidomys tridecemlineatus), however, didn’t seem to notice the chill until plate temperatures dipped below 10° Celsius, notes study coauthor Elena Gracheva, a neurophysiologist at Yale University.
Further work revealed that a cold-sensing protein called TRPM8 wasn’t as easily activated by cold in the squirrels and hamsters as in rats. Found in the sensory nerve cells of vertebrates, TRPM8 typically sends a sensation of cold to the brain when activated by low temperatures. It’s what makes your fingertips feel chilly when you’re holding a glass of ice water. It’s also responsible for the cooling sensation in your mouth after you chew gum made with menthol.
The researchers looked at the gene that contains the instructions to make the TRPM8 protein in ground squirrels and switched up parts of it to find regions responsible for tolerance to cold. The adaptation could be pinned on six amino acid changes in one section of the squirrel gene, the team found. Cutting-and-pasting the rat version of this gene fragment into the squirrel gene led to a protein that was once again cold-sensitive. Hamster TRPM8 proteins also lost their cold tolerance with slightly different genetic tweaks in the same region of the gene.
The fact that it’s possible to make a previously cold-resistant protein sensitive to cold by transferring in a snippet of genetic instructions from a different species is “really quite striking,” says David McKemy, a neurobiologist at the University of Southern California in Los Angeles. As anyone who’s lain awake shivering in a subpar sleeping bag knows, falling asleep while cold is really hard. Hibernation is different from sleep, Gracheva emphasizes, but the squirrels’ and hamsters’ tolerance to cold may help them transition from an active, awake state to hibernation. If an animal feels chilly, its body will expend a lot of energy trying to warm up — and that’ll work against the physiological changes needed to enter hibernation. For example, while hibernating, small mammals like the ground squirrel slow their pulse and breathing and can lower their core body temperature to just a few degrees above freezing.
Modifications to TRPM8 probably aren’t the only factors that help ground squirrels ignore the cold, Gracheva says, especially as the thermometer drops even closer to freezing. “We think this is only part of the mechanism.”
Scientists also aren’t sure exactly how TRPM8 gets activated by cold in the first place. A detailed view of TRPM8’s structure, obtained using cryo-electron microscopy, was published by a different research group online December 7 in Science. “This is a big breakthrough. We were waiting for this structure for a long period of time,” Gracheva says. Going forward, she and colleagues hope that knowing the protein’s structure will help them link genetic adaptations for cold tolerance in TRPM8 with specific structural changes in the protein.
Cold weather often brings with it hot takes on so-called man flu. That’s the phenomenon in which the flu hits men harder than women — or, depending on who you ask, when men exaggerate regular cold symptoms into flu symptoms. In time for the 2017–2018 flu season, one researcher has examined the scientific evidence for and against man flu.
“The concept of man flu, as commonly defined, is potentially unjust,” Kyle Sue, a clinician at Memorial University of Newfoundland in St. John’s, Canada, writes December 11 in BMJ. Motivated by his own memorable bout of flu, he says, Sue began looking into man flu research and summarizes the work in a review article that’s part of BMJ’s Christmas issue, which traditionally features humorous takes on legitimate research. There might be a reason men come across as wimps. In the United States, more men than women died from flu-related causes from 2007 to 2010 across several age groups, researchers reported in the American Journal of Epidemiology in 2013. An analysis of data on the 2004 to 2010 flu seasons in Hong Kong found that in children and adults, males were more likely to be hospitalized for the flu than females.
Sue isn’t the first to make a case for man flu. A prevailing explanation for men’s susceptibility says that women have higher levels of the hormone estradiol, which can boost the immune system, while men have higher levels of testosterone, which can sometimes suppress the immune system. However, these hormones interact with the immune system in other ways as well.
“There is some evidence that men make weaker immune responses to some viruses than women, but how this happens and whether it is seen across all viruses is still unclear to me,” notes John Upham, a professor of respiratory medicine at the University of Queensland in Australia.
Sue’s review also cites evidence that women respond better to some flu shots than men do. Sex differences in immune response could have real consequences when it comes to vaccine choice, Upham says. It’s also unclear what the evolutionary drivers for immune differences between the sexes might be. And studies of how the male and female immune systems respond differently all come with caveats, Sue notes: Such studies are often in mice rather than humans, have limited data or don’t account for health differences such as smoking habits and tendency to go to the doctor. Upham adds that studying differences in flu cases among men in Western versus non-Western societies could reveal the degree to which learned behavior plays a role in “man flu.”
As much as he’d like to help out his half of the species, Sue says, “we cannot yet conclude that this phenomenon is real, but the current evidence is suggestive that it may be.” Not surprisingly, his review has been met with just as much skepticism as previous man flu treatises.
Regardless of the possibility that men may be immunologically weaker than women, Sue says, flu-stricken men and women alike “could benefit from resting in a safe, comfortable place with a recliner and TV.”
Rising carbon dioxide levels could leave some tiny lake dwellers defenseless. Like the oceans, some lakes are experiencing increasing levels of the greenhouse gas, a new study shows. And too much CO2 in the water may leave water fleas, an important part of many lake food webs, too sleepy to fend off predators.
Detailed observations of lake chemistry over long periods of time are rare. But researchers found data from 1981 to 2015 on four reservoirs in Germany, allowing the scientists to calculate how much CO2 levels had risen and how much pH levels, a measure of the water’s acidity, had dropped, the team reports online January 11 in Current Biology.
Rising CO2 in Earth’s atmosphere has also increased levels of the gas dissolved in the oceans, making them more acidic (SN: 5/27/17, p. 11). Studies show that ocean acidification alters the behaviors of marine species (SN Online: 2/2/17). It’s less clear how rising atmospheric CO2 levels are affecting freshwater bodies, or how their denizens are coping with change, says aquatic ecologist Linda Weiss of Ruhr University Bochum in Germany. Comparing the data from the four reservoirs showed that, in those 35 years, the average CO2 level across all lakes rose by about 560 microatmospheres, a unit of pressure. Two of the water bodies experienced a roughly fourfold increase in CO2 levels. For pH, the overall average value dropped from 8.13 to 7.82.
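Because pH is a logarithmic scale, that average drop of 0.31 pH units works out to roughly a doubling of the water’s hydrogen ion concentration (a back-of-the-envelope conversion for scale, not a figure reported in the study):

\[ 10^{\,8.13 - 7.82} = 10^{0.31} \approx 2.0 \]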
In the lab, the team examined the effect that high CO2 had on the behavior of two species of water fleas, pinhead-sized lake dwellers also known as Daphnia. The miniature crustaceans are at the bottom of many freshwater food webs. When predators such as the larvae of phantom midges feed on Daphnia, the predators release a chemical signal that cues various species of water fleas to arm themselves with an array of defenses. Some raise forbidding neck spikes; others grow giant “helmets” that make the critters tougher to swallow.
But the water fleas’ sense of danger seemed to be dulled in waters with high CO2 levels. The team tested the critters in waters containing both chemical predator cues and CO2 at partial pressures of 2,000, 11,000 and 16,000 microatmospheres. Although 2,000 microatmospheres is considered high, it is now common enough in lakes that the team used it as the control case. Both species were less defensive at 11,000 and 16,000 microatmospheres (considered worst-case scenario values for many lakes) — displaying fewer neck spikes or developing smaller crests.
Further tests revealed that the elevated CO2 was responsible, rather than the reduced pH. Although it’s unclear exactly how the elevated carbon dioxide leads Daphnia to lower its defenses, the researchers suggest the CO2 acts as a narcotic and blunts the senses.
The variability between lakes in terms of setting and chemistry makes it difficult to draw firm conclusions from the findings, Weiss says. Many lakes are warming (SN: 5/13/17, p. 18). And many are already saturated in carbon dioxide and expelling it into the atmosphere. Others are absorbing it and becoming more acidic.
It is also unclear how other freshwater species, including predators, might be affected at different CO2 levels and in different environments, says Caleb Hasler, an organismal biologist at the University of Winnipeg in Canada, who was not involved in the study. “There’s been a bit of work done on phytoplankton, some on zooplankton, freshwater fishes and mussels. If anything, the effect seems to be highly variable.”
But studies such as this that show long-term trends in CO2 levels are an important part of solving the puzzle, Hasler says. And “showing that there is an impact on an important species is pretty significant.”
Editor’s Note: This story was updated February 9 to note President Trump’s fiscal year 2019 budget proposal.
A two-year spending package, passed by Congress in the wee hours of February 9 and signed into law by President Trump hours later, could add to the coffers of U.S. science agencies.
The bipartisan deal raises the caps on defense and nondefense discretionary spending by nearly $300 billion overall. Nondefense discretionary spending gets a $63 billion boost in fiscal year 2018, and another $68 billion in FY 2019 (the spending year that starts October 1, 2018). Here’s why that could be good for science: Almost all research agencies, including NASA, EPA, the National Science Foundation and the National Institutes of Health, fall under this nondefense category. (Defense agencies also do a chunk of scientific research.)
But there is a big but. It’s still unclear how any funds will be divvied up among individual agencies and programs. (Early word is that NIH is in line for a $2 billion increase over the two years.) Still, the real details of who gets what in the 2018 budget — including what science will get federal funding support — will come as Congress works on an omnibus appropriations bill, expected in late March.
Trump’s FY 2019 budget proposal, released February 12, includes a last-minute addendum that would keep science spending roughly at 2017 levels for some major research agencies, including NIH, NSF and the Department of Energy Office of Science. But a number of federal research programs and projects remain in Trump’s cross hairs, including five of NASA’s Earth science missions and various research, including climate and environmental science, at the EPA, the National Oceanic and Atmospheric Administration and the U.S. Geological Survey. Whether Congress will go along with Trump’s request for the 2019 budget remains to be seen.
Matt Hourihan, director of the R&D Budget and Policy program at the American Association for the Advancement of Science in Washington, D.C., spoke with Science News February 9 about the prospects for funding for science research. His answers were edited for clarity.
SN: What does the spending deal mean for science research and technology funding?
M.H.: Generally, research and development funding tends to track the discretionary budget pretty closely, though individual agencies may fare a little better or worse in any given year. But most likely we’re looking at a larger increase this year, and then a far more moderate increase next year. Within that context, agencies will fare better or worse based on their current popularity.
SN: Are there any obvious winners or losers?
M.H.: We won’t really know that until the omnibus deal is released. All we have is an overall framework, but spending levels for individual agencies and programs will need to be negotiated and the details released. I would certainly expect more winners than losers, given how large a spending increase we’re talking about. The deal apparently includes some extra funding for NIH, though again we’ll see how the details look.
SN: Could the extra money still be cut?
M.H.: Whatever Congress does, they can, of course, undo. But if they lower the cap next year after raising it, it would be the first time. The downside is, this does add quite a bit to the deficit. With this deal plus the recent tax reform, we’re looking at a potential return to trillion-dollar deficits next year. When deficits get bigger, Congress gets more interested in restraining spending, and trillion-dollar deficits are what got us here in the first place. It’s a catch-22.
SN: How will Trump’s FY 2019 budget proposal impact how the money is divvied up?
M.H.: Last year’s budget proposed big cuts to nondefense spending, and now Congress has gone in the complete opposite direction. We’ll see what the administration does … but if they go for a repeat performance, we could be looking at a pretty irrelevant [Trump] budget.
No wounded left behind — not quite. Ants that have evolved battlefield medevac carry only the moderately wounded home to the nest. There, those lucky injured fighters get fast and effective wound care.
Insect colonies seething with workers may seem unlikely to stage elaborate rescues of individual fighters. Yet for Matabele ants (Megaponera analis) in sub-Saharan Africa — with a mere 1,000 to 2,000 nest mates — treating the wounded can be worth it, says behavioral ecologist Erik Frank at the University of Lausanne in Switzerland. Tales of self-medication pop up across the animal kingdom. For Matabele ants, however, nest cameras plus survival tests show insects treating other adults and improving their chances of survival, he and colleagues report February 14 in Proceedings of the Royal Society B. For treatment boosting others’ survival, Frank says, the closest documented example is humans.
In Ivory Coast, Frank studied Matabele ant colonies that staged three to five termite hunts a day. He and colleagues at the University of Würzburg in Germany published research last year showing that members of a hunting party carry injured comrades home. Frank took a closer look at rescues after he accidentally drove over a Matabele ant column crossing a road. Survivors “were only interested in picking up the ants that were lightly injured, and leaving behind the heavily injured,” he says. When Frank later set injured ants in front of columns trooping home from raids, injured ants missing two legs typically got picked up. Only once did an ant with five missing legs get a lift.
Ants that have lost two legs still have value to a colony, especially in a species where only about 13 new adults a day emerge from pupae. Four-legged ants regain almost the same speed that ants have on six legs, he says. In a typical hunting party, about a third of the ants have survived some injury, but most ants have at least four legs left.
How the ants triage a battlefield evacuation is shaped by the injured ants’ behavior, Frank says. Ants with only moderate injuries, such as two lost legs, emit “help me” pheromones. These ants tuck in their remaining legs and generally cooperate with the rescuers. Not so with ants more seriously hurt, who may not even give off pheromones. Rescuers still stop to investigate. But the seriously injured ants often flail around instead of cooperating, and the rescuers give up.
Frank also has seen ants act more severely injured than they truly are. If the returning fighters bypass them, “they will immediately stand up and run as fast as they can behind the others,” he says. “In humans, it’s a very selfish behavior.” For ants, predators lurk, and the colony benefits by finding the injured first. For injured raiders that do get home, another ant — usually not the carrier — steps in to treat the wound by repeatedly moving her mouthparts over it. When Frank isolated the ants to prevent this wound licking, about 80 percent of injured ants died. When he allowed ants an hour of treatment before isolating them, only 10 percent of them died.
Based on Frank’s observations, others who study ants are now wondering if they also have seen such rescue tactics. Andy Suarez of the University of Illinois at Urbana-Champaign wants another look at big Dinoponera australis that he’s frequently seen prowling for prey despite missing a limb. And Bert Hölldobler wonders whether weaver ants he has seen retrieving injured nest mates after battle were rescuing them. The usual interpretation has been cannibalism, says Hölldobler, at Arizona State University in Tempe.
Frank, however, used bright acrylic spots to track the fate of rescued Matabele ants. They weren’t for lunch.
Galaxies, stars, planets and life are all formed from one essential substance: matter.
But the abundance of matter is one of the biggest unsolved mysteries of physics. The Big Bang, 13.8 billion years ago, spawned equal amounts of matter and its bizarro twin, antimatter. Matter and antimatter partners annihilate when they meet, so an even-steven universe would have ended up full of energy — and nothing else. Somehow, the balance tipped toward matter in the early universe. A beguiling subatomic particle called a neutrino may reveal how that happened. If neutrinos are their own antiparticles — meaning that the neutrino’s matter and antimatter versions are the same thing — the lightweight particle might point to an explanation for the universe’s glut of matter.
So scientists are hustling to find evidence of a hypothetical kind of nuclear decay that can occur only if neutrinos and antineutrinos are one and the same. Four experiments have recently published results showing no hint of the process, known as neutrinoless double beta decay (SN: 7/6/02, p. 10). But another attempt, set to begin soon, may have a fighting chance of detecting this decay, if it occurs. Meanwhile, planning is under way for a new generation of experiments that will make even more sensitive measurements.
“Right now, we’re standing on the brink of what potentially could be a really big discovery,” says Janet Conrad, a neutrino physicist at MIT not involved with the experiments. Each matter particle has an antiparticle, a partner with the opposite electric charge. Electrons have positrons as partners; protons have antiprotons. But it’s unclear how this pattern applies to neutrinos, which have no electric charge.
Rather than having distinct matter and antimatter varieties, neutrinos might be the lone example of a theorized class of particles dubbed Majorana fermions (SN: 8/19/17, p. 8), which are their own antiparticles. “No other particle that we know of could have this property; the neutrino is the only one,” says neutrino physicist Jason Detwiler of the University of Washington in Seattle, who is a member of the KamLAND-Zen and Majorana Demonstrator neutrinoless double beta decay experiments.
Neutrinoless double beta decay is a variation on standard beta decay, a relatively common radioactive process that occurs naturally on Earth. In beta decay, a neutron within an atom’s nucleus converts into a proton, releasing an electron and an antineutrino. The element thereby transforms into another one further along the periodic table. In certain isotopes of particular elements — species of atoms characterized by a given number of protons and neutrons — two beta decays can occur simultaneously, emitting two electrons and two antineutrinos. Although double beta decay is exceedingly rare, it has been detected. If the neutrino is its own antiparticle, a neutrino-free version of this decay might also occur: In a rarity atop a rarity, the antineutrino emitted in one of the two simultaneous beta decays might be reabsorbed by the other, resulting in no escaping antineutrinos.
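Written out in the standard shorthand for nuclear reactions (a summary of the processes described above, not notation taken from any of the experiments’ papers), the three decays compare like this:

\[ n \rightarrow p + e^- + \bar{\nu}_e \quad \text{(beta decay)} \]
\[ 2n \rightarrow 2p + 2e^- + 2\bar{\nu}_e \quad \text{(double beta decay)} \]
\[ 2n \rightarrow 2p + 2e^- \quad \text{(neutrinoless double beta decay)} \]

With no antineutrinos escaping, the two electrons in the neutrinoless version carry the full decay energy, which is the telltale signature experiments look for.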
Such a process “creates asymmetry between matter and antimatter,” says physicist Giorgio Gratta of Stanford University, who works on the EXO-200 neutrinoless double beta decay experiment. In typical beta decay, one matter particle emitted — the electron — balances out the antimatter particle — the antineutrino. But in neutrinoless double beta decay, two electrons are emitted with no corresponding antimatter particles. Early in the universe, other processes might also have behaved in a similarly asymmetric way.
On the hunt
To spot the unusual decay, scientists are building experiments filled with carefully selected isotopes of certain elements and monitoring the material for electrons of a particular energy, which would be released in the neutrinoless decay.
If any experiment observes this process, “it would be a huge deal,” says particle physicist Yury Kolomensky of the University of California, Berkeley, a member of the CUORE neutrinoless double beta decay experiment. “It is a Nobel Prize-level discovery.”
Unfortunately, the latest results won’t be garnering any Nobels. In a paper accepted in Physical Review Letters, the GERDA experiment spotted no signs of the decay. Located in the Gran Sasso underground lab in Italy, GERDA looks for the decay of the isotope germanium-76. (The number indicates the quantity of protons and neutrons in the atom’s nucleus.) Since there were no signs of the decay, if the process occurs it must be extremely rare, the scientists concluded, and its half-life must be long — more than 80 trillion trillion years.
Three other experiments have also recently come up empty. The Majorana Demonstrator experiment, located at the Sanford Underground Research Facility in Lead, S.D., which also looks for the decay in germanium, reported no evidence of neutrinoless double beta decay in a paper accepted in Physical Review Letters. Meanwhile, EXO-200, located in the Waste Isolation Pilot Plant, underground in a salt deposit near Carlsbad, N.M., reported no signs of the decay in xenon-136 in a paper published in the Feb. 16 Physical Review Letters.
Likewise, no evidence for the decay materialized in the CUORE experiment, in results reported in a paper accepted in Physical Review Letters. Composed of crystals containing tellurium-130, CUORE is also located in the Gran Sasso underground lab.
The most sensitive search thus far comes from the KamLAND-Zen neutrinoless double beta decay experiment located in a mine in Hida, Japan, which found a half-life longer than 100 trillion trillion years for the neutrinoless double beta decay of xenon-136.
That result means that, if neutrinos are their own antiparticles, their mass has to be less than about 0.061 to 0.165 electron volts depending on theoretical assumptions, the KamLAND-Zen collaboration reported in a 2016 paper in Physical Review Letters. (An electron volt is particle physicists’ unit of energy and mass. For comparison, an electron has a much larger mass of half a million electron volts.)
Neutrinos, which come in three different varieties and have three different masses, are extremely light, but exactly how tiny those masses are is not known. Mass measured by neutrinoless double beta decay experiments is an effective mass, a kind of weighted average of the three neutrino masses. The smaller that mass, the lower the rate of the neutrinoless decays (and therefore the longer the half-life), and the harder the decays are to find.
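For concreteness, the quantity these experiments constrain is usually written as the effective Majorana mass (standard textbook notation, not something quoted from the new papers):

\[ m_{\beta\beta} = \left| \sum_{i=1}^{3} U_{ei}^{2}\, m_i \right| \]

where the m_i are the three neutrino masses and the U_ei are elements of the neutrino mixing matrix. The expected decay rate scales as the square of this quantity, so halving the effective mass means roughly a fourfold longer half-life and a correspondingly harder search.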
KamLAND-Zen looks for decays of xenon-136 dissolved in a tank of liquid. Now, KamLAND-Zen is embarking on a new incarnation of the experiment, using about twice as much xenon, which will reach down to even smaller masses, and even rarer decays. Finding neutrinoless double beta decay may be more likely below about 0.05 electron volts, where neutrino mass has been predicted to lie if the particles are their own antiparticles.
Supersizing the search
KamLAND-Zen’s new experiment is only a start. Decades of additional work may be necessary before scientists clinch the case for or against neutrinos being their own antiparticles. But, says KamLAND-Zen member Lindley Winslow, a physicist at MIT, “sometimes nature is very kind to you.” The experiment could begin taking data as early as this spring, says Winslow, who is also a member of CUORE.
To keep searching, experiments must get bigger, while remaining extremely clean, free from any dust or contamination that could harbor radioactive isotopes. “What we are searching for is a decay that is very, very, very rare,” says GERDA collaborator Riccardo Brugnera, a physicist at the University of Padua in Italy. Anything that could mimic the decay could easily swamp the real thing, making the experiment less sensitive. Too many of those mimics, known as background, could limit the ability to see the decays, or to prove that they don’t occur.
In a 2017 paper in Nature, the GERDA experiment deemed itself essentially free from background — a first among such experiments. Reaching that milestone is good news for the future of these experiments. Scientists from GERDA and the Majorana Demonstrator are preparing to team up on a bigger and better experiment, called LEGEND, and many other teams are also planning scaled-up versions of their current detectors.
Antimatter whodunit
If scientists conclude that neutrinos are their own antiparticles, that fact could reveal why antimatter is so scarce. It could also explain why neutrinos are vastly lighter than other particles. “You can kill multiple problems with one stone,” Conrad says.
Theoretical physicists suggest that if neutrinos are their own antiparticles, undetected heavier neutrinos might be paired up with the lighter neutrinos that we observe. In what’s known as the seesaw mechanism, the bulky neutrino would act like a big kid on a seesaw, weighing down one end and lifting the lighter neutrinos to give them a smaller mass. At the same time, the heavy neutrinos — theorized to have existed at the high energies present in the young universe — could have given the infant cosmos its early preference for matter.
Discovering that neutrinos are their own antiparticles wouldn’t clinch the seesaw scenario. But it would provide a strong hint that neutrinos are essential to explaining where the antimatter went. And that’s a question physicists would love to answer.
“The biggest mystery in the universe is who stole all the antimatter. There’s no bigger theft that has occurred than that,” Conrad says.
Artificial intelligence algorithms may soon bring the diagnostic know-how of an eye doctor to primary care offices and walk-in clinics, speeding up the detection of health problems and the start of treatment, especially in areas where specialized doctors are scarce. The first such program — trained to spot symptoms of diabetes-related vision loss in eye images — is pending approval by the U.S. Food and Drug Administration.
While other already approved AI programs help doctors examine medical images, there’s “not a specialist looking over the shoulder of [this] algorithm,” says Michael Abràmoff, who founded and heads a company that developed the system under FDA review, dubbed IDx-DR. “It makes the clinical decision on its own.”
IDx-DR and similar AI programs, which are learning to predict everything from age-related sight loss to heart problems just by looking at eye images, don’t follow preprogrammed guidelines for how to diagnose a disease. They’re machine-learning algorithms that researchers teach to recognize symptoms of a particular condition, using example images labeled with whether or not that patient had that condition.
IDx-DR studied over 1 million eye images to learn how to recognize symptoms of diabetic retinopathy, a condition that develops when high blood sugar damages retinal blood vessels (SN Online: 6/29/10). Between 12,000 and 24,000 people in the United States lose their vision to diabetic retinopathy each year, but the condition can be treated if caught early.
Researchers compared how well IDx-DR detected diabetic retinopathy in more than 800 U.S. patients with diagnoses made by three human specialists. Of the patients identified by IDx-DR as having at least moderate diabetic retinopathy, more than 85 percent actually did. And of the patients IDx-DR ruled as having mild or no diabetic retinopathy, more than 82.5 percent actually did, researchers reported February 22 at the annual meeting of the Macula Society in Beverly Hills, Calif.
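For readers curious how agreement figures like those are computed, here is a minimal sketch in Python with made-up counts (it illustrates the arithmetic only; it is not the study’s data or any part of the IDx-DR software):

```python
# Minimal sketch: agreement between an algorithm's screening calls and
# reference diagnoses from specialists. All counts below are hypothetical,
# chosen only to illustrate how percentages like those above are derived.

def agreement_rates(true_pos, false_pos, true_neg, false_neg):
    """Return the fraction of positive and of negative algorithm calls
    that match the reference diagnosis."""
    positive_agreement = true_pos / (true_pos + false_pos)  # flagged patients who truly had disease
    negative_agreement = true_neg / (true_neg + false_neg)  # cleared patients who truly did not
    return positive_agreement, negative_agreement

# Hypothetical confusion-matrix counts for an 800-patient comparison
pos_agree, neg_agree = agreement_rates(true_pos=170, false_pos=30,
                                       true_neg=500, false_neg=100)
print(f"Flagged and truly had at least moderate disease: {pos_agree:.1%}")
print(f"Cleared and truly had mild or no disease: {neg_agree:.1%}")
```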
IDx-DR is on the fast-track to FDA clearance, and a decision is expected within a few months, says Abràmoff, a retinal specialist at the University of Iowa in Iowa City. If approved, it would become the first autonomous AI to be used in primary care offices and clinics.
AI algorithms to diagnose other eye diseases are in the works, too. An AI described February 22 in Cell studied over 100,000 eye images to learn the signs of several eye conditions. These included age-related macular degeneration, or AMD — a leading cause of vision loss in adults over 50 — and diabetic macular edema, a condition that develops from diabetic retinopathy.
This AI was designed to flag advanced AMD or diabetic macular edema for urgent treatment, and to refer less severe cases for routine checkups. In tests, the algorithm was 96.6 percent accurate in diagnosing eye conditions from 1,000 pictures. Six ophthalmologists made similar referrals based on the same eye images.
Researchers still need to test how this algorithm fares in the real world where the quality of images may vary from clinic to clinic, says Aaron Lee, an ophthalmologist at the University of Washington in Seattle. But this kind of AI could be especially useful in rural and developing regions where medical resources and specialists are scarce and people otherwise wouldn’t have easy access to in-person eye exams.
AI might also be able to use eye pictures to identify other kinds of health problems. One algorithm that studied retinal images from over 284,000 patients could predict cardiovascular health risk factors such as high blood pressure.
The algorithm was 71 percent accurate in distinguishing between the eye images of smoking and nonsmoking patients, according to a report February 19 in Nature Biomedical Engineering. And 70 percent of the time, it predicted which patients would go on to have a major cardiovascular event, such as a heart attack, within the next five years.
With AI getting more adept at screening for a growing list of conditions, “some people might be concerned that this is machines taking over” health care, says Caroline Baumal, an ophthalmologist at Tufts University in Boston. But diagnostic AI can’t replace the human touch. “Doctors will still need to be there to see patients and treat patients and talk to patients,” Baumal says. AI will just help people who need treatment get it faster.
The seeds for Martian clouds may come from the dusty tails of comets.
Charged particles, or ions, of magnesium from the cosmic dust can trigger the formation of tiny ice crystals that help form clouds, a new analysis of Mars’ atmosphere suggests.
For more than a decade, rovers and orbiters have captured images of Martian skies with wispy clouds made of carbon dioxide ice. But “it hasn’t been easy to explain where they come from,” says chemist John Plane of the University of Leeds in England. The cloud-bearing layer of the atmosphere is between –120° and –140° Celsius — too warm for carbon dioxide clouds to form on their own, which can happen at about –220° C. Then in 2017, NASA’s MAVEN orbiter detected a layer of magnesium ions hovering about 90 kilometers above the Martian surface (SN: 4/29/17, p. 20). Scientists think the magnesium, and possibly other metals not yet detected, comes from cosmic dust left by passing comets. The dust vaporizes as it hits the atmosphere, leaving a sprinkling of metals suspended in the air. Earth has a similar layer of atmospheric metals, but none had been observed elsewhere in the solar system before.
According to the new calculations, the bits of magnesium clump with carbon dioxide gas — which makes up about 95 percent of Mars’ atmosphere — to produce magnesium carbonate molecules. These larger, charged molecules could attract the atmosphere’s sparse water, creating what Plane calls “dirty” ice crystals.
At the temperatures seen in Mars’ cloud layer, pure carbon dioxide ice crystals are too small to gather clouds around them. But clouds could form around dirty ice at temperatures as high as –123° C, Plane and colleagues report online March 6 in the Journal of Geophysical Research: Planets.