EPA underestimates methane emissions

The U.S. Environmental Protection Agency has a methane problem — and that could misinform the country’s carbon-cutting plans. Recent studies suggest that the agency’s reports fail to capture the full scope of U.S. methane emissions, including “super emitters” that contribute a disproportionate share of methane release. Those EPA reports influence the country’s actions to combat climate change and the regulation of methane-producing industries such as agriculture and natural gas production.

With EPA’s next annual methane report due to be published by April 15, early signs suggest that the agency is taking steps to fix the methane mismatch. A preliminary draft of the report revises the agency’s methane calculations for 2013 — the most recent year reported — upward by about 27 percent for the natural gas and petroleum sectors, a difference of about 2 million metric tons.
Yet it’s unclear how that and other revisions will factor into final methane emission totals in the upcoming report. The draft estimates that U.S. methane emissions from all sources in 2014 were about 28 million metric tons, up slightly from the revised estimate for 2013 and well above the original 2013 estimate of 25.453 million metric tons. But the totals in the draft don’t include updates to emission estimates from the oil and gas industry.
“EPA is reviewing the substantial body of new studies that have become available in the past year on the natural gas and petroleum sector,” says EPA spokesperson Enesta Jones. The agency is also gathering feedback from scientists and industry experts to further improve its reporting.

Methane, which makes up the bulk of natural gas, originates from natural sources, such as wetlands, as well as from human activities such as landfills, cattle ranches (SN: 11/28/15, p. 22) and the oil and gas industry. Globally, human activities release about 60 percent of the 600 million metric tons of methane emitted into the atmosphere each year. Once in the air, methane prevents some of Earth’s heat from escaping into space, causing a warming effect. Methane emissions currently account for about a quarter of human-caused global warming.

The EPA’s underestimation of U.S. methane emissions comes down to accounting. EPA samples emissions from known methane sources, such as cows or natural gas pipelines, and works out an average. That average is then multiplied by the nation’s number of cows, lengths of pipe and other methane sources. Results from this method disagree with satellite and land-based observations that measure changes in the total amount of methane in the air. A 2013 report in the Proceedings of the National Academy of Sciences found that U.S. methane emissions based on atmospheric measurements are about 50 percent larger than EPA estimates (SN Online: 11/25/13).
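The bottom-up accounting described above amounts to multiplying an average emission rate for each source type by a count of those sources. A minimal sketch of the idea, using entirely made-up emission factors and activity counts (these are illustrative numbers, not EPA data):

```python
# Bottom-up emissions inventory sketch: multiply an average emission rate
# ("emission factor") for each source type by how many of that source exist.
# All numbers below are invented for illustration; they are not EPA figures.

emission_factors = {          # metric tons of methane per unit per year
    "cow": 0.1,               # per head of cattle
    "pipeline_km": 0.5,       # per kilometer of natural gas pipeline
    "landfill": 2000.0,       # per landfill
}

activity = {                  # assumed national counts of each source
    "cow": 90_000_000,
    "pipeline_km": 500_000,
    "landfill": 2_000,
}

# National total = sum over source types of (factor x count)
total = sum(emission_factors[src] * activity[src] for src in activity)
print(f"Bottom-up inventory: {total / 1e6:.1f} million metric tons CH4/yr")
```

The weakness the atmospheric studies expose is visible in the structure itself: if the per-source average misses rare but huge "super emitters," every multiplication inherits that bias.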
EPA’s reports don’t just misjudge the scale of emissions; they also miss the long-term trend, recent work suggests. EPA reported that U.S. methane emissions remained largely unchanged from 2002 to 2014. But researchers report online March 2 in Geophysical Research Letters that emissions of the greenhouse gas rose more than 30 percent over that period. The United States could be responsible for as much as 30 to 60 percent of the global increase in methane emissions over the last decade, the study’s authors conclude. “We’re definitely not a small piece of that pie,” says Harvard University atmospheric scientist Alex Turner, who coauthored the study.
Correctly tracking methane is important, Turner says, because over a 100-year period, the warming impact of methane is more than 25 times that of the same amount of CO2. Methane levels have also risen faster: Since the start of the industrial revolution, methane concentrations have more than doubled while CO2 has risen more than 40 percent.
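That 100-year warming factor lets a methane total be restated in CO2-equivalent terms. A back-of-the-envelope conversion using the draft estimate and the factor of 25 quoted above (a rough illustration, not an official EPA calculation):

```python
# Convert a methane total to CO2-equivalent tons using the 100-year
# global warming potential (GWP) of about 25 cited in the article.
GWP_100_CH4 = 25                # warming impact relative to CO2 over 100 years

us_methane_2014 = 28e6          # metric tons CH4 (draft EPA estimate above)
co2_equivalent = us_methane_2014 * GWP_100_CH4

print(f"{co2_equivalent / 1e6:.0f} million metric tons CO2-equivalent")
```

By this yardstick, 28 million tons of methane carries the century-scale warming punch of roughly 700 million tons of CO2.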

While methane is more potent than CO2, there is about 200 times less methane in the atmosphere than CO2. Furthermore, methane stays in the atmosphere for only around 12 years before being absorbed by soil or breaking apart in chemical reactions. “If we reduce methane emissions, the climate responds very quickly and global warming would slow down almost immediately,” says Cornell University earth systems scientist Robert Howarth. “CO2, on the other hand, has an influence that will go hundreds to even thousands of years into the future.”

Turner and colleagues tracked methane across the continental United States using land stations that measure methane in the air and satellite observations that record dips in the infrared radiation frequencies absorbed and reemitted by methane. The researchers compared these methane measurements with those taken over Bermuda and the North Pacific Ocean — places upwind of the United States and far from major methane sources.

From 2002 through 2014, methane concentrations over the continental United States grew faster than those over the oceans, the researchers found. The difference was most pronounced over the central United States, where methane concentrations rose nearly twice as fast as in the rest of the country. Natural gas drilling and production boomed in the central United States during the period studied, though the researchers could not precisely trace the source of the additional methane.

Turner and colleagues say they’re now working with EPA to reconcile the methane estimates. EPA will provide small-scale estimates of methane emissions down to a 10-kilometer-wide grid. By combining that grid with space and land observations, scientists should be able to isolate where methane mismatches are the most pronounced.

While Turner’s research can’t pinpoint the exact origins of the additional methane, other studies point to the oil and gas industry. The numbers that the EPA uses to tabulate methane emissions assume that equipment is functioning as intended, says Stanford University sustainability engineer Adam Brandt. Malfunctioning equipment can spew huge amounts of methane. That became abundantly, and visibly, clear last October when the largest U.S. methane leak in history began in an underground storage facility near Los Angeles. The leak released 97,100 metric tons of methane, equivalent to the annual greenhouse gas emissions of 572,000 cars, before being permanently sealed in February, researchers estimated in the March 18 Science.

Super methane emitters, though typically much smaller than the California leak, are a big problem elsewhere, too, researchers report in the June 2016 Environmental Pollution. Surveying emissions from 100 natural gas leaks around Boston, the researchers found that 7 percent of leaks contributed half of the total methane released. In 2014, a different research team reported in Environmental Science & Technology that 19 percent of pneumatic controllers used at natural gas production sites accounted for 95 percent of all controller emissions.

Monitoring and quick repairs can stamp out rogue methane sources quickly, Brandt says. “This is a problem that’s easier to fix than it is to understand,” he says.

Nightshade plants bleed sugar as a call to ants for backup

Herbivores beware: Take a bite out of bittersweet nightshade (Solanum dulcamara), and you might have an ant problem on your hands. The plants produce a sugary goo that serves as an indirect defense, attracting ants that eat herbivores, Tobias Lortzing of Berlin’s Free University and colleagues write April 25 in Nature Plants.

Observations of wild nightshade plants in Germany suggest that plants that ooze goo attract more ants (mostly European fire ants, or Myrmica rubra) than undamaged plants. In greenhouse experiments, those ants fed on both the goo and roving slugs and flea beetle larvae, substantially reducing leaf damage. Leaf-munching adult flea beetles and, to a lesser degree, slugs prompted the goo production. The ants didn’t attack the beetles but did protect the plant from slugs and beetle larvae.

Plenty of other plants produce defensive nectars via organs called nectaries, and nightshades’ bleeding may be a unique, primitive version of that protective strategy, the scientists report.

Why Labrador retrievers are obsessed with food

Labrador retrievers tend to be more overweight and more keen to scarf down their kibble than other dog breeds. Eleanor Raffan of the University of Cambridge and her colleagues chalk this trend up — at least in part — to a suspect gene.

The team found that, among a small group of assistance dogs, a form of a gene called POMC that was missing a chunk of DNA was more common in obese Labs than in lean ones. This held true on a larger scale, too: Out of 411 Labs in the United Kingdom and United States, 22 percent carried the deletion mutation. Looking across other breeds, only Labradors and flat-coated retrievers, a close relative, carried the gene variant, which also correlated with greater weight and food begging tendencies, the team reports May 3 in Cell Metabolism.

POMC plays a role in a metabolism pathway, and the deletion may inhibit the production of proteins that regulate hunger, the researchers suspect. (That might explain why the variant turned up in about 75 percent of assistance dogs, which are trained using food motivation.)

Here are a few more things for the childproofing list

There’s nothing like having kids to open your eyes to the world’s dangers. With two little rascals in tow, grocery stores, dentists’ offices and even grandparents’ homes morph into death traps full of sharp, poisonous and heavy things. Short of keeping a tight grip on little hands, there’s not much you can do to childproof absolutely everything when you’re out and about. At home, it’s easier to make rooms safe for kids: Cover electrical outlets, keep drugs and potentially poisonous stuff out of reach, bolt dressers to the wall, and so on.

But every so often, I come across a study that points out an unexpectedly dangerous object. Clearly, none of these things rise to Bag O’Glass danger levels. But in the spirit of The More You Know, here are five objects that carry hidden risks to children:

Laundry pods
These cute, candy-colored packets can be irresistible to children — and toxic when eaten. Since 2012, when single-load pods for laundry detergent became popular, poison control centers have been fielding calls about toddlers who got ahold of pods. From 2013 to 2014, over 22,000 U.S. children under age 6 were exposed to these pods, mostly by eating them, data from the National Poison Data System show. And in just that two-year period, cases of laundry pod exposure rose 17 percent, scientists reported in the May Pediatrics.

Those numbers are particularly worrisome because laundry pods appeared to be more dangerous than regular laundry detergent (liquid or powder) and dishwasher detergent in any form (pod, liquid or powder). In a small number of kids, eating laundry pods caused serious trouble, including coma, respiratory arrest and cardiac arrest. Two children died, scientists wrote in the Pediatrics paper.

Tiny turtles
Oh, they’re adorable, but turtles can carry salmonella, bacteria that come with diarrhea, fever and cramps. Kids are particularly susceptible, and infections can be severe for them. Recognizing this risk, the FDA banned the sale of small turtles (shell less than 4 inches long) in 1975.
Yet in recent years, small turtles have slowly crawled back into children’s grubby little hands, carrying salmonella with them, scientists reported in January in Pediatrics. From 2011 to 2013, turtles were implicated in eight multistate Salmonella outbreaks, hitting children younger than 5 especially hard. Of the 473 people affected by the outbreaks, the median age was 4.

Big TVs
I’m not talking about the dangers of screen time here. I mean the television itself. Today’s flat screen TVs are more wobbly than the older, heavier tube-based TVs. Every 30 minutes, a kid is treated in the emergency room for a TV-related injury — that’s more than 17,000 children in the United States per year and increasing. And little heads and necks are the most frequently injured body parts.

Liquid nicotine
Along with the rise of e-cigarettes come refill cartridges, most of which contain concentrated liquid nicotine in flavors such as cherry crush, vanilla and mint. These appealing flavors mask nicotine that can be dangerous to kids. In 2015, poison control centers reported over 3,000 incidents of unintentional nicotine exposure, many of them in children. In comparison, just 271 exposures were reported in 2011.

That worrisome increase prompted the Child Nicotine Poisoning Prevention Act of 2015, signed into law by President Obama on January 28, which requires nicotine cartridges to be packaged in child-proof containers — a no-brainer.

Trampolines
Maniacal bouncing is clearly exhilarating for children, but also risky. I say this as a childhood-double-bounce survivor, so I understand the appeal. But just a note of caution: These springy injury machines come with a constellation of scary medical stats. Concussions, broken bones, sprains and neck injuries are signature trampoline troubles. A survey of a national injury database showed that broken bones accounted for 29 percent of all trampoline injuries reported to emergency departments, scientists reported in 2014 in the Journal of Pediatric Orthopaedics. The vast majority (93 percent) of those fractures belonged to children 16 and under.

Attempts to make trampolines safer — by putting a net around the perimeter, for instance — don’t seem to lower injury rates, an Australian study found. That’s why the American Academy of Pediatrics, the Canadian Paediatric Society, the American Academy of Orthopaedic Surgeons and other groups all urge caution, or an outright ban.

Space experts say sending humans to Mars worth the risk

WASHINGTON — There’s a long-standing joke that NASA is always 20 years from putting astronauts on Mars. Mission details shared at a recent summit show that the space agency is right on schedule. A to-do list from 2015 looks remarkably similar to one compiled in 1990. One difference: NASA is now building a rocket and test-driving technologies needed to get a crew to Mars. But the specifics for the longest road trip in history — and what astronauts will do once they arrive — remain an open question.

“Are we going to just send them there to explore and do things that we could do robotically though slower, or can we raise the bar?” asked planetary scientist Jim Bell during the Humans to Mars summit. “We need to make sure that what these folks are being asked to do is worthy of the risk to their lives,” said Bell, of Arizona State University in Tempe.
The three-day symposium, which ended May 19, was organized by Explore Mars Inc., a nonprofit dedicated to putting astronauts on Mars by the 2030s.

While the summit didn’t break new scientific ground, it did bring together planetary scientists, space enthusiasts and representatives from both NASA and the aerospace industry to talk about the challenges facing a crewed mission to Mars and rough ideas for how to get there.

Part of the appeal in sending humans is the pace of discovery. Drilling just one hole with the Curiosity rover, which has been exploring Gale Crater on Mars since August 2012 (SN: 5/2/2015, p. 24), currently takes about a week. “It’s a laborious, frustrating, wonderful — frustrating — multiday process,” said Bell.

Humans also can react to novel situations, make quick decisions and see things in a way robotic eyes cannot. “A robot explorer is nowhere near as good as what a human geologist can do,” says Ramses Ramirez, a planetary scientist at Cornell University. “There’s just a lot more freedom.”

Researchers saw the human advantage firsthand in 1997 when they sent a rover called Nomad on a 45-day trek across the Atacama Desert in Chile. Nomad was controlled by operators in the United States to simulate operating a robot on another planet. Humans at the rover site provided a reality check on the data Nomad sent back. “There was a qualitative difference,” says Edwin Kite, a planetary scientist at the University of Chicago. And it wasn’t just that the geologists could do things faster. “The robots were driving past evidence of life that humans were finding very obvious.”
To get astronauts ready to explore Mars, the Apollo program is a good template, said Jim Head, a geologist at Brown University who helped train the Apollo astronauts. “Our strategy was called t-cubed: train them, trust them and turn them loose.” While each of the moon expeditions had a plan, the astronauts were trusted to use their judgment. Apollo 15 astronaut David Scott, for example, came across a chunk of deep lunar crust that researchers hoped to find although it wasn’t at a planned stop. “He spotted it three meters away,” said Head. “He saw it shining and recognized it immediately. That’s exploration.”

Despite a lack of clear goals for a jaunt to Mars, NASA is forging ahead. The Orion crew capsule has already been to space once; a 2014 launch atop a Delta IV Heavy rocket sent an uncrewed Orion 5,800 kilometers into space before it splashed down in the Pacific Ocean (SN Online: 12/5/2014). And construction of the Space Launch System, a rocket intended to hurl humans at the moon and Mars, is under way. The first test flight, scheduled for October 2018, will send Orion on a multiday uncrewed trip around the moon. NASA hopes to put astronauts onboard for a lunar orbit in 2021.

Meanwhile, the crew aboard the International Space Station is testing technologies that will keep humans healthy and happy during an interplanetary cruise. Astronaut Scott Kelly recently completed a nearly yearlong visit to the station intended to reveal the effects of long-duration space travel on the human body (SN Online: 2/29/2016). And on April 10, a prototype inflatable habitat — the Bigelow Expandable Activity Module — arrived at the station and was attached to a docking port six days later. The station crew will inflate the module for the first time on May 26. No one will live in it, but over the next two years, astronauts will collect data on how well the habitat handles radiation, temperature extremes and run-ins with space debris.
Beyond that, the plans get fuzzy. The general idea is to construct an outpost in orbit around the moon as a testing and staging ground starting in the late 2020s. The first crew to Mars might land on the planet — or might not. One idea is to set up camp in Mars orbit; from there, astronauts could operate robots on the surface without long communication delays. Another idea has humans touching down on one of Mars’ two moons, Phobos or Deimos. When crews do land on the Martian surface, NASA envisions establishing a base from which astronauts could plan expeditions.

With so few details, it’s difficult for the space agency to identify specific technologies to invest in. “There have been lots of studies — we get a lot of grief that it’s nothing but studies,” said Bret Drake, an engineer at the Aerospace Corp. in El Segundo, Calif. “But out of the studies, there are a lot of common things that come to the top no matter what path you take.”

Any mission to Mars has to support astronauts for roughly 500 to 1,000 days. The mission has to deal with round-trip communication delays of up to 42 minutes. It will need the ability to land roughly 40-ton payloads on the surface of Mars (current robotic missions drop about a ton). Living off the land is also key, making use of local water and minerals. And astronauts need the ability to not just survive, but drive around and explore. “We want to land in a safe place, which is going to be geologically boring, but we want to go to exciting locations,” said Drake.
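That communication-delay figure follows directly from light travel time over the Earth-Mars distance. A quick check (the distance used here is an assumed near-maximum Earth-Mars separation, chosen for illustration):

```python
# Round-trip signal delay to Mars is just twice the one-way light travel time.
# The separation below is an assumed near-maximum Earth-Mars distance;
# the actual distance varies from about 55 million to 400 million km.
c_km_s = 299_792.458                 # speed of light, km/s
d_km = 378e6                         # assumed Earth-Mars separation, km

one_way_s = d_km / c_km_s            # one-way delay in seconds
round_trip_min = 2 * one_way_s / 60  # round trip, in minutes
print(f"Round-trip delay: {round_trip_min:.0f} minutes")
```

At that separation the round trip comes out to about 42 minutes, matching the figure quoted above; at closest approach it shrinks to a few minutes.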

The technical and logistical challenges might be the easiest part. “We do know enough to pull this off,” Ramirez says. “The biggest problem is political will.” Congress has yet to sign off on funding this adventure (nor has NASA presented a budget — expected to be in the hundreds of billions of dollars), and future administrations could decide to kill it.

Multiple summit speakers stressed the importance of using technology that is proven or under development — no exotic engines or rotating artificial gravity habitats for now. And a series of small missions — baby steps to the moon and an asteroid before committing to Mars — could show progress that might help keep momentum (and public interest) alive.

“We thought going to the moon was impossible, but we got there,” says Ramirez. “If we dedicate ourselves as a nation to do something crazy, we’ll do it. I have no doubt.”

Jumping gene turned peppered moths the color of soot

Peppered moths and copycat butterflies owe their wing color-changing abilities to a single gene, two independent studies suggest.

A genetic tweak in a portion of the cortex gene that doesn’t make protein painted the speckled gray wings of peppered moths black, researchers report online June 1 in Nature. Genetic variants in DNA interspersed with and surrounding the cortex gene also help some tasty species of Heliconius butterflies mimic unpalatable species and avoid getting eaten by predators, a second team of scientists reports, also June 1 in Nature.
In the often-told evolutionary tale, the color shift in moths began as factories in Britain started to darken the skies with coal smoke during the Industrial Revolution in the 1800s. Victorian naturalists took note as a newly discovered, all-black carbonaria form of peppered moths (Biston betularia) blended into soot-covered backgrounds; the light-colored typica moths, which lacked the mutation, were easily picked off by birds. By 1970, nearly 99 percent of peppered moths were black in some localities. As air pollution decreased in the late 20th century, black moths became more visible to birds. As a result, carbonaria moths are now rare.

“This begins to unravel exactly what the original mutation was that produced the black … moths that were favored by natural selection” during much of the last century, says evolutionary biologist Paul Brakefield of the University of Cambridge in England. “It adds a new and exciting element to the story.”

Wing pattern changes in butterflies and peppered moths are textbook examples of natural selection, but the molecular details behind the adaptation have eluded scientists for decades. In 2011, researchers tracked the traits to a region of a chromosome all the species have in common (SN: 5/7/11, p. 11; SN: 9/24/11, p. 16). Which of the many genes in that region might be responsible remained a mystery.
In peppered moths, the region of interest stretches over about 400,000 DNA bases and contains 13 genes and two microRNAs. “There aren’t really any genes that scream out to you, ‘I’m involved in wing patterning,’” says evolutionary geneticist Ilik Saccheri at the University of Liverpool in England.
Saccheri and colleagues compared that region in one black moth and three typica moths. The researchers found 87 places where the black moth differed from the light-colored moths. Most of the differences were changes in single DNA bases — the information-carrying chemicals in DNA. Such genetic variants are known as SNPs for single nucleotide polymorphisms. One difference was the insertion of a 21,925-base-long stretch of DNA into the region. This big chunk of DNA contained multiple copies of a transposable element, or jumping gene. Transposable elements are viruslike pieces of DNA that copy and insert themselves into a host’s DNA.

By examining the DNA of hundreds more typica moths and ruling out mutations one by one, the team ended up with one candidate: the large transposable element that had landed in the cortex gene. But the jumping gene didn’t land in the DNA that encodes the protein. Instead it landed in an intron — a stretch of DNA that gets chopped out after the gene is copied into RNA and before a protein is made.

The jumping gene first landed in the cortex intron in about 1819, the researchers calculated from historical records of how common the trait was. That timing gave the mutation about 20 to 30 moth generations to spread through the population before people first reported sightings of the black moths in 1848. Saccheri and colleagues found the transposable element in 105 of 110 wild-caught carbonaria moths and none of the 283 typica moths tested. The remaining five moths are black because of another, unknown, genetic variation.

Similarly, Nicola Nadeau, an evolutionary geneticist at the University of Sheffield in England, and colleagues combed through more than 1 million DNA bases in each of five species of Heliconius butterflies. The researchers were looking for genetic variants associated with the presence or absence of yellow bands on the wings.

Nadeau’s team found 108 SNPs in all H. erato favorinus butterflies that have a yellow band on their hind wings. Most of those SNPs were in introns of the cortex gene or outside of the gene. Butterflies that lack the yellow band don’t have those SNPs.

Other DNA changes were found to draw yellow bars on the wings of different species of Heliconius butterflies, suggesting that evolution acted multiple times on the cortex gene with similar results.

The finding that the same gene influences wing patterns in butterflies and moths supports an idea that some genes are hot spots of natural selection, says Robert Reed, an evolutionary biologist at Cornell University.

None of the genetic differences in the butterflies or peppered moths change the cortex gene itself. That leaves open the possibility that the transposable element and SNPs aren’t doing anything to cortex, but may be regulating a different gene. But the evidence that cortex really is the gene upon which natural selection has acted is strong, says Reed. “I’d be surprised if they were wrong.”

Still, it’s not obvious how cortex changes wing patterns, says Saccheri. “We’re both equally puzzled about how it is doing what it appears to be doing.” The teams have evidence that cortex helps determine when certain wing scales grow. In butterflies and moths, the timing of wing scale development affects the color of the wings, says Reed. “You see colors popping up almost like a paint-by-numbers.”

Yellow, white and red scales develop first. Black scales come later. Cortex is known to be involved in cell growth. So varying levels of the protein may speed up development of wing scales, causing them to become colored, or slow their growth, allowing them to turn black, the researchers speculate.

Mars once had many moons

Mars’ misshapen moons, Phobos and Deimos, might be all that’s left of a larger family that arose in the wake of a giant impact with the Red Planet billions of years ago, researchers report online July 4 in Nature Geoscience.

The origin of the two moons has never been clear; they could be captured asteroids or homegrown satellites. But their orbits are hard to explain if they were snagged during a flyby, and previous calculations have had trouble reproducing locally sourced satellites. The new study finds that a ring of rocks blown off the planet by a collision with an asteroid could have been a breeding ground for a set of larger satellites relatively close to the planet. Those moons, long since reclaimed by Mars, could have herded remaining debris in the sparsely populated outer part of the ring to form Phobos and Deimos.
Pascal Rosenblatt, a planetary scientist at the Royal Observatory of Belgium in Brussels, and colleagues ran computer simulations to show how the helper moons formed, did their duty and then fell to Mars, leaving behind a pair of moons similar to Phobos and Deimos.

The rain of moons is not over. While Deimos is in a stable orbit, Phobos is developing stress fractures as it slowly inches toward the Red Planet (SN: 12/12/15, p. 11).

Why the turtle got its shell

Turtle shells didn’t get their start as natural armor, it seems. The reptiles’ ancestors might have evolved partial shells to help them burrow instead, new research suggests. Only later did the hard body covering become useful for protection.

The findings might also help explain how turtles’ ancestors survived a mass extinction 250 million years ago that wiped out most plants and animals on Earth, scientists report online July 14 in Current Biology.

Most shelled animals, like armadillos, get their shells by adding bony scales all over their bodies. Turtles, though, form shells by gradually broadening their ribs until the bones fuse together. Fossils from ancient reptiles with partial shells made from thickened ribs suggest that turtles’ ancestors began to suit up in the same way.
It’s an unusual mechanism, says Tyler Lyson, a paleontologist at the Denver Museum of Nature and Science who led the study. Thicker ribs don’t offer much in the way of protection until they’re fully fused, as they are in modern turtles. And the modification makes critical functions like moving and breathing much harder — a steep price for an animal to pay. So Lyson suspected there was some advantage other than protection to the partial shells.

He and his colleagues examined fossils from prototurtles, focusing on an ancient South African reptile called Eunotosaurus africanus.

Eunotosaurus shared many characteristics with animals that dig and burrow, the researchers found. The reptile had huge claws and large triceps in addition to thickened ribs.
“We could tell that this animal was very powerful,” says Lyson.
Broad ribs “provide a really, really strong and stable base from which to operate this powerful digging mechanism,” he adds. Like a backhoe, Eunotosaurus could brace itself to burrow into the dirt.

Thanks to a lucky recent find of a fossil preserving the bones around the eyes, the team was even able to tell that the prototurtles’ eyes were well adapted to low light. That’s another characteristic of animals that spend time underground.

Swimming and digging use similar motions, Lyson says, so you would expect to find similar skeletal adaptations in water-dwelling animals. But large claws good for moving dirt suggest a life on land.

Fossils from other prototurtle species also have wider ribs and big claws. So the researchers think these traits may have been important for early turtle evolution in general, not just for Eunotosaurus.

Not everyone is entirely convinced. “It’s a very plausible idea, although many other animals burrow but don’t have these specializations,” says Hans Sues, a paleontologist at the Smithsonian Institution’s National Museum of Natural History. Sues says that it will be important to find and study other turtle ancestors well-adapted to digging to bolster the explanation.

Lyson thinks the prototurtles’ burrowing tendencies might have helped them survive the end-Permian mass extinction around 250 million years ago (SN: 9/19/15, p. 10).

“Lots of animals at this time period burrowed underground to avoid the very, very arid environment that was present in South Africa,” Lyson says. “The burrow provides more climate control.”

Debate accelerates on universe’s expansion speed

A puzzling mismatch is plaguing two methods for measuring how fast the universe is expanding. When the discrepancy arose a few years ago, scientists suspected it would fade away, a symptom of measurement errors. But the latest, more precise measurements of the expansion rate — a number known as the Hubble constant — have only deepened the mystery.

“There’s nothing obvious in the measurements or analyses that have been done that can easily explain this away, which is why I think we are paying attention,” says theoretical physicist Marc Kamionkowski of Johns Hopkins University.
If the mismatch persists, it could reveal the existence of stealthy new subatomic particles or illuminate details of the mysterious dark energy that pushes the universe to expand faster and faster.

Measurements based on observations of supernovas, massive stellar explosions, indicate that distantly separated galaxies are spreading apart at 73 kilometers per second for each megaparsec (about 3.3 million light-years) of distance between them. Scientists used data from NASA’s Hubble Space Telescope to make their estimate, presented in a paper to be published in the Astrophysical Journal and available online at arXiv.org. The analysis pegs the Hubble constant to within experimental errors of just 2.4 percent — more precise than previous estimates using the supernova method.

But another set of measurements, made by the European Space Agency’s Planck satellite, puts the figure about 9 percent lower than the supernova measurements, at 67 km/s per megaparsec with an experimental error of less than 1 percent. That puts the two measurements in conflict. Planck’s result, reported in a paper published online May 10 at arXiv.org, is based on measurements of the cosmic microwave background radiation, ancient light that originated just 380,000 years after the Big Bang.
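The quoted gap can be checked with simple arithmetic. A back-of-envelope sketch, treating the two central values as exact and ignoring their error bars:

```python
# Back-of-envelope check of the gap between the two Hubble constant values.
h0_supernova = 73.0  # km/s per megaparsec, distance-ladder result
h0_planck = 67.0     # km/s per megaparsec, cosmic microwave background result

# Fractional difference, measured relative to the Planck value
gap = (h0_supernova - h0_planck) / h0_planck
print(f"{gap:.1%}")  # about 9 percent, matching the figure quoted above
```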

And now, another team has weighed in with a measurement of the Hubble constant. The Baryon Oscillation Spectroscopic Survey also reported that the universe is expanding at 67 km/s per megaparsec, with an error of 1.5 percent, in a paper posted online at arXiv.org on July 11. This puts BOSS in conflict with the supernova measurements as well. To make the measurement, BOSS scientists studied patterns in the clustering of 1.2 million galaxies. That clustering is the result of pressure waves in the early universe; analyzing the spacing of those imprints on the sky provides a measure of the universe’s expansion.

Although the conflict isn’t new (SN: 4/5/14, p. 18), the evidence that something is amiss has strengthened as scientists continue to refine their measurements.

The latest results are now precise enough that the discrepancy is unlikely to be a fluke. “It’s gone from looking like maybe just bad luck, to — no, this can’t be bad luck,” says the leader of the supernova measurement team, Adam Riess of Johns Hopkins. But the cause is still unknown, Riess says. “It’s kind of a mystery at this point.”

Since its birth from a cosmic speck in the Big Bang, the universe has been continually expanding. And that expansion is now accelerating, as galaxy clusters zip away from one another at an ever-increasing rate. The discovery of this acceleration in the 1990s led scientists to conclude that dark energy pervades the universe, pushing it to expand faster and faster.

As the universe expands, supernovas’ light is stretched, shifting its frequency. For objects of known distance, that frequency shift can be used to infer the Hubble constant. But measuring distances in the universe is complicated, requiring the construction of a “distance ladder,” which combines several methods that build on one another.
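In its simplest form, this relationship is Hubble’s law: recession velocity equals the Hubble constant times distance, v = H0 × d. A toy illustration with made-up numbers (not values from either analysis):

```python
# Toy illustration of Hubble's law: v = H0 * d.
# Given a galaxy's recession velocity (from its redshift) and its distance
# (from the distance ladder), infer the Hubble constant.
# The numbers below are invented for illustration only.
velocity_km_s = 7300.0  # recession velocity inferred from the light's shift
distance_mpc = 100.0    # distance in megaparsecs, from the ladder

h0 = velocity_km_s / distance_mpc
print(h0)  # 73.0 km/s per megaparsec
```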

To create their distance ladder, Riess and colleagues combined geometrical distance measurements with “standard candles” — objects of known brightness. Since a candle that’s farther away appears dimmer, if you know its absolute brightness, you can calculate its distance. For standard candles, the team used Cepheid variable stars, which pulsate at a rate that is correlated with their brightness, and type Ia supernovas, whose brightness properties are well understood.
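The standard-candle logic rests on the inverse-square law: observed brightness falls off as the square of distance, so comparing how bright an object looks with how bright it intrinsically is yields its distance. A minimal sketch using the standard astronomical distance-modulus relation, with illustrative magnitudes chosen for this example (not figures from the study):

```python
# Distance modulus relation: m - M = 5 * log10(d / 10 parsecs),
# where m is apparent magnitude and M is absolute magnitude.
# Inverting it gives the distance in parsecs.
def distance_parsecs(apparent_mag, absolute_mag):
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Illustrative values: a type Ia supernova with absolute magnitude -19.3
# observed at apparent magnitude 15.7.
d = distance_parsecs(15.7, -19.3)
print(f"{d / 1e6:.0f} megaparsecs")  # 100 megaparsecs
```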

Scientists on the Planck team, on the other hand, analyzed the cosmic microwave background, using variations in its temperature and polarization to calculate how fast the universe was expanding shortly after the Big Bang. The scientists used that information to predict its current rate of expansion.

As for what might be causing the persistent discrepancy between the two methods, there are no easy answers, Kamionkowski says. “In terms of exotic physics explanations, we’ve been scratching our heads.”

A new type of particle could explain the mismatch. One possibility is an undiscovered variety of neutrino, which would affect the expansion rate in the early universe, says theoretical astrophysicist David Spergel of Princeton University. “But it’s hard to fit that to the other data we have.” Instead, Spergel favors another explanation: some currently unknown feature of dark energy. “We know so little about dark energy, that would be my guess on where the solution most likely is,” he says.

If dark energy is changing with time, pushing the universe to expand faster than predicted, that could explain the discrepancy. “We could be on our way to discovering something nontrivial about the dark energy — that it is an evolving energy field as opposed to just constant,” says cosmologist Kevork Abazajian of the University of California, Irvine.

A more likely explanation, some experts say, is that a subtle aspect of one of the measurements is not fully understood. “At this point, I wouldn’t say that you would point at either one and say that there are really obvious things wrong,” says astronomer Wendy Freedman of the University of Chicago. But, she says, if the Cepheid calibration doesn’t work as well as expected, that could slightly shift the measurement of the Hubble constant.

“In order to ascertain if there’s a problem, you need to do a completely independent test,” says Freedman. Her team is working on a measurement of the Hubble constant without Cepheids, instead using two other types of stars: RR Lyrae variable stars and red giant branch stars.

Another possibility, says Spergel, is that “there’s something missing in the Planck results.” Planck scientists measure the size of temperature fluctuations between points on the sky. Points separated by larger distances on the sky give a value of the Hubble constant in better agreement with the supernova results. And measurements from a previous cosmic microwave background experiment, WMAP, are also closer to the supernova measurements.

But, says George Efstathiou, an astrophysicist at the University of Cambridge and a Planck collaboration member, “I would say that the Planck results are rock solid.” If simple explanations in both analyses are excluded, astronomers may be forced to conclude that something important is missing in scientists’ understanding of the universe.

Compared with past disagreements over values of the Hubble constant, the new discrepancy is relatively minor. “Historically, people argued vehemently about whether the Hubble constant was 50 or 100, with the two camps not conceding an inch,” says theoretical physicist Katherine Freese of the University of Michigan in Ann Arbor. The current difference between the two measurements is “tiny by the standards of the old days.”

Cosmological measurements have only recently become precise enough for a few-percent discrepancy to be an issue. “That it’s so difficult to explain is actually an indication of how far we’ve come in cosmology,” Kamionkowski says. “Twenty-five years ago you would wave your hands and make something up.”

Oldest evidence of cancer in human family tree found

Cancer goes way, way back. A deadly form of this disease and a noncancerous but still serious tumor afflicted members of the human evolutionary family nearly 2 million years ago, two new investigations of fossils suggest.

If those conclusions hold up, cancers are not just products of modern societies, as some researchers have proposed. “Our studies show that cancers and tumors occurred in our ancient relatives millions of years before modern industrial societies existed,” says medical anthropologist Edward Odes of the University of the Witwatersrand in Johannesburg, a coauthor of both new studies. Today, however, pesticides, longer life spans and other features of the industrialized world may increase rates of cancers and tumors.

A 1.6-million- to 1.8-million-year-old hominid, either from the Homo genus or a dead-end line called Paranthropus, suffered from a potentially fatal bone cancer, Odes and colleagues say in one of two papers published in the July/August South African Journal of Science. Advanced X-ray techniques enabled identification of a fast-growing bone cancer on a hominid toe fossil previously unearthed at South Africa’s Swartkrans Cave site, the researchers report. This malignant cancer consisted of a mass of bone growth on both the toe’s surface and inside the bone.

Until now, the oldest proposed cancer in hominids consisted of an unusual growth on an African Homo erectus jaw fragment dating to roughly 1.5 million years ago. Critics, though, regard that growth as the result of a fractured jaw, not cancer.

A second new study, led by biological anthropologist Patrick Randolph-Quinney, now at the University of Central Lancashire in England, identifies the oldest known benign tumor in a hominid in a bone from an Australopithecus sediba child. This tumor penetrated deep into a spinal bone, close to an opening for the spinal cord. Nearly 2-million-year-old partial skeletons of the child and an adult of the same species were found in an underground cave at South Africa’s Malapa site (SN: 8/10/13, p. 26).

Although not life-threatening, this tumor would have interfered with walking, running and climbing, the researchers say. People today, especially children, rarely develop such tumors in spinal bones.

“This is the first evidence of such a disease in a young individual in the fossil record,” Randolph-Quinney says.

X-ray technology allowed scientists to create and analyze 3-D copies of the inside and outside of the toe and spine fossils.

But studies of fossil bones alone, even with sophisticated imaging technology, provide “a very small window” for detecting cancers and tumors, cautions paleoanthropologist Janet Monge of the University of Pennsylvania Museum of Archaeology and Anthropology in Philadelphia. Microscopic analysis of soft-tissue cells, which are typically absent on fossils, confirms cancer diagnoses in people today, she says.

Without additional evidence of bone changes in and around the proposed cancer and tumor, Monge won’t draw any conclusions about what caused those growths.

Monge led a team that found a tumor on a 120,000- to 130,000-year-old Neandertal rib bone from Eastern Europe. Whether the tumor was cancerous or caused serious health problems can’t be determined, the scientists concluded in 2013 in PLOS ONE.