These genes may be why dogs are so friendly

DNA might reveal how dogs became man’s best friend.

A new study shows that some of the same genes linked to the behavior of extremely social people can also make dogs friendlier. The result, published July 19 in Science Advances, suggests that dogs’ domestication may be the result of just a few genetic changes rather than hundreds or thousands of them.

“It is great to see initial genetic evidence supporting the self-domestication hypothesis or ‘survival of the friendliest,’” says evolutionary anthropologist Brian Hare of Duke University, who studies how dogs think and learn. “This is another piece of the puzzle suggesting that humans did not create dogs intentionally, but instead wolves that were friendliest toward humans were at an evolutionary advantage as our two species began to interact.”

Not much is known about the underlying genetics of how dogs became domesticated. In 2010, evolutionary geneticist Bridgett vonHoldt of Princeton University and colleagues published a study comparing dogs’ and wolves’ DNA. The biggest genetic differences gave clues to why dogs and wolves don’t look the same. But major differences were also found in WBSCR17, a gene linked to Williams-Beuren syndrome in humans.
Williams-Beuren syndrome leads to delayed development, impaired thinking ability and hypersociability. VonHoldt and colleagues wondered if changes to the same gene in dogs would make the animals more social than wolves, and whether that might have influenced dogs’ domestication.

In the new study, vonHoldt and colleagues compared the sociability of domestic dogs with that of wolves raised by humans. Dogs typically spent more time than wolves staring at and interacting with a human stranger nearby, showing the dogs were more social than the wolves. Analyzing the genetic blueprint of those dogs and wolves, along with DNA data of other wolves and dogs, showed variations in three genes associated with the social behaviors directed at humans: WBSCR17, GTF2I and GTF2IRD1. All three are tied to Williams-Beuren syndrome in humans.
“It’s fascinating that a handful of genetic changes could be so influential on social behavior,” vonHoldt says.

She and colleagues propose that such changes may be closely intertwined with dog domestication. Previous hypotheses have suggested that domestication was linked to dogs’ development of advanced ways of analyzing and applying information about social situations, a way of thinking assumed to be unique to humans. “Instead of developing a more complex form of cognition, dogs appear to be engaging in excessively friendly behavior that increases the amount of time they spend near us and watching us,” says study coauthor Monique Udell, who studies animal behavior at Oregon State University in Corvallis. In turn, she says, that gives dogs “the opportunities necessary for them to learn about our behavior and what maximizes their success when living with us.”

The team notes, for instance, that in addition to contributing to sociability, the variations in WBSCR17 may represent an adaptation in dogs to living with humans. A previous study revealed that variations in WBSCR17 were tied to the ability to digest carbohydrates — a source of energy wolves would have rarely consumed. The variations in domestic dogs suggest those changes helped the animals thrive on the starch-rich diets of humans. Links between another gene related to starch digestion in dogs and domestication, however, have recently been called into question (SN Online: 7/18/17).

The other variations, the team argues, would have predisposed the dogs to be hypersocial with humans, a trait that humans would then have selected for as dogs were bred over generations.

This robot grows like a plant

Robots are branching out. A new prototype soft robot takes inspiration from plants by growing to explore its environment.

Vines and some fungi extend from their tips to explore their surroundings. Elliot Hawkes of the University of California in Santa Barbara and his colleagues designed a bot that works on similar principles. Its mechanical body sits inside a plastic tube reel that extends through pressurized inflation, a method that some invertebrates like peanut worms (Sipunculus nudus) also use to extend their appendages. The plastic tubing has two compartments, and inflating one side or the other changes the extension direction. A camera sensor at the tip alerts the bot when it’s about to run into something.

In the lab, Hawkes and his colleagues programmed the robot to form 3-D structures such as a radio antenna, turn off a valve, navigate a maze, swim through glue, act as a fire extinguisher, squeeze through tight gaps, shimmy through fly paper and slither across a bed of nails. The soft bot can extend up to 72 meters, and unlike plants, it can grow at a speed of 10 meters per second, the team reports July 19 in Science Robotics. The design could serve as a model for building robots that can traverse constrained environments.

This isn’t the first robot to take inspiration from plants. One plantlike predecessor was a robot modeled on roots.

Perovskites power up the solar industry

Tsutomu Miyasaka was on a mission to build a better solar cell. It was the early 2000s, and the Japanese scientist wanted to replace the delicate molecules that he was using to capture sunlight with a sturdier, more effective option.

So when a student told him about an unfamiliar material with unusual properties, Miyasaka had to try it. The material was “very strange,” he says, but he was always keen on testing anything that might respond to light.
Other scientists were running electricity through the material, called a perovskite, to generate light. Miyasaka, at Toin University of Yokohama in Japan, wanted to know if the material could also do the opposite: soak up sunlight and convert it into electricity. To his surprise, the idea worked. When he and his team replaced the light-sensitive components of a solar cell with a very thin layer of the perovskite, the illuminated cell pumped out a little bit of electric current.

The result, reported in 2009 in the Journal of the American Chemical Society, piqued the interest of other scientists, too. The perovskite’s properties made it (and others in the perovskite family) well-suited to efficiently generate energy from sunlight. Perhaps, some scientists thought, this perovskite might someday be able to outperform silicon, the light-absorbing material used in more than 90 percent of solar cells around the world.
Initial excitement quickly translated into promising early results. An important metric for any solar cell is how efficient it is — that is, how much of the sunlight that strikes its surface actually gets converted to electricity. By that standard, perovskite solar cells have shone, increasing in efficiency faster than any previous solar cell material in history. The meager 3.8 percent efficiency reported by Miyasaka’s team in 2009 is up to 22 percent this year. Today, the material is almost on par with silicon, which scientists have been tinkering with for more than 60 years to bring to a similar efficiency level.
“People are very excited because [perovskite’s] efficiency number has climbed so fast. It really feels like this is the thing to be working on right now,” says Jao van de Lagemaat, a chemist at the National Renewable Energy Laboratory in Golden, Colo.

Now, perovskite solar cells are at something of a crossroads. Lab studies have proved their potential: They are cheaper and easier to fabricate than time-tested silicon solar cells. Though perovskites are unlikely to completely replace silicon, the newer materials could piggyback onto existing silicon cells to create extra-effective cells. Perovskites could also harness solar energy in new applications where traditional silicon cells fall flat — as light-absorbing coatings on windows, for instance, or as solar panels that work on cloudy days or even absorb ambient sunlight indoors.

Whether perovskites can make that leap, though, depends on current research efforts to fix some drawbacks. Their tendency to degrade under heat and humidity, for example, is not a great characteristic for a product meant to spend hours in the sun. So scientists are trying to boost stability without killing efficiency.

“There are challenges, but I think we’re well on our way to getting this stuff stable enough,” says Henry Snaith, a physicist at the University of Oxford. Finding a niche for perovskites in an industry so dominated by silicon, however, requires thinking about solar energy in creative ways.

Leaping electrons
Perovskites flew under the radar for years before becoming solar stars. The first known perovskite was a mineral, calcium titanate, or CaTiO3, discovered in the 19th century. More recently, the name has expanded to cover a class of compounds with a similar structure and chemical recipe — a 1:1:3 ingredient ratio — that can be tweaked with different elements to make different “flavors.”

But the perovskites being studied for the light-absorbing layer of solar cells are mostly lab creations. Many are lead halide perovskites, which combine a lead ion and three ions of iodine or a related element, such as bromine, with a third type of ion (usually something like methylammonium). Those ingredients link together to form perovskites’ hallmark cagelike pyramid-on-pyramid structure. Swapping out different ingredients (replacing lead with tin, for instance) can yield many kinds of perovskites, all with slightly different chemical properties but the same basic crystal structure.

Perovskites owe their solar skills to the way their electrons interact with light. When sunlight shines on a solar panel, photons — tiny packets of light energy — bombard the panel’s surface like a barrage of bullets and get absorbed. When a photon is absorbed into the solar cell, it can share some of its energy with a negatively charged electron. Electrons are attracted to the positively charged nucleus of an atom. But a photon can give an electron enough energy to escape that pull, much like a video game character getting a power-up to jump a motorbike across a ravine. As the energized electron leaps away, it leaves behind a positively charged hole. A separate layer of the solar cell collects the electrons, ferrying them off as electric current.

The amount of energy needed to kick an electron over the ravine is different for every material. And not all photon power-ups are created equal. Sunlight contains low-energy photons (infrared light) and high-energy photons (sunburn-causing ultraviolet radiation), as well as all of the visible light in between.

Photons with too little energy “will just sail right on through” the light-catching layer and never get absorbed, says Daniel Friedman, a photovoltaic researcher at the National Renewable Energy Lab. Only a photon that comes in with energy higher than the amount needed to power up an electron will get absorbed. But any excess energy a photon carries beyond what’s needed to boost up an electron gets lost as heat. The more heat lost, the more inefficient the cell.
Because the photons in sunlight vary so much in energy, no solar cell will ever be able to capture and optimally use every photon that comes its way. So you pick a material, like silicon, that’s a good compromise — one that catches a decent number of photons but doesn’t waste too much energy as heat, Friedman says.
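The photon bookkeeping described above can be sketched in a few lines of Python. The physical constants are standard; the band gap figure and the sample wavelengths are rough illustrative assumptions, not values from the story.

```python
# Back-of-the-envelope sketch: which photons a light-absorbing
# material can use, given its band gap (illustrative values).
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electron volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

def photon_fate(wavelength_nm, band_gap_ev):
    """Photons below the gap pass through unabsorbed; absorbed photons
    waste any energy beyond the gap as heat."""
    e = photon_energy_ev(wavelength_nm)
    if e < band_gap_ev:
        return "passes through (too little energy)"
    return f"absorbed; {e - band_gap_ev:.2f} eV lost as heat"

SILICON_GAP = 1.1  # eV, approximate band gap of crystalline silicon

for wl in (300, 550, 1000, 1300):  # UV, visible, near-IR, IR (nm)
    print(wl, "nm:", photon_fate(wl, SILICON_GAP))
```

A 1,300-nanometer infrared photon carries less than silicon’s roughly 1.1 electron volt gap and sails through, while an ultraviolet photon is absorbed but dumps most of its energy as heat — the compromise Friedman describes.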

Although it has dominated the solar cell industry, silicon can’t fully use the energy from higher-energy photons; the material’s solar conversion efficiency tops out at around 30 percent in theory and has hit 20-some percent in practice. Perovskites could do better. The electrons inside perovskite crystals require a bit more energy to dislodge. So when higher-energy photons come into the solar cell, they devote more of their energy to dislodging electrons and generating electric current, and waste less as heat. Plus, by changing the ingredients and their ratios in a perovskite, scientists can adjust the photons it catches. Using different types of perovskites across multiple layers could allow solar cells to more effectively absorb a broader range of photons.

Perovskites have a second efficiency perk. When a photon excites an electron inside a material and leaves behind a positively charged hole, there’s a tendency for the electron to slide right back into a hole. This recombination, as it’s known, is inefficient — an electron that could have fed an electric current instead just stays put.

In perovskites, though, excited electrons usually migrate quite far from their holes, Snaith and others have found by testing many varieties of the material. That boosts the chances the electrons will make it out of the perovskite layer without landing back in a hole.

“It’s a very rare property,” Miyasaka says. It makes for an efficient sunlight absorber.

Some properties of perovskites also make them easier than silicon to turn into solar cells. Making a conventional silicon solar cell requires many steps, all done in just the right order at just the right temperature — something like baking a fragile soufflé. The crystals of silicon have to be perfect, because even small defects in the material can hurt its efficiency. The need for such precision makes silicon solar cells more expensive to produce.

Perovskites are more like brownies from a box — simpler, less finicky. “You can make it in an office, basically,” says materials scientist Robert Chang of Northwestern University in Evanston, Ill. He’s exaggerating, but only a little. Perovskites are made by essentially mixing a bunch of ingredients together and depositing them on a surface in a thin, even film. And while making crystalline silicon requires temperatures up to 2000° Celsius, perovskite crystals form at easier-to-reach temperatures — lower than 200°.

Seeking stability
In many ways, perovskites have become even more promising solar cell materials over time, as scientists have uncovered exciting new properties and finessed the materials’ use. But no material is perfect. So now, scientists are searching for ways to overcome perovskites’ real-world limitations. The most pressing issue is their instability, van de Lagemaat says. The high efficiency levels reported from labs often last only days or hours before the materials break down.

Tackling stability is a less flashy problem than chasing efficiency records, van de Lagemaat points out, which is perhaps why it’s only now getting attention. Stability isn’t a single number that you can flaunt, like an efficiency value. It’s also a bit harder to define, especially since how long a solar cell lasts depends on environmental conditions like humidity and precipitation levels, which vary by location.

Encapsulating the cell with water-resistant coatings is one strategy, but some scientists want to bake stability into the material itself. To do that, they’re experimenting with different perovskite designs. For instance, solar cells containing stacks of flat, graphenelike sheets of perovskites seem to hold up better than solar cells with the standard three-dimensional crystal and its interwoven layers.

In these 2-D perovskites, some of the methylammonium ions are replaced by something larger, like butylammonium. Swapping in the bigger ion forces the crystal to form in sheets just nanometers thick, which stack on top of each other like pages in a book, says chemist Aditya Mohite of Los Alamos National Laboratory in New Mexico. The butylammonium ion, which naturally repels water, forms spacer layers between the 2-D sheets and stops water from permeating into the crystal.
Getting the 2-D layers to line up just right has proved tricky, Mohite says. But by precisely controlling the way the layers form, he and colleagues created a solar cell that runs at 12.5 percent efficiency while standing up to light and humidity longer than a similar 3-D model, the team reported in 2016 in Nature. Although it was protected with a layer of glass, the 3-D perovskite solar cell lost performance rapidly, within a few days, while the 2-D perovskite’s performance dipped only slightly. (After three months, the 2-D version was still working almost as well as it had at the beginning.)

Despite the seemingly complex structure of the 2-D perovskites, they are no more complicated to make than their 3-D counterparts, says Mercouri Kanatzidis, a chemist at Northwestern and a collaborator on the 2-D perovskite project. With the right ingredients, he says, “they form on their own.”

His goal now is to boost the efficiency of 2-D perovskite cells, which don’t yet match up to their 3-D counterparts. And he’s testing different water-repelling ions to reach an ideal stability without sacrificing efficiency.

Other scientists have mixed 2-D and 3-D perovskites to create an ultra-long-lasting cell — at least by perovskite standards. A solar panel made of these cells ran at only 11 percent efficiency, but held up for 10,000 hours of illumination, or more than a year, according to research published in June in Nature Communications. And, importantly, that efficiency was maintained over an area of about 50 square centimeters, more on par with real-world conditions than the teeny-tiny cells made in most research labs.

A place for perovskites?
With boosts to their stability, perovskite solar cells are getting closer to commercial reality. And scientists are assessing where the light-capturing material might actually make its mark.

Some fans have pitted perovskites head-to-head with silicon, suggesting the newbie could one day replace the time-tested material. But a total takeover probably isn’t a realistic goal, says Sarah Kurtz, codirector of the National Center for Photovoltaics at the National Renewable Energy Lab.

“People have been saying for decades that silicon can’t get lower in cost to meet our needs,” Kurtz says. But, she points out, the price of solar energy from silicon-based panels has dropped far lower than people originally expected. There are a lot of silicon solar panels out there, and a lot of commercial manufacturing plants already set up to deal with silicon. That’s a barrier to a new technology, no matter how great it is. Other silicon alternatives face the same limitation. “Historically, silicon has always been dominant,” Kurtz says.
For Snaith, that’s not a problem. He cofounded Oxford Photovoltaics Limited, one of the first companies trying to commercialize perovskite solar cells. His team is developing a solar cell with a perovskite layer over a standard silicon cell to make a super-efficient double-decker cell. That way, Snaith says, the team can capitalize on the massive amount of machinery already set up to build commercial silicon solar cells.
A perovskite layer on top of silicon would absorb higher-energy photons and turn them into electricity. Lower-energy photons that couldn’t excite the perovskite’s electrons would pass through to the silicon layer, where they could still generate current. By combining multiple materials in this way, it’s possible to catch more photons, making a more efficient cell.
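The photon-sorting logic of such a double-decker cell can be sketched as follows. The band gap figures are rough textbook-style values assumed for illustration, not numbers from Snaith’s team.

```python
# Sketch of how a perovskite-on-silicon tandem cell divides up
# sunlight; band gaps are rough illustrative values (in eV).
PEROVSKITE_GAP = 1.6  # roughly typical for lead halide perovskites
SILICON_GAP = 1.1     # approximate band gap of crystalline silicon

def absorbing_layer(photon_ev):
    """The top (perovskite) layer catches high-energy photons;
    lower-energy ones pass through to the silicon underneath."""
    if photon_ev >= PEROVSKITE_GAP:
        return "perovskite (top layer)"
    if photon_ev >= SILICON_GAP:
        return "silicon (bottom layer)"
    return "neither -- passes through the cell"

for e in (3.0, 1.4, 0.9):  # sample photon energies in eV
    print(e, "eV ->", absorbing_layer(e))
```

Because each layer only handles photons near its own gap, less energy is wasted as heat than if silicon absorbed everything alone — which is why stacking materials raises the efficiency ceiling.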

That idea isn’t new, Snaith points out: For years, scientists have been layering various solar cell materials in this way. But these double-decker cells have traditionally been expensive and complicated to make, limiting their applications. Perovskites’ ease of fabrication could change the game. Snaith’s team is seeing some improvement already, bumping the efficiency of a silicon solar cell from 10 to 23.6 percent by adding a perovskite layer, for example. The team reported that result online in February in Nature Energy.

Rather than compete with silicon solar panels for space on sunny rooftops and in open fields, perovskites could also bring solar energy to totally new venues.

“I don’t think it’s smart for perovskites to compete with silicon,” Miyasaka says. Perovskites excel in other areas. “There’s a whole world of applications where silicon can’t be applied.”

Silicon solar cells don’t work as well on rainy or cloudy days, or indoors, where light is less direct, he says. Perovskites shine in these situations. And while traditional silicon solar cells are opaque, very thin films of perovskites could be printed onto glass to make sunlight-capturing windows. That could be a way to bring solar power to new places, turning glassy skyscrapers into serious power sources, for example. Perovskites could even be printed on flexible plastics to make solar-powered coatings that charge cell phones.

That printing process is getting closer to reality: Scientists at the University of Toronto recently reported a way to make all layers of a perovskite solar cell at temperatures below 150° — including the light-absorbing perovskite layer, but also the background workhorse layers that carry the electrons away and funnel them into current. That could streamline and simplify the production process, making mass newspaper-style printing of perovskite solar cells more doable.

Printing perovskite solar cells on glass is also an area of interest for Oxford Photovoltaics, Snaith says. The company’s ultimate target is to build a perovskite cell that will last 25 years, as long as a traditional silicon cell.

Moon had a magnetic field for at least a billion years longer than thought

The moon had a magnetic field for at least 2 billion years, or maybe longer.

Analysis of a relatively young rock collected by Apollo astronauts reveals the moon had a weak magnetic field until 1 billion to 2.5 billion years ago, at least a billion years later than previous data showed. Extending this lifetime offers insights into how small bodies generate magnetic fields, researchers report August 9 in Science Advances. The result may also suggest how life could survive on tiny planets or moons.
“A magnetic field protects the atmosphere of a planet or moon, and the atmosphere protects the surface,” says study coauthor Sonia Tikoo, a planetary scientist at Rutgers University in New Brunswick, N.J. Together, the two protect the potential habitability of the planet or moon, possibly those far beyond our solar system.

The moon does not currently have a global magnetic field. Whether one ever existed was a question debated for decades (SN: 12/17/11, p. 17). On Earth, molten iron sloshes around in the planet’s outer core; the motion of this electrically conductive fluid generates a magnetic field. This setup is called a dynamo. At 1 percent of Earth’s mass, the moon would have cooled too quickly to generate a long-lived roiling interior.
Magnetized rocks brought back by Apollo astronauts, however, revealed that the moon must have had some magnetizing force. The rocks suggested that the magnetic field was strong at least 4.25 billion years ago, early on in the moon’s history, but then dwindled and maybe even got cut off about 3.1 billion years ago.
Tikoo and colleagues analyzed fragments of a lunar rock collected along the southern rim of the moon’s Dune Crater during the Apollo 15 mission in 1971. The team determined the rock was 1 billion to 2.5 billion years old and found it was magnetized. The finding suggests the moon had a magnetic field, albeit a weak one, when the rock formed, the researchers conclude.
A drop in the magnetic field strength suggests the dynamo driving it was generated in two distinct ways, Tikoo says. Early on, Earth and the moon would have sat much closer together, allowing Earth’s gravity to tug on and spin the rocky exterior of the moon. That outer layer would have dragged against the liquid interior, generating friction and a very strong magnetic field (SN Online: 12/4/14).

Then slowly, starting about 3.5 billion years ago, the moon moved away from Earth, weakening the dynamo. But by that point, the moon would have started to cool, causing less dense, hotter material in the core to rise and denser, cooler material to sink, as in Earth’s core. This roiling of material would have sustained a weak field that lasted for at least a billion years, until the moon’s interior cooled, causing the dynamo to die completely, the team suggests.

The two-pronged explanation for the moon’s dynamo is “an entirely plausible idea,” says planetary scientist Ian Garrick-Bethell of the University of California, Santa Cruz. But researchers are just starting to create computer simulations of the strength of magnetic fields to understand how such weaker fields might arise. So it is hard to say exactly what generated the lunar dynamo, he says.

If the idea is correct, it may mean other small planets and moons could have similarly weak, long-lived magnetic fields. Having such an enduring shield could protect those bodies from harmful radiation, boosting the chances for life to survive.

Here are the paths of the next 15 total solar eclipses

August’s total solar eclipse won’t be the last time the moon cloaks the sun’s light. From now to 2040, for example, skywatchers around the globe can witness 15 such events.

Their predicted paths aren’t random scribbles. Solar eclipses occur in what’s called a Saros cycle — a period that lasts about 18 years, 11 days and eight hours, and is governed by the moon’s orbit. (Lunar eclipses follow a Saros cycle, too, which the Chaldeans first noticed probably around 500 B.C.)

Two total solar eclipses separated by that 18-years-and-change period are almost twins — compare this year’s eclipse with the Sept. 2, 2035 eclipse, for example. They take place at roughly the same time of year, at roughly the same latitude and with the moon at about the same distance from Earth. But those extra eight hours, during which the Earth has rotated an additional third of the way on its axis, shift the eclipse path to a different part of the planet.
This cycle repeats over time, creating a family of eclipses called a Saros series. A series lasts 12 to 15 centuries and includes about 70 or more eclipses. The solar eclipses of 2019 and 2037 belong to a different Saros series, so their paths, too, are shifted mimics. Their tracks differ in shape from 2017’s because the moon is at a different place in its orbit when it passes between the Earth and the sun. Paths are wider at the poles because the moon’s shadow hits the Earth’s surface there at a steep angle.
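The Saros arithmetic above is easy to check with Python’s datetime module. The 2017 eclipse time used here is an approximate value assumed for illustration, not ephemeris-grade data.

```python
# Sketch of Saros-cycle arithmetic with Python's datetime module.
from datetime import datetime, timedelta

# One Saros: about 18 years, 11 days and 8 hours
# (6,585 days plus 8 hours, in round numbers).
SAROS = timedelta(days=6585, hours=8)

# Approximate moment of greatest eclipse, Aug. 21, 2017 (UTC);
# an assumed illustrative value.
eclipse_2017 = datetime(2017, 8, 21, 18, 25)

# One Saros later: the near-twin eclipse of Sept. 2, 2035.
twin = eclipse_2017 + SAROS
print(twin)  # lands on Sept. 2, 2035

# The leftover 8 hours means Earth rotates about a third of a
# turn between twins, so the path lands roughly 120 degrees of
# longitude away.
shift_degrees = 8 / 24 * 360
print(shift_degrees)  # 120.0
```

The date arithmetic alone reproduces the pairing the article describes: the same-latitude, same-season twin arrives 18 years and change later, displaced a third of the way around the globe.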

Predicting and mapping past and future eclipses allows scientists “to examine the patterns of eclipse cycles, the most prominent of which is the Saros,” says astrophysicist Fred Espenak, who is retired from NASA’s Goddard Spaceflight Center in Greenbelt, Md.

He would know. Espenak and his colleague Jean Meeus, a retired Belgian astronomer, have mapped solar eclipse paths from 2000 B.C. to A.D. 3000. For archaeologists and historians peering backward, the maps help match up accounts of long-ago eclipses with actual paths. For eclipse chasers peering forward, the data are an itinerary.

“I got interested in figuring out how to calculate eclipse paths for my own use, for planning … expeditions,” says Espenak, who was 18 when he witnessed his first total solar eclipse. It was in 1970, and he secured permission to drive the family car from southern New York to North Carolina to see it. Since then, Espenak, nicknamed “Mr. Eclipse,” has been to every continent, including Antarctica, for a total eclipse of the sun.

“It’s such a dramatic, spectacular, beautiful event,” he says. “You only get a few brief minutes, typically, of totality before it ends. After it’s over, you’re craving to see it again.”

Rumors swirl that LIGO snagged gravitational waves from a neutron star collision

Speculation is running rampant about potential new discoveries of gravitational waves, just as the latest search wound down August 25.

Publicly available logs from astronomical observatories indicate that several telescopes have been zeroing in on one particular region of the sky, potentially in response to a detection of ripples in spacetime by the Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO. These records have raised hopes that, for the first time, scientists may have glimpsed electromagnetic radiation — light — produced in tandem with gravitational waves. That light would allow scientists to glean more information about the waves’ source. Several tweets from astronomers reporting rumors of a new LIGO detection have fanned the flames of anticipation and amplified hopes that the source may be a cosmic convulsion unlike any LIGO has seen before.
“There is a lot of excitement,” says astrophysicist Rosalba Perna of Stony Brook University in New York, who is not involved with the LIGO collaboration. “We are all very anxious to actually see the announcement.”

An Aug. 25 post on the LIGO collaboration’s website announced the end of the current round of data taking, which began November 30, 2016. Virgo, a gravitational wave detector in Italy, had joined forces with LIGO’s two detectors on August 1 (SN Online: 8/1/17). The three detectors will now undergo upgrades to improve their sensitivity. The update noted that “some promising gravitational-wave candidates have been identified in data from both LIGO and Virgo during our preliminary analysis, and we have shared what we currently know with astronomical observing partners.”

When LIGO detects gravitational waves, the collaboration alerts astronomers to the approximate location the waves seemed to originate from. The hope is that a telescope could pick up light from the aftermath of the cosmic catastrophe that created the gravitational waves — although no light has been found in previous detections.

LIGO previously detected three sets of gravitational waves from merging black holes (SN: 6/24/17, p. 6). Black hole coalescences aren’t expected to generate light that could be spotted by telescopes, but another prime candidate could: a smashup between two remnants of stars known as neutron stars. Scientists have been eagerly awaiting LIGO’s first detections of such mergers, which are suspected to be the sites where the universe’s heaviest elements are formed. An observation of a neutron star crash also could provide information about the ultradense material that makes up neutron stars.
Since mid-August, seemingly in response to a LIGO alert, several telescopes have observed a section of sky around the galaxy NGC 4993, located 134 million light-years away in the constellation Hydra. The Hubble Space Telescope has made at least three sets of observations in that vicinity, including one on August 22 seeking “observations of the first electromagnetic counterparts to gravitational wave sources.”

Likewise, the Chandra X-ray Observatory targeted the same region of sky on August 19. And records from the Gemini Observatory’s telescope in Chile indicate several potentially related observations, including one referencing “an exceptional LIGO/Virgo event.”

“I think it’s very, very likely that LIGO has seen something,” says astrophysicist David Radice of Princeton University, who is not affiliated with LIGO. But, he says, he doesn’t know whether its source has been confirmed as merging neutron stars.

LIGO scientists haven’t commented directly on the veracity of the rumor. “We have some substantial work to do before we will be able to share with confidence any quantitative results. We are working as fast as we can,” LIGO spokesperson David Shoemaker of MIT wrote in an e-mail.

Tabby’s star is probably just dusty, and still not an alien megastructure

Alien megastructures are out. The unusual fading of an oddball star is more likely caused by either clouds of dust or an abnormal cycle of brightening and dimming, two new papers suggest.

Huan Meng of the University of Arizona in Tucson and his colleagues suggest that KIC 8462852, known as Tabby’s star, is dimming thanks to an orbiting cloud of fine dust particles. The team observed the star with the infrared Spitzer and ultraviolet Swift space telescopes from October 2015 to December 2016 — the first observations in multiple wavelengths of light. They found that the star is dimming faster in short blue wavelengths than in longer infrared ones, a pattern expected from a screen of fine dust particles; a solid opaque object would block all wavelengths equally.
“That almost absolutely ruled out the alien megastructure scenario, unless it’s an alien microstructure,” Meng says.
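The particle-size logic can be sketched with a toy calculation. This is not from Meng's paper: it only assumes the standard result that extinction by grains much smaller than the wavelength falls off steeply with wavelength (roughly as wavelength to the minus one or faster), while a large opaque body blocks all wavelengths equally. The band wavelengths below are illustrative stand-ins for the Swift and Spitzer observations.

```python
# Toy sketch (not from the study): why chromatic dimming points to fine
# dust. Small grains have an optical depth that scales roughly as
# lambda**-alpha with alpha >= 1; a solid opaque body ("megastructure")
# is gray, alpha = 0, and dims all wavelengths equally.

def dimming_ratio(lam_blue_um, lam_ir_um, alpha):
    """Ratio of fractional dimming at a blue vs. an infrared wavelength,
    assuming optical depth scales as wavelength**-alpha (alpha=0 is gray)."""
    return (lam_blue_um / lam_ir_um) ** -alpha

# Illustrative wavelengths: ~0.44 micron (blue) vs ~4.5 microns (infrared)
blue, ir = 0.44, 4.5

print(dimming_ratio(blue, ir, alpha=0.0))  # gray, opaque body: equal dimming
print(dimming_ratio(blue, ir, alpha=1.0))  # fine dust: much stronger in blue
```

A gray occulter gives a ratio of exactly 1, so the observed excess dimming in the blue is the signature of small grains rather than large solid structures.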

Tabby’s star is most famous for suddenly dropping in brightness by up to 22 percent over the course of a few days (SN Online: 2/2/16). Later observations suggested the star is also fading by about 4 percent per year (SN: 9/17/16, p. 12), which Meng’s team confirmed in a paper posted online August 24 at arXiv.org.

Joshua Simon of the Observatories of the Carnegie Institution for Science in Pasadena, Calif., found a similar dimming in data on Tabby’s star from the All Sky Automated Survey going back to 2006. Simon and colleagues also found for the first time that the star grew brighter in 2014, and possibly in 2006, they reported in a paper posted online August 25 at arXiv.org.

“That’s fascinating,” says astrophysicist Tabetha Boyajian of Louisiana State University in Baton Rouge. She first reported the star’s flickers in 2015 (the star is nicknamed for her) and is a coauthor on Meng’s paper. “We always speculated that it would brighten sometime. It can’t just get fainter all the time — otherwise it would disappear. This shows that it does brighten.”

The brightening could be due to a magnetic cycle like the sun’s, Simon suggests. But no known cycle makes a star brighten and dim by quite so much, so the star would still be odd.

Brian Metzger of Columbia University previously suggested that a ripped-up planet falling in pieces into the star could explain both the long-term and short-term dimming. He thinks that model still works, although it needs some tweaks.

“This adds some intrigue to what’s going on, but I don’t think it really changes the landscape,” says Metzger, who was not involved in the new studies. And newer observations could complicate things further: The star went through another bout of dimming between May and July. “I’m waiting to see the papers analyzing this recent event,” Metzger says.

50 years ago, West Germany embraced nuclear power

West German power companies have decided to go ahead with two nuclear power station projects…. Compared with the U.S. and Britain, Germany has been relatively backward in the application of nuclear energy…. The slow German start is only partly the result of restrictions placed upon German nuclear research after the war. — Science News, September 16, 1967

Update
Both East and West Germany embraced nuclear power until antinuclear protests in the 1970s gathered steam. In 1998, the unified German government began a nuclear phaseout, which Chancellor Angela Merkel halted in 2009. The 2011 Fukushima nuclear disaster in Japan caused a rapid reversal. Germany closed eight of its nuclear plants immediately, and announced that all nuclear power in the country would go dark by 2022 (SN Online: 6/1/11). A pivot to renewable energy followed; wind, solar, hydropower and biomass produced 188 billion kilowatt-hours of electricity in 2016, nearly 32 percent of German electricity usage.

From day one, a frog’s developing brain is calling the shots

Frog brains get busy long before they’re fully formed. Just a day after fertilization, embryonic brains begin sending signals to far-off places in the body, helping oversee the layout of complex patterns of muscles and nerve fibers. And when the brain is missing, bodily chaos ensues, researchers report online September 25 in Nature Communications.

The results, from brainless embryos and tadpoles, broaden scientists’ understanding of the types of signals involved in making sure bodies develop correctly, says developmental biologist Catherine McCusker of the University of Massachusetts Boston. Scientists are familiar with short-range signals among nearby cells that help pattern bodies. But because these newly described missives travel all the way from the brain to the far reaches of the body, they are “the first example of really long-range signals,” she says.

Celia Herrera-Rincon of Tufts University in Medford, Mass., and colleagues came up with a simple approach to tease out the brain’s influence on the growing body. Just one day after fertilization, the scientists lopped off the still-forming brains of African clawed frog embryos. These embryos survive to become tadpoles even without brains, a quirk of biology that allowed the researchers to see whether the brain is required for the body’s development.

The answer was a definite — and surprising — yes, Herrera-Rincon says. Long before the brain is mature, it’s already organizing and guiding organ behavior, she says. Brainless tadpoles had bungled patterns of muscles. Normally, muscle fibers form a stacked chevron pattern. But in tadpoles lacking a brain, this pattern didn’t form correctly. “The borders between segments are all wonky,” says study coauthor Michael Levin, also of Tufts University. “They can’t keep a straight line.”

Nerve fibers that crisscross tadpoles’ bodies also grew in an abnormal pattern. Levin and colleagues noticed extra nerve fibers snaking across the brainless tadpoles in a chaotic tangle, “a nerve network that shouldn’t be there,” he says.

Muscle and nerve abnormalities are the most obvious differences. But brainless tadpoles probably have more subtle defects in other parts of their bodies, such as the heart. The search for those defects is the subject of ongoing experiments, Levin says.

In addition to keeping patterns on point, the young frog brain may protect its body from chemical assaults. A molecule that binds to certain proteins on cells in the body had no effect on normal embryos. But when given to brainless embryos, the same molecule caused their spinal cords and tails to grow crooked. These results suggest that early in development, brains keep embryos safe from agents that would otherwise cause harm.

“The brain is instructing cells that are really a long way away from it,” Levin says. While the precise identities of these long-range signals aren’t known, the researchers have some ideas. When brainless embryos were dosed with a drug that targets cells that typically respond to the chemical messenger acetylcholine, the muscle pattern improved. Similarly, adding an ion channel protein called HCN2, which can tweak cells’ electrical activity, also seemed to improve muscle development. More work is needed before scientists know whether these interventions are actually mimicking messaging from the early brain, and if so, how.

Frog development isn’t the same as mammalian development, but it “is pretty applicable to human biology,” McCusker says. In fundamental ways, humans and frogs are built from the same molecular toolbox, she says. So the results hint that a growing human brain might interact similarly with a growing human body.

Narwhals react to certain dangers in a really strange way

When escaping from humans, narwhals don’t just freeze or flee. They do both.

These deep-diving marine mammals show physiological responses similar to those of an animal frozen in fear: Their heart rate, breathing and metabolism slow, mimicking a “deer in the headlights” reaction. But narwhals (Monodon monoceros) take this freeze response to extremes. The animals slow their heart rate to as few as three beats per minute for more than 10 minutes, while pumping their tails as many as 25 strokes per minute during an escape dive, an international team of researchers reports in the Dec. 8 Science.

“That was astounding to us because there are other marine mammals that can have heart rates that low but not typically for that long a period of time, and especially not while they’re swimming as hard as they can,” says Terrie Williams, a biologist at the University of California, Santa Cruz. So far, this costly escape has been observed only after a prolonged interaction with humans.

Usually, narwhals will escape natural predators such as killer whales by stealthily slipping under ice sheets or huddling in spots too shallow for their pursuers, Williams says. But interactions with humans — something that will happen increasingly as melting sea ice opens up the Arctic — may be changing that calculus.

“When narwhals detect humans, they often dive quickly and disappear from sight,” says Kristin Laidre, an ecologist at the University of Washington in Seattle who studies marine mammals in the Arctic.

Williams and her colleagues partnered with indigenous hunters in East Greenland to capture narwhals in nets. Then, the researchers stuck monitoring equipment to the narwhals’ backs with suction cups and released the creatures. The team tracked the tail stroke rate and cardiovascular response of the narwhals after their release, and determined how much energy the animals used during their deep escape dives.

During normal dives, narwhals reduce their heart rate to about 10 to 20 beats per minute to conserve oxygen while spending prolonged time underwater. These regular deep dives to forage for food don’t require rigorous exercise. But during escape dives after being entangled in a net for an hour or longer, “the heart rates were going down to levels of three and four beats per minute, and being maintained at that level for 10 minutes at a time,” Williams says.

The narwhals were observed making multiple dives to depths of 45 to 473 meters in the hours following escape. When fleeing, the tusked animals expended about three to six times as much energy as they normally burn while resting. The authors calculated that the frantic getaway, combined with what they called “cardiac freeze,” severely and rapidly depletes the narwhals’ available oxygen in their lungs, blood and muscles — using 97 percent of the creatures’ oxygen stores compared with 52 percent on normal dives of similar depth and duration.
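For a rough sense of scale, the reported percentages can be turned into remaining reserves. Only the 97 percent and 52 percent figures come from the study as reported above; the subtraction is an illustration, not a model from the paper.

```python
# Back-of-envelope sketch: what the reported oxygen-use percentages
# imply about the oxygen buffer a narwhal has left after a dive.
# The 0.97 and 0.52 figures are from the study as reported above;
# the arithmetic itself is only illustrative.

def remaining_reserve(fraction_used):
    """Fraction of total oxygen stores left when the dive ends."""
    return 1.0 - fraction_used

escape_left = remaining_reserve(0.97)  # escape dive: ~3% of stores left
normal_left = remaining_reserve(0.52)  # comparable normal dive: ~48% left

print(f"escape: {escape_left:.0%} left, normal: {normal_left:.0%} left")
print(f"a normal dive ends with ~{normal_left / escape_left:.0f}x more reserve")
```

The comparison makes the paper's point concrete: an escape dive leaves the animal with almost no oxygen margin before it must surface.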

“There is a concern from our group that this is just pushing the biology of these animals beyond what they can do,” Williams says. As human activity increases in the Arctic, there may be more chance of inciting this potentially harmful escape response in narwhals.

The creatures may also become more vulnerable to other human-caused disturbances, such as seismic exploration, hunting and noise from large vessels and fishing boats. The researchers plan to investigate whether these activities cause the same flee-and-freeze reaction, and whether this extreme response affects narwhals’ long-term health.

This study “provides a new physiological angle on the vulnerability of narwhals to anthropogenic disturbance, which is likely to increase in the Arctic with sea ice loss,” Laidre says. Better understanding the human impacts on narwhals is essential for conservation of this species, she adds.