Robots are branching out. A new prototype soft robot takes inspiration from plants by growing to explore its environment.
Vines and some fungi extend from their tips to explore their surroundings. Elliot Hawkes of the University of California, Santa Barbara and his colleagues designed a bot that works on similar principles. Its mechanical body sits inside a plastic tube reel that extends through pressurized inflation, a method that some invertebrates, such as peanut worms (Sipunculus nudus), also use to extend their appendages. The plastic tubing has two compartments, and inflating one side or the other changes the extension direction. A camera sensor at the tip alerts the bot when it’s about to run into something.
In the lab, Hawkes and his colleagues programmed the robot to form 3-D structures such as a radio antenna, turn off a valve, navigate a maze, swim through glue, act as a fire extinguisher, squeeze through tight gaps, shimmy through fly paper and slither across a bed of nails. The soft bot can extend up to 72 meters, and unlike plants, it can grow at a speed of 10 meters per second, the team reports July 19 in Science Robotics. The design could serve as a model for building robots that can traverse constrained environments.
This isn’t the first robot to take inspiration from plants. One plantlike predecessor was a robot modeled on roots.
Tsutomu Miyasaka was on a mission to build a better solar cell. It was the early 2000s, and the Japanese scientist wanted to replace the delicate molecules that he was using to capture sunlight with a sturdier, more effective option.
So when a student told him about an unfamiliar material with unusual properties, Miyasaka had to try it. The material was “very strange,” he says, but he was always keen on testing anything that might respond to light. Other scientists were running electricity through the material, called a perovskite, to generate light. Miyasaka, at Toin University of Yokohama in Japan, wanted to know if the material could also do the opposite: soak up sunlight and convert it into electricity. To his surprise, the idea worked. When he and his team replaced the light-sensitive components of a solar cell with a very thin layer of the perovskite, the illuminated cell pumped out a little bit of electric current.
The result, reported in 2009 in the Journal of the American Chemical Society, piqued the interest of other scientists, too. The perovskite’s properties made it (and others in the perovskite family) well-suited to efficiently generate energy from sunlight. Perhaps, some scientists thought, this perovskite might someday be able to outperform silicon, the light-absorbing material used in more than 90 percent of solar cells around the world.

Initial excitement quickly translated into promising early results. An important metric for any solar cell is how efficient it is — that is, how much of the sunlight that strikes its surface actually gets converted to electricity. By that standard, perovskite solar cells have shone, increasing in efficiency faster than any previous solar cell material in history. The meager 3.8 percent efficiency reported by Miyasaka’s team in 2009 is up to 22 percent this year. Today, the material is almost on par with silicon, which scientists have been tinkering with for more than 60 years to bring to a similar efficiency level.

“People are very excited because [perovskite’s] efficiency number has climbed so fast. It really feels like this is the thing to be working on right now,” says Jao van de Lagemaat, a chemist at the National Renewable Energy Laboratory in Golden, Colo.
Now, perovskite solar cells are at something of a crossroads. Lab studies have proved their potential: They are cheaper and easier to fabricate than time-tested silicon solar cells. Though perovskites are unlikely to completely replace silicon, the newer materials could piggyback onto existing silicon cells to create extra-effective cells. Perovskites could also harness solar energy in new applications where traditional silicon cells fall flat — as light-absorbing coatings on windows, for instance, or as solar panels that work on cloudy days or even absorb ambient sunlight indoors.
Whether perovskites can make that leap, though, depends on current research efforts to fix some drawbacks. Their tendency to degrade under heat and humidity, for example, is not a great characteristic for a product meant to spend hours in the sun. So scientists are trying to boost stability without killing efficiency.
“There are challenges, but I think we’re well on our way to getting this stuff stable enough,” says Henry Snaith, a physicist at the University of Oxford. Finding a niche for perovskites in an industry so dominated by silicon, however, requires thinking about solar energy in creative ways.
Leaping electrons

Perovskites flew under the radar for years before becoming solar stars. The first known perovskite was a mineral, calcium titanate, or CaTiO3, discovered in the 19th century. In more recent years, perovskites have expanded to a class of compounds with a similar structure and chemical recipe — a 1:1:3 ingredient ratio — that can be tweaked with different elements to make different “flavors.”
But the perovskites being studied for the light-absorbing layer of solar cells are mostly lab creations. Many are lead halide perovskites, which combine a lead ion and three ions of iodine or a related element, such as bromine, with a third type of ion (usually something like methylammonium). Those ingredients link together to form perovskites’ hallmark cagelike pyramid-on-pyramid structure. Swapping out different ingredients (replacing lead with tin, for instance) can yield many kinds of perovskites, all with slightly different chemical properties but the same basic crystal structure.
Perovskites owe their solar skills to the way their electrons interact with light. When sunlight shines on a solar panel, photons — tiny packets of light energy — bombard the panel’s surface like a barrage of bullets and get absorbed. When a photon is absorbed into the solar cell, it can share some of its energy with a negatively charged electron. Electrons are attracted to the positively charged nucleus of an atom. But a photon can give an electron enough energy to escape that pull, much like a video game character getting a power-up to jump a motorbike across a ravine. As the energized electron leaps away, it leaves behind a positively charged hole. A separate layer of the solar cell collects the electrons, ferrying them off as electric current.
The amount of energy needed to kick an electron over the ravine is different for every material. And not all photon power-ups are created equal. Sunlight contains low-energy photons (infrared light) and high-energy photons (sunburn-causing ultraviolet radiation), as well as all of the visible light in between.
Photons with too little energy “will just sail right on through” the light-catching layer and never get absorbed, says Daniel Friedman, a photovoltaic researcher at the National Renewable Energy Lab. Only a photon that comes in with energy higher than the amount needed to power up an electron will get absorbed. But any excess energy a photon carries beyond what’s needed to boost up an electron gets lost as heat. The more heat lost, the more inefficient the cell. Because the photons in sunlight vary so much in energy, no solar cell will ever be able to capture and optimally use every photon that comes its way. So you pick a material, like silicon, that’s a good compromise — one that catches a decent number of photons but doesn’t waste too much energy as heat, Friedman says.
Although it has dominated the solar cell industry, silicon can’t fully use the energy from higher-energy photons; the material’s solar conversion efficiency tops out at around 30 percent in theory and has hit 20-some percent in practice. Perovskites could do better. The electrons inside perovskite crystals require a bit more energy to dislodge. So when higher-energy photons come into the solar cell, they devote more of their energy to dislodging electrons and generating electric current, and waste less as heat. Plus, by changing the ingredients and their ratios in a perovskite, scientists can adjust the photons it catches. Using different types of perovskites across multiple layers could allow solar cells to more effectively absorb a broader range of photons.
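The band-gap tradeoff described above can be put in a few lines of code. This is purely illustrative — the band-gap value is approximate, and real cells have many more loss mechanisms than this toy model shows:

```python
# Rough sketch of the photon/band-gap tradeoff (illustrative numbers only).
def photon_outcome(photon_ev, band_gap_ev):
    """What happens to a photon hitting a light-absorbing layer?"""
    if photon_ev < band_gap_ev:
        # Too little energy to dislodge an electron: sails right through.
        return "passes through", 0.0
    # Electron is dislodged; roughly the band-gap energy is usable,
    # and the excess is wasted as heat.
    return "absorbed", photon_ev - band_gap_ev

SILICON_GAP = 1.1  # eV, approximate band gap of crystalline silicon
for energy in (0.8, 1.1, 2.0, 3.1):  # infrared -> visible -> ultraviolet
    outcome, heat = photon_outcome(energy, SILICON_GAP)
    print(f"{energy:.1f} eV photon: {outcome}, {heat:.1f} eV lost as heat")
```

A material with a slightly larger band gap, like a perovskite, wastes less of each high-energy photon as heat but lets more low-energy photons slip through — the compromise Friedman describes.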
Perovskites have a second efficiency perk. When a photon excites an electron inside a material and leaves behind a positively charged hole, there’s a tendency for the electron to slide right back into a hole. This recombination, as it’s known, is inefficient — an electron that could have fed an electric current instead just stays put.
In perovskites, though, excited electrons usually migrate quite far from their holes, Snaith and others have found by testing many varieties of the material. That boosts the chances the electrons will make it out of the perovskite layer without landing back in a hole.
“It’s a very rare property,” Miyasaka says. It makes for an efficient sunlight absorber.
Some properties of perovskites also make them easier than silicon to turn into solar cells. Making a conventional silicon solar cell requires many steps, all done in just the right order at just the right temperature — something like baking a fragile soufflé. The crystals of silicon have to be perfect, because even small defects in the material can hurt its efficiency. The need for such precision makes silicon solar cells more expensive to produce.
Perovskites are more like brownies from a box — simpler, less finicky. “You can make it in an office, basically,” says materials scientist Robert Chang of Northwestern University in Evanston, Ill. He’s exaggerating, but only a little. Perovskites are made by essentially mixing a bunch of ingredients together and depositing them on a surface in a thin, even film. And while making crystalline silicon requires temperatures up to 2000° Celsius, perovskite crystals form at easier-to-reach temperatures — lower than 200°.
Seeking stability

In many ways, perovskites have become even more promising solar cell materials over time, as scientists have uncovered exciting new properties and finessed the materials’ use. But no material is perfect. So now, scientists are searching for ways to overcome perovskites’ real-world limitations. The most pressing issue is their instability, van de Lagemaat says. The high efficiency levels reported from labs often last only days or hours before the materials break down.
Tackling stability is a less flashy problem than chasing efficiency records, van de Lagemaat points out, which is perhaps why it’s only now getting attention. Stability isn’t a single number that you can flaunt, like an efficiency value. It’s also a bit harder to define, especially since how long a solar cell lasts depends on environmental conditions like humidity and precipitation levels, which vary by location.
Encapsulating the cell with water-resistant coatings is one strategy, but some scientists want to bake stability into the material itself. To do that, they’re experimenting with different perovskite designs. For instance, solar cells containing stacks of flat, graphenelike sheets of perovskites seem to hold up better than solar cells with the standard three-dimensional crystal and its interwoven layers.
In these 2-D perovskites, some of the methylammonium ions are replaced by something larger, like butylammonium. Swapping in the bigger ion forces the crystal to form in sheets just nanometers thick, which stack on top of each other like pages in a book, says chemist Aditya Mohite of Los Alamos National Laboratory in New Mexico. The butylammonium ion, which naturally repels water, forms spacer layers between the 2-D sheets and stops water from permeating into the crystal. Getting the 2-D layers to line up just right has proved tricky, Mohite says. But by precisely controlling the way the layers form, he and colleagues created a solar cell that runs at 12.5 percent efficiency while standing up to light and humidity longer than a similar 3-D model, the team reported in 2016 in Nature. Although it was protected with a layer of glass, the 3-D perovskite solar cell lost performance rapidly, within a few days, while the 2-D perovskite’s performance dropped only slightly. (After three months, the 2-D version was still working almost as well as it had at the beginning.)
Despite the seemingly complex structure of the 2-D perovskites, they are no more complicated to make than their 3-D counterparts, says Mercouri Kanatzidis, a chemist at Northwestern and a collaborator on the 2-D perovskite project. With the right ingredients, he says, “they form on their own.”
His goal now is to boost the efficiency of 2-D perovskite cells, which don’t yet match up to their 3-D counterparts. And he’s testing different water-repelling ions to reach an ideal stability without sacrificing efficiency.
Other scientists have mixed 2-D and 3-D perovskites to create an ultra-long-lasting cell — at least by perovskite standards. A solar panel made of these cells ran at only 11 percent efficiency, but held up for 10,000 hours of illumination, or more than a year, according to research published in June in Nature Communications. And, importantly, that efficiency was maintained over an area of about 50 square centimeters, more on par with real-world conditions than the teeny-tiny cells made in most research labs.
A place for perovskites?

With boosts to their stability, perovskite solar cells are getting closer to commercial reality. And scientists are assessing where the light-capturing material might actually make its mark.
Some fans have pitted perovskites head-to-head with silicon, suggesting the newbie could one day replace the time-tested material. But a total takeover probably isn’t a realistic goal, says Sarah Kurtz, codirector of the National Center for Photovoltaics at the National Renewable Energy Lab.
“People have been saying for decades that silicon can’t get lower in cost to meet our needs,” Kurtz says. But, she points out, the price of solar energy from silicon-based panels has dropped far lower than people originally expected. There are a lot of silicon solar panels out there, and a lot of commercial manufacturing plants already set up to deal with silicon. That’s a barrier to a new technology, no matter how great it is. Other silicon alternatives face the same limitation. “Historically, silicon has always been dominant,” Kurtz says.

For Snaith, that’s not a problem. He cofounded Oxford Photovoltaics Limited, one of the first companies trying to commercialize perovskite solar cells. His team is developing a solar cell with a perovskite layer over a standard silicon cell to make a super-efficient double-decker cell. That way, Snaith says, the team can capitalize on the massive amount of machinery already set up to build commercial silicon solar cells. A perovskite layer on top of silicon would absorb higher-energy photons and turn them into electricity. Lower-energy photons that couldn’t excite the perovskite’s electrons would pass through to the silicon layer, where they could still generate current. By combining multiple materials in this way, it’s possible to catch more photons, making a more efficient cell.
That idea isn’t new, Snaith points out: For years, scientists have been layering various solar cell materials in this way. But these double-decker cells have traditionally been expensive and complicated to make, limiting their applications. Perovskites’ ease of fabrication could change the game. Snaith’s team is seeing some improvement already, bumping the efficiency of a silicon solar cell from 10 to 23.6 percent by adding a perovskite layer, for example. The team reported that result online in February in Nature Energy.
Rather than compete with silicon solar panels for space on sunny rooftops and in open fields, perovskites could also bring solar energy to totally new venues.
“I don’t think it’s smart for perovskites to compete with silicon,” Miyasaka says. Perovskites excel in other areas. “There’s a whole world of applications where silicon can’t be applied.”
Silicon solar cells don’t work as well on rainy or cloudy days, or indoors, where light is less direct, he says. Perovskites shine in these situations. And while traditional silicon solar cells are opaque, very thin films of perovskites could be printed onto glass to make sunlight-capturing windows. That could be a way to bring solar power to new places, turning glassy skyscrapers into serious power sources, for example. Perovskites could even be printed on flexible plastics to make solar-powered coatings that charge cell phones.
That printing process is getting closer to reality: Scientists at the University of Toronto recently reported a way to make all layers of a perovskite solar cell at temperatures below 150° — including the light-absorbing perovskite layer, but also the background workhorse layers that carry the electrons away and funnel them into current. That could streamline and simplify the production process, making mass newspaper-style printing of perovskite solar cells more doable.
Printing perovskite solar cells on glass is also an area of interest for Oxford Photovoltaics, Snaith says. The company’s ultimate target is to build a perovskite cell that will last 25 years, as long as a traditional silicon cell.
The moon had a magnetic field for at least 2 billion years, or maybe longer.
Analysis of a relatively young rock collected by Apollo astronauts reveals the moon had a weak magnetic field until 1 billion to 2.5 billion years ago, at least a billion years later than previous data showed. Extending this lifetime offers insights into how small bodies generate magnetic fields, researchers report August 9 in Science Advances. The result may also suggest how life could survive on tiny planets or moons. “A magnetic field protects the atmosphere of a planet or moon, and the atmosphere protects the surface,” says study coauthor Sonia Tikoo, a planetary scientist at Rutgers University in New Brunswick, N.J. Together, the two protect the potential habitability of the planet or moon, possibly those far beyond our solar system.
The moon does not currently have a global magnetic field. Whether one ever existed was a question debated for decades (SN: 12/17/11, p. 17). On Earth, electrically conductive molten iron sloshes around in the planet’s outer core, generating a magnetic field. This setup is called a dynamo. At 1 percent of Earth’s mass, the moon would have cooled too quickly to generate a long-lived roiling interior.

Magnetized rocks brought back by Apollo astronauts, however, revealed that the moon must have had some magnetizing force. The rocks suggested that the magnetic field was strong at least 4.25 billion years ago, early in the moon’s history, but then dwindled and maybe even got cut off about 3.1 billion years ago.

Tikoo and colleagues analyzed fragments of a lunar rock collected along the southern rim of the moon’s Dune Crater during the Apollo 15 mission in 1971. The team determined the rock was 1 billion to 2.5 billion years old and found it was magnetized. The finding suggests the moon had a magnetic field, albeit a weak one, when the rock formed, the researchers conclude.

A drop in the magnetic field strength suggests the dynamo driving it was generated in two distinct ways, Tikoo says. Early on, Earth and the moon would have sat much closer together, allowing Earth’s gravity to tug on and spin the rocky exterior of the moon. That outer layer would have dragged against the liquid interior, generating friction and a very strong magnetic field (SN Online: 12/4/14).
Then slowly, starting about 3.5 billion years ago, the moon moved away from Earth, weakening the dynamo. But by that point, the moon would have started to cool, causing less dense, hotter material in the core to rise and denser, cooler material to sink, as in Earth’s core. This roiling of material would have sustained a weak field that lasted for at least a billion years, until the moon’s interior cooled, causing the dynamo to die completely, the team suggests.
The two-pronged explanation for the moon’s dynamo is “an entirely plausible idea,” says planetary scientist Ian Garrick-Bethell of the University of California, Santa Cruz. But researchers are just starting to create computer simulations of the strength of magnetic fields to understand how such weaker fields might arise. So it is hard to say exactly what generated the lunar dynamo, he says.
If the idea is correct, it may mean other small planets and moons could have similarly weak, long-lived magnetic fields. Having such an enduring shield could protect those bodies from harmful radiation, boosting the chances for life to survive.
August’s total solar eclipse won’t be the last time the moon cloaks the sun’s light. From now to 2040, for example, skywatchers around the globe can witness 15 such events.
Their predicted paths aren’t random scribbles. Solar eclipses occur in what’s called a Saros cycle — a period that lasts about 18 years, 11 days and eight hours, and is governed by the moon’s orbit. (Lunar eclipses follow a Saros cycle, too, which the Chaldeans first noticed probably around 500 B.C.)
Two total solar eclipses separated by that 18-years-and-change period are almost twins — compare this year’s eclipse with the Sept. 2, 2035 eclipse, for example. They take place at roughly the same time of year, at roughly the same latitude and with the moon at about the same distance from Earth. But those extra eight hours, during which the Earth rotates an additional third of the way on its axis, shift the eclipse path to a different part of the planet.

This cycle repeats over time, creating a family of eclipses called a Saros series. A series lasts 12 to 15 centuries and includes 70 or more eclipses. The solar eclipses of 2019 and 2037 belong to a different Saros series, so their paths, too, are shifted mimics. Their tracks differ in shape from 2017’s because the moon is at a different place in its orbit when it passes between the Earth and the sun. Paths are wider at the poles because the moon’s shadow hits Earth’s surface there at a steep angle.
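The Saros arithmetic is easy to check directly. A minimal sketch — the exact minute chosen for the 2017 eclipse’s peak is an assumption for illustration, not a figure from the text:

```python
from datetime import datetime, timedelta

# One Saros period: about 18 years, 11 days and 8 hours (~6,585.3 days).
SAROS = timedelta(days=6585, hours=8)

# Approximate moment of greatest eclipse on Aug. 21, 2017 (UTC).
eclipse_2017 = datetime(2017, 8, 21, 18, 26)

# Adding one Saros lands on the "twin" eclipse of Sept. 2, 2035.
twin = eclipse_2017 + SAROS
print(twin.date())  # 2035-09-02
```

The extra eight hours in the period are why the twin’s path falls a third of the way around the globe from 2017’s.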
Predicting and mapping past and future eclipses allows scientists “to examine the patterns of eclipse cycles, the most prominent of which is the Saros,” says astrophysicist Fred Espenak, who is retired from NASA’s Goddard Spaceflight Center in Greenbelt, Md.
He would know. Espenak and his colleague Jean Meeus, a retired Belgian astronomer, have mapped solar eclipse paths from 2000 B.C. to A.D. 3000. For archaeologists and historians peering backward, the maps help match up accounts of long-ago eclipses with actual paths. For eclipse chasers peering forward, the data are an itinerary.
“I got interested in figuring out how to calculate eclipse paths for my own use, for planning … expeditions,” says Espenak, who was 18 when he witnessed his first total solar eclipse. It was in 1970, and he secured permission to drive the family car from southern New York to North Carolina to see it. Since then, Espenak, nicknamed “Mr. Eclipse,” has been to every continent, including Antarctica, for a total eclipse of the sun.
“It’s such a dramatic, spectacular, beautiful event,” he says. “You only get a few brief minutes, typically, of totality before it ends. After it’s over, you’re craving to see it again.”
Speculation is running rampant about potential new discoveries of gravitational waves, just as the latest search wound down August 25.
Publicly available logs from astronomical observatories indicate that several telescopes have been zeroing in on one particular region of the sky, potentially in response to a detection of ripples in spacetime by the Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO. These records have raised hopes that, for the first time, scientists may have glimpsed electromagnetic radiation — light — produced in tandem with gravitational waves. That light would allow scientists to glean more information about the waves’ source. Several tweets from astronomers reporting rumors of a new LIGO detection have fanned the flames of anticipation and amplified hopes that the source may be a cosmic convulsion unlike any LIGO has seen before. “There is a lot of excitement,” says astrophysicist Rosalba Perna of Stony Brook University in New York, who is not involved with the LIGO collaboration. “We are all very anxious to actually see the announcement.”
An Aug. 25 post on the LIGO collaboration’s website announced the end of the current round of data taking, which began November 30, 2016. Virgo, a gravitational wave detector in Italy, had joined forces with LIGO’s two detectors on August 1 (SN Online: 8/1/17). The three detectors will now undergo upgrades to improve their sensitivity. The update noted that “some promising gravitational-wave candidates have been identified in data from both LIGO and Virgo during our preliminary analysis, and we have shared what we currently know with astronomical observing partners.”
When LIGO detects gravitational waves, the collaboration alerts astronomers to the approximate location the waves seemed to originate from. The hope is that a telescope could pick up light from the aftermath of the cosmic catastrophe that created the gravitational waves — although no light has been found in previous detections.
LIGO previously detected three sets of gravitational waves from merging black holes (SN: 6/24/17, p. 6). Black hole coalescences aren’t expected to generate light that could be spotted by telescopes, but another prime candidate could: a smashup between two remnants of stars known as neutron stars. Scientists have been eagerly awaiting LIGO’s first detections of such mergers, which are suspected to be the sites where the universe’s heaviest elements are formed. An observation of a neutron star crash also could provide information about the ultradense material that makes up neutron stars.

Since mid-August, seemingly in response to a LIGO alert, several telescopes have observed a section of sky around the galaxy NGC 4993, located 134 million light-years away in the constellation Hydra. The Hubble Space Telescope has made at least three sets of observations in that vicinity, including one on August 22 seeking “observations of the first electromagnetic counterparts to gravitational wave sources.”
Likewise, the Chandra X-ray Observatory targeted the same region of sky on August 19. And records from the Gemini Observatory’s telescope in Chile indicate several potentially related observations, including one referencing “an exceptional LIGO/Virgo event.”
“I think it’s very, very likely that LIGO has seen something,” says astrophysicist David Radice of Princeton University, who is not affiliated with LIGO. But, he says, he doesn’t know whether its source has been confirmed as merging neutron stars.
LIGO scientists haven’t commented directly on the veracity of the rumor. “We have some substantial work to do before we will be able to share with confidence any quantitative results. We are working as fast as we can,” LIGO spokesperson David Shoemaker of MIT wrote in an e-mail.
Alien megastructures are out. The unusual fading of an oddball star is more likely caused by either clouds of dust or an abnormal cycle of brightening and dimming, two new papers suggest.
Huan Meng of the University of Arizona in Tucson and his colleagues suggest that KIC 8462852, known as Tabby’s star, is dimming thanks to an orbiting cloud of fine dust particles. The team observed the star with the infrared Spitzer and ultraviolet Swift space telescopes from October 2015 to December 2016 — the first observations of the star’s dimming in multiple wavelengths of light. They found that the star is dimming faster in short blue wavelengths than in longer infrared ones, the signature of fine dust particles, which block short wavelengths more strongly than long ones. “That almost absolutely ruled out the alien megastructure scenario, unless it’s an alien microstructure,” Meng says.
Tabby’s star is most famous for suddenly dropping in brightness by up to 22 percent over the course of a few days (SN Online: 2/2/16). Later observations suggested the star is also fading by about 4 percent per year (SN: 9/17/16, p. 12), which Meng’s team confirmed in a paper posted online August 24 at arXiv.org.
Joshua Simon of the Observatories of the Carnegie Institution for Science in Pasadena, Calif., found a similar dimming in data on Tabby’s star from the All Sky Automated Survey going back to 2006. Simon and colleagues also found for the first time that the star grew brighter in 2014, and possibly in 2006, they reported in a paper August 25 at arXiv.org.
“That’s fascinating,” says astrophysicist Tabetha Boyajian of Louisiana State University in Baton Rouge. She first reported the star’s flickers in 2015 (the star is nicknamed for her) and is a coauthor on Meng’s paper. “We always speculated that it would brighten sometime. It can’t just get fainter all the time — otherwise it would disappear. This shows that it does brighten.”
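Boyajian’s point — that the star can’t simply fade forever — is easy to put in rough numbers. A toy calculation, assuming the roughly 4 percent per year dimming held perfectly steady (it doesn’t; this is illustrative only):

```python
# Toy compounding model of a steady 4 percent per year fade (illustrative).
brightness = 1.0
for year in range(50):
    brightness *= 0.96  # lose 4 percent of current brightness each year
print(f"After 50 years: {brightness:.0%} of original brightness")  # ~13%
```

A star dimming at that pace would fall below any detectable brightness within centuries, which is why some brightening episodes are expected.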
The brightening could be due to a magnetic cycle like the sun’s, Simon suggests. But no known cycle makes a star brighten and dim by quite so much, so the star would still be odd. Brian Metzger of Columbia University previously suggested that a ripped-up planet falling in pieces into the star could explain both the long-term and short-term dimming. He thinks that model still works, although it needs some tweaks.
“This adds some intrigue to what’s going on, but I don’t think it really changes the landscape,” says Metzger, who was not involved in the new studies. And newer observations could complicate things further: The star went through another bout of dimming between May and July. “I’m waiting to see the papers analyzing this recent event,” Metzger says.
West German power companies have decided to go ahead with two nuclear power station projects…. Compared with the U.S. and Britain, Germany has been relatively backward in the application of nuclear energy…. The slow German start is only partly the result of restrictions placed upon German nuclear research after the war. — Science News, September 16, 1967
Update Both East and West Germany embraced nuclear power until antinuclear protests in the 1970s gathered steam. In 1998, the unified German government began a nuclear phaseout, which Chancellor Angela Merkel halted in 2009. The 2011 Fukushima nuclear disaster in Japan caused a rapid reversal. Germany closed eight of its nuclear plants immediately, and announced that all nuclear power in the country would go dark by 2022 (SN Online: 6/1/11). A pivot to renewable energy — wind, solar, hydropower and biomass — produced 188 billion kilowatt-hours of electricity in 2016, nearly 32 percent of German electricity usage.
After 20 years in space and 13 years orbiting Saturn, the veteran Cassini spacecraft spent its last 90 seconds or so firing its thrusters as hard as it could to keep sending Saturnian secrets back to Earth for as long as possible.
The spacecraft entered Saturn’s atmosphere at about 3:31 a.m. PDT on September 15 and immediately began running through all of its stabilizing procedures to try to keep itself upright. The signal that Cassini had reached its destination arrived at Earth at 4:54 a.m., and cut out about a minute later as the spacecraft lost its battle with Saturn’s atmosphere. “The signal from the spacecraft is gone, and within the next 45 seconds, so will be the spacecraft,” Cassini project manager Earl Maize announced from the mission control center at NASA’s Jet Propulsion Lab. “I hope you’re all as deeply proud of this amazing accomplishment. Congratulations to you all. This has been an incredible mission, an incredible spacecraft, and you’re all an incredible team. I’m going to call this the end of mission. Project manager, off the net.”
With that, the mission control team erupted in applause, hugs and some tears. It’s the end of an era. But the spacecraft’s last moments at Saturn will answer questions that couldn’t have been addressed any other way. Going out in a blaze of glory seems fitting.

Since its launch in 1997, the probe traveled a total of 7.9 billion kilometers. It gathered more than 635 gigabytes of science data and took more than 450,000 images. It completed 294 orbits of Saturn, discovered six named moons and made 162 close, deliberate flybys of the ringed planet’s largest and most interesting moons.

The last flyby sealed Cassini’s fate. On September 11, at 12:04 p.m., Cassini passed by Saturn’s largest moon Titan one last time (SN Online: 9/11/17). The moon’s gravity nudged Cassini onto an irretrievable trajectory into the giant planet’s atmosphere. Also blame the moons — particularly lake-dappled Titan and watery Enceladus — for why Cassini went out in such dramatic fashion. The mission team decided to sacrifice the spacecraft when it ran out of fuel, rather than risk colliding with one of those potentially habitable moons and contaminating it with any still-lingering earthly microbes.
“Because of planetary protection and our desire to go back to Enceladus, go back to Titan, go back to the Saturn system, we must protect those bodies for future exploration,” Jim Green, director of NASA’s planetary science division, said at a news conference on September 13.
Even in its months-long death spiral, Cassini collected unprecedented observations. Starting in April, the spacecraft made 22 dives through the unexplored region between Saturn and its rings. Measurements of the gravity and composition in that zone will help solve outstanding mysteries. How long is Saturn’s day? How much material is in the rings? When and how did the rings form?
To answer that last question in particular, “you have to fly between the planet and the rings,” says planetary scientist Matthew Hedman of the University of Idaho in Moscow, who uses Cassini data to study the rings. “That’s risky. We had to wait until the end of the mission to take that risk.” On September 13 and 14, Cassini took a last look at the Saturn system’s greatest hits, capturing a color mosaic image of Saturn and the rings, a movie sequence of Enceladus setting behind Saturn, and images of Titan and of the tiny moonlets in the rings that pull icy ring particles around themselves to form features called propellers.
Inside the mission control center on the afternoon of September 14, a hushed operations team waited for Cassini to come online for the last time to start sending the last pictures back (SN Online: 9/15/17). Then flight engineer Michael Staab at JPL suddenly broke the silence. “Yeah!” he yelled, pumping both arms in the air. Cassini’s last signal had just come in.
“That tells us that the spacecraft is nice and healthy, she’s doing just fine. She’s doing exactly what she’s supposed to do, like she’s done for 13 years,” Staab said. “We’re just gonna track her now, all the way in.” In the wee hours of September 15, the spacecraft reconfigured itself to shift from a recording device to a transmitting probe. As of that moment, its last and only job was to stream everything it could sense directly back to Earth in real time. Turning so that its ion neutral mass spectrometer was facing directly towards Saturn, Cassini could taste the atmosphere for the first time and investigate a phenomenon called “ring rain,” in which water and ice from the rings splash into the atmosphere. This idea was introduced in the early 1980s, but Cassini has already shown that it’s more complicated than previously thought.
“We’re trying to find out exactly what is coming from the rings and what is due to the atmosphere,” Hunter Waite, Cassini team lead for the mass spectrometer instrument and an atmospheric scientist at the Southwest Research Institute in San Antonio, said at the September 13 news conference. “That final plunge will allow us to do that.” That plunge happened at about 3:31 a.m., when Cassini entered the atmosphere about 10 degrees north of the equator, falling at around 34 kilometers per second. It took data constantly, directly measuring the temperature, magnetic field, plasma density and composition of the upper layers of Saturn’s atmosphere for the first time ever. When it hit the atmosphere, Cassini started firing its thrusters to keep its antenna pointed at Earth despite the forces of the atmosphere trying to knock it askew. But about a minute later, the atmosphere won, when Cassini was about 1,400 kilometers above the cloud tops.
What happened next, scientists can only imagine. Models suggest this fiery demise: The spacecraft attempted to stabilize itself, but to no avail. It started to tumble faster and faster. Atmospheric friction broke the spacecraft apart, bit by bit — first its thermal blankets burned off, then aluminum components started to melt. The spacecraft probably fell another 1,000 kilometers as it disintegrated like a meteor, Maize said.
Saturn’s atmosphere crushed and melted the bits and pieces, until they completely dissociated and became part of the very planet the spacecraft had been sent to observe.
When all was said and done, the spacecraft lasted about 30 seconds longer than expected. That may help ensure the team got enough data to figure out Saturn’s rotation period, science team member Michele Dougherty of Imperial College London said at a post-mission news conference September 15. “I’m hoping we can do it, I’m not going to promise. Ask me in three months’ time.”
There are no planned future missions to Saturn, although some Cassini alumni are already working on proposals. Outer solar system astronomers are now setting their sights on Jupiter and its icy, possibly life-friendly moons. The European Space Agency’s Jupiter Icy Moons Explorer (JUICE) and NASA’s Europa Clipper both hope to launch around 2022. Those missions may pave the way for a lander on Europa (SN Online: 2/18/17), which could directly look for life in that moon’s subsurface seas.
Planetary scientist Kevin Hand at JPL, who is leading the science definition team for the proposed Europa lander, feels a debt to Cassini.
“When you’re at the earliest frontiers of exploration, it’s hard to feel sad,” he said. “The wake we’re experiencing right now for Cassini, it’s not so much an end but the early steps that pave the way for the next stage of exploration.”
Even Maize is more proud than mourning.
“This is exactly the way we always planned it. It’s sad that we’re losing this incredible discovery machine,” he said in the moments leading up to Cassini’s disintegration. “But the real sense here is just, all right, we got it!”
Frog brains get busy long before they’re fully formed. Just a day after fertilization, embryonic brains begin sending signals to far-off places in the body, helping oversee the layout of complex patterns of muscles and nerve fibers. And when the brain is missing, bodily chaos ensues, researchers report online September 25 in Nature Communications.
The results, from brainless embryos and tadpoles, broaden scientists’ understanding of the types of signals involved in making sure bodies develop correctly, says developmental biologist Catherine McCusker of the University of Massachusetts Boston. Scientists are familiar with short-range signals among nearby cells that help pattern bodies. But because these newly described missives travel all the way from the brain to the far reaches of the body, they are “the first example of really long-range signals,” she says.

Celia Herrera-Rincon of Tufts University in Medford, Mass., and colleagues came up with a simple approach to tease out the brain’s influence on the growing body. Just one day after fertilization, the scientists lopped off the still-forming brains of African clawed frog embryos. These embryos survive to become tadpoles even without brains, a quirk of biology that allowed the researchers to see whether the brain is required for the body’s development. The answer was a definite — and surprising — yes, Herrera-Rincon says. Long before the brain is mature, it’s already organizing and guiding organ behavior, she says.

Brainless tadpoles had bungled patterns of muscles. Normally, muscle fibers form a stacked chevron pattern. But in tadpoles lacking a brain, this pattern didn’t form correctly. “The borders between segments are all wonky,” says study coauthor Michael Levin, also of Tufts University. “They can’t keep a straight line.” Nerve fibers that crisscross tadpoles’ bodies also grew in an abnormal pattern. Levin and colleagues noticed extra nerve fibers snaking across the brainless tadpoles in a chaotic pattern, “a nerve network that shouldn’t be there,” he says.
Muscle and nerve abnormalities are the most obvious differences. But brainless tadpoles probably have more subtle defects in other parts of their bodies, such as the heart. The search for those defects is the subject of ongoing experiments, Levin says. In addition to keeping patterns on point, the young frog brain may protect its body from chemical assaults. A molecule that binds to certain proteins on cells in the body had no effect on normal embryos. But when given to brainless embryos, the same molecule caused their spinal cords and tails to grow crooked. These results suggest that early in development, brains keep embryos safe from agents that would otherwise cause harm.
“The brain is instructing cells that are really a long way away from it,” Levin says. While the precise identities of these long-range signals aren’t known, the researchers have some ideas. When brainless embryos were dosed with a drug that targets cells that typically respond to the chemical messenger acetylcholine, the muscle pattern improved. Similarly, the addition of a protein called HCN2 that can tweak the activity of cells also seemed to improve muscle development. More work is needed before scientists know whether these interventions are actually mimicking messaging from the early brain, and if so, how.
Frog development isn’t the same as mammalian development, but frog development “is pretty applicable to human biology,” McCusker says. In fundamental ways, humans and frogs are built from the same molecular toolbox, she says. So the results hint that a growing human brain might also interact similarly with a growing human body.
As far as last meals go, squid isn’t a bad choice. Cephalopod remains appear to dominate the stomach contents of a newly analyzed ichthyosaur fossil from nearly 200 million years ago.
The ancient marine reptiles once roamed Jurassic seas and commonly pop up along England’s fossil-rich coast near Lyme Regis. But many ichthyosaur museum specimens lack records of where they came from, making their age difficult to place.
Dean Lomax of the University of Manchester and his colleagues reexamined one such fossil. Based on its skull, they identified the creature as a newborn Ichthyosaurus communis. Microfossils of shrimp and amoeba species around the ichthyosaur put the specimen at 199 million to 196 million years old, the researchers estimate.
Tiny hook structures stand out in the newborn’s ribs — most likely the remnants of prehistoric black squid arms. Another baby ichthyosaur that lived more recently left a fossil with a stomach full of fish scales. So the new find suggests a shift in the menu for young ichthyosaurs at some point in their evolutionary history, the researchers write October 3 in Historical Biology.