Einstein’s light-bending by single far-off star detected

For the first time, astronomers have seen a star outside of the solar system bend the light from another star. The measurement, reported June 7 in Austin, Texas, at a meeting of the American Astronomical Society, vindicates Einstein's most famous theory and confirms predictions about the inner workings of stellar corpses.

Astronomers using the Hubble Space Telescope watched as a white dwarf passed in front of a more distant star. That star seemed to move in a small loop, its apparent position deflected by the white dwarf’s gravity.
More than a century ago, Albert Einstein predicted that the way spacetime bends around a massive object — the sun, say — should shift the apparent position of stars that appear behind that object. The measurement of this effect during a solar eclipse in 1919 confirmed Einstein’s general theory of relativity: Mass warps spacetime and bends the path of light rays (SN: 10/17/15, p. 16).

The New York Times hailed it as "one of the greatest — perhaps the greatest — of achievements in the history of human thought." But even Einstein doubted the light-bending effect could ever be detected for stars beyond the sun.

Now, in a study published in the June 9 issue of Science, Kailash Sahu of the Space Telescope Science Institute in Baltimore and his colleagues have shown that it can.

“This is an elegant outcome,” says Terry Oswalt at Embry-Riddle Aeronautical University in Daytona Beach, Fla., who was not involved in the new work. “Einstein would be very proud.”
While the stars literally aligned to make the measurement possible, this was no lucky accident. Sahu and colleagues scoured a catalog of 5,000 stellar motions to find a pair of stars likely to pass close enough on the sky that Hubble could sense the shift.

There were a few possible candidates, and one of them, called Stein 2051 B, was already a mysterious character.

Located about 18 light-years from Earth, Stein 2051 B is a white dwarf, a common end-of-life state for a sunlike star. When low-mass stars run out of fuel, they puff up into a red giant while fusing helium into carbon and oxygen. Eventually, they slough off outer layers of gas, leaving this carbon-oxygen core — the white dwarf — behind. About 97 percent of the stars in the Milky Way, including the sun, are or someday will be white dwarfs.

White dwarfs are extremely dense. They are prevented from collapsing into a black hole only by the pressure their electrons produce in trying not to be in the same quantum state as each other. This bizarre situation sets strict limits on their sizes and masses: For a given radius, a white dwarf can be only so massive, and only so large for a given mass.

This mass-radius relation was laid out in Nobel prize‒winning work by Subrahmanyan Chandrasekhar in the 1930s, but it has been difficult to prove. The only white dwarfs weighed so far share their orbits with other stars whose mutual motions help astronomers calculate their masses. But some astronomers worry that those companions could have added mass to the white dwarfs, throwing off this precise relationship.
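The inverse trend at the heart of that relation can be illustrated with the non-relativistic degenerate-gas scaling, R ∝ M^(-1/3). The sketch below is only a rough illustration; the 0.01-solar-radius normalization at one solar mass is an assumed ballpark figure, not a value from the study.

```python
# Illustrative sketch of the white dwarf mass-radius scaling R ~ M^(-1/3).
# The normalization r0 (~0.01 solar radii at 1 solar mass) is a rough
# assumption for a carbon-oxygen dwarf, not a measured value.
def wd_radius_solar(mass_solar, r0=0.010):
    """Approximate white dwarf radius (in solar radii) for a given mass
    (in solar masses), using the non-relativistic degeneracy scaling."""
    return r0 * mass_solar ** (-1.0 / 3.0)

# The counterintuitive consequence: adding mass SHRINKS a white dwarf.
print(wd_radius_solar(0.5))   # lighter dwarf -> larger radius
print(wd_radius_solar(1.0))   # heavier dwarf -> smaller radius
```

The scaling is why "for a given radius, a white dwarf can be only so massive": pile on mass and the star contracts, until near the Chandrasekhar limit no stable radius exists at all.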

Stein 2051 B also has a companion, but it is so far away that the two stars almost certainly evolved independently. That distance also means it would take hundreds of years to precisely measure the white dwarf's mass. The best efforts to find a rough mass so far created a conundrum: Stein 2051 B appeared to be much lighter than expected. Explaining such a low mass would require an exotic iron core.

Measuring the shift of a background star provides a way to measure the white dwarf’s mass directly. The more massive the foreground star — in this case, the white dwarf — the greater the deflection of light from the background star.

“This is the most direct method of measuring the mass,” Sahu says. “It’s almost like putting somebody on a scale and reading off their weight.”

The white dwarf was scheduled to pass near a background star on March 5, 2014. Sahu’s team made eight observations of the two stars’ positions between October 2013 and October 2015.

The team found that the background star appeared to move in a small ellipse as the white dwarf approached and then moved away from it, exactly as predicted by Einstein's equations. That deflection implies the white dwarf's mass is 0.675 times the mass of the sun — well within the normal range for its size.
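The connection between deflection and mass comes from general relativity's bending-angle formula, α = 4GM/(c²b). The sketch below plugs in the reported mass; the ~0.1-arcsecond closest approach used to set the impact parameter is an assumed illustrative figure, not a number from the paper, and the shift Hubble actually measures is a fraction of this grazing angle, set by the lens-source geometry.

```python
import math

# Sketch: gravitational bending angle alpha = 4GM / (c^2 b).
# Mass from the article (0.675 solar masses); an assumed ~0.1-arcsecond
# closest approach at the 18 light-year distance sets the impact
# parameter b. Illustrative scale estimate only.
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8                      # speed of light, m/s
M_SUN = 1.989e30                 # solar mass, kg
LY = 9.461e15                    # light-year, m
ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

mass = 0.675 * M_SUN
distance = 18 * LY
b = distance * 0.1 * ARCSEC      # impact parameter in meters (~0.55 AU)

alpha = 4 * G * mass / (C**2 * b)    # bending angle, radians
alpha_mas = alpha / ARCSEC * 1000    # in milliarcseconds
print(f"bending angle ~ {alpha_mas:.1f} mas")
```

Even this grazing-incidence angle is a few thousandths of an arcsecond — the scale that explains why only Hubble-class astrometry could catch the shift.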

This first measurement won’t be the last, Oswalt says. Several new star surveys are coming online in the next few years that will track the motions of billions of stars at once. That means that even though light-bending alignments are rare, astronomers should catch several more soon.

Live antibiotics use bacteria to kill bacteria

The woman in her 70s was in trouble. What started as a broken leg led to an infection in her hip that hung on for two years and several hospital stays. At a Nevada hospital, doctors gave the woman seven different antibiotics, one after the other. The drugs did little to help her. Lab results showed that none of the 14 antibiotics available at the hospital could fight the infection, caused by the bacterium Klebsiella pneumoniae.

Epidemiologist Lei Chen of the Washoe County Health District sent a bacterial sample to the U.S. Centers for Disease Control and Prevention. The bacteria, CDC scientists found, produced a nasty enzyme called New Delhi metallo-beta-lactamase, known for disabling many antibiotics. The enzyme was first seen in a patient from India, which is where the Nevada woman broke her leg and received treatment before returning to the United States.
The enzyme is worrisome because it arms bacteria against carbapenems, a group of last-resort antibiotics, says Alexander Kallen, a CDC medical epidemiologist based in Atlanta, who calls the drugs “our biggest guns for our sickest patients.”

The CDC’s final report revealed startling news: The bacteria raging in the woman’s body were resistant to all 26 antibiotics available in the United States. She died from septic shock; the infection shut down her organs.

Kallen estimates that there have been fewer than 10 cases of completely resistant bacterial infections in the United States. Such absolute resistance to all available drugs, though incredibly rare, was a "nightmare scenario," says Daniel Kadouri, a microbiologist at Rutgers School of Dental Medicine in Newark, N.J.

Antibiotic-resistant bacteria infect more than 2 million people in the United States every year, and at least 23,000 die, according to 2013 data, the most recent available from the CDC.

It’s time to flip the nightmare scenario and send a killer after the killer bacteria, say a handful of scientists with a new approach for fighting infection. The strategy, referred to as a “living antibiotic,” would pit one group of bacteria — given as a drug and dubbed “the predators” — against the bacteria that are wreaking havoc among humans.
The approach sounds extreme, but it might be necessary. Antimicrobial resistance “is something that we really, really have to take seriously,” says Elizabeth Tayler, senior technical officer for antimicrobial resistance at the World Health Organization in Geneva. “The ability of future generations to manage infection is at risk. It’s a global problem.”

The number of resistant strains has exploded, in part because doctors prescribe antibiotics too often. At least 30 percent of antibiotic prescriptions in the United States are not necessary, according to the CDC. When more people are exposed to more antibiotics, resistance is likely to build faster. And new alternatives are scarce, Kallen says, as the pace of developing novel antibiotics has slowed.

In search of new ideas, DARPA, a Department of Defense agency that invests in breakthrough technologies, is supporting work on predatory bacteria by Kadouri, as well as Robert Mitchell of Ulsan National Institute of Science and Technology in South Korea, Liz Sockett of the University of Nottingham in England and Edouard Jurkevitch of the Hebrew University of Jerusalem. This work, the agency says, represents “a significant departure from conventional antibiotic therapies.”

The approach is so unusual, people have called Kadouri and his lab crazy. “Probably, we are,” he jokes.

A movie-worthy killer
The notion of predatory bacteria sounds a bit scary, especially when Kadouri likens the most thoroughly studied of the predators, Bdellovibrio bacteriovorus, to the vicious space creatures in the Alien movies.

B. bacteriovorus, called gram-negative because of how it is stained for microscope viewing, dines on other gram-negative bacteria. All gram-negative bacteria have both an inner and an outer membrane. The predators don't go after the other main type of bacteria, gram-positives, which have just one membrane.
When it encounters a gram-negative bacterium, the predator appears to latch on with grappling hook–like appendages. Then, like a classic cat burglar cutting a hole in glass, B. bacteriovorus forces its way through the outer membrane and seems to seal the hole behind it. Once within the space between the outer and inner membranes, the predator secretes enzymes — as damaging as the movie aliens’ acid spit — that chew its prey’s nutrients and DNA into bite-sized pieces.

B. bacteriovorus then uses the broken-down genetic building blocks to make its own DNA and begin replicating. The invader and its progeny eventually emerge from the shell of the prey in a way reminiscent of a cinematic chest-bursting scene.

“It’s a very efficient killing machine,” Kadouri says. That’s good news because many of the most dangerous pathogens that are resistant to antibiotics are gram-negative (SN: 6/10/17, p. 8), according to a list released by the WHO in February.

It’s the predator’s hunger for the bad-guy bacteria, the ones that current drugs have become useless against, that Kadouri and other researchers hope to harness.

Pitting predatory against pathogenic bacteria sounds risky. But, from what researchers can tell, these killer bacteria appear safe. “We know that [B. bacteriovorus] doesn’t target mammalian cells,” Kadouri says.

Saving the see-through fish
To find out whether enlisting predatory bacteria might be crazy good and not just plain crazy, Kadouri’s lab group tested B. bacteriovorus’ killing ability against an array of bacteria in lab dishes in 2010. The microbe significantly reduced levels of 68 of the 83 bacteria tested.

Since then, Kadouri and others have looked at the predator’s ability to devour dangerous pathogens in animals. In rats and chickens, B. bacteriovorus reduced the number of bad bacteria. But the animals were always given nonlethal doses of pathogens, leaving open the question of whether the predator could save the animals’ lives.

Sockett needed to see evidence of survival improvement. “If we’re going to have Bdellovibrio as a medicine, we have to cure something,” she says. “We can count changes in numbers of bacteria, but if that doesn’t change the outcome of the infection — change the number of [animals] that die — it’s not worth it.”

So she teamed up with cell biologist Serge Mostowy of Imperial College London for a study in zebrafish. The aim was to see how many animals predatory bacteria could save from a deadly infection. The team also tested how the host’s immune system interacted with the predators.

The researchers gave zebrafish larvae fatal doses of an antibiotic-resistant strain of Shigella flexneri, which causes dysentery in humans. Before infecting the fish, the researchers divided them into four groups. Two groups had their immune systems altered to produce fewer macrophages, the white blood cells that attack pathogens. Immune systems in the other two groups remained intact. B. bacteriovorus was injected into an unchanged group and a macrophage-deficient group, while two groups received no treatment.

All of the untreated fish with fewer macrophages died within 72 hours of receiving S. flexneri, the researchers reported in December in Current Biology. Of the fish with a normal immune system, 65 percent that received predator treatment survived compared with 35 percent with no predator treatment. Even in the fish with impaired immune systems, the predators saved about a quarter of the lot.
“This is the first time that Bdellovibrio has ever been used as an injected therapy in live organisms,” Sockett says. “And the important thing is the injection improved the survival of the zebrafish.”

The study also pulled off another first. In previous work, researchers had been unable to see predation as it happened within an animal. Because zebrafish larvae are transparent, study coauthor Alexandra Willis captured images of B. bacteriovorus gobbling up S. flexneri.

“We were literally having to run to the microscope because the process was just happening so fast,” says Willis, a graduate student in Mostowy’s lab. After the predator invades, its rod-shaped prey become round. Willis saw Bdellovibrio “rounding” its prey within 15 minutes. From start to finish, the predatory cycle took about three to four hours.

The predator’s speed may be what gave it the edge over the infection, Mostowy says. B. bacteriovorus attacks fast, chipping away at the pathogens until the infection is reduced to a level that the immune system can handle. “Otherwise there are too many bacteria and the immune system would be overwhelmed,” he says. “We’re putting a shocking amount of Shigella, 50,000 bacteria, into the fish.”

Within 48 hours, S. flexneri levels dropped 98 percent in the surviving fish, from 50,000 to 1,000.

The immune cells also cleared nearly all the B. bacteriovorus predators from the fish. The predators had enough time to attack the infection before being targeted by the immune system themselves, creating an ideal treatment window. Even if the host’s immune system hadn’t attacked the predators, once the bacteria are gone, Willis says, the predators are out of food. Unable to replicate, they eventually die off.

A clean sweep
Predatory bacteria are efficient in more ways than one. They’re not just good killers — they eliminate the evidence too.

Typical antibiotic treatments don’t target a bacterium’s DNA, so they are likely to leave pieces of the bacterial body behind. That’s like killing a few bandits, but leaving their weapons so the next invaders can easily arm themselves for a new attack. This could be one way that multidrug resistance evolves, Mitchell says. For example, penicillin will kill all bacteria that aren’t resistant to the drug. The surviving bacteria can swim through the aftermath of the antibiotic attack and grab genes from their fallen comrades to incorporate into their own genomes. The destroyed bacteria may have had a resistance gene to a different antibiotic, say, vancomycin. Now you have bacteria that are resistant to both penicillin and vancomycin. Not good.

Predatory bacteria, on the other hand, “decimate the genome” of their prey, Mitchell says. They don’t just kill the bandit, they melt down all the DNA weapons so no pathogens can use them. In one experiment that has yet to be published, B. bacteriovorus almost completely ate up the genetic material of a bacterial colony within two hours — showing itself as a fast-acting predator that could prevent bacterial genes from falling into the wrong hands.

On top of that, even if pathogenic bacteria mutate, a common way they pick up new forms of resistance, they aren't protected from predation. Resistance to predation hasn't been reported in lab experiments since B. bacteriovorus was discovered in 1962, Mitchell says. Researchers don't think there's a single pathway or gene in a prey bacterium that the predator targets. Instead, B. bacteriovorus seems to use sheer force to break in. "It's kind of like cracking an egg with a hammer," Kadouri says. That's not exactly something bacteria can mutate to protect themselves against.

Some bacteria manage to band together and cover themselves with a kind of built-in biological shield, which offers protection against antibiotics. But for predatory bacteria, the shield is more of a welcome mat.

Going after the gram-positives
When bacteria cluster together on a surface, whether in your body, on a countertop or on a medical instrument, they can form a biofilm. The thick, slimy shield helps microbes withstand antibiotic attacks because the drugs have difficulty penetrating the slime. Antibiotics usually act on fast-growing bacteria, but within a biofilm, bacteria are sluggish and dormant, making antibiotics less effective, Kadouri says.
But to predatory bacteria, a biofilm is like Jell-O — a tasty snack that’s easy to swallow. Once inside, B. bacteriovorus spreads like wildfire because its prey are now huddled together as confined targets. “It’s like putting zebras and a lion in a restaurant and closing the door and seeing what happens,” Kadouri says. For the zebras, “it can’t end well.”

Kadouri's lab has shown repeatedly that predatory bacteria effectively eat away biofilms that protect gram-negative bacteria — and are in fact even more efficient at killing bacteria within those biofilms than outside them.

Gram-positive bacteria cloak themselves in biofilms too. In 2014 in Scientific Reports, Mitchell and his team reported finding a way to use Bdellovibrio to weaken gram-positive bacteria, turning their protective shield against them and perhaps helping antibiotics do their job.

The discovery comes from studies of one naturally occurring B. bacteriovorus mutant with extra-scary spit. The mutant isn’t predatory. Instead of eating a prey’s DNA to make its own, it can grow and replicate like a normal bacterial colony. As it grows, it produces especially destructive enzymes. Among the mix of enzymes are proteases, which break down proteins.

Mitchell and his team tested the strength of the mutant’s secretions against the gram-positive Staphylococcus aureus. A cocktail of the enzymes applied to an S. aureus biofilm degraded the slime shield and reduced the bacterium’s virulence. Biofilms can make bacteria up to 1,000 times more resistant to antibiotics, Mitchell says. The next step, he adds, is to see if degrading a biofilm resensitizes a gram-positive bacterium to antibiotics.

Mitchell and his team also treated S. aureus cells that didn't have a biofilm with the mutant's enzyme mix and then exposed them to human cells. Eighty percent of the bacteria were no longer able to invade human cells, Mitchell says. The "acid spit" chewed up surface proteins that the pathogen uses to attach to and invade human cells. The enzymes didn't kill the bacteria but did make them less virulent.

No downsides yet
Predatory bacteria can efficiently eat other gram-negative bacteria, munch through biofilms and even save zebrafish from the jaws of an infectious death. But are they safe? Kadouri and the other researchers have done many studies, though none in humans yet, to try to answer that question.
In a 2016 study published in Scientific Reports, Kadouri and colleagues applied B. bacteriovorus to the eyes of rabbits and compared the effect with that of a common antibiotic eye drop, vancomycin. The vancomycin visibly inflamed the eyes, while the predatory bacteria had little to no effect. The eyes treated with predatory bacteria were indistinguishable from eyes treated with a saline solution, used as the control treatment. Other studies looking for potential toxic effects of B. bacteriovorus have so far found none.

In 2011, Sockett’s team gave chickens an oral dose of predatory bacteria. At 28 days, the researchers saw no difference in health between treated and untreated chickens. The makeup of the birds’ gut bacteria was altered, but not in a way that was harmful, she and her team reported in Applied and Environmental Microbiology.

Kadouri analyzed rats’ gut microbes after a treatment of predatory bacteria, reporting the results in a study published March 6 in Scientific Reports. Here too, the rodents’ guts showed little to no inflammation. When they sequenced the bacterial contents of the rats’ feces, the researchers saw small differences between the treated and untreated rats. But none of the changes appeared harmful, and the animals grew and acted normally.

If the rats had taken common antibiotics, it would have been a different story, Kadouri points out. Those drugs would have given the animals diarrhea, reduced their appetites and altered their gut flora in a big way. "When you take antibiotics, you're basically throwing an atomic bomb" into your gut, Kadouri says. "You're wiping everything out."
Both Mitchell and Kadouri tested B. bacteriovorus on human cells and found that the predatory bacteria didn't harm the cells or prompt an immune response. The researchers separately reported their findings in late 2016 in Scientific Reports and PLOS ONE.
Microbiologist Elizabeth Emmert of Salisbury University in Maryland studies B. bacteriovorus as a means to protect crops — carrots and potatoes — from bacterial soft rot diseases. For humans, she calls the microbes a "promising" therapy for bacterial infections. "It seems most feasible as a topical treatment for wounds, since it would not have to survive passage through the digestive tract."

There are plenty of questions that need answering first. Mitchell guesses that there will probably be 10 more years of rigorous testing in animals before moving on to human clinical studies. But pursuing these alternatives is worth the effort.

“The drugs that we’re taking are not benign and cuddly and nice,” Kadouri says. “We need them, but they don’t come without side effects.” Even though a living antibiotic sounds a bit crazy, it might be the best option in this dangerous era of antibiotic resistance.

Kepler shows small exoplanets are either super-Earths or mini-Neptunes

Small worlds come in two flavors. The complete dataset from the original mission of the planet-hunting Kepler space telescope reveals a split in the exoplanet family tree, setting super-Earths apart from mini-Neptunes.

Kepler’s final exoplanet catalog, released in a news conference June 19, now consists of 4,034 exoplanet candidates. Of those, 49 are rocky worlds in their stars’ habitable zones, including 10 newly discovered ones. So far, 2,335 candidates have been confirmed as planets and they include about 30 temperate, terrestrial worlds.
Careful measurements of the candidates' stars revealed a surprising gap between planets about 1.5 and two times the size of Earth, Benjamin Fulton of the University of Hawaii at Manoa and Caltech and his colleagues found. A few planets fall within the gap, but most lie on either side of it.

That splits the population of small planets into those that are rocky like Earth — 1.5 Earth radii or less — and those that are gassy like Neptune, between 2 and 3.5 Earth radii.
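A toy sketch of that split, using the approximate cutoff radii quoted above (the boundaries are rounded values from the reported analysis, and the category labels are just for illustration):

```python
def classify_small_planet(radius_earth):
    """Sort a small exoplanet by radius (in Earth radii) using the
    approximate boundaries of the radius gap described in the article."""
    if radius_earth <= 1.5:
        return "super-Earth (rocky)"
    if radius_earth < 2.0:
        return "gap (rare)"
    if radius_earth <= 3.5:
        return "mini-Neptune (gassy)"
    return "larger planet"

print(classify_small_planet(1.2))   # rocky side of the gap
print(classify_small_planet(1.7))   # the sparsely populated gap
print(classify_small_planet(2.4))   # gassy side of the gap
```

The surprise in the Kepler data is how few planets the middle branch holds: sizes in that narrow range are genuinely scarce, not just undersampled.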

“This is a major new division in the family tree of exoplanets, somewhat analogous to the discovery that mammals and lizards are separate branches on the tree of life,” Fulton said.

The Kepler space telescope launched in 2009 and stared at a single patch of sky in the constellation Cygnus for four years. (Its stabilizing reaction wheels later broke, and it began a new mission called K2 (SN Online: 5/15/13).) Kepler watched sunlike stars for telltale dips in brightness that would reveal a passing planet. Its ultimate goal was to come up with a single number: the fraction of stars like the sun that host planets like Earth.
The Kepler team has still not calculated that number, but astronomers are confident that they have enough data to do so, said Susan Thompson of the SETI Institute in Mountain View, Calif. She presented the results during the Kepler/K2 Science Conference IV being held at NASA’s Ames Research Center in Moffett Field, Calif.

Thompson and her colleagues ran the Kepler dataset through “Robovetter” software, which acted like a sieve to catch all the potential planets it contained. Running fake planet data through the software pinpointed how likely it was to confuse other signals for a planet or miss true planets.

“This is the first time we have a population that’s really well-characterized so we can do a statistical study and understand Earth analogs out there,” Thompson said.

Astronomers’ knowledge of these planets is only as good as their knowledge of their stars. So Fulton and his colleagues used the Keck telescope in Hawaii to precisely measure the sizes of 1,300 planet-hosting stars in the Kepler field of view. Those sizes in turn helped pin down the sizes of the planets with four times more precision than before.

The split in planet types they found could come from small differences in the planets’ sizes, compositions and distances from their stars. Young stars blow powerful winds of charged particles, which can blowtorch a growing planet’s atmosphere away. If a planet was too close to its star or too small to have a thick atmosphere — less than 75 percent larger than Earth — it would lose its atmosphere and end up in the smaller group. The planets that look more like Neptune today either had more gas to begin with or grew up in a gentler environment, Fulton said.

That divergence could have implications for the abundance of life in the galaxy. The surfaces of mini-Neptunes — if they exist — would suffer under the crushing pressure of such a thick atmosphere.

“These would not be nice places to live,” Fulton said. “Our result sharpens up the dividing line between potentially habitable planets and those that are inhospitable.”

Upcoming missions, like the Transiting Exoplanet Survey Satellite due to launch in 2018, will fill in the details of the exoplanet landscape with more observations of planets around bright stars. Later, telescopes like the James Webb Space Telescope, also scheduled to launch in 2018, will be able to check the atmospheres of those planets for signs of life.

“We can now really ask the question, ‘Is our planetary system unique in the galaxy?’” exoplanet astronomer Courtney Dressing of Caltech says. “My guess is the answer’s no. We’re not that special.”

Quantum computers are about to get real

Although the term “quantum computer” might suggest a miniature, sleek device, the latest incarnations are a far cry from anything available in the Apple Store. In a laboratory just 60 kilometers north of New York City, scientists are running a fledgling quantum computer through its paces — and the whole package looks like something that might be found in a dark corner of a basement. The cooling system that envelops the computer is about the size and shape of a household water heater.

Beneath that clunky exterior sits the heart of the computer, the quantum processor, a tiny, precisely engineered chip about a centimeter on each side. Chilled to temperatures just above absolute zero, the computer — made by IBM and housed at the company’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. — comprises 16 quantum bits, or qubits, enough for only simple calculations.

If this computer can be scaled up, though, it could transcend current limits of computation. Computers based on the physics of the supersmall can solve puzzles no other computer can — at least in theory — because quantum entities behave unlike anything in a larger realm.

Quantum computers aren’t putting standard computers to shame just yet. The most advanced computers are working with fewer than two dozen qubits. But teams from industry and academia are working on expanding their own versions of quantum computers to 50 or 100 qubits, enough to perform certain calculations that the most powerful supercomputers can’t pull off.
The race is on to reach that milestone, known as “quantum supremacy.” Scientists should meet this goal within a couple of years, says quantum physicist David Schuster of the University of Chicago. “There’s no reason that I see that it won’t work.”
But supremacy is only an initial step, a symbolic marker akin to sticking a flagpole into the ground of an unexplored landscape. The first tasks where quantum computers prevail will be contrived problems set up to be difficult for a standard computer but easy for a quantum one. Eventually, the hope is, the computers will become prized tools of scientists and businesses.

Attention-getting ideas
Some of the first useful problems quantum computers will probably tackle will be to simulate small molecules or chemical reactions. From there, the computers could go on to speed the search for new drugs or kick-start the development of energy-saving catalysts to accelerate chemical reactions. To find the best material for a particular job, quantum computers could search through millions of possibilities to pinpoint the ideal choice, for example, ultrastrong polymers for use in airplane wings. Advertisers could use a quantum algorithm to improve their product recommendations — dishing out an ad for that new cell phone just when you’re on the verge of purchasing one.

Quantum computers could provide a boost to machine learning, too, allowing for nearly flawless handwriting recognition or helping self-driving cars assess the flood of data pouring in from their sensors to swerve away from a child running into the street. And scientists might use quantum computers to explore exotic realms of physics, simulating what might happen deep inside a black hole, for example.

But quantum computers won’t reach their real potential — which will require harnessing the power of millions of qubits — for more than a decade. Exactly what possibilities exist for the long-term future of quantum computers is still up in the air.

The outlook is similar to the patchy vision that surrounded the development of standard computers — which quantum scientists refer to as “classical” computers — in the middle of the 20th century. When they began to tinker with electronic computers, scientists couldn’t fathom all of the eventual applications; they just knew the machines possessed great power. From that initial promise, classical computers have become indispensable in science and business, dominating daily life, with handheld smartphones becoming constant companions (SN: 4/1/17, p. 18).
Since the 1980s, when the idea of a quantum computer first attracted interest, progress has come in fits and starts. Without the ability to create real quantum computers, the work remained theoretical, and it wasn’t clear when — or if — quantum computations would be achievable. Now, with the small quantum computers at hand, and new developments coming swiftly, scientists and corporations are preparing for a new technology that finally seems within reach.

“Companies are really paying attention,” Microsoft’s Krysta Svore said March 13 in New Orleans during a packed session at a meeting of the American Physical Society. Enthusiastic physicists filled the room and huddled at the doorways, straining to hear as she spoke. Svore and her team are exploring what these nascent quantum computers might eventually be capable of. “We’re very excited about the potential to really revolutionize … what we can compute.”

Anatomy of a qubit
Quantum computing's promise is rooted in quantum mechanics, the counterintuitive physics that governs tiny entities such as atoms, electrons and molecules. The basic element of a quantum computer is the qubit (pronounced "CUE-bit"). Unlike a standard computer bit, which can take on a value of 0 or 1, a qubit can be 0, 1 or a combination of the two — a sort of purgatory between 0 and 1 known as a quantum superposition. When a qubit is measured, there's some chance of getting 0 and some chance of getting 1. But before it's measured, it's both 0 and 1.

Because qubits can represent 0 and 1 simultaneously, they can encode a wealth of information. In computations, both possibilities — 0 and 1 — are operated on at the same time, allowing for a sort of parallel computation that speeds up solutions.

Another qubit quirk: Their properties can be intertwined through the quantum phenomenon of entanglement (SN: 4/29/17, p. 8). A measurement of one qubit in an entangled pair instantly reveals the value of its partner, even if they are far apart — what Albert Einstein called “spooky action at a distance.”
Such weird quantum properties can make for superefficient calculations. But the approach won’t speed up solutions for every problem thrown at it. Quantum calculators are particularly suited to certain types of puzzles, the kind for which correct answers can be selected by a process called quantum interference. Through quantum interference, the correct answer is amplified while others are canceled out, like sets of ripples meeting one another in a lake, causing some peaks to become larger and others to disappear.
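The ripples-in-a-lake analogy can be seen in miniature by adding complex amplitudes, the bookkeeping quantum mechanics uses for interference (a toy illustration, not any real algorithm):

```python
import cmath

# Two computational "paths" to the same answer each carry a complex
# amplitude; the probability comes from |sum of amplitudes|^2.
in_phase = cmath.exp(0j)                 # phase 0
also_in_phase = cmath.exp(0j)            # phase 0 -> peaks align
out_of_phase = cmath.exp(1j * cmath.pi)  # phase pi -> peaks cancel

amplified = abs(in_phase + also_in_phase) ** 2   # constructive: 4.0
canceled = abs(in_phase + out_of_phase) ** 2     # destructive: ~0
print(amplified, canceled)
```

A quantum algorithm is engineered so that paths to wrong answers arrive out of phase and cancel, while paths to the right answer arrive in phase and reinforce.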

One of the most famous potential uses for quantum computers is breaking up large integers into their prime factors. For classical computers, this task is so difficult that credit card data and other sensitive information are secured via encryption based on factoring numbers. Eventually, a large enough quantum computer could break this type of encryption, factoring numbers that would take millions of years for a classical computer to crack.
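Why factoring protects secrets: even a straightforward classical method scales badly. A rough sketch of my own (real encryption-breaking targets are numbers hundreds of digits long, where even the best classical algorithms bog down):

```python
def factor(n):
    """Trial division: the work grows roughly with the square root
    of n, so a number with hundreds of digits is hopelessly out of
    reach -- which is what keeps factoring-based encryption safe
    from classical machines."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(15))          # [3, 5]
print(factor(2**31 - 1))   # [2147483647], a prime: nothing to split
```

A quantum computer running Shor’s algorithm would attack the same problem in a fundamentally different way, with a running time that grows only polynomially in the number of digits.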

Quantum computers also promise to speed up searches, using qubits to more efficiently pick out an information needle in a data haystack.

Qubits can be made using a variety of materials, including ions, silicon or superconductors, which conduct electricity without resistance. Unfortunately, none of these technologies allow for a computer that will fit easily on a desktop. Though the computer chips themselves are tiny, they depend on large cooling systems, vacuum chambers or other bulky equipment to maintain the delicate quantum properties of the qubits. Quantum computers will probably be confined to specialized laboratories for the foreseeable future, to be accessed remotely via the internet.

Going supreme
That vision of Web-connected quantum computers has already begun to materialize. In 2016, IBM unveiled the Quantum Experience, a quantum computer that anyone around the world can access online for free.

With only five qubits, the Quantum Experience is “limited in what you can do,” says Jerry Chow, who manages IBM’s experimental quantum computing group. (IBM’s 16-qubit computer is in beta testing, so Quantum Experience users are just beginning to get their hands on it.) Despite its limitations, the Quantum Experience has allowed scientists, computer programmers and the public to become familiar with programming quantum computers — which follow different rules than standard computers and therefore require new ways of thinking about problems. “Quantum computing is exciting. It’s coming, and we want a lot more people to be well-versed in it,” Chow says. “That’ll make the development and the advancement even faster.”

But to fully jump-start quantum computing, scientists will need to prove that their machines can outperform the best standard computers. “This step is important to convince the community that you’re building an actual quantum computer,” says quantum physicist Simon Devitt of Macquarie University in Sydney. A demonstration of such quantum supremacy could come by the end of the year or in 2018, Devitt predicts.

Researchers from Google set out a strategy to demonstrate quantum supremacy, posted online at arXiv.org in 2016. They proposed an algorithm that, if run on a large enough quantum computer, would produce results that couldn’t be replicated by the world’s most powerful supercomputers.

The method involves performing random operations on the qubits, and measuring the distribution of answers that are spit out. Getting the same distribution on a classical supercomputer would require simulating the complex inner workings of a quantum computer. Simulating a quantum computer with more than about 45 qubits becomes unmanageable. Supercomputers haven’t been able to reach these quantum wilds.
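The roughly 45-qubit ceiling comes from memory: simulating n qubits classically means storing 2^n complex amplitudes. A back-of-envelope estimate (the 16 bytes per amplitude is my assumption, corresponding to double-precision complex numbers):

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """A full classical simulation tracks 2**n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

# Memory needed for a brute-force state-vector simulation:
for n in (30, 40, 45, 49):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**40:g} TiB")
```

At 45 qubits the state vector alone occupies half a pebibyte, and every additional qubit doubles it, which is why a 49-qubit machine sits beyond what supercomputers can imitate this way.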

To enter this hinterland, Google, which has a nine-qubit computer, has aggressive plans to scale up to 49 qubits. “We’re pretty optimistic,” says Google’s John Martinis, also a physicist at the University of California, Santa Barbara.

Martinis and colleagues plan to proceed in stages, working out the kinks along the way. “You build something, and then if it’s not working exquisitely well, then you don’t do the next one — you fix what’s going on,” he says. The researchers are currently developing quantum computers of 15 and 22 qubits.

IBM, like Google, also plans to go big. In March, the company announced it would build a 50-qubit computer in the next few years and make it available to businesses eager to be among the first adopters of the burgeoning technology. Just two months later, in May, IBM announced that its scientists had created the 16-qubit quantum computer, as well as a 17-qubit prototype that will be a technological jumping-off point for the company’s future line of commercial computers.

But a quantum computer is much more than the sum of its qubits. “One of the real key aspects about scaling up is not simply … qubit number, but really improving the device performance,” Chow says. So IBM researchers are focusing on a standard they call “quantum volume,” which takes into account several factors. These include the number of qubits, how each qubit is connected to its neighbors, how quickly errors slip into calculations and how many operations can be performed at once. “These are all factors that really give your quantum processor its power,” Chow says.

Errors are a major obstacle to boosting quantum volume. With their delicate quantum properties, qubits can accumulate glitches with each operation. Qubits must resist these errors or calculations quickly become unreliable. Eventually, quantum computers with many qubits will be able to fix errors that crop up, through a procedure known as error correction. Still, to boost the complexity of calculations quantum computers can take on, qubit reliability will need to keep improving.

Different technologies for forming qubits have various strengths and weaknesses, which affect quantum volume. IBM and Google build their qubits out of superconducting materials, as do many academic scientists. In superconductors cooled to extremely low temperatures, electrons flow unimpeded. To fashion superconducting qubits, scientists form circuits in which current flows inside a loop of wire made of aluminum or another superconducting material.

Several teams of academic researchers create qubits from single ions, trapped in place and probed with lasers. Intel and others are working with qubits fabricated from tiny bits of silicon known as quantum dots (SN: 7/11/15, p. 22). Microsoft is studying what are known as topological qubits, which would be extra-resistant to errors creeping into calculations. Qubits can even be forged from diamond, using defects in the crystal that isolate a single electron. Photonic quantum computers, meanwhile, make calculations using particles of light. A Chinese-led team demonstrated in a paper published May 1 in Nature Photonics that a light-based quantum computer could outperform the earliest electronic computers on a particular problem.

One company, D-Wave, claims to have a quantum computer that can perform serious calculations, albeit using a more limited strategy than other quantum computers (SN: 7/26/14, p. 6). But many scientists are skeptical about the approach. “The general consensus at the moment is that something quantum is happening, but it’s still very unclear what it is,” says Devitt.

Identical ions
While superconducting qubits have received the most attention from giants like IBM and Google, underdogs taking different approaches could eventually pass these companies by. One potential upstart is Chris Monroe, who crafts ion-based quantum computers.

On a walkway near his office on the University of Maryland campus in College Park, a banner featuring a larger-than-life portrait of Monroe adorns a fence. The message: Monroe’s quantum computers are a “fearless idea.” The banner is part of an advertising campaign featuring several of the university’s researchers, but Monroe seems an apt choice, because his research bucks the trend of working with superconducting qubits.

Monroe and his small army of researchers arrange ions in neat lines, manipulating them with lasers. In a paper published in Nature in 2016, Monroe and colleagues debuted a five-qubit quantum computer, made of ytterbium ions, allowing scientists to carry out various quantum computations. A 32-ion computer is in the works, he says.

Monroe’s labs — he has half a dozen of them on campus — don’t resemble anything normally associated with computers. Tables hold an indecipherable mess of lenses and mirrors, surrounding a vacuum chamber that houses the ions. As with IBM’s computer, although the full package is bulky, the quantum part is minuscule: The chain of ions spans just hundredths of a millimeter.

Scientists in laser goggles tend to the whole setup. The foreign nature of the equipment explains why ion technology for quantum computing hasn’t taken off yet, Monroe says. So he and colleagues took matters into their own hands, creating a start-up called IonQ, which plans to refine ion computers to make them easier to work with.

Monroe points out a few advantages of his technology. In particular, ions of the same type are identical. In other systems, tiny differences between qubits can muck up a quantum computer’s operations. As quantum computers scale up, Monroe says, there will be a big price to pay for those small differences. “Having qubits that are identical, over millions of them, is going to be really important.”

In a paper published in March in Proceedings of the National Academy of Sciences, Monroe and colleagues compared their quantum computer with IBM’s Quantum Experience. The ion computer performed operations more slowly than IBM’s superconducting one, but it benefited from being more interconnected — each ion can be entangled with any other ion, whereas IBM’s qubits can be entangled only with adjacent qubits. That interconnectedness means that calculations can be performed in fewer steps, helping to make up for the slower operation speed, and minimizing the opportunity for errors.

Early applications
Computers like Monroe’s are still far from unlocking the full power of quantum computing. To perform increasingly complex tasks, scientists will have to correct the errors that slip into calculations, fixing problems on the fly by spreading information out among many qubits. Unfortunately, such error correction multiplies the number of qubits required by a factor of 10, 100 or even thousands, depending on the quality of the qubits. Fully error-corrected quantum computers will require millions of qubits. That’s still a long way off.
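The overhead arithmetic is stark. A quick sketch (the 1,000-logical-qubit machine below is a hypothetical placeholder, not a figure from any real algorithm; the overhead factors are the ones quoted above):

```python
def physical_qubits_needed(logical_qubits, overhead_per_logical):
    """Error correction spreads each logical qubit's information
    across many physical qubits; the total is the product."""
    return logical_qubits * overhead_per_logical

# A hypothetical 1,000-logical-qubit machine at overheads of
# 10x, 100x and 1,000x per logical qubit:
for overhead in (10, 100, 1000):
    print(f"{overhead}x overhead -> "
          f"{physical_qubits_needed(1000, overhead):,} physical qubits")
```

At the high end of that range, a modest logical machine already demands the millions of physical qubits mentioned above.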

So scientists are sketching out some simple problems that quantum computers could dig into without error correction. One of the most important early applications will be to study the chemistry of small molecules or simple reactions, by using quantum computers to simulate the quantum mechanics of chemical systems. In 2016, scientists from Google, Harvard University and other institutions performed such a quantum simulation of a hydrogen molecule. Hydrogen has already been simulated with classical computers with similar results, but more complex molecules could follow as quantum computers scale up.

Once error-corrected quantum computers appear, many quantum physicists have their eye on one chemistry problem in particular: making fertilizer. Though it seems an unlikely mission for quantum physicists, the task illustrates the game-changing potential of quantum computers.

The Haber-Bosch process, which is used to create nitrogen-rich fertilizers, is hugely energy intensive, demanding high temperatures and pressures. The process, essential for modern farming, consumes around 1 percent of the world’s energy supply. There may be a better way. Nitrogen-fixing bacteria easily extract nitrogen from the air, thanks to the enzyme nitrogenase. Quantum computers could help simulate this enzyme and reveal its properties, perhaps allowing scientists “to design a catalyst to improve the nitrogen fixation reaction, make it more efficient, and save on the world’s energy,” says Microsoft’s Svore. “That’s the kind of thing we want to do on a quantum computer. And for that problem it looks like we’ll need error correction.”

Pinpointing applications that don’t require error correction is difficult, and the possibilities are not fully mapped out. “It’s not because they don’t exist; I think it’s because physicists are not the right people to be finding them,” says Devitt, of Macquarie. Once the hardware is available, the thinking goes, computer scientists will come up with new ideas.

That’s why companies like IBM are pushing their quantum computers to users via the Web. “A lot of these companies are realizing that they need people to start playing around with these things,” Devitt says.

Quantum scientists are trekking into a new, uncharted realm of computation, bringing computer programmers along for the ride. The capabilities of these fledgling systems could reshape the way society uses computers.

Eventually, quantum computers may become part of the fabric of our technological society. Quantum computers could become integrated into a quantum internet, for example, which would be more secure than what exists today (SN: 10/15/16, p. 13).

“Quantum computers and quantum communication effectively allow you to do things in a much more private way,” says physicist Seth Lloyd of MIT, who envisions Web searches that not even the search engine can spy on.

There are probably plenty more uses for quantum computers that nobody has thought up yet.

“We’re not sure exactly what these are going to be used for. That makes it a little weird,” Monroe says. But, he maintains, the computers will find their niches. “Build it and they will come.”

Drinking sugary beverages in pregnancy linked to kids’ later weight gain

An expectant mom might want to think twice about quenching her thirst with soda.

The more sugary beverages a mom drank during mid-pregnancy, the heavier her kids were in elementary school compared with kids whose mothers consumed less of the drinks, a new study finds. At age 8, boys and girls weighed approximately 0.25 kilograms more — about half a pound — with each serving mom added per day while pregnant, researchers report online July 10 in Pediatrics.

“What happens in early development really has a long-term impact,” says Meghan Azad, an epidemiologist at the University of Manitoba in Canada, who was not involved in the study. A fetus’s metabolism develops in response to the surrounding environment, including the maternal diet, she says.

The new findings come out of a larger project that studies the impact of pregnant moms’ diets on their kids’ health. “We know that what mothers eat during pregnancy may affect their children’s health and later obesity,” says biostatistician Sheryl Rifas-Shiman of Harvard Medical School and Harvard Pilgrim Health Care Institute in Boston. “We decided to look at sugar-sweetened beverages as one of these factors.” Sugary drinks are associated with excessive weight gain and obesity in studies of adults and children.

Rifas-Shiman and colleagues included 1,078 mother-child pairs in the study. Moms filled out a questionnaire in the first and second trimesters of their pregnancy about what they were drinking — soda, fruit drinks, 100 percent fruit juice, diet soda or water — and how often. Soda and fruit drinks were considered sugar-sweetened beverages. A serving was defined as a can, glass or bottle of a beverage.

When the children of these moms were in elementary school, the researchers assessed the kids using several different measurements of obesity. They took kids’ height and weight to calculate body mass index and performed a scanning technique to determine total fat mass, among other methods.

Of the 1,078 kids in the study, 272, or 25 percent, were considered overweight or obese based on their BMI. Moms who drank at least two servings of sugar-sweetened beverages per day during the second trimester had children most likely to fall in this group. Other measurements of obesity were also highest for these kids. Children’s own sugary beverage drinking habits did not alter the results, the scientists say.

The research can’t say moms’ soda sips directly caused the weight gain in her kids. But based on this study and other work, limiting sugary drinks during pregnancy “is probably a good idea,” Azad says. There’s no harm in avoiding them, “and it looks like there may be a benefit.” Her advice is to drink water instead.

These genes may be why dogs are so friendly

DNA might reveal how dogs became man’s best friend.

A new study shows that some of the same genes linked to the behavior of extremely social people can also make dogs friendlier. The result, published July 19 in Science Advances, suggests that dogs’ domestication may be the result of just a few genetic changes rather than hundreds or thousands of them.

“It is great to see initial genetic evidence supporting the self-domestication hypothesis or ‘survival of the friendliest,’” says evolutionary anthropologist Brian Hare of Duke University, who studies how dogs think and learn. “This is another piece of the puzzle suggesting that humans did not create dogs intentionally, but instead wolves that were friendliest toward humans were at an evolutionary advantage as our two species began to interact.”

Not much is known about the underlying genetics of how dogs became domesticated. In 2010, evolutionary geneticist Bridgett vonHoldt of Princeton University and colleagues published a study comparing dogs’ and wolves’ DNA. The biggest genetic differences gave clues to why dogs and wolves don’t look the same. But major differences were also found in WBSCR17, a gene linked to Williams-Beuren syndrome in humans.

Williams-Beuren syndrome leads to delayed development, impaired thinking ability and hypersociability. VonHoldt and colleagues wondered if changes to the same gene in dogs would make the animals more social than wolves, and whether that might have influenced dogs’ domestication.

In the new study, vonHoldt and colleagues compared the sociability of domestic dogs with that of wolves raised by humans. Dogs typically spent more time than wolves staring at and interacting with a human stranger nearby, showing the dogs were more social than the wolves. Analyzing the genetic blueprint of those dogs and wolves, along with DNA data of other wolves and dogs, showed variations in three genes associated with the social behaviors directed at humans: WBSCR17, GTF2I and GTF2IRD1. All three are tied to Williams-Beuren syndrome in humans.

“It’s fascinating that a handful of genetic changes could be so influential on social behavior,” vonHoldt says.

She and colleagues propose that such changes may be closely intertwined with dog domestication. Previous hypotheses have suggested that domestication was linked to dogs’ development of advanced ways of analyzing and applying information about social situations, a way of thinking assumed to be unique to humans. “Instead of developing a more complex form of cognition, dogs appear to be engaging in excessively friendly behavior that increases the amount of time they spend near us and watching us,” says study coauthor Monique Udell, who studies animal behavior at Oregon State University in Corvallis. In turn, she says, that gives dogs “the opportunities necessary for them to learn about our behavior and what maximizes their success when living with us.”

The team notes, for instance, that in addition to contributing to sociability, the variations in WBSCR17 may represent an adaptation in dogs to living with humans. A previous study revealed that variations in WBSCR17 were tied to the ability to digest carbohydrates — a source of energy wolves would have rarely consumed. Yet, the variations in domestic dogs suggest those changes would help them thrive on the starch-rich diets of humans. Links between another gene related to starch digestion in dogs and domestication, however, have recently been called into question (SN Online: 7/18/17).

The other variations, the team argues, would have predisposed the dogs to be hypersocial with humans, a trait that humans would then have selected for as dogs were bred over generations.

This robot grows like a plant

Robots are branching out. A new prototype soft robot takes inspiration from plants by growing to explore its environment.

Vines and some fungi extend from their tips to explore their surroundings. Elliot Hawkes of the University of California, Santa Barbara and his colleagues designed a bot that works on similar principles. Its mechanical body sits inside a plastic tube reel that extends through pressurized inflation, a method that some invertebrates like peanut worms (Sipunculus nudus) also use to extend their appendages. The plastic tubing has two compartments, and inflating one side or the other changes the extension direction. A camera sensor at the tip alerts the bot when it’s about to run into something.

In the lab, Hawkes and his colleagues programmed the robot to form 3-D structures such as a radio antenna, turn off a valve, navigate a maze, swim through glue, act as a fire extinguisher, squeeze through tight gaps, shimmy through fly paper and slither across a bed of nails. The soft bot can extend up to 72 meters, and unlike plants, it can grow at a speed of 10 meters per second, the team reports July 19 in Science Robotics. The design could serve as a model for building robots that can traverse constrained environments.

This isn’t the first robot to take inspiration from plants. One plantlike predecessor was a robot modeled on roots.

Perovskites power up the solar industry

Tsutomu Miyasaka was on a mission to build a better solar cell. It was the early 2000s, and the Japanese scientist wanted to replace the delicate molecules that he was using to capture sunlight with a sturdier, more effective option.

So when a student told him about an unfamiliar material with unusual properties, Miyasaka had to try it. The material was “very strange,” he says, but he was always keen on testing anything that might respond to light.

Other scientists were running electricity through the material, called a perovskite, to generate light. Miyasaka, at Toin University of Yokohama in Japan, wanted to know if the material could also do the opposite: soak up sunlight and convert it into electricity. To his surprise, the idea worked. When he and his team replaced the light-sensitive components of a solar cell with a very thin layer of the perovskite, the illuminated cell pumped out a little bit of electric current.

The result, reported in 2009 in the Journal of the American Chemical Society, piqued the interest of other scientists, too. The perovskite’s properties made it (and others in the perovskite family) well-suited to efficiently generate energy from sunlight. Perhaps, some scientists thought, this perovskite might someday be able to outperform silicon, the light-absorbing material used in more than 90 percent of solar cells around the world.

Initial excitement quickly translated into promising early results. An important metric for any solar cell is how efficient it is — that is, how much of the sunlight that strikes its surface actually gets converted to electricity. By that standard, perovskite solar cells have shone, increasing in efficiency faster than any previous solar cell material in history. The meager 3.8 percent efficiency reported by Miyasaka’s team in 2009 is up to 22 percent this year. Today, the material is almost on par with silicon, which scientists have been tinkering with for more than 60 years to bring to a similar efficiency level.

“People are very excited because [perovskite’s] efficiency number has climbed so fast. It really feels like this is the thing to be working on right now,” says Jao van de Lagemaat, a chemist at the National Renewable Energy Laboratory in Golden, Colo.

Now, perovskite solar cells are at something of a crossroads. Lab studies have proved their potential: They are cheaper and easier to fabricate than time-tested silicon solar cells. Though perovskites are unlikely to completely replace silicon, the newer materials could piggyback onto existing silicon cells to create extra-effective cells. Perovskites could also harness solar energy in new applications where traditional silicon cells fall flat — as light-absorbing coatings on windows, for instance, or as solar panels that work on cloudy days or even absorb ambient sunlight indoors.

Whether perovskites can make that leap, though, depends on current research efforts to fix some drawbacks. Their tendency to degrade under heat and humidity, for example, is not a great characteristic for a product meant to spend hours in the sun. So scientists are trying to boost stability without killing efficiency.

“There are challenges, but I think we’re well on our way to getting this stuff stable enough,” says Henry Snaith, a physicist at the University of Oxford. Finding a niche for perovskites in an industry so dominated by silicon, however, requires thinking about solar energy in creative ways.

Leaping electrons
Perovskites flew under the radar for years before becoming solar stars. The first known perovskite was a mineral, calcium titanate, or CaTiO3, discovered in the 19th century. In more recent years, the name has expanded to cover a class of compounds with a similar structure and chemical recipe — a 1:1:3 ingredient ratio — that can be tweaked with different elements to make different “flavors.”

But the perovskites being studied for the light-absorbing layer of solar cells are mostly lab creations. Many are lead halide perovskites, which combine a lead ion and three ions of iodine or a related element, such as bromine, with a third type of ion (usually something like methylammonium). Those ingredients link together to form perovskites’ hallmark cagelike pyramid-on-pyramid structure. Swapping out different ingredients (replacing lead with tin, for instance) can yield many kinds of perovskites, all with slightly different chemical properties but the same basic crystal structure.

Perovskites owe their solar skills to the way their electrons interact with light. When sunlight shines on a solar panel, photons — tiny packets of light energy — bombard the panel’s surface like a barrage of bullets and get absorbed. When a photon is absorbed into the solar cell, it can share some of its energy with a negatively charged electron. Electrons are attracted to the positively charged nucleus of an atom. But a photon can give an electron enough energy to escape that pull, much like a video game character getting a power-up to jump a motorbike across a ravine. As the energized electron leaps away, it leaves behind a positively charged hole. A separate layer of the solar cell collects the electrons, ferrying them off as electric current.

The amount of energy needed to kick an electron over the ravine is different for every material. And not all photon power-ups are created equal. Sunlight contains low-energy photons (infrared light) and high-energy photons (sunburn-causing ultraviolet radiation), as well as all of the visible light in between.

Photons with too little energy “will just sail right on through” the light-catching layer and never get absorbed, says Daniel Friedman, a photovoltaic researcher at the National Renewable Energy Lab. Only a photon that comes in with energy higher than the amount needed to power up an electron will get absorbed. But any excess energy a photon carries beyond what’s needed to boost up an electron gets lost as heat. The more heat lost, the more inefficient the cell.

Because the photons in sunlight vary so much in energy, no solar cell will ever be able to capture and optimally use every photon that comes its way. So you pick a material, like silicon, that’s a good compromise — one that catches a decent number of photons but doesn’t waste too much energy as heat, Friedman says.
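The threshold arithmetic can be made concrete with E = hc/λ. In this sketch, silicon’s commonly cited band gap of about 1.1 electron-volts is my added number, not the article’s:

```python
H = 6.626e-34       # Planck's constant, joule-seconds
C = 2.998e8         # speed of light, meters per second
EV = 1.602e-19      # joules per electron-volt
BAND_GAP_EV = 1.1   # silicon's oft-cited band gap (my assumption)

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for nm in (400, 500, 1200):
    e = photon_energy_ev(nm)
    if e < BAND_GAP_EV:
        fate = "sails right through, never absorbed"
    else:
        fate = f"absorbed, with {e - BAND_GAP_EV:.2f} eV lost as heat"
    print(f"{nm} nm photon ({e:.2f} eV): {fate}")
```

A violet 400-nanometer photon carries about three times the energy needed to free an electron, so most of it becomes heat; a 1,200-nanometer infrared photon falls just short of the threshold and passes through.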

Although it has dominated the solar cell industry, silicon can’t fully use the energy from higher-energy photons; the material’s solar conversion efficiency tops out at around 30 percent in theory and has hit 20-some percent in practice. Perovskites could do better. The electrons inside perovskite crystals require a bit more energy to dislodge. So when higher-energy photons come into the solar cell, they devote more of their energy to dislodging electrons and generating electric current, and waste less as heat. Plus, by changing the ingredients and their ratios in a perovskite, scientists can adjust the photons it catches. Using different types of perovskites across multiple layers could allow solar cells to more effectively absorb a broader range of photons.

Perovskites have a second efficiency perk. When a photon excites an electron inside a material and leaves behind a positively charged hole, there’s a tendency for the electron to slide right back into a hole. This recombination, as it’s known, is inefficient — an electron that could have fed an electric current instead just stays put.

In perovskites, though, excited electrons usually migrate quite far from their holes, Snaith and others have found by testing many varieties of the material. That boosts the chances the electrons will make it out of the perovskite layer without landing back in a hole.

“It’s a very rare property,” Miyasaka says. It makes for an efficient sunlight absorber.

Some properties of perovskites also make them easier than silicon to turn into solar cells. Making a conventional silicon solar cell requires many steps, all done in just the right order at just the right temperature — something like baking a fragile soufflé. The crystals of silicon have to be perfect, because even small defects in the material can hurt its efficiency. The need for such precision makes silicon solar cells more expensive to produce.

Perovskites are more like brownies from a box — simpler, less finicky. “You can make it in an office, basically,” says materials scientist Robert Chang of Northwestern University in Evanston, Ill. He’s exaggerating, but only a little. Perovskites are made by essentially mixing a bunch of ingredients together and depositing them on a surface in a thin, even film. And while making crystalline silicon requires temperatures up to 2000° Celsius, perovskite crystals form at easier-to-reach temperatures — lower than 200°.

Seeking stability
In many ways, perovskites have become even more promising solar cell materials over time, as scientists have uncovered exciting new properties and finessed the materials’ use. But no material is perfect. So now, scientists are searching for ways to overcome perovskites’ real-world limitations. The most pressing issue is their instability, van de Lagemaat says. The high efficiency levels reported from labs often last only days or hours before the materials break down.

Tackling stability is a less flashy problem than chasing efficiency records, van de Lagemaat points out, which is perhaps why it’s only now getting attention. Stability isn’t a single number that you can flaunt, like an efficiency value. It’s also a bit harder to define, especially since how long a solar cell lasts depends on environmental conditions like humidity and precipitation levels, which vary by location.

Encapsulating the cell with water-resistant coatings is one strategy, but some scientists want to bake stability into the material itself. To do that, they’re experimenting with different perovskite designs. For instance, solar cells containing stacks of flat, graphenelike sheets of perovskites seem to hold up better than solar cells with the standard three-dimensional crystal and its interwoven layers.

In these 2-D perovskites, some of the methylammonium ions are replaced by something larger, like butylammonium. Swapping in the bigger ion forces the crystal to form in sheets just nanometers thick, which stack on top of each other like pages in a book, says chemist Aditya Mohite of Los Alamos National Laboratory in New Mexico. The butylammonium ion, which naturally repels water, forms spacer layers between the 2-D sheets and stops water from permeating into the crystal.

Getting the 2-D layers to line up just right has proved tricky, Mohite says. But by precisely controlling the way the layers form, he and colleagues created a solar cell that runs at 12.5 percent efficiency while standing up to light and humidity longer than a similar 3-D model, the team reported in 2016 in Nature. Although it was protected with a layer of glass, the 3-D perovskite solar cell lost performance rapidly, within a few days, while the 2-D perovskite degraded only slightly. (After three months, the 2-D version was still working almost as well as it had at the start.)

Despite the seemingly complex structure of the 2-D perovskites, they are no more complicated to make than their 3-D counterparts, says Mercouri Kanatzidis, a chemist at Northwestern and a collaborator on the 2-D perovskite project. With the right ingredients, he says, “they form on their own.”

His goal now is to boost the efficiency of 2-D perovskite cells, which don’t yet match up to their 3-D counterparts. And he’s testing different water-repelling ions to reach an ideal stability without sacrificing efficiency.

Other scientists have mixed 2-D and 3-D perovskites to create an ultra-long-lasting cell — at least by perovskite standards. A solar panel made of these cells ran at only 11 percent efficiency, but held up for 10,000 hours of illumination, or more than a year, according to research published in June in Nature Communications. And, importantly, that efficiency was maintained over an area of about 50 square centimeters, more on par with real-world conditions than the teeny-tiny cells made in most research labs.

A place for perovskites?
With boosts to their stability, perovskite solar cells are getting closer to commercial reality. And scientists are assessing where the light-capturing material might actually make its mark.

Some fans have pitted perovskites head-to-head with silicon, suggesting the newbie could one day replace the time-tested material. But a total takeover probably isn’t a realistic goal, says Sarah Kurtz, codirector of the National Center for Photovoltaics at the National Renewable Energy Lab.

“People have been saying for decades that silicon can’t get lower in cost to meet our needs,” Kurtz says. But, she points out, the price of solar energy from silicon-based panels has dropped far lower than people originally expected. There are a lot of silicon solar panels out there, and a lot of commercial manufacturing plants already set up to deal with silicon. That’s a barrier to a new technology, no matter how great it is. Other silicon alternatives face the same limitation. “Historically, silicon has always been dominant,” Kurtz says.
For Snaith, that’s not a problem. He cofounded Oxford Photovoltaics Limited, one of the first companies trying to commercialize perovskite solar cells. His team is developing a solar cell with a perovskite layer over a standard silicon cell to make a super-efficient double-decker cell. That way, Snaith says, the team can capitalize on the massive amount of machinery already set up to build commercial silicon solar cells.
A perovskite layer on top of silicon would absorb higher-energy photons and turn them into electricity. Lower-energy photons that couldn’t excite the perovskite’s electrons would pass through to the silicon layer, where they could still generate current. By combining multiple materials in this way, it’s possible to catch more photons, making a more efficient cell.
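The division of labor comes down to bandgaps. A minimal sketch of the sorting logic, using typical literature values that are assumptions here (roughly 1.6 electron volts for a lead halide perovskite, roughly 1.1 for crystalline silicon), not figures from the article:

```python
# Which layer of a perovskite-on-silicon tandem can absorb a given photon?
# Bandgap values below are typical literature numbers, assumed for
# illustration: ~1.6 eV for the perovskite, ~1.1 eV for silicon.
EV_NM = 1239.84  # photon energy (eV) x wavelength (nm), from E = hc/lambda

def absorber(wavelength_nm, top_gap_ev=1.6, bottom_gap_ev=1.1):
    """Return which layer can absorb a photon of this wavelength."""
    energy_ev = EV_NM / wavelength_nm
    if energy_ev >= top_gap_ev:
        return "perovskite (top)"
    if energy_ev >= bottom_gap_ev:
        return "silicon (bottom)"
    return "neither (passes through)"

print(absorber(500))    # visible green light -> perovskite (top)
print(absorber(900))    # near-infrared -> silicon (bottom)
print(absorber(1300))   # deeper infrared -> neither (passes through)
```

Anything energetic enough for the top layer stops there; the band between the two gaps is silicon's share, which is why stacking the two materials catches more of the spectrum than either alone.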

That idea isn’t new, Snaith points out: For years, scientists have been layering various solar cell materials in this way. But these double-decker cells have traditionally been expensive and complicated to make, limiting their applications. Perovskites’ ease of fabrication could change the game. Snaith’s team is seeing some improvement already, bumping the efficiency of a silicon solar cell from 10 to 23.6 percent by adding a perovskite layer, for example. The team reported that result online in February in Nature Energy.

Rather than compete with silicon solar panels for space on sunny rooftops and in open fields, perovskites could also bring solar energy to totally new venues.

“I don’t think it’s smart for perovskites to compete with silicon,” Miyasaka says. Perovskites excel in other areas. “There’s a whole world of applications where silicon can’t be applied.”

Silicon solar cells don’t work as well on rainy or cloudy days, or indoors, where light is less direct, he says. Perovskites shine in these situations. And while traditional silicon solar cells are opaque, very thin films of perovskites could be printed onto glass to make sunlight-capturing windows. That could be a way to bring solar power to new places, turning glassy skyscrapers into serious power sources, for example. Perovskites could even be printed on flexible plastics to make solar-powered coatings that charge cell phones.

That printing process is getting closer to reality: Scientists at the University of Toronto recently reported a way to make all layers of a perovskite solar cell at temperatures below 150° — including the light-absorbing perovskite layer, but also the background workhorse layers that carry the electrons away and funnel them into current. That could streamline and simplify the production process, making mass newspaper-style printing of perovskite solar cells more doable.

Printing perovskite solar cells on glass is also an area of interest for Oxford Photovoltaics, Snaith says. The company’s ultimate target is to build a perovskite cell that will last 25 years, as long as a traditional silicon cell.

Moon had a magnetic field for at least a billion years longer than thought

The moon had a magnetic field for at least 2 billion years, or maybe longer.

Analysis of a relatively young rock collected by Apollo astronauts reveals the moon had a weak magnetic field until 1 billion to 2.5 billion years ago, at least a billion years later than previous data showed. Extending this lifetime offers insights into how small bodies generate magnetic fields, researchers report August 9 in Science Advances. The result may also suggest how life could survive on tiny planets or moons.
“A magnetic field protects the atmosphere of a planet or moon, and the atmosphere protects the surface,” says study coauthor Sonia Tikoo, a planetary scientist at Rutgers University in New Brunswick, N.J. Together, the two protect the potential habitability of the planet or moon, possibly those far beyond our solar system.

The moon does not currently have a global magnetic field. Whether one ever existed was a question debated for decades (SN: 12/17/11, p. 17). On Earth, electrically conductive molten iron churns in the planet’s outer core, and that moving fluid generates a magnetic field. This setup is called a dynamo. At 1 percent of Earth’s mass, the moon would have cooled too quickly to sustain a long-lived roiling interior.
Magnetized rocks brought back by Apollo astronauts, however, revealed that the moon must have had some magnetizing force. The rocks suggested that the magnetic field was strong at least 4.25 billion years ago, early on in the moon’s history, but then dwindled and maybe even got cut off about 3.1 billion years ago.
Tikoo and colleagues analyzed fragments of a lunar rock collected along the southern rim of the moon’s Dune Crater during the Apollo 15 mission in 1971. The team determined the rock was 1 billion to 2.5 billion years old and found it was magnetized. The finding suggests the moon had a magnetic field, albeit a weak one, when the rock formed, the researchers conclude.
A drop in the magnetic field strength suggests the dynamo driving it was generated in two distinct ways, Tikoo says. Early on, Earth and the moon would have sat much closer together, allowing Earth’s gravity to tug on and spin the rocky exterior of the moon. That outer layer would have dragged against the liquid interior, generating friction and a very strong magnetic field (SN Online: 12/4/14).

Then slowly, starting about 3.5 billion years ago, the moon moved away from Earth, weakening the dynamo. But by that point, the moon would have started to cool, causing less dense, hotter material in the core to rise and denser, cooler material to sink, as in Earth’s core. This roiling of material would have sustained a weak field that lasted for at least a billion years, until the moon’s interior cooled, causing the dynamo to die completely, the team suggests.

The two-pronged explanation for the moon’s dynamo is “an entirely plausible idea,” says planetary scientist Ian Garrick-Bethell of the University of California, Santa Cruz. But researchers are just starting to create computer simulations of the strength of magnetic fields to understand how such weaker fields might arise. So it is hard to say exactly what generated the lunar dynamo, he says.

If the idea is correct, it may mean other small planets and moons could have similarly weak, long-lived magnetic fields. Having such an enduring shield could protect those bodies from harmful radiation, boosting the chances for life to survive.

Here are the paths of the next 15 total solar eclipses

August’s total solar eclipse won’t be the last time the moon cloaks the sun’s light. From now to 2040, for example, skywatchers around the globe can witness 15 such events.

Their predicted paths aren’t random scribbles. Solar eclipses occur in what’s called a Saros cycle — a period that lasts about 18 years, 11 days and eight hours, and is governed by the moon’s orbit. (Lunar eclipses follow a Saros cycle, too, which the Chaldeans first noticed probably around 500 B.C.)

Two total solar eclipses separated by that 18-years-and-change period are almost twins — compare this year’s eclipse with the Sept. 2, 2035 eclipse, for example. They take place at roughly the same time of year, at roughly the same latitude and with the moon at about the same distance from Earth. But those extra eight hours, during which the Earth has rotated an additional third of the way on its axis, shift the eclipse path to a different part of the planet.
This cycle repeats over time, creating a family of eclipses called a Saros series. A series lasts 12 to 15 centuries and includes 70 or more eclipses. The solar eclipses of 2019 and 2037 belong to a different Saros series, so their paths, too, are shifted mimics. Their tracks differ in shape from 2017’s because the moon is at a different place in its orbit when it passes between the Earth and the sun. Paths are wider near the poles because the moon’s shadow hits Earth’s surface there at an oblique angle, spreading it over more ground.
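The period itself falls out of simple lunar arithmetic: a Saros is 223 synodic months (new moon to new moon), and the leftover fraction of a day is the "eight hours" that walks each successive eclipse path about a third of the way around the globe. A back-of-the-envelope check, using the mean synodic month length:

```python
# Saros-cycle arithmetic, using the mean synodic month.
# Real eclipse times vary slightly, since the moon's orbit is not uniform.
SYNODIC_MONTH = 29.530589   # days, new moon to new moon (mean value)

saros_days = 223 * SYNODIC_MONTH          # one Saros cycle
print(round(saros_days, 2))               # ~6585.32 days, i.e. ~18 yr 11 d

# The fractional ~0.32 day is the roughly eight hours quoted above:
extra_hours = (saros_days % 1) * 24
print(round(extra_hours, 1))              # ~7.7 hours

# In that time Earth rotates about a third of a turn, so the next
# eclipse in the series lands roughly 120 degrees of longitude away.
shift_deg = (extra_hours / 24) * 360
print(round(shift_deg))                   # ~116 degrees
```

Three cycles later, the extra thirds of a turn add up to a full rotation, which is why eclipse paths return to roughly the same longitudes every 54 years.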

Predicting and mapping past and future eclipses allows scientists “to examine the patterns of eclipse cycles, the most prominent of which is the Saros,” says astrophysicist Fred Espenak, who is retired from NASA’s Goddard Space Flight Center in Greenbelt, Md.

He would know. Espenak and his colleague Jean Meeus, a retired Belgian astronomer, have mapped solar eclipse paths from 2000 B.C. to A.D. 3000. For archaeologists and historians peering backward, the maps help match up accounts of long-ago eclipses with actual paths. For eclipse chasers peering forward, the data are an itinerary.

“I got interested in figuring out how to calculate eclipse paths for my own use, for planning … expeditions,” says Espenak, who was 18 when he witnessed his first total solar eclipse. It was in 1970, and he secured permission to drive the family car from southern New York to North Carolina to see it. Since then, Espenak, nicknamed “Mr. Eclipse,” has been to every continent, including Antarctica, for a total eclipse of the sun.

“It’s such a dramatic, spectacular, beautiful event,” he says. “You only get a few brief minutes, typically, of totality before it ends. After it’s over, you’re craving to see it again.”