Live antibiotics use bacteria to kill bacteria

The woman in her 70s was in trouble. What started as a broken leg led to an infection in her hip that hung on for two years and several hospital stays. At a Nevada hospital, doctors gave the woman seven different antibiotics, one after the other. The drugs did little to help her. Lab results showed that none of the 14 antibiotics available at the hospital could fight the infection, caused by the bacterium Klebsiella pneumoniae.

Epidemiologist Lei Chen of the Washoe County Health District sent a bacterial sample to the U.S. Centers for Disease Control and Prevention. The bacteria, CDC scientists found, produced a nasty enzyme called New Delhi metallo-beta-lactamase, known for disabling many antibiotics. The enzyme was first seen in a patient from India, which is where the Nevada woman broke her leg and received treatment before returning to the United States.
The enzyme is worrisome because it arms bacteria against carbapenems, a group of last-resort antibiotics, says Alexander Kallen, a CDC medical epidemiologist based in Atlanta, who calls the drugs “our biggest guns for our sickest patients.”

The CDC’s final report revealed startling news: The bacteria raging in the woman’s body were resistant to all 26 antibiotics available in the United States. She died from septic shock; the infection shut down her organs.

Kallen estimates that there have been fewer than 10 cases of completely resistant bacterial infections in the United States. Such absolute resistance to all available drugs, though incredibly rare, was a “nightmare scenario,” says Daniel Kadouri, a microbiologist at Rutgers School of Dental Medicine in Newark, N.J.

Antibiotic-resistant bacteria infect more than 2 million people in the United States every year, and at least 23,000 die, according to 2013 data, the most recent available from the CDC.

It’s time to flip the nightmare scenario and send a killer after the killer bacteria, say a handful of scientists with a new approach for fighting infection. The strategy, referred to as a “living antibiotic,” would pit one group of bacteria — given as a drug and dubbed “the predators” — against the bacteria that are wreaking havoc among humans.
The approach sounds extreme, but it might be necessary. Antimicrobial resistance “is something that we really, really have to take seriously,” says Elizabeth Tayler, senior technical officer for antimicrobial resistance at the World Health Organization in Geneva. “The ability of future generations to manage infection is at risk. It’s a global problem.”

The number of resistant strains has exploded, in part because doctors prescribe antibiotics too often. At least 30 percent of antibiotic prescriptions in the United States are not necessary, according to the CDC. When more people are exposed to more antibiotics, resistance is likely to build faster. And new alternatives are scarce, Kallen says, as the pace of developing novel antibiotics has slowed.

In search of new ideas, DARPA, a Department of Defense agency that invests in breakthrough technologies, is supporting work on predatory bacteria by Kadouri, as well as Robert Mitchell of Ulsan National Institute of Science and Technology in South Korea, Liz Sockett of the University of Nottingham in England and Edouard Jurkevitch of the Hebrew University of Jerusalem. This work, the agency says, represents “a significant departure from conventional antibiotic therapies.”

The approach is so unusual, people have called Kadouri and his lab crazy. “Probably, we are,” he jokes.

A movie-worthy killer
The notion of predatory bacteria sounds a bit scary, especially when Kadouri likens the most thoroughly studied of the predators, Bdellovibrio bacteriovorus, to the vicious space creatures in the Alien movies.

B. bacteriovorus, called gram-negative because of how they are stained for microscope viewing, dine on other gram-negative bacteria. All gram-negative bacteria have both an inner and an outer membrane. The predators don’t go after the other main type of bacteria, gram-positives, which have just one membrane.
When it encounters a gram-negative bacterium, the predator appears to latch on with grappling hook–like appendages. Then, like a classic cat burglar cutting a hole in glass, B. bacteriovorus forces its way through the outer membrane and seems to seal the hole behind it. Once within the space between the outer and inner membranes, the predator secretes enzymes — as damaging as the movie aliens’ acid spit — that chew its prey’s nutrients and DNA into bite-sized pieces.

B. bacteriovorus then uses the broken-down genetic building blocks to make its own DNA and begin replicating. The invader and its progeny eventually emerge from the shell of the prey in a way reminiscent of a cinematic chest-bursting scene.

“It’s a very efficient killing machine,” Kadouri says. That’s good news because many of the most dangerous pathogens that are resistant to antibiotics are gram-negative (SN: 6/10/17, p. 8), according to a list released by the WHO in February.

It’s the predator’s hunger for the bad-guy bacteria, the ones that current drugs have become useless against, that Kadouri and other researchers hope to harness.

Pitting predatory against pathogenic bacteria sounds risky. But, from what researchers can tell, these killer bacteria appear safe. “We know that [B. bacteriovorus] doesn’t target mammalian cells,” Kadouri says.

Saving the see-through fish
To find out whether enlisting predatory bacteria might be crazy good and not just plain crazy, Kadouri’s lab group tested B. bacteriovorus’ killing ability against an array of bacteria in lab dishes in 2010. The microbe significantly reduced levels of 68 of the 83 bacteria tested.

Since then, Kadouri and others have looked at the predator’s ability to devour dangerous pathogens in animals. In rats and chickens, B. bacteriovorus reduced the number of bad bacteria. But the animals were always given nonlethal doses of pathogens, leaving open the question of whether the predator could save the animals’ lives.

Sockett needed to see evidence of survival improvement. “If we’re going to have Bdellovibrio as a medicine, we have to cure something,” she says. “We can count changes in numbers of bacteria, but if that doesn’t change the outcome of the infection — change the number of [animals] that die — it’s not worth it.”

So she teamed up with cell biologist Serge Mostowy of Imperial College London for a study in zebrafish. The aim was to see how many animals predatory bacteria could save from a deadly infection. The team also tested how the host’s immune system interacted with the predators.

The researchers gave zebrafish larvae fatal doses of an antibiotic-resistant strain of Shigella flexneri, which causes dysentery in humans. Before infecting the fish, the researchers divided them into four groups. Two groups had their immune systems altered to produce fewer macrophages, the white blood cells that attack pathogens. Immune systems in the other two groups remained intact. B. bacteriovorus was injected into an unchanged group and a macrophage-deficient group, while two groups received no treatment.

All of the untreated fish with fewer macrophages died within 72 hours of receiving S. flexneri, the researchers reported in December in Current Biology. Of the fish with a normal immune system, 65 percent that received predator treatment survived compared with 35 percent with no predator treatment. Even in the fish with impaired immune systems, the predators saved about a quarter of the lot.
“This is the first time that Bdellovibrio has ever been used as an injected therapy in live organisms,” Sockett says. “And the important thing is the injection improved the survival of the zebrafish.”

The study also pulled off another first. In previous work, researchers had been unable to see predation as it happened within an animal. Because zebrafish larvae are transparent, study coauthor Alexandra Willis captured images of B. bacteriovorus gobbling up S. flexneri.

“We were literally having to run to the microscope because the process was just happening so fast,” says Willis, a graduate student in Mostowy’s lab. After the predator invades, its rod-shaped prey become round. Willis saw Bdellovibrio “rounding” its prey within 15 minutes. From start to finish, the predatory cycle took about three to four hours.

The predator’s speed may be what gave it the edge over the infection, Mostowy says. B. bacteriovorus attacks fast, chipping away at the pathogens until the infection is reduced to a level that the immune system can handle. “Otherwise there are too many bacteria and the immune system would be overwhelmed,” he says. “We’re putting a shocking amount of Shigella, 50,000 bacteria, into the fish.”

Within 48 hours, S. flexneri levels dropped 98 percent in the surviving fish, from 50,000 to 1,000.

The immune cells also cleared nearly all the B. bacteriovorus predators from the fish. The predators had enough time to attack the infection before being targeted by the immune system themselves, creating an ideal treatment window. Even if the host’s immune system hadn’t attacked the predators, once the bacteria are gone, Willis says, the predators are out of food. Unable to replicate, they eventually die off.

A clean sweep
Predatory bacteria are efficient in more ways than one. They’re not just good killers — they eliminate the evidence too.

Typical antibiotic treatments don’t target a bacterium’s DNA, so they are likely to leave pieces of the bacterial body behind. That’s like killing a few bandits, but leaving their weapons so the next invaders can easily arm themselves for a new attack. This could be one way that multidrug resistance evolves, Mitchell says. For example, penicillin will kill all bacteria that aren’t resistant to the drug. The surviving bacteria can swim through the aftermath of the antibiotic attack and grab genes from their fallen comrades to incorporate into their own genomes. The destroyed bacteria may have had a resistance gene to a different antibiotic, say, vancomycin. Now you have bacteria that are resistant to both penicillin and vancomycin. Not good.

Predatory bacteria, on the other hand, “decimate the genome” of their prey, Mitchell says. They don’t just kill the bandit, they melt down all the DNA weapons so no pathogens can use them. In one experiment that has yet to be published, B. bacteriovorus almost completely ate up the genetic material of a bacterial colony within two hours — showing itself as a fast-acting predator that could prevent bacterial genes from falling into the wrong hands.

On top of that, even if pathogenic bacteria mutate, a common way they pick up new forms of resistance, they aren’t protected from predation. Resistance to predation hasn’t been reported in lab experiments since B. bacteriovorus was discovered in 1962, Mitchell says. Researchers don’t think there’s a single pathway or gene in a prey bacterium that the predator targets. Instead, B. bacteriovorus seem to use sheer force to break in. “It’s kind of like cracking an egg with a hammer,” Kadouri says. That’s not exactly something bacteria can mutate to protect themselves against.

Some bacteria manage to band together and cover themselves with a kind of built-in biological shield, which offers protection against antibiotics. But for predatory bacteria, the shield is more of a welcome mat.

Going after the gram-positives
When bacteria cluster together on a surface, whether in your body, on a countertop or on a medical instrument, they can form a biofilm. The thick, slimy shield helps microbes withstand antibiotic attacks because the drugs have difficulty penetrating the slime. Antibiotics usually act on fast-growing bacteria, but within a biofilm, bacteria are sluggish and dormant, making antibiotics less effective, Kadouri says.
But to predatory bacteria, a biofilm is like Jell-O — a tasty snack that’s easy to swallow. Once inside, B. bacteriovorus spreads like wildfire because its prey are now huddled together as confined targets. “It’s like putting zebras and a lion in a restaurant and closing the door and seeing what happens,” Kadouri says. For the zebras, “it can’t end well.”

Kadouri’s lab has shown repeatedly that predatory bacteria effectively eat away biofilms that protect gram-negative bacteria, and are in fact more efficient at killing bacteria within those biofilms.

Gram-positive bacteria cloak themselves in biofilms too. In 2014 in Scientific Reports, Mitchell and his team reported finding a way to use Bdellovibrio to weaken gram-positive bacteria, turning their protective shield against them and perhaps helping antibiotics do their job.

The discovery comes from studies of one naturally occurring B. bacteriovorus mutant with extra-scary spit. The mutant isn’t predatory. Instead of eating a prey’s DNA to make its own, it can grow and replicate like a normal bacterial colony. As it grows, it produces especially destructive enzymes. Among the mix of enzymes are proteases, which break down proteins.

Mitchell and his team tested the strength of the mutant’s secretions against the gram-positive Staphylococcus aureus. A cocktail of the enzymes applied to an S. aureus biofilm degraded the slime shield and reduced the bacterium’s virulence. Biofilms can make bacteria up to 1,000 times more resistant to antibiotics, Mitchell says. The next step, he adds, is to see if degrading a biofilm resensitizes a gram-positive bacterium to antibiotics.

Mitchell and his team also treated S. aureus cells that didn’t have a biofilm with the mutant’s enzyme mix and then exposed them to human cells. Eighty percent of the bacteria were no longer able to invade human cells, Mitchell says. The “acid spit” chewed up surface proteins that the pathogen uses to attach to and invade human cells. The enzymes didn’t kill the bacteria but did make them less virulent.

No downsides yet
Predatory bacteria can efficiently eat other gram-negative bacteria, munch through biofilms and even save zebrafish from the jaws of an infectious death. But are they safe? Kadouri and the other researchers have done many studies, though none in humans yet, to try to answer that question.
In a 2016 study published in Scientific Reports, Kadouri and colleagues applied B. bacteriovorus to the eyes of rabbits and compared the effect with that of a common antibiotic eye drop, vancomycin. The vancomycin visibly inflamed the eyes, while the predatory bacteria had little to no effect. The eyes treated with predatory bacteria were indistinguishable from eyes treated with a saline solution, used as the control treatment. Other studies looking for potential toxic effects of B. bacteriovorus have so far found none.

In 2011, Sockett’s team gave chickens an oral dose of predatory bacteria. At 28 days, the researchers saw no difference in health between treated and untreated chickens. The makeup of the birds’ gut bacteria was altered, but not in a way that was harmful, she and her team reported in Applied and Environmental Microbiology.

Kadouri analyzed rats’ gut microbes after a treatment of predatory bacteria, reporting the results in a study published March 6 in Scientific Reports. Here too, the rodents’ guts showed little to no inflammation. When they sequenced the bacterial contents of the rats’ feces, the researchers saw small differences between the treated and untreated rats. But none of the changes appeared harmful, and the animals grew and acted normally.

If the rats had taken common antibiotics, it would have been a different story, Kadouri points out. Those drugs would have given the animals diarrhea, reduced their appetites and altered their gut flora in a big way. “When you take antibiotics, you’re basically throwing an atomic bomb” into your gut, Kadouri says. “You’re wiping everything out.”
Both Mitchell and Kadouri tested B. bacteriovorus on human cells and found that the predatory bacteria didn’t harm the cells or prompt an immune response. The researchers separately reported their findings in late 2016 in Scientific Reports and PLOS ONE.
Microbiologist Elizabeth Emmert of Salisbury University in Maryland studies B. bacteriovorus as a means to protect crops — carrots and potatoes — from bacterial soft rot diseases. For humans, she calls the microbes a “promising” therapy for bacterial infections. “It seems most feasible as a topical treatment for wounds, since it would not have to survive passage through the digestive tract.”

There are plenty of questions that need answering first. Mitchell guesses that there will probably be 10 more years of rigorous testing in animals before moving on to human clinical studies. But pursuing these alternatives is worth the effort.

“The drugs that we’re taking are not benign and cuddly and nice,” Kadouri says. “We need them, but they don’t come without side effects.” Even though a living antibiotic sounds a bit crazy, it might be the best option in this dangerous era of antibiotic resistance.

Quantum computers are about to get real

Although the term “quantum computer” might suggest a miniature, sleek device, the latest incarnations are a far cry from anything available in the Apple Store. In a laboratory just 60 kilometers north of New York City, scientists are running a fledgling quantum computer through its paces — and the whole package looks like something that might be found in a dark corner of a basement. The cooling system that envelops the computer is about the size and shape of a household water heater.

Beneath that clunky exterior sits the heart of the computer, the quantum processor, a tiny, precisely engineered chip about a centimeter on each side. Chilled to temperatures just above absolute zero, the computer — made by IBM and housed at the company’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. — comprises 16 quantum bits, or qubits, enough for only simple calculations.

If this computer can be scaled up, though, it could transcend current limits of computation. Computers based on the physics of the supersmall can solve puzzles no other computer can — at least in theory — because quantum entities behave unlike anything in a larger realm.

Quantum computers aren’t putting standard computers to shame just yet. The most advanced computers are working with fewer than two dozen qubits. But teams from industry and academia are working on expanding their own versions of quantum computers to 50 or 100 qubits, enough to perform certain calculations that the most powerful supercomputers can’t pull off.
The race is on to reach that milestone, known as “quantum supremacy.” Scientists should meet this goal within a couple of years, says quantum physicist David Schuster of the University of Chicago. “There’s no reason that I see that it won’t work.”
But supremacy is only an initial step, a symbolic marker akin to sticking a flagpole into the ground of an unexplored landscape. The first tasks where quantum computers prevail will be contrived problems set up to be difficult for a standard computer but easy for a quantum one. Eventually, the hope is, the computers will become prized tools of scientists and businesses.

Attention-getting ideas
Some of the first useful problems quantum computers will probably tackle will be to simulate small molecules or chemical reactions. From there, the computers could go on to speed the search for new drugs or kick-start the development of energy-saving catalysts to accelerate chemical reactions. To find the best material for a particular job, quantum computers could search through millions of possibilities to pinpoint the ideal choice, for example, ultrastrong polymers for use in airplane wings. Advertisers could use a quantum algorithm to improve their product recommendations — dishing out an ad for that new cell phone just when you’re on the verge of purchasing one.

Quantum computers could provide a boost to machine learning, too, allowing for nearly flawless handwriting recognition or helping self-driving cars assess the flood of data pouring in from their sensors to swerve away from a child running into the street. And scientists might use quantum computers to explore exotic realms of physics, simulating what might happen deep inside a black hole, for example.

But quantum computers won’t reach their real potential — which will require harnessing the power of millions of qubits — for more than a decade. Exactly what possibilities exist for the long-term future of quantum computers is still up in the air.

The outlook is similar to the patchy vision that surrounded the development of standard computers — which quantum scientists refer to as “classical” computers — in the middle of the 20th century. When they began to tinker with electronic computers, scientists couldn’t fathom all of the eventual applications; they just knew the machines possessed great power. From that initial promise, classical computers have become indispensable in science and business, dominating daily life, with handheld smartphones becoming constant companions (SN: 4/1/17, p. 18).
Since the 1980s, when the idea of a quantum computer first attracted interest, progress has come in fits and starts. Without the ability to create real quantum computers, the work remained theoretical, and it wasn’t clear when — or if — quantum computations would be achievable. Now, with the small quantum computers at hand, and new developments coming swiftly, scientists and corporations are preparing for a new technology that finally seems within reach.

“Companies are really paying attention,” Microsoft’s Krysta Svore said March 13 in New Orleans during a packed session at a meeting of the American Physical Society. Enthusiastic physicists filled the room and huddled at the doorways, straining to hear as she spoke. Svore and her team are exploring what these nascent quantum computers might eventually be capable of. “We’re very excited about the potential to really revolutionize … what we can compute.”

Anatomy of a qubit
Quantum computing’s promise is rooted in quantum mechanics, the counterintuitive physics that governs tiny entities such as atoms, electrons and molecules. The basic element of a quantum computer is the qubit (pronounced “CUE-bit”). Unlike a standard computer bit, which can take on a value of 0 or 1, a qubit can be 0, 1 or a combination of the two — a sort of purgatory between 0 and 1 known as a quantum superposition. When a qubit is measured, there’s some chance of getting 0 and some chance of getting 1. But before it’s measured, it’s both 0 and 1.

Because qubits can represent 0 and 1 simultaneously, they can encode a wealth of information. In computations, both possibilities — 0 and 1 — are operated on at the same time, allowing for a sort of parallel computation that speeds up solutions.
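A superposition can be made concrete with a little arithmetic. In a simulation, a qubit is just a pair of complex amplitudes, and a gate is a rule for mixing them. The sketch below is plain Python, not any real quantum SDK, and the function names are illustrative. It puts one qubit into an equal superposition and reads off the measurement probabilities.

```python
import math

# A qubit's state is a pair of amplitudes (a, b) for the values 0 and 1,
# normalized so that |a|^2 + |b|^2 = 1. Measuring yields 0 with
# probability |a|^2 and 1 with probability |b|^2.
def hadamard(state):
    """Apply the Hadamard gate, which turns a definite 0 into an
    equal superposition of 0 and 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Chance of measuring 0 and of measuring 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1.0, 0.0)       # start in the definite state 0
qubit = hadamard(qubit)  # now an equal superposition of 0 and 1
p0, p1 = probabilities(qubit)
print(round(p0, 3), round(p1, 3))  # prints 0.5 0.5
```

Before measurement the state genuinely carries both amplitudes at once, which is what lets a gate act on both possibilities in a single step.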

Another qubit quirk: Their properties can be intertwined through the quantum phenomenon of entanglement (SN: 4/29/17, p. 8). A measurement of one qubit in an entangled pair instantly reveals the value of its partner, even if they are far apart — what Albert Einstein called “spooky action at a distance.”
Such weird quantum properties can make for superefficient calculations. But the approach won’t speed up solutions for every problem thrown at it. Quantum calculators are particularly suited to certain types of puzzles, the kind for which correct answers can be selected by a process called quantum interference. Through quantum interference, the correct answer is amplified while others are canceled out, like sets of ripples meeting one another in a lake, causing some peaks to become larger and others to disappear.

One of the most famous potential uses for quantum computers is breaking up large integers into their prime factors. For classical computers, this task is so difficult that credit card data and other sensitive information are secured via encryption based on factoring numbers. Eventually, a large enough quantum computer could break this type of encryption, factoring numbers that would take millions of years for a classical computer to crack.
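The classical difficulty is easy to illustrate. The most naive factoring method, trial division, does work proportional to the square root of the number being factored, which grows exponentially with the number of digits; real cryptographic attacks use far better algorithms, but those are still superpolynomial. A toy sketch (the function name is illustrative):

```python
def smallest_factor(n):
    """Classical trial division: try divisors up to sqrt(n).
    The loop count roughly doubles every time n gains two bits,
    which is why factoring huge numbers classically is hopeless."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # no divisor found, so n is prime

print(smallest_factor(15))         # prints 3
print(smallest_factor(2**31 - 1))  # prints 2147483647 (a prime)
```

For a number the size of a modern encryption key, hundreds of digits long, even the best known classical algorithms would run for millions of years; Shor's quantum factoring algorithm, in principle, would not.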

Quantum computers also promise to speed up searches, using qubits to more efficiently pick out an information needle in a data haystack.

Qubits can be made using a variety of materials, including ions, silicon or superconductors, which conduct electricity without resistance. Unfortunately, none of these technologies allow for a computer that will fit easily on a desktop. Though the computer chips themselves are tiny, they depend on large cooling systems, vacuum chambers or other bulky equipment to maintain the delicate quantum properties of the qubits. Quantum computers will probably be confined to specialized laboratories for the foreseeable future, to be accessed remotely via the internet.

Going supreme
That vision of Web-connected quantum computers has already begun to materialize. In 2016, IBM unveiled the Quantum Experience, a quantum computer that anyone around the world can access online for free.
With only five qubits, the Quantum Experience is “limited in what you can do,” says Jerry Chow, who manages IBM’s experimental quantum computing group. (IBM’s 16-qubit computer is in beta testing, so Quantum Experience users are just beginning to get their hands on it.) Despite its limitations, the Quantum Experience has allowed scientists, computer programmers and the public to become familiar with programming quantum computers — which follow different rules than standard computers and therefore require new ways of thinking about problems. “Quantum computing is exciting. It’s coming, and we want a lot more people to be well-versed in it,” Chow says. “That’ll make the development and the advancement even faster.”

But to fully jump-start quantum computing, scientists will need to prove that their machines can outperform the best standard computers. “This step is important to convince the community that you’re building an actual quantum computer,” says quantum physicist Simon Devitt of Macquarie University in Sydney. A demonstration of such quantum supremacy could come by the end of the year or in 2018, Devitt predicts.

Researchers from Google set out a strategy to demonstrate quantum supremacy, posted online at arXiv.org in 2016. They proposed an algorithm that, if run on a large enough quantum computer, would produce results that couldn’t be replicated by the world’s most powerful supercomputers.

The method involves performing random operations on the qubits, and measuring the distribution of answers that are spit out. Getting the same distribution on a classical supercomputer would require simulating the complex inner workings of a quantum computer. Simulating a quantum computer with more than about 45 qubits becomes unmanageable. Supercomputers haven’t been able to reach these quantum wilds.
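The 45-qubit cutoff comes from simple arithmetic: a full classical simulation must store 2^n complex amplitudes for n qubits, so memory doubles with every added qubit. A rough back-of-the-envelope calculation, assuming 16 bytes per amplitude (two 64-bit floats):

```python
# Memory needed to hold the full state vector of an n-qubit simulation:
# 2**n complex amplitudes at 16 bytes each.
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (30, 45, 49):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# prints:
# 30 qubits: 16 GiB
# 45 qubits: 524,288 GiB
# 49 qubits: 8,388,608 GiB
```

A 30-qubit simulation fits on a laptop; 45 qubits needs about half a petabyte, at the edge of what the largest supercomputers hold; 49 qubits is out of reach, which is why that range marks the supremacy frontier.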

To enter this hinterland, Google, which has a nine-qubit computer, has aggressive plans to scale up to 49 qubits. “We’re pretty optimistic,” says Google’s John Martinis, also a physicist at the University of California, Santa Barbara.

Martinis and colleagues plan to proceed in stages, working out the kinks along the way. “You build something, and then if it’s not working exquisitely well, then you don’t do the next one — you fix what’s going on,” he says. The researchers are currently developing quantum computers of 15 and 22 qubits.

IBM, like Google, also plans to go big. In March, the company announced it would build a 50-qubit computer in the next few years and make it available to businesses eager to be among the first adopters of the burgeoning technology. Just two months later, in May, IBM announced that its scientists had created the 16-qubit quantum computer, as well as a 17-qubit prototype that will be a technological jumping-off point for the company’s future line of commercial computers.
But a quantum computer is much more than the sum of its qubits. “One of the real key aspects about scaling up is not simply … qubit number, but really improving the device performance,” Chow says. So IBM researchers are focusing on a standard they call “quantum volume,” which takes into account several factors. These include the number of qubits, how each qubit is connected to its neighbors, how quickly errors slip into calculations and how many operations can be performed at once. “These are all factors that really give your quantum processor its power,” Chow says.

Errors are a major obstacle to boosting quantum volume. With their delicate quantum properties, qubits can accumulate glitches with each operation. Qubits must resist these errors or calculations quickly become unreliable. Eventually, quantum computers with many qubits will be able to fix errors that crop up, through a procedure known as error correction. Still, to boost the complexity of calculations quantum computers can take on, qubit reliability will need to keep improving.

Different technologies for forming qubits have various strengths and weaknesses, which affect quantum volume. IBM and Google build their qubits out of superconducting materials, as do many academic scientists. In superconductors cooled to extremely low temperatures, electrons flow unimpeded. To fashion superconducting qubits, scientists form circuits in which current flows inside a loop of wire made of aluminum or another superconducting material.

Several teams of academic researchers create qubits from single ions, trapped in place and probed with lasers. Intel and others are working with qubits fabricated from tiny bits of silicon known as quantum dots (SN: 7/11/15, p. 22). Microsoft is studying what are known as topological qubits, which would be extra-resistant to errors creeping into calculations. Qubits can even be forged from diamond, using defects in the crystal that isolate a single electron. Photonic quantum computers, meanwhile, make calculations using particles of light. A Chinese-led team demonstrated in a paper published May 1 in Nature Photonics that a light-based quantum computer could outperform the earliest electronic computers on a particular problem.

One company, D-Wave, claims to have a quantum computer that can perform serious calculations, albeit using a more limited strategy than other quantum computers (SN: 7/26/14, p. 6). But many scientists are skeptical about the approach. “The general consensus at the moment is that something quantum is happening, but it’s still very unclear what it is,” says Devitt.

Identical ions
While superconducting qubits have received the most attention from giants like IBM and Google, underdogs taking different approaches could eventually pass these companies by. One potential upstart is Chris Monroe, who crafts ion-based quantum computers.
On a walkway near his office on the University of Maryland campus in College Park, a banner featuring a larger-than-life portrait of Monroe adorns a fence. The message: Monroe’s quantum computers are a “fearless idea.” The banner is part of an advertising campaign featuring several of the university’s researchers, but Monroe seems an apt choice, because his research bucks the trend of working with superconducting qubits.

Monroe and his small army of researchers arrange ions in neat lines, manipulating them with lasers. In a paper published in Nature in 2016, Monroe and colleagues debuted a five-qubit quantum computer, made of ytterbium ions, allowing scientists to carry out various quantum computations. A 32-ion computer is in the works, he says.

Monroe’s labs — he has half a dozen of them on campus — don’t resemble anything normally associated with computers. Tables hold an indecipherable mess of lenses and mirrors, surrounding a vacuum chamber that houses the ions. As with IBM’s computer, although the full package is bulky, the quantum part is minuscule: The chain of ions spans just hundredths of a millimeter.

Scientists in laser goggles tend to the whole setup. The foreign nature of the equipment explains why ion technology for quantum computing hasn’t taken off yet, Monroe says. So he and colleagues took matters into their own hands, creating a start-up called IonQ, which plans to refine ion computers to make them easier to work with.

Monroe points out a few advantages of his technology. In particular, ions of the same type are identical. In other systems, tiny differences between qubits can muck up a quantum computer’s operations. As quantum computers scale up, Monroe says, there will be a big price to pay for those small differences. “Having qubits that are identical, over millions of them, is going to be really important.”

In a paper published in March in Proceedings of the National Academy of Sciences, Monroe and colleagues compared their quantum computer with IBM’s Quantum Experience. The ion computer performed operations more slowly than IBM’s superconducting one, but it benefited from being more interconnected — each ion can be entangled with any other ion, whereas IBM’s qubits can be entangled only with adjacent qubits. That interconnectedness means that calculations can be performed in fewer steps, helping to make up for the slower operation speed, and minimizing the opportunity for errors.
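The connectivity advantage is easy to count. A minimal sketch (the line-shaped nearest-neighbor layout here is an illustrative assumption, not IBM's exact chip geometry): in a fully connected ion chain, every pair of qubits can be entangled directly, while a nearest-neighbor layout allows only adjacent pairs, so other pairs need extra intermediate steps.

```python
# Count directly entanglable qubit pairs under two connectivity models.
# Assumes an idealized line layout for the nearest-neighbor case.
def pairs_fully_connected(n):
    """Any ion can entangle with any other: n-choose-2 pairs."""
    return n * (n - 1) // 2

def pairs_nearest_neighbor(n):
    """Qubits in a line, adjacent pairs only."""
    return n - 1

for n in (5, 32):  # the 5-qubit machine and the planned 32-ion machine
    print(f"{n} qubits: {pairs_fully_connected(n)} direct pairs "
          f"vs {pairs_nearest_neighbor(n)} adjacent pairs")
```

For the planned 32-ion machine the gap is stark: 496 directly reachable pairs versus 31, which is why fewer steps, and fewer chances for error, are needed per calculation.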

Early applications
Computers like Monroe’s are still far from unlocking the full power of quantum computing. To perform increasingly complex tasks, scientists will have to correct the errors that slip into calculations, fixing problems on the fly by spreading information out among many qubits. Unfortunately, such error correction multiplies the number of qubits required by a factor of 10, 100 or even thousands, depending on the quality of the qubits. Fully error-corrected quantum computers will require millions of qubits. That’s still a long way off.
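The overhead described above can be put in rough numbers. A back-of-the-envelope sketch (the logical-qubit count and the overhead factors are illustrative assumptions spanning the 10-to-thousands range in the text, not figures from any one research group):

```python
# Error correction encodes each "logical" qubit across many physical qubits.
# Overhead factors below are illustrative assumptions (10x to 1,000x).
def physical_qubits(logical_qubits, overhead):
    """Physical qubits needed to encode the given number of logical qubits."""
    return logical_qubits * overhead

logical = 4000  # hypothetical count for a useful error-corrected machine
for overhead in (10, 100, 1000):
    print(f"overhead {overhead:>4}x -> "
          f"{physical_qubits(logical, overhead):,} physical qubits")
```

At the high end of the overhead range, even a few thousand logical qubits balloon into millions of physical ones, which is why fully error-corrected machines remain a long way off.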

So scientists are sketching out some simple problems that quantum computers could dig into without error correction. One of the most important early applications will be to study the chemistry of small molecules or simple reactions, by using quantum computers to simulate the quantum mechanics of chemical systems. In 2016, scientists from Google, Harvard University and other institutions performed such a quantum simulation of a hydrogen molecule. Hydrogen has already been simulated with classical computers with similar results, but more complex molecules could follow as quantum computers scale up.

Once error-corrected quantum computers appear, many quantum physicists have their eye on one chemistry problem in particular: making fertilizer. Though it seems an unlikely mission for quantum physicists, the task illustrates the game-changing potential of quantum computers.

The Haber-Bosch process, which is used to create nitrogen-rich fertilizers, is hugely energy intensive, demanding high temperatures and pressures. The process, essential for modern farming, consumes around 1 percent of the world’s energy supply. There may be a better way. Nitrogen-fixing bacteria easily extract nitrogen from the air, thanks to the enzyme nitrogenase. Quantum computers could help simulate this enzyme and reveal its properties, perhaps allowing scientists “to design a catalyst to improve the nitrogen fixation reaction, make it more efficient, and save on the world’s energy,” says Microsoft’s Svore. “That’s the kind of thing we want to do on a quantum computer. And for that problem it looks like we’ll need error correction.”

Pinpointing applications that don’t require error correction is difficult, and the possibilities are not fully mapped out. “It’s not because they don’t exist; I think it’s because physicists are not the right people to be finding them,” says Devitt, of Macquarie. Once the hardware is available, the thinking goes, computer scientists will come up with new ideas.

That’s why companies like IBM are pushing their quantum computers to users via the Web. “A lot of these companies are realizing that they need people to start playing around with these things,” Devitt says.

Quantum scientists are trekking into a new, uncharted realm of computation, bringing computer programmers along for the ride. The capabilities of these fledgling systems could reshape the way society uses computers.

Eventually, quantum computers may become part of the fabric of our technological society. Quantum computers could become integrated into a quantum internet, for example, which would be more secure than what exists today (SN: 10/15/16, p. 13).

“Quantum computers and quantum communication effectively allow you to do things in a much more private way,” says physicist Seth Lloyd of MIT, who envisions Web searches that not even the search engine can spy on.

There are probably plenty more uses for quantum computers that nobody has thought up yet.

“We’re not sure exactly what these are going to be used for. That makes it a little weird,” Monroe says. But, he maintains, the computers will find their niches. “Build it and they will come.”

Perovskites power up the solar industry

Tsutomu Miyasaka was on a mission to build a better solar cell. It was the early 2000s, and the Japanese scientist wanted to replace the delicate molecules that he was using to capture sunlight with a sturdier, more effective option.

So when a student told him about an unfamiliar material with unusual properties, Miyasaka had to try it. The material was “very strange,” he says, but he was always keen on testing anything that might respond to light.
Other scientists were running electricity through the material, called a perovskite, to generate light. Miyasaka, at Toin University of Yokohama in Japan, wanted to know if the material could also do the opposite: soak up sunlight and convert it into electricity. To his surprise, the idea worked. When he and his team replaced the light-sensitive components of a solar cell with a very thin layer of the perovskite, the illuminated cell pumped out a little bit of electric current.

The result, reported in 2009 in the Journal of the American Chemical Society, piqued the interest of other scientists, too. The perovskite’s properties made it (and others in the perovskite family) well-suited to efficiently generate energy from sunlight. Perhaps, some scientists thought, this perovskite might someday be able to outperform silicon, the light-absorbing material used in more than 90 percent of solar cells around the world.
Initial excitement quickly translated into promising early results. An important metric for any solar cell is how efficient it is — that is, how much of the sunlight that strikes its surface actually gets converted to electricity. By that standard, perovskite solar cells have shone, increasing in efficiency faster than any previous solar cell material in history. The meager 3.8 percent efficiency reported by Miyasaka’s team in 2009 is up to 22 percent this year. Today, the material is almost on par with silicon, which scientists have been tinkering with for more than 60 years to bring to a similar efficiency level.
“People are very excited because [perovskite’s] efficiency number has climbed so fast. It really feels like this is the thing to be working on right now,” says Jao van de Lagemaat, a chemist at the National Renewable Energy Laboratory in Golden, Colo.

Now, perovskite solar cells are at something of a crossroads. Lab studies have proved their potential: They are cheaper and easier to fabricate than time-tested silicon solar cells. Though perovskites are unlikely to completely replace silicon, the newer materials could piggyback onto existing silicon cells to create extra-effective cells. Perovskites could also harness solar energy in new applications where traditional silicon cells fall flat — as light-absorbing coatings on windows, for instance, or as solar panels that work on cloudy days or even absorb ambient sunlight indoors.

Whether perovskites can make that leap, though, depends on current research efforts to fix some drawbacks. Their tendency to degrade under heat and humidity, for example, is not a great characteristic for a product meant to spend hours in the sun. So scientists are trying to boost stability without killing efficiency.

“There are challenges, but I think we’re well on our way to getting this stuff stable enough,” says Henry Snaith, a physicist at the University of Oxford. Finding a niche for perovskites in an industry so dominated by silicon, however, requires thinking about solar energy in creative ways.

Leaping electrons
Perovskites flew under the radar for years before becoming solar stars. The first known perovskite was a mineral, calcium titanate, or CaTiO3, discovered in the 19th century. In more recent years, the name has expanded to cover a class of compounds with a similar structure and chemical recipe — a 1:1:3 ingredient ratio — that can be tweaked with different elements to make different “flavors.”

But the perovskites being studied for the light-absorbing layer of solar cells are mostly lab creations. Many are lead halide perovskites, which combine a lead ion and three ions of iodine or a related element, such as bromine, with a third type of ion (usually something like methylammonium). Those ingredients link together to form perovskites’ hallmark cagelike pyramid-on-pyramid structure. Swapping out different ingredients (replacing lead with tin, for instance) can yield many kinds of perovskites, all with slightly different chemical properties but the same basic crystal structure.

Perovskites owe their solar skills to the way their electrons interact with light. When sunlight shines on a solar panel, photons — tiny packets of light energy — bombard the panel’s surface like a barrage of bullets and get absorbed. When a photon is absorbed into the solar cell, it can share some of its energy with a negatively charged electron. Electrons are attracted to the positively charged nucleus of an atom. But a photon can give an electron enough energy to escape that pull, much like a video game character getting a power-up to jump a motorbike across a ravine. As the energized electron leaps away, it leaves behind a positively charged hole. A separate layer of the solar cell collects the electrons, ferrying them off as electric current.

The amount of energy needed to kick an electron over the ravine is different for every material. And not all photon power-ups are created equal. Sunlight contains low-energy photons (infrared light) and high-energy photons (sunburn-causing ultraviolet radiation), as well as all of the visible light in between.

Photons with too little energy “will just sail right on through” the light-catching layer and never get absorbed, says Daniel Friedman, a photovoltaic researcher at the National Renewable Energy Lab. Only a photon that comes in with energy higher than the amount needed to power up an electron will get absorbed. But any excess energy a photon carries beyond what’s needed to boost up an electron gets lost as heat. The more heat lost, the more inefficient the cell.
Because the photons in sunlight vary so much in energy, no solar cell will ever be able to capture and optimally use every photon that comes its way. So you pick a material, like silicon, that’s a good compromise — one that catches a decent number of photons but doesn’t waste too much energy as heat, Friedman says.
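The trade-off described above can be sketched with the textbook relation between a photon's wavelength and its energy. This is a simplified illustration, not the researchers' model; the band gap values are approximate textbook figures (about 1.1 electron volts for silicon, about 1.6 for a common lead halide perovskite).

```python
# A photon is absorbed only if its energy exceeds the material's band gap;
# any excess energy is lost as heat. Energy in eV = 1239.84 / wavelength in nm.
H_C_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    return H_C_EV_NM / wavelength_nm

def fate(wavelength_nm, band_gap_ev):
    e = photon_energy_ev(wavelength_nm)
    if e < band_gap_ev:
        return "sails through (not absorbed)"
    return f"absorbed; {e - band_gap_ev:.2f} eV wasted as heat"

SILICON_GAP = 1.1  # approximate band gap, eV
for nm in (400, 700, 1200):  # blue, red, infrared
    print(f"{nm} nm vs silicon: {fate(nm, SILICON_GAP)}")
```

Running the sketch shows the compromise: a 1,200-nanometer infrared photon passes straight through silicon, while a 400-nanometer blue photon is absorbed but dumps roughly two-thirds of its energy as heat.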

Although it has dominated the solar cell industry, silicon can’t fully use the energy from higher-energy photons; the material’s solar conversion efficiency tops out at around 30 percent in theory and has hit 20-some percent in practice. Perovskites could do better. The electrons inside perovskite crystals require a bit more energy to dislodge. So when higher-energy photons come into the solar cell, they devote more of their energy to dislodging electrons and generating electric current, and waste less as heat. Plus, by changing the ingredients and their ratios in a perovskite, scientists can adjust the photons it catches. Using different types of perovskites across multiple layers could allow solar cells to more effectively absorb a broader range of photons.

Perovskites have a second efficiency perk. When a photon excites an electron inside a material and leaves behind a positively charged hole, there’s a tendency for the electron to slide right back into a hole. This recombination, as it’s known, is inefficient — an electron that could have fed an electric current instead just stays put.

In perovskites, though, excited electrons usually migrate quite far from their holes, Snaith and others have found by testing many varieties of the material. That boosts the chances the electrons will make it out of the perovskite layer without landing back in a hole.

“It’s a very rare property,” Miyasaka says. It makes for an efficient sunlight absorber.

Some properties of perovskites also make them easier than silicon to turn into solar cells. Making a conventional silicon solar cell requires many steps, all done in just the right order at just the right temperature — something like baking a fragile soufflé. The crystals of silicon have to be perfect, because even small defects in the material can hurt its efficiency. The need for such precision makes silicon solar cells more expensive to produce.

Perovskites are more like brownies from a box — simpler, less finicky. “You can make it in an office, basically,” says materials scientist Robert Chang of Northwestern University in Evanston, Ill. He’s exaggerating, but only a little. Perovskites are made by essentially mixing a bunch of ingredients together and depositing them on a surface in a thin, even film. And while making crystalline silicon requires temperatures up to 2000° Celsius, perovskite crystals form at easier-to-reach temperatures — lower than 200°.

Seeking stability
In many ways, perovskites have become even more promising solar cell materials over time, as scientists have uncovered exciting new properties and finessed the materials’ use. But no material is perfect. So now, scientists are searching for ways to overcome perovskites’ real-world limitations. The most pressing issue is their instability, van de Lagemaat says. The high efficiency levels reported from labs often last only days or hours before the materials break down.

Tackling stability is a less flashy problem than chasing efficiency records, van de Lagemaat points out, which is perhaps why it’s only now getting attention. Stability isn’t a single number that you can flaunt, like an efficiency value. It’s also a bit harder to define, especially since how long a solar cell lasts depends on environmental conditions like humidity and precipitation levels, which vary by location.

Encapsulating the cell with water-resistant coatings is one strategy, but some scientists want to bake stability into the material itself. To do that, they’re experimenting with different perovskite designs. For instance, solar cells containing stacks of flat, graphenelike sheets of perovskites seem to hold up better than solar cells with the standard three-dimensional crystal and its interwoven layers.

In these 2-D perovskites, some of the methylammonium ions are replaced by something larger, like butylammonium. Swapping in the bigger ion forces the crystal to form in sheets just nanometers thick, which stack on top of each other like pages in a book, says chemist Aditya Mohite of Los Alamos National Laboratory in New Mexico. The butylammonium ion, which naturally repels water, forms spacer layers between the 2-D sheets and stops water from permeating into the crystal.
Getting the 2-D layers to line up just right has proved tricky, Mohite says. But by precisely controlling the way the layers form, he and colleagues created a solar cell that runs at 12.5 percent efficiency while standing up to light and humidity longer than a similar 3-D model, the team reported in 2016 in Nature. Although it was protected with a layer of glass, the 3-D perovskite solar cell lost performance rapidly, within a few days, while the 2-D perovskite withered only slightly. (After three months, the 2-D version was still working almost as well as it had been at the beginning.)

Despite the seemingly complex structure of the 2-D perovskites, they are no more complicated to make than their 3-D counterparts, says Mercouri Kanatzidis, a chemist at Northwestern and a collaborator on the 2-D perovskite project. With the right ingredients, he says, “they form on their own.”

His goal now is to boost the efficiency of 2-D perovskite cells, which don’t yet match up to their 3-D counterparts. And he’s testing different water-repelling ions to reach an ideal stability without sacrificing efficiency.

Other scientists have mixed 2-D and 3-D perovskites to create an ultra-long-lasting cell — at least by perovskite standards. A solar panel made of these cells ran at only 11 percent efficiency, but held up for 10,000 hours of illumination, or more than a year, according to research published in June in Nature Communications. And, importantly, that efficiency was maintained over an area of about 50 square centimeters, more on par with real-world conditions than the teeny-tiny cells made in most research labs.

A place for perovskites?
With boosts to their stability, perovskite solar cells are getting closer to commercial reality. And scientists are assessing where the light-capturing material might actually make its mark.

Some fans have pitted perovskites head-to-head with silicon, suggesting the newbie could one day replace the time-tested material. But a total takeover probably isn’t a realistic goal, says Sarah Kurtz, codirector of the National Center for Photovoltaics at the National Renewable Energy Lab.

“People have been saying for decades that silicon can’t get lower in cost to meet our needs,” Kurtz says. But, she points out, the price of solar energy from silicon-based panels has dropped far lower than people originally expected. There are a lot of silicon solar panels out there, and a lot of commercial manufacturing plants already set up to deal with silicon. That’s a barrier to a new technology, no matter how great it is. Other silicon alternatives face the same limitation. “Historically, silicon has always been dominant,” Kurtz says.
For Snaith, that’s not a problem. He cofounded Oxford Photovoltaics Limited, one of the first companies trying to commercialize perovskite solar cells. His team is developing a solar cell with a perovskite layer over a standard silicon cell to make a super-efficient double-decker cell. That way, Snaith says, the team can capitalize on the massive amount of machinery already set up to build commercial silicon solar cells.
A perovskite layer on top of silicon would absorb higher-energy photons and turn them into electricity. Lower-energy photons that couldn’t excite the perovskite’s electrons would pass through to the silicon layer, where they could still generate current. By combining multiple materials in this way, it’s possible to catch more photons, making a more efficient cell.

That idea isn’t new, Snaith points out: For years, scientists have been layering various solar cell materials in this way. But these double-decker cells have traditionally been expensive and complicated to make, limiting their applications. Perovskites’ ease of fabrication could change the game. Snaith’s team is seeing some improvement already, bumping the efficiency of a silicon solar cell from 10 to 23.6 percent by adding a perovskite layer, for example. The team reported that result online in February in Nature Energy.

Rather than compete with silicon solar panels for space on sunny rooftops and in open fields, perovskites could also bring solar energy to totally new venues.

“I don’t think it’s smart for perovskites to compete with silicon,” Miyasaka says. Perovskites excel in other areas. “There’s a whole world of applications where silicon can’t be applied.”

Silicon solar cells don’t work as well on rainy or cloudy days, or indoors, where light is less direct, he says. Perovskites shine in these situations. And while traditional silicon solar cells are opaque, very thin films of perovskites could be printed onto glass to make sunlight-capturing windows. That could be a way to bring solar power to new places, turning glassy skyscrapers into serious power sources, for example. Perovskites could even be printed on flexible plastics to make solar-powered coatings that charge cell phones.

That printing process is getting closer to reality: Scientists at the University of Toronto recently reported a way to make all layers of a perovskite solar cell at temperatures below 150° — including the light-absorbing perovskite layer, but also the background workhorse layers that carry the electrons away and funnel them into current. That could streamline and simplify the production process, making mass newspaper-style printing of perovskite solar cells more doable.

Printing perovskite solar cells on glass is also an area of interest for Oxford Photovoltaics, Snaith says. The company’s ultimate target is to build a perovskite cell that will last 25 years, as long as a traditional silicon cell.

From day one, a frog’s developing brain is calling the shots

Frog brains get busy long before they’re fully formed. Just a day after fertilization, embryonic brains begin sending signals to far-off places in the body, helping oversee the layout of complex patterns of muscles and nerve fibers. And when the brain is missing, bodily chaos ensues, researchers report online September 25 in Nature Communications.

The results, from brainless embryos and tadpoles, broaden scientists’ understanding of the types of signals involved in making sure bodies develop correctly, says developmental biologist Catherine McCusker of the University of Massachusetts Boston. Scientists are familiar with short-range signals among nearby cells that help pattern bodies. But because these newly described missives travel all the way from the brain to the far reaches of the body, they are “the first example of really long-range signals,” she says.
Celia Herrera-Rincon of Tufts University in Medford, Mass., and colleagues came up with a simple approach to tease out the brain’s influence on the growing body. Just one day after fertilization, the scientists lopped off the still-forming brains of African clawed frog embryos. These embryos survive to become tadpoles even without brains, a quirk of biology that allowed the researchers to see whether the brain is required for the body’s development.
The answer was a definite — and surprising — yes, Herrera-Rincon says. Long before the brain is mature, it’s already organizing and guiding organ behavior, she says. Brainless tadpoles had bungled patterns of muscles. Normally, muscle fibers form a stacked chevron pattern. But in tadpoles lacking a brain, this pattern didn’t form correctly. “The borders between segments are all wonky,” says study coauthor Michael Levin, also of Tufts University. “They can’t keep a straight line.”
Nerve fibers that crisscross tadpoles’ bodies also grew in an abnormal pattern. Levin and colleagues noticed extra nerve fibers snaking across the brainless tadpoles in a chaotic pattern, “a nerve network that shouldn’t be there,” he says.

Muscle and nerve abnormalities are the most obvious differences. But brainless tadpoles probably have more subtle defects in other parts of their bodies, such as the heart. The search for those defects is the subject of ongoing experiments, Levin says.
In addition to keeping patterns on point, the young frog brain may protect its body from chemical assaults. A molecule that binds to certain proteins on cells in the body had no effect on normal embryos. But when given to brainless embryos, the same molecule caused their spinal cords and tails to grow crooked. These results suggest that early in development, brains keep embryos safe from agents that would otherwise cause harm.

“The brain is instructing cells that are really a long way away from it,” Levin says. While the precise identities of these long-range signals aren’t known, the researchers have some ideas. When brainless embryos were dosed with a drug that targets cells that typically respond to the chemical messenger acetylcholine, the muscle pattern improved. Similarly, the addition of an ion channel protein called HCN2, which can tweak the electrical activity of cells, also seemed to improve muscle development. More work is needed before scientists know whether these interventions are actually mimicking messaging from the early brain, and if so, how.

Frog development isn’t the same as mammalian development, but frog development “is pretty applicable to human biology,” McCusker says. In fundamental ways, humans and frogs are built from the same molecular toolbox, she says. So the results hint that a growing human brain might also interact similarly with a growing human body.

A baby ichthyosaur’s last meal revealed

As far as last meals go, squid isn’t a bad choice. Cephalopod remains appear to dominate the stomach contents of a newly analyzed ichthyosaur fossil from nearly 200 million years ago.

The ancient marine reptiles once roamed Jurassic seas and commonly pop up along England’s fossil-rich coast near Lyme Regis. But a lot of ichthyosaur museum specimens lack records of where they came from, making their age difficult to place.

Dean Lomax of the University of Manchester and his colleagues reexamined one such fossil. Based on its skull, they identified the creature as a newborn Ichthyosaurus communis. Microfossils of shrimp and amoeba species around the ichthyosaur put the specimen at 199 million to 196 million years old, the researchers estimate.

Tiny hook structures stand out among the newborn’s ribs — most likely the remnants of prehistoric black squid arms. Another baby ichthyosaur, from a more recent period, had a stomach full of fish scales. So the new find suggests a shift in the menu for young ichthyosaurs at some point in their evolutionary history, the researchers write October 3 in Historical Biology.

Here’s what really happened to Hanny’s Voorwerp

The weird glowing blob of gas known as Hanny’s Voorwerp was a 10-year-old mystery. Now, Lia Sartori of ETH Zurich and colleagues have come to a two-pronged solution.

Hanny van Arkel, then a teacher in the Netherlands, discovered the strange bluish-green voorwerp, Dutch for “object,” in 2008 as she was categorizing pictures of galaxies as part of the Galaxy Zoo citizen science project.

Further observations showed that the voorwerp was a glowing cloud of gas that stretched some 100,000 light-years from the core of a massive nearby galaxy called IC 2497. The glow came from radiation emitted by an actively feeding black hole in the galaxy.
To excite the voorwerp’s glow, the black hole and its surrounding accretion disk (together called the active galactic nucleus, or AGN) should have had the brightness of about 2.5 trillion suns; its radio emission, however, suggested the AGN emitted the equivalent of a relatively paltry 25,000 suns. Either the AGN was obscured by dust, or the black hole slowed its eating around 100,000 years ago, causing its brightness to plunge.

Sartori and colleagues made the first direct measurement of the AGN’s intrinsic brightness using NASA’s NuSTAR telescope, which observed IC 2497 in high-energy X-rays that cut through the dust.

They found that the AGN is both obscured by dust and dimmer than expected; the feeding has slowed way down. The team reported on arXiv.org on November 20 that IC 2497’s heart is as bright as 50 billion to 100 billion suns, meaning it dropped in brightness by a factor of 50 in the past 100,000 years — a less dramatic drop than previously thought.
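The factor-of-50 figure follows directly from the luminosities quoted in the story; a quick check of the arithmetic:

```python
# Brightness drop implied by the story's numbers, in units of "suns".
past = 2.5e12                    # inferred from the glowing voorwerp
now_low, now_high = 5e10, 1e11   # NuSTAR's measured range

# The drop factor spans 25x (high estimate) to 50x (low estimate).
print("drop factor:", past / now_high, "to", past / now_low)
```

The reported factor of 50 corresponds to the lower end of NuSTAR's measured range.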
“Both hypotheses that we thought before are true,” Sartori says.

Sartori plans to analyze NuSTAR observations of other voorwerpjes to see if their galaxies’ black holes are also in the process of shutting down — or even booting up.

“If you look at these clouds, you get information on how the black hole was in the past,” she says. “So we have a way to study how the activity of supermassive black holes varies on superhuman time scales.”

Editor’s note: This story was updated December 5, 2017, to clarify that the brightness measured by the researchers came from the accretion disk around an actively eating black hole, not the black hole itself.

Pollinators are usually safe from a Venus flytrap

Out of the hundreds of species of carnivorous plants found across the planet, none attract quite as much fascination as the Venus flytrap. The plants are native to just a small section of North and South Carolina, but they can now be found around the world. They’re a favorite among gardeners, who grow them in homes and greenhouses.

Scientists, too, have long been intrigued by the plants and have extensively studied the famous trap. But far less is known about the flower that blooms on a stalk 15 to 35 centimeters above — including what pollinates that flower.
“The rest of the plant is so incredibly cool that most folks don’t get past looking at the active trap leaves,” says Clyde Sorenson, an entomologist at North Carolina State University in Raleigh. Plus, notes Sorenson’s NCSU colleague Elsa Youngsteadt, an insect ecologist, because flytraps are native to just a small part of North and South Carolina, field studies can be difficult. And most people who raise flytraps cut off the flowers so the plant can put more energy into making traps.

Sorenson and Youngsteadt realized that the mystery of flytrap pollination was sitting almost literally in their backyard. So they and their colleagues set out to solve it. They collected flytrap flower visitors and prey from three sites in Pender County, North Carolina, on four days in May and June 2016, being careful not to damage the plants.

“This is one of the prettiest places where you could work,” Youngsteadt says. Venus flytraps are habitat specialists, found only in certain spots of longleaf pine savannas in the Carolinas. “They need plenty of sunlight but like their feet to be wet,” says Sorenson. In May and June, the spots of savanna where the flytraps grow are “just delightful,” he says. And other carnivorous plants can be found there, too, including pitcher plants and sundews.
The researchers brought their finds back to the lab for identification. They also cataloged what kind of pollen was on flower visitors, and how much.
Nearly 100 species of arthropods visited the flowers, the team reports February 5 in American Naturalist. “The diversity of visitors on those flowers was surprising,” says Youngsteadt. However, only three species — a sweat bee and two beetles — appeared to be the most important, as they were either the most frequent visitors or carriers of the most pollen.
The study also found little overlap between pollinators and prey. Only 13 species were found both in a trap and on a flower, and of the nine potential pollinators in that group, none were found in high numbers.

For a carnivorous plant, “you don’t want to eat your pollinators,” Sorenson says. Flytraps appear to be doing a good job at that.

There are three ways that a plant can keep those groups separate, the researchers note. Flowers and traps could exist at different times of the year. However, that’s not the case with Venus flytraps. The plants produce the two structures at separate times, but traps stick around and are active during plant flowering.

Another possibility is the spatial separation of the two structures. Pollinators tend to be fliers while prey were more often crawling arthropods, such as spiders and ants. This matches up with the high flowers and low traps. But the researchers would like to do some experiments that manipulate the heights of the structures to see just how much that separation matters, Youngsteadt says.

The third option is that different scents or colors produced by flowers and traps might lure in different species to each structure. That’s another area for future study, Youngsteadt says. While attraction to scent and color is well documented for traps, little is known about those factors for the flowers.

Venus flytraps are considered vulnerable to extinction, threatened by humans, Sorenson notes. The plant’s habitat is being destroyed as the population of the Carolinas grows. What is left of the habitat is being degraded as fires are suppressed (fires help clear vegetation and keep sunlight shining on the flytraps). And people steal flytraps from the wild by the thousands.

While research into their pollinators won’t help with any of those threats, it could aid in future conservation efforts. “Anything we can do to better understand how this plant reproduces will be of use down the road,” Sorenson says.

But what really excites the scientists is that they discovered something new so close to home. “One of the most thrilling parts of all this,” Sorenson says, “is that this plant has been known to science for [so long], everyone knows it, but there’s still a whole lot of things to discover.”

The Neil Armstrong biopic ‘First Man’ captures early spaceflight’s terror

First Man is not a movie about the moon landing.

The Neil Armstrong biopic, opening October 12, follows about eight years of the life of the first man on the moon, and spends about eight minutes depicting the lunar surface. Instead of the triumphant ticker tape parades that characterize many movies about the space race, First Man focuses on the terror, grief and heartache that led to that one small step.

“It’s a very different movie and storyline than people expect,” says James Hansen, who wrote the 2005 Armstrong biography that shares the film’s name and served as a consultant on the film.

The story opens shortly before Armstrong’s 2-year-old daughter, Karen, died of a brain tumor in January 1962. That loss hangs over the rest of the film, setting the movie’s surprisingly somber emotional tone. The cinematography is darker than most space movies. Colors are muted. Music is ominous or absent — a lot of scenes include only ambient sound, like a pen scratching on paper, a glass breaking or a phone clicking into the receiver.
Karen’s death also seems to motivate the rest of Armstrong’s journey. Getting a fresh start may have been part of the reason why the grieving Armstrong (portrayed by Ryan Gosling) applied to the NASA Gemini astronaut program, although he never explicitly says so. And without giving too much away, a private moment Armstrong takes at the edge of Little West crater on the moon recalls his enduring bond with his daughter.

Hansen’s book also makes the case that Karen’s death motivated Armstrong’s astronaut career. Armstrong’s oldest son, Rick, who was 12 when his father landed on the moon, agrees that it’s plausible. “But it’s not something that he ever really definitively talked about,” Rick Armstrong says.

Armstrong’s reticence about Karen — and almost everything else — is true to life. That’s not all the film got right. Gosling captured Armstrong’s gravitas as well as his humor, and Claire Foy as his wife, Janet Armstrong, “is just amazing,” Rick Armstrong says.

Beyond the performances, the filmmakers, including director Damien Chazelle and screenwriter Josh Singer, went to great lengths to make the technical aspects of spaceflight historically accurate. The Gemini and Apollo cockpits Gosling sits in are replicas of the real spacecraft, and he flipped switches and hit buttons that would have controlled real flight. Much of the dialogue during space scenes was taken verbatim from NASA’s control room logs, Hansen says.

The result is a visceral sense of how frightening and risky those early flights were. The spacecraft rattled and creaked like they were about to fall apart. The scene of Armstrong’s flight on the 1966 Gemini 8 mission, which ended early when the spacecraft started spinning out of control and almost killed its passengers, is terrifying. The 1967 fire inside the Apollo 1 spacecraft, which killed astronauts Ed White, Gus Grissom and Roger Chaffee, is gruesome.

“We wanted to treat that one with extreme care and love and get it exactly right,” Hansen says. “What we have in that scene, none of it’s made up.”

Even when the filmmakers took poetic license, they did it in a historical way. For instance, a vomit-inducing gyroscope that Gosling rides during Gemini astronaut training was, in real life, used to train the earlier Mercury astronauts, not the Gemini recruits. Since the Mercury astronauts never experienced the kind of dizzying rotation that the gyroscope mimicked, NASA dismantled it before the next group of astronauts arrived.

“They probably shouldn’t have dismantled it,” Hansen says — it did simulate what ended up happening in the Gemini 8 accident. So the filmmakers used the gyroscope experience as foreshadowing.

Meanwhile, present-day astronauts are not immune to harrowing brushes with death: a Russian Soyuz capsule carrying two astronauts malfunctioned October 11, and the astronauts had to evacuate in an alarming “ballistic descent.” NASA is currently talking about when and how to send astronauts back to the moon from American soil. The first commercial crew astronauts, who will test spacecraft built by Boeing and SpaceX, were announced in August.

First Man is a timely and sobering reminder of the risks involved in taking these giant leaps.

Loneliness is bad for brains

SAN DIEGO — Mice yanked out of their community and held in isolation show signs of brain damage.

After a month of being alone, the mice had smaller nerve cells in certain parts of the brain. Other brain changes followed, scientists reported at a news briefing November 4 at the annual meeting of the Society for Neuroscience.

It’s not known whether similar damage happens in the brains of isolated humans. If so, the results have implications for the health of people who spend much of their time alone, including the estimated tens of thousands of inmates in solitary confinement in the United States and elderly people in institutionalized care facilities.

The new results, along with other recent brain studies, clearly show that for social species, isolation is damaging, says neurobiologist Huda Akil of the University of Michigan in Ann Arbor. “There is no question that this is changing the basic architecture of the brain,” Akil says.
Neurobiologist Richard Smeyne of Thomas Jefferson University in Philadelphia and his colleagues raised communities of multiple generations of mice in large enclosures packed with toys, mazes and things to climb. When some of the animals reached adulthood, they were taken out and put individually into “a typical shoebox cage,” Smeyne said.

This abrupt switch from a complex society to isolation induced changes in the brain, Smeyne and his colleagues later found. The overall size of nerve cells, or neurons, shrank by about 20 percent after a month of isolation. That shrinkage held roughly steady over three months as the mice remained in isolation.
To the researchers’ surprise, after a month of isolation, the mice’s neurons had a higher density of spines — structures for making neural connections — on message-receiving dendrites. An increase in spines is a change that usually signals something positive. “It’s almost as though the brain is trying to save itself,” Smeyne said.

But by three months, the density of dendritic spines had decreased back to baseline levels, perhaps a sign that the brain couldn’t save itself when faced with continued isolation. “It’s tried to recover, it can’t, and we start to see these problems,” Smeyne said.

The researchers uncovered other worrisome signals, too, including reductions in a protein called BDNF, which spurs neural growth. Levels of the stress hormone cortisol changed, too. Compared with mice housed in groups, isolated mice also had more broken DNA in their neurons.

The researchers studied neurons in the sensory cortex, a brain area involved in taking in information, and the motor cortex, which helps control movement. It’s not known whether similar effects happen in other brain areas, Smeyne says.

It’s also not known how the neural changes relate to mice’s behavior. In people, long-term isolation can lead to depression, anxiety and psychosis. Brainpower is affected, too. Isolated people develop problems reasoning, remembering and navigating.

Smeyne is conducting longer-term studies aimed at figuring out the effects of neuron shrinkage on thinking skills and behavior. He and his colleagues also plan to return isolated mice to their groups to see if the brain changes can be reversed. Those types of studies get at an important issue, Akil says. “The question is, ‘When is it too far gone?’”

How locust ecology inspired an opera

Locust: The Opera finds a novel way to doom a soprano: species extinction.

The libretto, written by entomologist Jeff Lockwood of the University of Wyoming in Laramie, features a scientist, a rancher and a dead insect. The scientist tenor agonizes over why the Rocky Mountain locust went extinct at the dawn of the 20th century. He comes up with hypotheses, three of which unravel to music and frustration.

The project hatched in 2014. “Jeff got in his head, ‘Oh, opera is a good way to tell science stories,’ which takes a creative mind to think that,” says Anne Guzzo, who composed the music. Guzzo teaches music theory and composition at the University of Wyoming.

The Rocky Mountain locust brought famine and ruin to farms across the western United States. “This was a devastating pest that caused enormous human suffering,” Lockwood says. Epic swarms would suddenly descend on and eat vast swaths of cropland. “On the other hand, it was an iconic species that defined and shaped the continent.” Lockwood had written about the locust’s mysterious and sudden extinction in the 2004 book Locust, but the topic “begged in my mind for the grandeur of opera.” He spent several years mulling how to create a one-hour opera for three singers about the swarming grasshopper species.

Then the ghost of Hamlet’s father, in the opera “Amleto,” based on Shakespeare’s play, inspired a breakthrough. Lockwood imagined a spectral soprano locust, who haunted a scientist until he figured out what killed her kind.

To make one locust soprano represent trillions, Guzzo challenged her music theory class to find ways of evoking the sound of a swarm. They tried snapping fingers, rattling cardstock and crinkling cellophane. But “the simplest answer was the most elegant,” Guzzo says — tasking the audience with shivering sheets of tissue paper in sequence, so that a great wave of rustling swept through the auditorium.

For the libretto, Lockwood took an unusually data-driven approach. After surveying opera lengths and word counts, he paced his work at 25 to 30 words per minute, policing himself sternly. If a scene was long by two words, he’d find two to cut.
He wrote the dialogue not in verse, but as conversation, some of it a bit professorial. Guzzo asked for a few line changes. “I just couldn’t get ‘manic expressions of fecundity’ to fit where I wanted it to,” she says.
Eventually, the scientist solves the mystery, but takes no joy in telling the beautiful locust ghost that humans had unwittingly doomed her kind by destroying vital locust habitat. For tragedy, Lockwood says, “there has to be a loss tinged with a kind of remorse.”

The opera, performed twice in Jackson, Wyo., will next be staged in March in Agadir, Morocco.