There’s still a lot we don’t know about the proton

Nuclear physicist Evangeline Downie hadn’t planned to study one of the thorniest puzzles of the proton.

But when opportunity knocked, Downie couldn’t say no. “It’s the proton,” she exclaims. The mysteries that still swirl around this jewel of the subatomic realm were too tantalizing to resist. The plentiful particles make up much of the visible matter in the universe. “We’re made of them, and we don’t understand them fully,” she says.

Many physicists delving deep into the heart of matter in recent decades have been lured to the more exotic and unfamiliar subatomic particles: mesons, neutrinos and the famous Higgs boson — not the humble proton.
But rather than chasing the rarest of the rare, scientists like Downie are painstakingly scrutinizing the proton itself with ever-higher precision. In the process, some of these proton enthusiasts have stumbled upon problems in areas of physics that scientists thought they had figured out.

Surprisingly, some of the particle’s most basic characteristics are not fully pinned down. The latest measurements of its radius disagree with one another by a wide margin, for example, a fact that captivated Downie. Likewise, scientists can’t yet explain the source of the proton’s spin, a basic quantum property. And some physicists have a deep but unconfirmed suspicion that the seemingly eternal particles don’t live forever — protons may decay. Such a decay is predicted by theories that unite disparate forces of nature under one grand umbrella. But decay has not yet been witnessed.

Like the base of a pyramid, the physics of the proton serves as a foundation for much of what scientists know about the behavior of matter. To understand the intricacies of the universe, says Downie, of George Washington University in Washington, D.C., “we have to start with, in a sense, the simplest system.”

Sizing things up
For most of the universe’s history, protons have been VIPs — very important particles. They formed just millionths of a second after the Big Bang, once the cosmos cooled enough for the positively charged particles to take shape. But protons didn’t step into the spotlight until about 100 years ago, when Ernest Rutherford bombarded nitrogen with radioactively produced particles, breaking up the nuclei and releasing protons.

A single proton in concert with a single electron makes up hydrogen — the most plentiful element in the universe. One or more protons are present in the nucleus of every atom. Each element has a unique number of protons, signified by an element’s atomic number. In the core of the sun, fusing protons generate heat and light needed for life to flourish. Lone protons are also found as cosmic rays, whizzing through space at breakneck speeds, colliding with Earth’s atmosphere and producing showers of other particles, such as electrons, muons and neutrinos.

In short, protons are everywhere. Even minor tweaks to scientists’ understanding of the minuscule particle, therefore, could have far-reaching implications. So any nagging questions, however small in scale, can get proton researchers riled up.

A disagreement of a few percent in measurements of the proton’s radius has attracted intense interest, for example. Until several years ago, scientists agreed: The proton’s radius was about 0.88 femtometers, or 0.88 millionths of a billionth of a meter — about a trillionth the width of a poppy seed.
But that neat picture was upended in the span of a few hours, in May 2010, at the Precision Physics of Simple Atomic Systems conference in Les Houches, France. Two teams of scientists presented new, more precise measurements, unveiling what they thought would be the definitive size of the proton. Instead the figures disagreed by about 4 percent (SN: 7/31/10, p. 7). “We both expected that we would get the same number, so we were both surprised,” says physicist Jan Bernauer of MIT.

By itself, a slight revision of the proton’s radius wouldn’t upend physics. But despite extensive efforts, the groups can’t explain why they get different numbers. As researchers have eliminated simple explanations for the impasse, they’ve begun wondering if the mismatch could be the first hint of a breakdown that could shatter accepted tenets of physics.

The two groups each used different methods to size up the proton. In an experiment at the MAMI particle accelerator in Mainz, Germany, Bernauer and colleagues estimated the proton’s girth by measuring how much electrons’ trajectories were deflected when fired at protons. That test found the expected radius of about 0.88 femtometers (SN Online: 12/17/10).

But a team led by physicist Randolf Pohl of the Max Planck Institute of Quantum Optics in Garching, Germany, used a new, more precise method. The researchers created muonic hydrogen, a proton that is accompanied not by an electron but by a heftier cousin — a muon.

In an experiment at the Paul Scherrer Institute in Villigen, Switzerland, Pohl and collaborators used lasers to bump the muons to higher energy levels. The amount of energy required depends on the size of the proton. Because the more massive muon hugs closer to the proton than an electron does, the energy levels of muonic hydrogen are more sensitive to the proton’s size than those of ordinary hydrogen, allowing for measurements 10 times as precise as electron-scattering ones.
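In back-of-the-envelope terms (textbook scaling arguments, not the experiment’s full analysis), the orbit of a hydrogen-like atom shrinks in proportion to the orbiting particle’s reduced mass, and the energy shift caused by the proton’s finite size grows as the cube of that shrinkage:

```latex
a \;\propto\; \frac{1}{m_r},
\qquad
\Delta E_{\text{size}} \;\propto\; |\psi(0)|^{2}\, r_p^{\,2} \;\propto\; \frac{r_p^{\,2}}{a^{3}}
```

With a reduced mass about 186 times the electron’s, the muon sits roughly 186 times closer to the proton, so the finite-size shift is about 186³, or roughly six million times, larger than in ordinary hydrogen.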

Pohl’s results suggested a smaller proton radius, about 0.841 femtometers, a stark difference from the other measurement. Follow-up measurements of muonic deuterium — which has a proton and a neutron in its nucleus — also revealed a smaller than expected size, he and collaborators reported last year in Science. Physicists have racked their brains to explain why the two measurements don’t agree. Experimental error could be to blame, but no one can pinpoint its source. And the theoretical physics used to calculate the radius from the experimental data seems solid.

Now, more outlandish possibilities are being tossed around. An unexpected new particle that interacts with muons but not electrons could explain the difference (SN: 2/23/13, p. 8). That would be revolutionary: Physicists believe that electrons and muons should behave identically in particle interactions. “It’s a very sacred principle in theoretical physics,” says John Negele, a theoretical particle physicist at MIT. “If there’s unambiguous evidence that it’s been broken, that’s really a fundamental discovery.”

But established physics theories die hard. Shaking the foundations of physics, Pohl says, is “what I dream of, but I think that’s not going to happen.” Instead, he suspects, the discrepancy is more likely to be explained through minor tweaks to the experiments or the theory.

The alluring mystery of the proton radius reeled Downie in. During conversations in the lab with some fellow physicists, she learned of an upcoming experiment that could help settle the issue. The experiment’s founders were looking for collaborators, and Downie jumped at the chance. The Muon Proton Scattering Experiment, or MUSE, to take place at the Paul Scherrer Institute beginning in 2018, will scatter both electrons and muons off of protons and compare the results. It offers a way to test whether the two particles behave differently, says Downie, who is now a spokesperson for MUSE.

A host of other experiments are in progress or planning stages. Scientists with the Proton Radius Experiment, or PRad, located at Jefferson Lab in Newport News, Va., hope to improve on Bernauer and colleagues’ electron-scattering measurements. PRad researchers are analyzing their data and should have a new number for the proton radius soon.

But for now, the proton’s identity crisis, at least regarding its size, remains. That poses problems for ultrasensitive tests of one of physicists’ most essential theories. Quantum electrodynamics, or QED, the theory that unites quantum mechanics and Albert Einstein’s special theory of relativity, describes the physics of electromagnetism on small scales. Using this theory, scientists can calculate the properties of quantum systems, such as hydrogen atoms, in exquisite detail — and so far the predictions match reality. But such calculations require some input — including the proton’s radius. Therefore, to subject the theory to even more stringent tests, gauging the proton’s size is a must-do task.

Spin doctors
Even if scientists eventually sort out the proton’s size snags, there’s much left to understand. Dig deep into the proton’s guts, and the seemingly simple particle becomes a kaleidoscope of complexity. Rattling around inside each proton is a trio of particles called quarks: one negatively charged “down” quark and two positively charged “up” quarks. Neutrons, on the flip side, comprise two down quarks and one up quark.

Yet even the quark-trio picture is too simplistic. In addition to the three quarks that are always present, a chaotic swarm of transient particles churns within the proton. Evanescent throngs of additional quarks and their antimatter partners, antiquarks, continually swirl into existence, then annihilate each other. Gluons, the particle “glue” that holds the proton together, careen between particles. Gluons are the messengers of the strong nuclear force, an interaction that causes quarks to fervently attract one another.

As a result of this chaos, the properties of protons — and neutrons as well — are difficult to get a handle on. One property, spin, has taken decades of careful investigation, and it’s still not sorted out. Quantum particles almost seem to be whirling at blistering speed, like the Earth rotating about its axis. This spin produces angular momentum — a quality of a rotating object that, for example, keeps a top revolving until friction slows it. The spin also makes protons behave like tiny magnets, because a rotating electric charge produces a magnetic field. This property is the key to the medical imaging procedure called magnetic resonance imaging, or MRI.

But, like nearly everything quantum, there’s some weirdness mixed in: There’s no actual spinning going on. Because fundamental particles like quarks don’t have a finite physical size — as far as scientists know — they can’t twirl. Despite the lack of spinning, the particles still behave like they have a spin, which can take on only certain values: integer multiples of 1/2.

Quarks have a spin of 1/2, and gluons a spin of 1. These spins combine to help yield the proton’s total spin. In addition, just as the Earth is both spinning about its own axis and orbiting the sun, quarks and gluons may also circle about the proton’s center, producing additional angular momentum that can contribute to the proton’s total spin.

Somehow, the spin and orbital motion of quarks and gluons within the proton combine to produce its spin of 1/2. Originally, physicists expected that the explanation would be simple. The only particles that mattered, they thought, were the proton’s three main quarks, each with a spin of 1/2. If two of those spins were oriented in opposite directions, they could cancel one another out to produce a total spin of 1/2. But experiments beginning in the 1980s showed that “this picture was very far from true,” says theoretical high-energy physicist Juan Rojo of Vrije University Amsterdam. Surprisingly, only a small fraction of the spin seemed to be coming from the quarks, befuddling scientists with what became known as the “spin crisis” (SN: 9/6/97, p. 158). Neutron spin was likewise enigmatic.

Scientists’ next hunch was that gluons contribute to the proton’s spin. “Verifying this hypothesis was very difficult,” Rojo says. It required experimental studies at the Relativistic Heavy Ion Collider, RHIC, a particle accelerator at Brookhaven National Laboratory in Upton, N.Y.

In these experiments, scientists collided protons that were polarized: The two protons’ spins were either aligned or pointed in opposite directions. Researchers counted the products of those collisions and compared the results for aligned and opposing spins. The results revealed how much of the spin comes from gluons. According to an analysis by Rojo and colleagues, published in Nuclear Physics B in 2014, gluons make up about 35 percent of the proton’s spin. Since the quarks make up about 25 percent, that leaves another 40 percent still unaccounted for.
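In the shorthand physicists use, this spin budget is a sum of pieces (a decomposition often attributed to Jaffe and Manohar; the percentages are the rough experimental values quoted above):

```latex
\frac{1}{2}
\;=\; \underbrace{\tfrac{1}{2}\,\Delta\Sigma}_{\text{quark spin},\ \sim 25\%}
\;+\; \underbrace{\Delta G}_{\text{gluon spin},\ \sim 35\%}
\;+\; \underbrace{L_q + L_g}_{\text{orbital motion},\ \sim 40\%\ \text{unmeasured}}
```

The orbital terms, $L_q$ and $L_g$, are the pieces no experiment has yet pinned down.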

“We have absolutely no idea how the entire spin is made up,” says nuclear physicist Elke-Caroline Aschenauer of Brookhaven. “We maybe have understood a small fraction of it.” That’s because each quark or gluon carries a certain fraction of the proton’s energy, and the lowest energy quarks and gluons cannot be spotted at RHIC. A proposed collider, called the Electron-Ion Collider (location to be determined), could help scientists investigate the neglected territory.

The Electron-Ion Collider could also allow scientists to map the still-unmeasured orbital motion of quarks and gluons, which may contribute to the proton’s spin as well.

An unruly force
Experimental physicists get little help from theoretical physics when attempting to unravel the proton’s spin and its other perplexities. “The proton is not something you can calculate from first principles,” Aschenauer says. Quantum chromodynamics, or QCD — the theory of the quark-corralling strong force transmitted by gluons — is an unruly beast. It is so complex that scientists can’t directly solve the theory’s equations.

The difficulty lies with the behavior of the strong force. As long as quarks and their companions stick relatively close, they are happy and can mill about the proton at will. But absence makes the heart grow fonder: The farther apart the quarks get, the more insistently the strong force pulls them back together, containing them within the proton. This behavior explains why no one has found a single quark in isolation. It also makes the proton’s properties especially difficult to calculate. Without accurate theoretical calculations, scientists can’t predict what the proton’s radius should be, or how the spin should be divvied up.

To simplify the math of the proton, physicists use a technique called lattice QCD, in which they imagine that the world is made of a grid of points in space and time (SN: 8/7/04, p. 90). A quark can sit at one point or another in the grid, but not in the spaces in between. Time, likewise, proceeds in jumps. In such a situation, QCD becomes more manageable, though calculations still require powerful supercomputers.

Lattice QCD calculations of the proton’s spin are making progress, but there’s still plenty of uncertainty. In 2015, theoretical particle and nuclear physicist Keh-Fei Liu and colleagues calculated the spin contributions from the gluons, the quarks and the quarks’ angular momentum, reporting the results in Physical Review D. By their calculation, about half of the spin comes from the quarks’ motion within the proton, about a quarter from the quarks’ spin, with the last quarter or so from the gluons. The numbers don’t exactly match the experimental measurements, but that’s understandable — the lattice QCD numbers are still fuzzy. The calculation relies on various approximations, so it “is not cast in stone,” says Liu, of the University of Kentucky in Lexington.

Death of a proton
Although protons seem to live forever, scientists have long questioned that immortality. Some popular theories predict that protons decay, disintegrating into other particles over long timescales. Yet despite extensive searches, no hint of this demise has materialized.

A class of ideas known as grand unified theories predict that protons eventually succumb. These theories unite three of the forces of nature, creating a single framework that could explain electromagnetism, the strong nuclear force and the weak nuclear force, which is responsible for certain types of radioactive decay. (Nature’s fourth force, gravity, is not yet incorporated into these models.) Under such unified theories, the three forces reach equal strengths at extremely high energies. Such energetic conditions were present in the early universe — well before protons formed — just a trillionth of a trillionth of a trillionth of a second after the Big Bang. As the cosmos cooled, those forces would have separated into three different facets that scientists now observe.

“We have a lot of circumstantial evidence that something like unification must be happening,” says theoretical high-energy physicist Kaladi Babu of Oklahoma State University in Stillwater. Beyond the appeal of uniting the forces, grand unified theories could explain some curious coincidences of physics, such as the fact that the proton’s electric charge precisely balances the electron’s charge. Another bonus is that the particles in grand unified theories fill out a family tree, with quarks becoming the kin of electrons, for example.

Under these theories, a decaying proton would disintegrate into other particles, such as a positron (the antimatter version of an electron) and a particle called a pion, composed of a quark and an antiquark, which itself eventually decays. If such a grand unified theory is correct and protons do decay, the process must be extremely rare — protons must live a very long time, on average, before they break down. If most protons decayed rapidly, atoms wouldn’t stick around long either, and the matter that makes up stars, planets — even human bodies — would be falling apart left and right.

Protons have existed for 13.8 billion years, since just after the Big Bang. So they must live exceedingly long lives, on average. But the particles could perish at even longer timescales. If they do, scientists should be able to monitor many particles at once to see a few protons bite the dust ahead of the curve (SN: 12/15/79, p. 405). But searches for decaying protons have so far come up empty.

Still, the search continues. To hunt for decaying protons, scientists go deep underground, for example, to a mine in Hida, Japan. There, at the Super-Kamiokande experiment (SN: 2/18/17, p. 24), they monitor a giant tank of water — 50,000 metric tons’ worth — waiting for a single proton to wink out of existence. After watching that water tank for nearly two decades, the scientists reported in the Jan. 1 Physical Review D that protons must live longer than 1.6 × 10³⁴ years on average, assuming they decay predominantly into a positron and a pion.
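The arithmetic behind such searches is simple enough to sketch. Assuming the full 50,000-metric-ton tank is watched with perfect efficiency (real analyses use a smaller fiducial volume and correct for detection efficiency), the numbers work out roughly like this:

```python
# Back-of-the-envelope: how many proton decays would a Super-K-sized tank
# expect per year if the proton lifetime sat exactly at the reported limit?

AVOGADRO = 6.022e23

water_grams = 50_000 * 1e6            # 50,000 metric tons, in grams
moles_h2o = water_grams / 18.0        # molar mass of water is about 18 g/mol
protons = moles_h2o * AVOGADRO * 10   # each H2O molecule contains 10 protons

lifetime_years = 1.6e34               # Super-K's lower limit on the mean lifetime
decays_per_year = protons / lifetime_years

print(f"protons watched: {protons:.2e}")
print(f"expected decays per year at the limit: {decays_per_year:.2f}")
```

At the limiting lifetime, the whole tank would yield on the order of one decay per year — which is why two decades of nothing pushes the limit so high.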

Experimental limits on the proton lifetime “are sort of painting the theorists into a corner,” says Ed Kearns of Boston University, who searches for proton decay with Super-K. If a new theory predicts a proton lifetime shorter than what Super-K has measured, it’s wrong. Physicists must go back to the drawing board until they come up with a theory that agrees with Super-K’s proton-decay drought.

Many grand unified theories that remain standing in the wake of Super-K’s measurements incorporate supersymmetry, the idea that each known particle has another, more massive partner. In such theories, those new particles are additional pieces in the puzzle, fitting into an even larger family tree of interconnected particles. But theories that rely on supersymmetry may be in trouble. “We would have preferred to see supersymmetry at the Large Hadron Collider by now,” Babu says, referring to the particle accelerator located at the European particle physics lab, CERN, in Geneva, which has consistently come up empty in supersymmetry searches since it turned on in 2009 (SN: 10/1/16, p. 12).

But supersymmetric particles could simply be too massive for the LHC to find. And some grand unified theories that don’t require supersymmetry still remain viable. Versions of these theories predict proton lifetimes within reach of an upcoming generation of experiments. Scientists plan to follow up Super-K with Hyper-K, with an even bigger tank of water. And DUNE, the Deep Underground Neutrino Experiment, planned for installation in a former gold mine in Lead, S.D., will use liquid argon to detect protons decaying into particles that the water detectors might miss.

If protons do decay, the universe will become frail in its old age. According to Super-K, sometime well after its 10³⁴th birthday, the cosmos will become a barren sea of light. Stars, planets and life will disappear. If seemingly dependable protons give in, it could spell the death of the universe as we know it.

Although protons may eventually become extinct, proton research isn’t going out of style anytime soon. Even if scientists resolve the dilemmas of radius, spin and lifetime, more questions will pile up — it’s part of the labyrinthine task of studying quantum particles that multiply in complexity the closer scientists look. These deeper studies are worthwhile, says Downie. The inscrutable proton is “the most fundamental building block of everything, and until we understand that, we can’t say we understand anything else.”

Top 10 science anniversaries of 2017

Every year science offers a diverse menu of anniversaries to celebrate. Births (or deaths) of famous scientists, landmark discoveries or scientific papers — significant events of all sorts qualify for celebratory consideration, as long as the number of years gone by is some worthy number, like 25, 50, 75 or 100. Or simple multiples thereof with polysyllabic names.

2017 has more than enough such anniversaries for a Top 10 list, so some worthwhile events don’t even make the cut, such as the births of Stephen Hawking (1942) and Arthur C. Clarke (1917). The sesquicentennial of Michael Faraday’s death (1867) almost made the list, but was bumped at the last minute by a book. Namely:

  1. On Growth and Form, centennial (1917)
    A true magnum opus, by the Scottish biologist D’Arcy Wentworth Thompson, On Growth and Form has inspired many biologists with its mathematical analysis of physical and structural forces underlying the diversity of shapes and forms in the biological world. Nobel laureate biologist Sir Peter Medawar praised Thompson’s book as “beyond comparison the finest work of literature in all the annals of science that have been recorded in the English tongue.”
  2. Birth of Abraham de Moivre, semiseptcentennial (1667).
    Born in France on May 26, 1667, de Moivre moved as a young man to London where he did his best work, earning election to the Royal Society. Despite exceptional mathematical skill, though, he attained no academic position and earned a meager living as a tutor. He is most famous for his book The Doctrine of Chances, which was in essence an 18th century version of Gambling for Dummies. It contained major advances in probability theory and in later editions introduced the concept of the famous bell curve. Isaac Newton was impressed; the legend goes that when anyone asked him about probability, Newton said to go talk to de Moivre.
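In modern notation, the bell curve de Moivre arrived at is the normal approximation to the binomial distribution, now known as the de Moivre–Laplace theorem: for a large number of trials $n$ with success probability $p$, the chance of $k$ successes is approximately

```latex
\binom{n}{k}\, p^{k} (1-p)^{\,n-k}
\;\approx\;
\frac{1}{\sqrt{2\pi\, n p (1-p)}}
\exp\!\left( -\,\frac{(k - np)^{2}}{2\, n p (1-p)} \right)
```

the familiar bell shape, centered at $np$.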
  3. Exoplanets, quadranscentennial (1992)
    It seems like exoplanets have been around almost forever (and probably actually were), but the first confirmed by Earthbound astronomers were reported just a quarter century ago. Three planets showed up orbiting not an ordinary star, but a pulsar, a rapidly spinning neutron star left behind by a supernova.
    Astrophysicists Aleksander Wolszczan and Dale Frail found a sign of the planets, first detected with the Arecibo radio telescope, in irregularities in the radio pulses from the millisecond pulsar PSR1257+12. Some luck was involved. In 1990, the Arecibo telescope was being repaired and couldn’t pivot to point at a specific target; instead it constantly watched just one region of the sky. PSR1257+12 just happened to float by.
  4. Birth of Marie Curie, sesquicentennial (1867)
    No doubt the most famous Polish-born scientist since Copernicus, Curie was born in Warsaw on November 7, 1867, as Maria Sklodowska. Challenged by poverty, family tragedies and poor health, she nevertheless excelled as a high school student. But she then worked as a governess, pursuing what science education she could on the side, until her married sister invited her to Paris. There she completed her physics education with honors and met and married another young physicist, Pierre Curie.

Together they tackled the mystery of the newly discovered radioactivity, winning the physics Nobel in 1903 along with radioactivity’s discoverer, Henri Becquerel. Marie continued the work after her husband’s tragic death in 1906; she became the first person to win a second Nobel, awarded in chemistry in 1911 for her discovery of the new radioactive elements polonium and radium.

  5. Laws of Robotics, semisesquicentennial (1942)
    One of science fiction’s greatest contributions to modern technological philosophy was Isaac Asimov’s Laws of Robotics, which first appeared in a short story in the March 1942 issue of Astounding Science Fiction. Later, those laws formed the motif of his many robot novels and appeared in his famous Foundation Trilogy (and subsequent sequels and prequels). They were:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Much later Asimov added a “zeroth law,” requiring robots to protect all of humankind even if that meant violating the other three laws. Artificial intelligence researchers all know about Asimov’s laws, but somehow have not managed to enforce them on social media. Incidentally, this year is also the quadranscentennial of Asimov’s death in 1992.

  6. First sustained nuclear fission chain reaction, semisesquicentennial (1942)
    Enrico Fermi, the Italian Nobel laureate, escaped fascist Italy to come to the United States shortly after nuclear fission’s discovery in Germany. Fermi directed construction of the “atomic pile,” or nuclear reactor, on a squash court under the stands of the University of Chicago’s football stadium. Fermi and his collaborators showed that neutrons emitted from fissioning uranium nuclei could induce more fission, creating a chain reaction capable of releasing enormous amounts of energy. Which it later did.
  7. Discovery of pulsars, semicentennial (1967)
    Science’s awareness of the existence of pulsars turns 50 this year, thanks to the diligence of Irish astrophysicist Jocelyn Bell Burnell. She spent many late-night hours examining the data recordings from the radio telescope she helped build, which first spotted a signal from a pulsar. She recognized that the signal was something special even though others dismissed it as a glitch in the apparatus. But she was a graduate student, so the Nobel Prize went to her supervisor instead.
  8. Einstein’s theory of lasers, centennial (1917)
    Albert Einstein did not actually invent the laser, but he developed the mathematical understanding that made lasers possible. By 1917, physicists knew that quantum physics played a part in the working of atoms, but the details were fuzzy. Niels Bohr had shown in 1913 that an atom’s electrons occupy different energy levels, and that falling from a high energy level to a lower one emits radiation.

Einstein worked out the math describing this process when many atoms have electrons in high-energy states and emit radiation. His analysis of matter-radiation interaction indicated that it would be possible to prepare many atoms in the same high-energy state and then stimulate them to emit radiation all at once. Properly done, all the atoms would emit radiation of identical wavelength with the waves in phase. A few decades later other physicists figured out how to build such a device for use as a powerful weapon or to read bar codes at grocery stores.
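Einstein’s 1917 bookkeeping can be written compactly. For atoms with $N_1$ electrons in the lower level and $N_2$ in the upper one, bathed in radiation of energy density $\rho(\nu)$, the upper-level population changes through spontaneous emission, stimulated emission and absorption (assuming levels of equal degeneracy, so $B_{12} = B_{21}$):

```latex
\frac{dN_2}{dt}
\;=\; -\,A_{21} N_2 \;-\; B_{21}\,\rho(\nu)\, N_2 \;+\; B_{12}\,\rho(\nu)\, N_1,
\qquad
\frac{A_{21}}{B_{21}} \;=\; \frac{8\pi h \nu^{3}}{c^{3}}
```

The stimulated-emission term, proportional to $N_2$, is the one lasers exploit: pile up population in the upper level, and emission feeds on itself.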

  9. Qubits, quadranscentennial (1992)
    An even better quantum anniversary than lasers is the presentation to the world of the concept of quantum bits of information. Physicist Ben Schumacher of Kenyon College in Ohio unveiled the idea at a conference in Dallas in 1992 (I was there). A “quantum bit” of information, or qubit, represents the information contained in a quantum particle, which can exist in multiple states at once. A photon, for instance, might be in a superposition of horizontal and vertical polarization. Or an electron’s spin could be up and down at the same time.

Such states differ from classical bits of information in a computer, recorded as either a 0 or 1; a quantum bit is both 0 and 1 at the same time. It becomes one or the other only when observed, much like a flipped coin is neither heads nor tails until somebody catches it, or it lands on the 50 yard line. Schumacher’s idea did not get a lot of attention at first, but it eventually became the foundational idea for quantum information theory, a field now booming with efforts to construct a quantum computer based on the manipulation of qubits.
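The 0-and-1-at-once behavior is easy to mimic with a pair of amplitudes; the toy code below is illustrative, not any real quantum library’s API:

```python
import math
import random

# A toy qubit: two complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. A measurement yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2 -- the state "decides"
# only when observed.

def measure(alpha: complex, beta: complex) -> int:
    p0 = abs(alpha) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition: 0 and 1 each with probability 1/2
alpha = beta = 1 / math.sqrt(2)

random.seed(1)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1

print(counts)  # roughly half and half
```

Repeated measurements split about evenly, just as the coin analogy suggests; any single measurement gives a definite 0 or 1.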

  10. Birth of modern cosmology, centennial (1917)
    It might seem unfair that Einstein gets two Top 10 anniversaries in 2017, but 1917 was a good year for him. Before publishing his laser paper, Einstein tweaked the equations of his brand-new general theory of relativity in order to better explain the universe (details in Part 1). Weirdly, Einstein misjudged the universe, and he later thought the term he added to his equations was a mistake. But it turns out that today’s understanding of the universe’s behavior — expanding at an accelerating rate — seems to require the term that Einstein thought he had added erroneously. But you can’t expect Einstein to have foreseen everything. He probably had no idea that lasers would revolutionize grocery shopping either.

The scales of the ocellated lizard are surprisingly coordinated

A lizard’s intricately patterned skin follows rules like those used by a simple type of computer program.

As the ocellated lizard (Timon lepidus) grows, it transforms from a drab, polka-dotted youngster to an emerald-flecked adult. Its scales first morph from white and brown to green and black. Then, as the animal ages, individual scales flip from black to green, or vice versa.

Biophysicist Michel Milinkovitch of the University of Geneva realized that the scales weren’t changing their colors by chance. “You have chains of green and chains of black, and they form this labyrinthine pattern that very clearly is not random,” he says. That intricate ornamentation, he and colleagues report April 13 in Nature, can be explained by a cellular automaton, a concept developed by mathematicians in the 1940s and ’50s to simulate diverse complex systems.

A cellular automaton is composed of a grid of colored pixels. Using a set of rules, each pixel has a chance of switching its shade, based on the colors of surrounding pixels. By comparing photos of T. lepidus at different ages, the scientists showed that its scales obey such rules.

In the adult lizard, if a black scale is surrounded by other black scales, it is more likely to switch than a black one bounded by green, the researchers found. Eventually, the lizards’ scales settle down into a mostly stable state. Black scales wind up with around three green neighbors, and green scales have around four black ones. The researchers propose that interacting pigment cells could explain the color flips.
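For readers who like to tinker, here is a minimal sketch of such an automaton in Python. The square grid, wrap-around edges and flip probabilities are placeholder choices (the lizard’s scales actually form a hexagonal lattice, and the paper fits its own probabilities from photographs):

```python
import random

SIZE = 20
random.seed(0)

# Start from a random grid of black ("B") and green ("G") scales
grid = [[random.choice("BG") for _ in range(SIZE)] for _ in range(SIZE)]

def same_color_neighbors(g, r, c):
    """Count the 4 adjacent cells sharing this cell's color (edges wrap)."""
    count = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
        if g[nr][nc] == g[r][c]:
            count += 1
    return count

def step(g):
    """One generation: scales with more same-colored neighbors flip more often."""
    new = [row[:] for row in g]
    for r in range(SIZE):
        for c in range(SIZE):
            p_flip = 0.2 * same_color_neighbors(g, r, c)  # illustrative rule
            if random.random() < p_flip:
                new[r][c] = "G" if g[r][c] == "B" else "B"
    return new

for _ in range(50):
    grid = step(grid)
```

Run for enough steps and the grid drifts away from uniform patches toward an interlocking, labyrinth-like mix, the qualitative behavior the researchers observed on the lizard’s skin.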

Computer scientists use cellular automata to simulate the real world, re-creating the turbulent motions of fluids or nerve cell activity in the brain, for example. But the new study is the first time the process has been seen with the naked eye in a real-life animal.

The scales on an ocellated lizard change color as the animal ages (more than three years of growth shown in first clip). Circles highlight four instances of color-flipping scales. Blue circles indicate a scale that switches from green to black, the green circle indicates a black to green transformation, and the light blue circle marks a scale that flip-flops from green to black to green. Researchers used a cellular automaton to simulate the adult lizard’s color-swapping scales (second clip), and re-create the labyrinthine patterns that develop on its skin.

The Zika epidemic began long before anyone noticed

The Zika virus probably arrived in the Western Hemisphere from somewhere in the Pacific more than a year before it was detected, a new genetic analysis of the epidemic shows. Researchers also found that as Zika fanned outward from Brazil, it entered neighboring countries and South Florida multiple times without being noticed.

Although Zika quietly took root in northeastern Brazil in late 2013 or early 2014, many months passed before Brazilian health authorities received reports of unexplained fever and skin rashes. Zika was finally confirmed as the culprit in May 2015.
The World Health Organization did not declare the epidemic a public health emergency until February 2016, after babies of Zika-infected mothers began to be born with severe neurological problems. Zika, which is carried by mosquitoes, infected an estimated 1 million people in Brazil alone in 2015, and is now thought to be transmitted in 84 countries, territories and regions.

Although Zika’s path was documented starting in 2015 through records of human cases, less was known about how the virus spread so silently before detection, or how outbreaks in different parts of Central and South America were connected. Now two groups working independently, reporting online May 24 in Nature, have compared samples from different times and locations to read the history recorded in random mutations of the virus’s 10 genes.

One team, led by scientists in the United Kingdom and Brazil, drove more than 1,200 miles across Brazil — “a Top Gear–style road trip,” one scientist quipped — with a portable device that could produce a complete catalog of the virus’s genes in less than a day. A second team, led by researchers at the Broad Institute of MIT and Harvard, analyzed more than 100 Zika genomes from infected patients and mosquitoes in nine countries and Puerto Rico. Based on where the cases originated, and the estimated rate at which genetic changes appear, the scientists re-created Zika’s evolutionary timeline.
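The timeline reconstruction rests on a standard "molecular clock" idea: count the genetic differences between two samples, then divide by an assumed mutation rate to estimate how long ago they shared an ancestor. A back-of-the-envelope sketch of that arithmetic (the substitution count and rate below are illustrative assumptions, not figures from either study):

```python
def years_since_ancestor(substitutions, genome_length, rate_per_site_per_year):
    """Estimate divergence time between two viral samples.

    substitutions: nucleotide differences observed between the samples
    genome_length: number of sites compared (Zika's genome is ~10,800 bases)
    rate_per_site_per_year: assumed substitution rate

    Differences accumulate along both lineages since the split,
    hence the factor of 2 in the denominator.
    """
    per_site = substitutions / genome_length
    return per_site / (2 * rate_per_site_per_year)

# Illustrative numbers only: ~22 differences across a 10,800-base genome
# at a flavivirus-like rate of 1e-3 substitutions per site per year.
t = years_since_ancestor(22, 10_800, 1e-3)
print(round(t, 2))  # prints 1.02 -- roughly a year of silent circulation
```

Applied across many samples with known collection dates, the same logic lets researchers date the unseen introductions that preceded the first confirmed cases.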

Together, the studies revealed an epidemic that was silently churning long before anyone knew. “We found that in each of the regions we could analyze, Zika virus circulated undetected for many months, up to a year or longer, before the first locally transmitted cases were reported,” says Bronwyn MacInnis, an infectious disease geneticist at the Broad Institute, in Cambridge, Mass. “This means the outbreak in these regions was under way much earlier than previously thought.”

Although the epidemic exploded out of Brazil, the scientists also found hints that the virus may have first taken hold in the Caribbean. “It’s not immediately clear whether Zika stopped off somewhere else in the Americas before it got to northeast Brazil,” said Oliver Pybus, who studies evolution and infectious disease at the University of Oxford in England.
In a third study reported in Nature, researchers from more than two dozen institutions followed a trail of genetic clues to determine when and how Zika made its way to Florida. Those researchers concluded that Zika was introduced multiple times into the Miami area, most likely from the Caribbean, before local mosquitoes picked it up. The number of human cases increased in step with the rise in mosquito populations, said Kristian Andersen, an infectious disease researcher at the Scripps Research Institute in La Jolla, Calif. “Focusing on getting rid of mosquitoes is an effective way of preventing human cases,” he says.
Stealth spread
An analysis of more than 100 Zika genomes revealed that the virus showed up in nine countries 4.5 to 9 months earlier than the first confirmed cases of Zika virus infection. Colors indicate the distribution of groups of closely related strains of the virus.

Previous studies have found traces of the virus’s footprints across the Americas, but none included so many different samples, says Young-Min Lee of Utah State University, who has also studied Zika’s genes. The current studies provide a higher-resolution look at the timing of the epidemic’s spread, he says, but in terms of Zika’s origins and progression from country to country, “overall the big picture is consistent with what we suspected.”

In addition to revealing Zika’s history, genetic studies are also valuable in fighting current and future disease outbreaks. Since diagnostic tests and even vaccine development are based on Zika’s genetics, it’s important to monitor mutations during an outbreak. Researchers developed quick-turnaround genomic analyses for Ebola in recent years, for example, that could aid a faster response during the next outbreak.

In the future, faster analysis of viral threats in the field might improve the odds of stopping the next epidemic, Lee says. It’s possible for a single infected traveler stepping off a plane to spark an epidemic long before doctors notice. “If one introduction [of a virus] can cause an outbreak, you have a very narrow window to try to contain it.”

The opioid epidemic spurs a search for new, safer painkillers

Last year, Joan Peay slipped on her garage steps and smashed her knee on the welcome mat. Peay, 77, is no stranger to pain. The Tennessee retiree has had 17 surgeries in the last 35 years — knee replacements, hip replacements, back surgery. She even survived a 2012 fungal meningitis outbreak that sickened her and hundreds of others, and killed 64. This knee injury, though, “hurt like the dickens.”

When she asked her longtime doctor for something stronger than ibuprofen to manage the pain, he treated her like a criminal, Peay says. His response was frustrating: “He’s known me for nine years, and I’ve never asked him for pain medicine other than what’s needed after surgery,” she says. She received nothing stronger than over-the-counter remedies. A year after the fall, she still lives in constant pain.
Just five years ago, Peay might have been handed a bottle of opioid painkillers for her knee. After all, opioids — including codeine, morphine and oxycodone — are some of the most powerful tools available to stop pain.
But an opioid addiction epidemic spreading across the United States has soured some doctors on the drugs. Many are justifiably concerned that patients will get hooked or share their pain pills with friends and family. And even short-term users risk dangerous side effects: The drugs slow breathing and can cause constipation, nausea and vomiting.

A newfound restraint in prescribing opioids is in many cases warranted, but it’s putting people like Peay in a tough spot: Opioids have become harder to get. Even though the drugs are far from perfect, patients have few other options.
Many drugs that have been heralded as improvements over existing opioids are just old opioids repackaged in new ways, says Nora Volkow, director of the National Institute on Drug Abuse. Companies will formulate a pill that is harder to crush, for instance, or mix in another drug that prevents an opioid pill from working if it’s crushed up and snorted for a quick high. Addicts, however, can still sidestep these safeguards. And the newly packaged drugs have the same fundamental risks as the old ones.

The need for new pain medicines is “urgent,” says Volkow.

Scientists have been searching for effective alternatives for years without success. But a better understanding of the way the brain sends and receives specific chemical messages may finally boost progress.

Scientists are designing new, more targeted molecules that might kill pain as well as today’s opioids do — with fewer side effects. Others are exploring the potential of tweaking existing opioid molecules to skip the negative effects. And some researchers are steering clear of opioids entirely, testing molecules in marijuana to ease chronic pain.

Opioid action
Humans recognized the potential power of opioids long before they understood how to control it. Ancient Sumerians cultivated opium-containing poppy plants more than 5,000 years ago, calling their crop the “joy plant.” Other civilizations followed suit, using the plant to treat aches and pains. But the addictive power of opium-derived morphine wasn’t recognized until the 1800s, and scientists have only recently begun to piece together exactly how opioids get such a strong hold on the brain.

Opioids mimic the body’s natural painkillers — molecules like endorphins. Both endorphins and opioids latch on to proteins called opioid receptors on the surface of nerve cells. When an opioid binds to a receptor in the peripheral nervous system, the nerve cells outside the brain, the receptor changes shape and sets in motion a cellular game of telephone that stops pain messages from reaching the brain.

The danger comes because opioid receptors scattered throughout the body and in crucial parts of the brain can cause far-reaching side effects when drugs latch on. For starters, many opioid receptors are located near the base of the brain — the part that controls breathing and heart rate. When a drug like morphine binds to one of these receptors in the brain stem, breathing and heart rate slow down. At low doses, the drug just makes people feel relaxed. At high doses, though, it can be deadly — most opioid overdose deaths occur when a person stops breathing. And high numbers of opioid receptors in the gut — thanks in part to all the nerve endings there — can trigger constipation and sometimes nausea.
Plus, opioids are highly addictive. These drugs mess with the brain’s reward system, triggering release of dopamine at levels higher than what the brain is used to. Gradually, the opioid receptors in the brain become less sensitive to the drugs, so the body demands higher and higher doses to get the same feel-good benefit. Such tolerance can reset the system so the body’s natural opioids no longer have the same effect either. If a person tries to go without the drugs, withdrawal symptoms like intense sweating and muscle cramps kick in — the body is physically dependent on the drugs.
Addiction is a more complex phenomenon than dependence, involving physical cravings so strong that a person will go to extreme lengths to get the next dose. Long-term users of prescription opioids might be dependent on the drugs, but not necessarily addicted. But dependence and addiction often go together.

Despite their risks, opioids are still widely used because they work so well, particularly for moderate to severe short-term pain.

“No matter how much I say I want to avoid opioids, half of my patients will get some kind of opioid. It’s just unavoidable,” says Christopher Wu, an anesthesiologist at Johns Hopkins Medicine.

In the late 1990s and early 2000s, more doctors began doling out the drugs for long-term pain, too. Aggressive marketing campaigns from Purdue Pharma, the maker of OxyContin, promised that the drug was safe — and doctors listened. Deaths from opioid overdoses nearly quadrupled between 2000 and 2015, with almost half coming from opioids prescribed by a doctor, according to data from the U.S. Centers for Disease Control and Prevention.
Opioid prescriptions have dipped a bit since 2012, thanks in part to stricter prescription laws and prescription registration databases. U.S. doctors wrote about 30 million fewer opioid prescriptions in 2015 than in 2012, data from IMS Health show. But restricting access doesn’t make pain disappear or curb addiction. Some people have turned to more dangerous street alternatives like heroin. And those drugs are sometimes spiked with more potent opioids such as fentanyl (SN: 9/3/16, p. 14) or even carfentanil, a synthetic opioid that’s used to tranquilize elephants. Overdose deaths from fentanyl and heroin have both spiked since 2012, CDC data reveal.

A sharper target
Scientists have been searching for a drug that kills pain as successfully as opioids without the side effects for close to a hundred years, with no luck, says Sam Ananthan, a medicinal chemist at Southern Research in Birmingham, Ala. He is newly optimistic.

“Right now, we have more biological tools, more information regarding the biochemical pathways,” Ananthan says. “Even though prior efforts were not successful, we now have some rational hypotheses.”

Scientists used to think opioid receptors were simple switches: If a molecule latched on, the receptor fired off a specific message. But more recent studies suggest that the same receptor can send multiple missives to different recipients.

The quest for better opioids got a much-needed jolt in 1999, when researchers at Duke University showed that mice lacking a protein called beta-arrestin 2 got more pain relief from morphine than normal mice did. And in a follow-up study, negative effects were less likely. “If we took out beta-arrestin 2, we saw improved pain relief, but less tolerance development,” says Laura Bohn, now a pharmacologist at the Scripps Research Institute in Jupiter, Fla. Bohn and colleagues figured out that mu opioid receptors — the type of opioid receptor targeted by most drugs — send two different streams of messages. One stops pain. The other, which needs beta-arrestin 2, drives many of the negatives of opioids, including the need for more and more drug and the dangerous slowdown of breathing.

Since that work, Bohn’s lab and many others have been trying to create molecules that bind to mu opioid receptors without triggering beta-arrestin 2 activity. The approach, called biased agonism, “has been around some time, but now it’s bearing the fruit,” says Susruta Majumdar, a chemist at Memorial Sloan Kettering Cancer Center in New York City. Scientists have identified dozens of molecules that seem to avoid beta-arrestin 2 in mice. But only a few might make good drugs. One, called PZM21, was described in Nature last year.
Another one has shown promise in humans — a much higher bar. The pharmaceutical company Trevena, headquartered in King of Prussia, Pa., has been working its way through the U.S. Food and Drug Administration’s drug approval process with a molecule called oliceridine. In studies reported in April in San Francisco at the Annual Regional Anesthesiology and Acute Pain Medicine Meeting, oliceridine was as effective as morphine in patients recovering from bunion removal and others who had tummy tuck surgeries. Over the short term, people taking a moderate dose of the drug got pain relief comparable to that of morphine, but reported fewer side effects, such as vomiting and breathing problems.

Oliceridine is an intravenous opioid, not an oral one. That means it would be administered in the short term in hospitals, during and after surgeries. It’s not a replacement for the pills people can go home with, says Jonathan Violin, Trevena’s cofounder. And it’s not perfect: More side effects cropped up at higher doses. But it’s the first opioid using this targeted approach to get this far in human studies. The company hopes to submit an application for FDA approval by the end of 2017, Violin says.

Avoiding the beta-arrestin 2 pathway isn’t the only approach to targeted opioids — just one of the best studied. Ananthan’s lab is taking a different tack. His team showed that mice lacking a different opioid receptor, the delta receptor, tended not to show negative effects in response to the drugs. Now, the researchers are trying to find molecules that can activate mu opioid receptors while blocking delta receptors.

There may also be a way to direct pain-killing messages specifically to the parts of a person’s body that are feeling pain. In one recent study, scientists described a molecule that bound to opioid receptors only when the area around the receptors was more acidic than normal. Inflammation from pain and injury raises acidity, so this molecule could quash pain where necessary, but wouldn’t bind to receptors elsewhere in the body, reducing the likelihood of side effects. Rats in the study, published in the March 3 Science, didn’t find the new molecule as rewarding as fentanyl, so it may be less addictive. And they were less likely to have constipation and slowed breathing.

Drugs face a long uphill climb from even the most promising animal studies to FDA approval for use in humans. Very few make it that far. It’s too soon to tell whether PZM21 and other molecules being studied in mice will ever end up as treatments for patients.

Unwilling to wait, some people in pain are turning to substances that are already available — without a doctor’s order. And scientists are trying to catch up.

Kratom crackdown
In August 2016, the Drug Enforcement Administration announced that it was cracking down on a supplement called kratom. Officials wanted to put the herb in the same regulatory category as heroin and LSD, labeling it a dangerous substance with no medical value. Members of the public vehemently disagreed. More than 23,000 comments poured in from veterans, cancer survivors, factory workers, lawyers and teachers. Almost all of them said the same thing: Kratom freed them from pain.
Made from the leaves of the tropical plant Mitragyna speciosa, kratom is sold in corner convenience stores and through online retailers. Its pain-killing abilities come mainly from two different molecules in the plant’s leaves: mitragynine and the structurally similar 7-hydroxymitragynine. Both have a structure that’s very different from morphine, but they bind to opioid receptors. That technically makes them opioids, even though they don’t look like morphine or oxycodone, Majumdar says. And that’s what concerned the DEA.
But just like some of the new opioids that scientists are developing, kratom’s active ingredients appear — anecdotally, at least — to deliver pain relief with fewer problems and less risk of tolerance. Some chronic opioid users switch to kratom to wean themselves off of pain pills and ease withdrawal symptoms, says Oliver Grundmann, a medicinal chemist at the University of Florida in Gainesville. Other users have never habitually used opioids but are seeking relief from chronic pain or mental health problems, according to a survey he published online May 10 in Drug and Alcohol Dependence. Grundmann hopes the survey results will help guide research into the substance’s efficacy for specific medical concerns.

The safety and efficacy of kratom is still up for debate. There’s a lack of controlled clinical studies about the leaf’s impact on the body, Grundmann says. Plus, the way kratom is regulated — as a supplement — means that people buying it have no guarantee of what they’re actually getting.

While kratom has its fans, its active compounds aren’t very potent, says Majumdar. He thinks he could make a better drug by modifying these molecules.

Majumdar, Sloan Kettering collaborator András Váradi and colleagues tested a structural cousin of 7-hydroxymitragynine: mitragynine pseudoindoxyl. It binds to mu opioid receptors about 200 times as effectively as mitragynine in mice, the researchers reported in August in the Journal of Medicinal Chemistry. Just like Trevena’s oliceridine, the new molecule does not activate beta-arrestin 2. The pseudoindoxyl version also blocks the delta opioid receptor, further impeding nonpain-related activities.

Majumdar hopes a DEA ban on kratom won’t happen; it would severely restrict access, making research much harder to do. For now, there is no ban — but scientists are wary, he says.

Mix it up
Despite the potential for new, better opioids, other researchers are focused on an altogether different set of pain-killing drugs: the cannabinoids (made famous by marijuana, the dried leaves and other parts of the hemp plant, Cannabis sativa).

The active molecules in marijuana don’t have the same fast-acting pain-quenching abilities that opioids do. “If I go into an emergency room with acute pain, give me morphine,” says Yasmin Hurd, a pharmacologist at Mount Sinai in New York City. But with medical marijuana legal in 29 states plus the District of Columbia, the plant is getting more attention as a potential pain reliever, especially for chronic pain (SN: 6/14/14, p. 16).

Doctors in states where marijuana is legal write fewer prescriptions for opioid painkillers, a 2016 study in Health Affairs showed. Those states also had about a 25 percent lower rate of opioid overdose deaths compared with states that didn’t legalize marijuana, according to a 2014 study in JAMA Internal Medicine. When marijuana becomes legally available, some people might choose it instead of opioids.
There might be some merit to that choice. There are plenty of cannabinoid receptors in parts of the brain that process pain messages. But unlike opioid receptors, few exist in the brain stem. That means cannabinoids are far less likely to influence breathing than opioids, says Joseph Cheer, a neurobiologist at the University of Maryland School of Medicine in Baltimore. Fatal overdoses are nearly unheard of.

As with kratom, though, there’s a glut of anecdotal evidence suggesting marijuana’s power to cure everything from pain to anxiety to ulcers — but not many controlled clinical trials to back up the assertions (SN Online: 1/12/17). The knowledge gap is made even wider by the fact that marijuana has wildly different effects depending on how it’s ingested and the relative ratios of certain active molecules in each strain of the plant.

“People think they know how marijuana affects the brain,” Hurd says. In reality, “there’s been very little evidence-based structural scientific studies done with marijuana.”

Aron Lichtman, a pharmacologist at Virginia Commonwealth University in Richmond, agrees. “There’s definitely medicine in that plant — that’s been proven,” he says. “The challenge is that it may not work for everybody and every type of pain.”

Scientists who are serious about figuring out marijuana are breaking it down, looking at the plant’s active molecules — cannabinoids — one by one. Cannabidiol, or CBD, has garnered particular attention. Because of the way it indirectly interacts with cannabinoid receptors, it doesn’t give people the high that’s characteristic of tetrahydrocannabinol, or THC, the mind-altering chemical in marijuana. That makes CBD less rewarding and better suited to longer-term use. The molecule can influence signals sent by a number of other receptors in the brain, many involved in pain and inflammation.

But THC might have merit, too. It’s already used in a couple of FDA-approved drugs to treat nausea and vomiting from chemotherapy. There’s some evidence that those medications might also help relieve pain, though Lichtman calls those studies a “mixed bag.”

Alone, cannabinoids might be fairly weak painkillers. But combined with opioids, he’s shown, they can amplify the pain relief and reduce the opioid dose needed in mice.

Drugs that might amp up the power of the body’s natural cannabinoids are another option. That’s what Ruth Ross of the University of Toronto is studying. A few years ago, her team identified a region on a cannabinoid receptor called CB1 that has an interesting property: Small molecules that bind to it act like volume knobs for the body’s natural cannabinoids, called endocannabinoids. When a molecule of the right shape locks on to CB1, it makes endocannabinoids naturally present in the body more likely to latch on. That boosts pain relief in a targeted way — when endocannabinoids are already being released by the body, such as after injury or stress.

“You magnify the already existing effects of the compound,” Ross says. Her team has identified and patented several of these volume-knob molecules, and is working on improving them.

“For various reasons they wouldn’t be good as drugs,” she says. They have too many effects on the body beyond their intended one. But she’s making slight tweaks to their chemical structures to try to reduce those off-target effects, with the hope that one day the molecules could be studied in patients.

Safer opioids or alternative painkillers would help people deal with their pain without risking addiction or death. Peay has gotten to know people — as a member of social media groups for those living with chronic pain — who are experiencing the crushing results of poorly managed pain. People lose their jobs, she says, or move to Colorado just to get access to legal marijuana. As for her? “I still have my sense of humor, and that helps me get through all the pain.” But she’s holding out for something better.

Einstein’s light-bending by single far-off star detected

For the first time, astronomers have seen a star outside of the solar system bend the light from another star. The measurement, reported June 7 in Austin, Texas, at a meeting of the American Astronomical Society, vindicates both Einstein’s most famous theory and physicists’ understanding of the inner lives of stellar corpses.

Astronomers using the Hubble Space Telescope watched as a white dwarf passed in front of a more distant star. That star seemed to move in a small loop, its apparent position deflected by the white dwarf’s gravity.
More than a century ago, Albert Einstein predicted that the way spacetime bends around a massive object — the sun, say — should shift the apparent position of stars that appear behind that object. The measurement of this effect during a solar eclipse in 1919 confirmed Einstein’s general theory of relativity: Mass warps spacetime and bends the path of light rays (SN: 10/17/15, p. 16).

The New York Times hailed it as “one of the greatest — perhaps the greatest — of achievements in the history of human thought.” But even Einstein doubted the light-bending effect could ever be detected using stars beyond the sun.

Now, in a study published in the June 9 issue of Science, Kailash Sahu of the Space Telescope Science Institute in Baltimore and his colleagues have shown that it can.

“This is an elegant outcome,” says Terry Oswalt at Embry-Riddle Aeronautical University in Daytona Beach, Fla., who was not involved in the new work. “Einstein would be very proud.”
While the stars literally aligned to make the measurement possible, this was no lucky accident. Sahu and colleagues scoured a catalog of 5,000 stellar motions to find a pair of stars likely to pass close enough on the sky that Hubble could sense the shift.

There were a few possible candidates, and one of them, called Stein 2051 B, was already a mysterious character.

Located about 18 light-years from Earth, Stein 2051 B is a white dwarf, a common end-of-life state for a sunlike star. When low-mass stars run out of fuel, they puff up into red giants while fusing helium into carbon and oxygen. Eventually, they slough off outer layers of gas, leaving a carbon-oxygen core — the white dwarf — behind. About 97 percent of the stars in the Milky Way, including the sun, are or someday will be white dwarfs.

White dwarfs are extremely dense. They are prevented from collapsing into a black hole only by the pressure their electrons produce in trying not to be in the same quantum state as each other. This bizarre situation sets strict limits on their sizes and masses: For a given radius, a white dwarf can be only so massive, and only so large for a given mass.
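The counterintuitive consequence of electron degeneracy pressure is that heavier white dwarfs are smaller: in the non-relativistic limit, radius falls off roughly as mass to the minus one-third power. A sketch of that inverse scaling (the normalization — one Earth radius at one solar mass — is a rough textbook value assumed for illustration):

```python
def white_dwarf_radius(mass_solar):
    """Approximate white dwarf radius, in Earth radii.

    Uses the non-relativistic degeneracy scaling R ~ M^(-1/3),
    normalized so a 1-solar-mass white dwarf is roughly Earth-sized.
    Rough illustration only: realistic models add relativistic
    corrections that drive R toward zero at the Chandrasekhar mass.
    """
    return 1.0 * mass_solar ** (-1.0 / 3.0)

# Counterintuitively, the lighter white dwarf is the bigger one.
assert white_dwarf_radius(0.675) > white_dwarf_radius(1.0)
```

This tight coupling between mass and radius is why a single good mass measurement, combined with a known size, makes such a sharp test of the theory.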

This mass-radius relation was laid out in Nobel Prize-winning work by Subrahmanyan Chandrasekhar in the 1930s, but it has been difficult to prove. The only white dwarfs weighed so far share their orbits with other stars whose mutual motions help astronomers calculate their masses. But some astronomers worry that those companions could have added mass to the white dwarfs, throwing off this precise relationship.

Stein 2051 B also has a companion, but it is so far away that the two stars almost certainly evolved independently. That distance also means it would take hundreds of years to precisely measure the white dwarf’s mass. The best efforts to find a rough mass so far created a conundrum: Stein 2051 B appeared to be much lighter than expected, so light that only an exotic iron core could explain it.

Measuring the shift of a background star provides a way to measure the white dwarf’s mass directly. The more massive the foreground star — in this case, the white dwarf — the greater the deflection of light from the background star.

“This is the most direct method of measuring the mass,” Sahu says. “It’s almost like putting somebody on a scale and reading off their weight.”
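The scale in question is Einstein’s deflection formula: a light ray passing a mass M at closest approach b is bent by an angle of 4GM/(c²b), so a measured deflection and separation give the mass directly. A sketch of that inversion, checked against the classic 1919 eclipse figure (the numbers here are textbook values, not the study’s actual astrometry):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def mass_from_deflection(theta_rad, impact_parameter_m):
    """Invert Einstein's light-bending formula theta = 4GM / (c^2 b)."""
    return theta_rad * C**2 * impact_parameter_m / (4 * G)

# Sanity check: light grazing the sun (b = one solar radius) is bent
# by about 1.75 arcseconds -- the 1919 eclipse measurement.
theta = 1.75 / 3600 * math.pi / 180   # arcseconds -> radians
m = mass_from_deflection(theta, 6.957e8)
print(m / M_SUN)  # close to 1 solar mass
```

The same inversion applied to Hubble’s far tinier measured shift, at the white dwarf’s known distance, yields Stein 2051 B’s mass.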

The white dwarf was scheduled to pass near a background star on March 5, 2014. Sahu’s team made eight observations of the two stars’ positions between October 2013 and October 2015.

The team found that the background star appeared to move in a small ellipse as the white dwarf approached and then moved away from it, exactly as predicted by Einstein’s equations. That suggests the white dwarf’s mass is 0.675 times the mass of the sun — well within the normal range for its size.

This first measurement won’t be the last, Oswalt says. Several new star surveys are coming online in the next few years that will track the motions of billions of stars at once. That means that even though light-bending alignments are rare, astronomers should catch several more soon.

Live antibiotics use bacteria to kill bacteria

The woman in her 70s was in trouble. What started as a broken leg led to an infection in her hip that hung on for two years and several hospital stays. At a Nevada hospital, doctors gave the woman seven different antibiotics, one after the other. The drugs did little to help her. Lab results showed that none of the 14 antibiotics available at the hospital could fight the infection, caused by the bacterium Klebsiella pneumoniae.

Epidemiologist Lei Chen of the Washoe County Health District sent a bacterial sample to the U.S. Centers for Disease Control and Prevention. The bacteria, CDC scientists found, produced a nasty enzyme called New Delhi metallo-beta-lactamase, known for disabling many antibiotics. The enzyme was first seen in a patient from India, which is where the Nevada woman broke her leg and received treatment before returning to the United States.
The enzyme is worrisome because it arms bacteria against carbapenems, a group of last-resort antibiotics, says Alexander Kallen, a CDC medical epidemiologist based in Atlanta, who calls the drugs “our biggest guns for our sickest patients.”

The CDC’s final report revealed startling news: The bacteria raging in the woman’s body were resistant to all 26 antibiotics available in the United States. She died from septic shock; the infection shut down her organs.

Kallen estimates that there have been fewer than 10 cases of completely resistant bacterial infections in the United States. Such absolute resistance to all available drugs, though incredibly rare, was a “nightmare scenario,” says Daniel Kadouri, a microbiologist at Rutgers School of Dental Medicine in Newark, N.J.

Antibiotic-resistant bacteria infect more than 2 million people in the United States every year, and at least 23,000 die, according to 2013 data, the most recent available from the CDC.

It’s time to flip the nightmare scenario and send a killer after the killer bacteria, say a handful of scientists with a new approach for fighting infection. The strategy, referred to as a “living antibiotic,” would pit one group of bacteria — given as a drug and dubbed “the predators” — against the bacteria that are wreaking havoc among humans.
The approach sounds extreme, but it might be necessary. Antimicrobial resistance “is something that we really, really have to take seriously,” says Elizabeth Tayler, senior technical officer for antimicrobial resistance at the World Health Organization in Geneva. “The ability of future generations to manage infection is at risk. It’s a global problem.”

The number of resistant strains has exploded, in part because doctors prescribe antibiotics too often. At least 30 percent of antibiotic prescriptions in the United States are not necessary, according to the CDC. When more people are exposed to more antibiotics, resistance is likely to build faster. And new alternatives are scarce, Kallen says, as the pace of developing novel antibiotics has slowed.

In search of new ideas, DARPA, a Department of Defense agency that invests in breakthrough technologies, is supporting work on predatory bacteria by Kadouri, as well as Robert Mitchell of Ulsan National Institute of Science and Technology in South Korea, Liz Sockett of the University of Nottingham in England and Edouard Jurkevitch of the Hebrew University of Jerusalem. This work, the agency says, represents “a significant departure from conventional antibiotic therapies.”

The approach is so unusual, people have called Kadouri and his lab crazy. “Probably, we are,” he jokes.

A movie-worthy killer
The notion of predatory bacteria sounds a bit scary, especially when Kadouri likens the most thoroughly studied of the predators, Bdellovibrio bacteriovorus, to the vicious space creatures in the Alien movies.

B. bacteriovorus, called gram-negative because of how it is stained for microscope viewing, dines on other gram-negative bacteria. All gram-negative bacteria have an inner and an outer membrane. The predators don’t go after the other main type of bacteria, gram-positives, which have just one membrane.
When it encounters a gram-negative bacterium, the predator appears to latch on with grappling hook–like appendages. Then, like a classic cat burglar cutting a hole in glass, B. bacteriovorus forces its way through the outer membrane and seems to seal the hole behind it. Once within the space between the outer and inner membranes, the predator secretes enzymes — as damaging as the movie aliens’ acid spit — that chew its prey’s nutrients and DNA into bite-sized pieces.

B. bacteriovorus then uses the broken-down genetic building blocks to make its own DNA and begin replicating. The invader and its progeny eventually emerge from the shell of the prey in a way reminiscent of a cinematic chest-bursting scene.

“It’s a very efficient killing machine,” Kadouri says. That’s good news because many of the most dangerous pathogens that are resistant to antibiotics are gram-negative (SN: 6/10/17, p. 8), according to a list released by the WHO in February.

It’s the predator’s hunger for the bad-guy bacteria, the ones that current drugs have become useless against, that Kadouri and other researchers hope to harness.

Pitting predatory against pathogenic bacteria sounds risky. But, from what researchers can tell, these killer bacteria appear safe. “We know that [B. bacteriovorus] doesn’t target mammalian cells,” Kadouri says.

Saving the see-through fish
To find out whether enlisting predatory bacteria might be crazy good and not just plain crazy, Kadouri’s lab group tested B. bacteriovorus’ killing ability against an array of bacteria in lab dishes in 2010. The microbe significantly reduced levels of 68 of the 83 bacteria tested.

Since then, Kadouri and others have looked at the predator’s ability to devour dangerous pathogens in animals. In rats and chickens, B. bacteriovorus reduced the number of bad bacteria. But the animals were always given nonlethal doses of pathogens, leaving open the question of whether the predator could save the animals’ lives.

Sockett needed to see evidence of survival improvement. “If we’re going to have Bdellovibrio as a medicine, we have to cure something,” she says. “We can count changes in numbers of bacteria, but if that doesn’t change the outcome of the infection — change the number of [animals] that die — it’s not worth it.”

So she teamed up with cell biologist Serge Mostowy of Imperial College London for a study in zebrafish. The aim was to see how many animals predatory bacteria could save from a deadly infection. The team also tested how the host’s immune system interacted with the predators.

The researchers gave zebrafish larvae fatal doses of an antibiotic-resistant strain of Shigella flexneri, which causes dysentery in humans. Before infecting the fish, the researchers divided them into four groups. Two groups had their immune systems altered to produce fewer macrophages, the white blood cells that attack pathogens. Immune systems in the other two groups remained intact. B. bacteriovorus was injected into an unchanged group and a macrophage-deficient group, while two groups received no treatment.

All of the untreated fish with fewer macrophages died within 72 hours of receiving S. flexneri, the researchers reported in December in Current Biology. Of the fish with normal immune systems, 65 percent of those that received the predator treatment survived, compared with 35 percent of the untreated fish. Even in the fish with impaired immune systems, the predators saved about a quarter of the lot.
“This is the first time that Bdellovibrio has ever been used as an injected therapy in live organisms,” Sockett says. “And the important thing is the injection improved the survival of the zebrafish.”

The study also pulled off another first. In previous work, researchers had been unable to see predation as it happened within an animal. Because zebrafish larvae are transparent, study coauthor Alexandra Willis captured images of B. bacteriovorus gobbling up S. flexneri.

“We were literally having to run to the microscope because the process was just happening so fast,” says Willis, a graduate student in Mostowy’s lab. After the predator invades, its rod-shaped prey become round. Willis saw Bdellovibrio “rounding” its prey within 15 minutes. From start to finish, the predatory cycle took about three to four hours.

The predator’s speed may be what gave it the edge over the infection, Mostowy says. B. bacteriovorus attacks fast, chipping away at the pathogens until the infection is reduced to a level that the immune system can handle. “Otherwise there are too many bacteria and the immune system would be overwhelmed,” he says. “We’re putting a shocking amount of Shigella, 50,000 bacteria, into the fish.”

Within 48 hours, S. flexneri levels dropped 98 percent in the surviving fish, from 50,000 to 1,000.

The immune cells also cleared nearly all the B. bacteriovorus predators from the fish. The predators had enough time to attack the infection before being targeted by the immune system themselves, creating an ideal treatment window. Even if the host’s immune system hadn’t attacked the predators, once the bacteria are gone, Willis says, the predators are out of food. Unable to replicate, they eventually die off.

A clean sweep
Predatory bacteria are efficient in more ways than one. They’re not just good killers — they eliminate the evidence too.

Typical antibiotic treatments don’t target a bacterium’s DNA, so they are likely to leave pieces of the bacterial body behind. That’s like killing a few bandits, but leaving their weapons so the next invaders can easily arm themselves for a new attack. This could be one way that multidrug resistance evolves, Mitchell says. For example, penicillin will kill all bacteria that aren’t resistant to the drug. The surviving bacteria can swim through the aftermath of the antibiotic attack and grab genes from their fallen comrades to incorporate into their own genomes. The destroyed bacteria may have had a resistance gene to a different antibiotic, say, vancomycin. Now you have bacteria that are resistant to both penicillin and vancomycin. Not good.

Predatory bacteria, on the other hand, “decimate the genome” of their prey, Mitchell says. They don’t just kill the bandit, they melt down all the DNA weapons so no pathogens can use them. In one experiment that has yet to be published, B. bacteriovorus almost completely ate up the genetic material of a bacterial colony within two hours — showing itself as a fast-acting predator that could prevent bacterial genes from falling into the wrong hands.

On top of that, even if pathogenic bacteria mutate, a common way they pick up new forms of resistance, they aren’t protected from predation. Resistance to predation hasn’t been reported in lab experiments since B. bacteriovorus was discovered in 1962, Mitchell says. Researchers don’t think there’s a single pathway or gene in a prey bacterium that the predator targets. Instead, B. bacteriovorus seem to use sheer force to break in. “It’s kind of like cracking an egg with a hammer,” Kadouri says. That’s not exactly something bacteria can mutate to protect themselves against.

Some bacteria manage to band together and cover themselves with a kind of built-in biological shield, which offers protection against antibiotics. But for predatory bacteria, the shield is more of a welcome mat.

Going after the gram-positives
When bacteria cluster together on a surface, whether in your body, on a countertop or on a medical instrument, they can form a biofilm. The thick, slimy shield helps microbes withstand antibiotic attacks because the drugs have difficulty penetrating the slime. Antibiotics usually act on fast-growing bacteria, but within a biofilm, bacteria are sluggish and dormant, making antibiotics less effective, Kadouri says.
But to predatory bacteria, a biofilm is like Jell-O — a tasty snack that’s easy to swallow. Once inside, B. bacteriovorus spreads like wildfire because its prey are now huddled together as confined targets. “It’s like putting zebras and a lion in a restaurant and closing the door and seeing what happens,” Kadouri says. For the zebras, “it can’t end well.”

Kadouri’s lab has shown repeatedly that predatory bacteria effectively eat away biofilms that protect gram-negative bacteria, and are in fact more efficient at killing bacteria within those biofilms.

Gram-positive bacteria cloak themselves in biofilms too. In 2014 in Scientific Reports, Mitchell and his team reported finding a way to use Bdellovibrio to weaken gram-positive bacteria, turning their protective shield against them and perhaps helping antibiotics do their job.

The discovery comes from studies of one naturally occurring B. bacteriovorus mutant with extra-scary spit. The mutant isn’t predatory. Instead of eating a prey’s DNA to make its own, it can grow and replicate like a normal bacterial colony. As it grows, it produces especially destructive enzymes. Among the mix of enzymes are proteases, which break down proteins.

Mitchell and his team tested the strength of the mutant’s secretions against the gram-positive Staphylococcus aureus. A cocktail of the enzymes applied to an S. aureus biofilm degraded the slime shield and reduced the bacterium’s virulence. Biofilms can make bacteria up to 1,000 times more resistant to antibiotics, Mitchell says. The next step, he adds, is to see if degrading a biofilm resensitizes a gram-positive bacterium to antibiotics.

Mitchell and his team also treated S. aureus cells that didn’t have a biofilm with the mutant’s enzyme mix and then exposed them to human cells. Eighty percent of the bacteria were no longer able to invade human cells, Mitchell says. The “acid spit” chewed up surface proteins that the pathogen uses to attach to and invade human cells. The enzymes didn’t kill the bacteria but did make them less virulent.

No downsides yet
Predatory bacteria can efficiently eat other gram-negative bacteria, munch through biofilms and even save zebrafish from the jaws of an infectious death. But are they safe? Kadouri and the other researchers have done many studies, though none in humans yet, to try to answer that question.
In a 2016 study published in Scientific Reports, Kadouri and colleagues applied B. bacteriovorus to the eyes of rabbits and compared the effect with that of a common antibiotic eye drop, vancomycin. The vancomycin visibly inflamed the eyes, while the predatory bacteria had little to no effect. The eyes treated with predatory bacteria were indistinguishable from eyes treated with a saline solution, used as the control treatment. Other studies looking for potential toxic effects of B. bacteriovorus have so far found none.

In 2011, Sockett’s team gave chickens an oral dose of predatory bacteria. At 28 days, the researchers saw no difference in health between treated and untreated chickens. The makeup of the birds’ gut bacteria was altered, but not in a way that was harmful, she and her team reported in Applied and Environmental Microbiology.

Kadouri analyzed rats’ gut microbes after a treatment of predatory bacteria, reporting the results in a study published March 6 in Scientific Reports. Here too, the rodents’ guts showed little to no inflammation. When they sequenced the bacterial contents of the rats’ feces, the researchers saw small differences between the treated and untreated rats. But none of the changes appeared harmful, and the animals grew and acted normally.

If the rats had taken common antibiotics, it would have been a different story, Kadouri points out. Those drugs would have given the animals diarrhea, reduced their appetites and altered their gut flora in a big way. “When you take antibiotics, you’re basically throwing an atomic bomb” into your gut, Kadouri says. “You’re wiping everything out.”
Both Mitchell and Kadouri tested B. bacteriovorus on human cells and found that the predatory bacteria didn’t harm the cells or prompt an immune response. The researchers separately reported their findings in late 2016 in Scientific Reports and PLOS ONE.
Microbiologist Elizabeth Emmert of Salisbury University in Maryland studies B. bacteriovorus as a means to protect crops — carrots and potatoes — from bacterial soft rot diseases. For humans, she calls the microbes a “promising” therapy for bacterial infections. “It seems most feasible as a topical treatment for wounds, since it would not have to survive passage through the digestive tract.”

There are plenty of questions that need answering first. Mitchell guesses that there will probably be 10 more years of rigorous testing in animals before moving on to human clinical studies. But pursuing these alternatives is worth the effort.

“The drugs that we’re taking are not benign and cuddly and nice,” Kadouri says. “We need them, but they don’t come without side effects.” Even though a living antibiotic sounds a bit crazy, it might be the best option in this dangerous era of antibiotic resistance.

Kepler shows small exoplanets are either super-Earths or mini-Neptunes

Small worlds come in two flavors. The complete dataset from the original mission of the planet-hunting Kepler space telescope reveals a split in the exoplanet family tree, setting super-Earths apart from mini-Neptunes.

Kepler’s final exoplanet catalog, released in a news conference June 19, now consists of 4,034 exoplanet candidates. Of those, 49 are rocky worlds in their stars’ habitable zones, including 10 newly discovered ones. So far, 2,335 candidates have been confirmed as planets, and they include about 30 temperate, terrestrial worlds.
Careful measurements of the candidates’ stars revealed a surprising gap between planets about 1.5 and two times the size of Earth, Benjamin Fulton of the University of Hawaii at Manoa and Caltech and his colleagues found. A few planets fall within the gap, but most lie on either side of it.

That splits the population of small planets into those that are rocky like Earth — 1.5 Earth radii or less — and those that are gassy like Neptune, between 2 and 3.5 Earth radii.
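The two-group split can be pictured with a simple sort by radius. This is an illustrative sketch only, using the rough boundaries quoted in the article (the `classify` function is hypothetical, not the team's analysis code):

```python
# Hypothetical sketch: sorting small exoplanets into the two groups
# described in the Kepler analysis, by radius in Earth radii.

def classify(radius_earth):
    """Assign a planet to a size class using the article's rough cutoffs."""
    if radius_earth <= 1.5:
        return "super-Earth (rocky)"
    elif 2.0 <= radius_earth <= 3.5:
        return "mini-Neptune (gassy)"
    elif radius_earth < 2.0:
        return "in the gap (rare)"
    return "larger planet"

for r in [1.0, 1.7, 2.4, 4.0]:
    print(r, classify(r))
```

Planets between 1.5 and 2 Earth radii land in the sparsely populated gap, mirroring the dip Fulton's team found in the size distribution.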

“This is a major new division in the family tree of exoplanets, somewhat analogous to the discovery that mammals and lizards are separate branches on the tree of life,” Fulton said.

The Kepler space telescope launched in 2009 and stared at a single patch of sky in the constellation Cygnus for four years. (Its stabilizing reaction wheels later broke and it began a new mission called K2 (SN Online: 5/15/13).) Kepler watched sunlike stars for telltale dips in brightness that would reveal a passing planet. Its ultimate goal was to come up with a single number: the fraction of stars like the sun that host planets like Earth.
The Kepler team has still not calculated that number, but astronomers are confident that they have enough data to do so, said Susan Thompson of the SETI Institute in Mountain View, Calif. She presented the results during the Kepler/K2 Science Conference IV being held at NASA’s Ames Research Center in Moffett Field, Calif.

Thompson and her colleagues ran the Kepler dataset through “Robovetter” software, which acted like a sieve to catch all the potential planets it contained. Running fake planet data through the software pinpointed how likely it was to confuse other signals for a planet or miss true planets.

“This is the first time we have a population that’s really well-characterized so we can do a statistical study and understand Earth analogs out there,” Thompson said.

Astronomers’ knowledge of these planets is only as good as their knowledge of their stars. So Fulton and his colleagues used the Keck telescope in Hawaii to precisely measure the sizes of 1,300 planet-hosting stars in the Kepler field of view. Those sizes in turn helped pin down the sizes of the planets with four times more precision than before.

The split in planet types they found could come from small differences in the planets’ sizes, compositions and distances from their stars. Young stars blow powerful winds of charged particles, which can blowtorch a growing planet’s atmosphere away. If a planet was too close to its star or too small to have a thick atmosphere — less than 75 percent larger than Earth — it would lose its atmosphere and end up in the smaller group. The planets that look more like Neptune today either had more gas to begin with or grew up in a gentler environment, Fulton said.

That divergence could have implications for the abundance of life in the galaxy. The surfaces of mini-Neptunes — if they exist — would suffer under the crushing pressure of such a thick atmosphere.

“These would not be nice places to live,” Fulton said. “Our result sharpens up the dividing line between potentially habitable planets and those that are inhospitable.”

Upcoming missions, like the Transiting Exoplanet Survey Satellite due to launch in 2018, will fill in the details of the exoplanet landscape with more observations of planets around bright stars. Later, telescopes like the James Webb Space Telescope, also scheduled to launch in 2018, will be able to check the atmospheres of those planets for signs of life.

“We can now really ask the question, ‘Is our planetary system unique in the galaxy?’” exoplanet astronomer Courtney Dressing of Caltech says. “My guess is the answer’s no. We’re not that special.”

Quantum computers are about to get real

Although the term “quantum computer” might suggest a miniature, sleek device, the latest incarnations are a far cry from anything available in the Apple Store. In a laboratory just 60 kilometers north of New York City, scientists are running a fledgling quantum computer through its paces — and the whole package looks like something that might be found in a dark corner of a basement. The cooling system that envelops the computer is about the size and shape of a household water heater.

Beneath that clunky exterior sits the heart of the computer, the quantum processor, a tiny, precisely engineered chip about a centimeter on each side. Chilled to temperatures just above absolute zero, the computer — made by IBM and housed at the company’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. — comprises 16 quantum bits, or qubits, enough for only simple calculations.

If this computer can be scaled up, though, it could transcend current limits of computation. Computers based on the physics of the supersmall can solve puzzles no other computer can — at least in theory — because quantum entities behave unlike anything in a larger realm.

Quantum computers aren’t putting standard computers to shame just yet. The most advanced computers are working with fewer than two dozen qubits. But teams from industry and academia are working on expanding their own versions of quantum computers to 50 or 100 qubits, enough to perform certain calculations that the most powerful supercomputers can’t pull off.
The race is on to reach that milestone, known as “quantum supremacy.” Scientists should meet this goal within a couple of years, says quantum physicist David Schuster of the University of Chicago. “There’s no reason that I see that it won’t work.”
But supremacy is only an initial step, a symbolic marker akin to sticking a flagpole into the ground of an unexplored landscape. The first tasks where quantum computers prevail will be contrived problems set up to be difficult for a standard computer but easy for a quantum one. Eventually, the hope is, the computers will become prized tools of scientists and businesses.

Attention-getting ideas
Some of the first useful problems quantum computers will probably tackle will be to simulate small molecules or chemical reactions. From there, the computers could go on to speed the search for new drugs or kick-start the development of energy-saving catalysts to accelerate chemical reactions. To find the best material for a particular job, quantum computers could search through millions of possibilities to pinpoint the ideal choice, for example, ultrastrong polymers for use in airplane wings. Advertisers could use a quantum algorithm to improve their product recommendations — dishing out an ad for that new cell phone just when you’re on the verge of purchasing one.

Quantum computers could provide a boost to machine learning, too, allowing for nearly flawless handwriting recognition or helping self-driving cars assess the flood of data pouring in from their sensors to swerve away from a child running into the street. And scientists might use quantum computers to explore exotic realms of physics, simulating what might happen deep inside a black hole, for example.

But quantum computers won’t reach their real potential — which will require harnessing the power of millions of qubits — for more than a decade. Exactly what possibilities exist for the long-term future of quantum computers is still up in the air.

The outlook is similar to the patchy vision that surrounded the development of standard computers — which quantum scientists refer to as “classical” computers — in the middle of the 20th century. When they began to tinker with electronic computers, scientists couldn’t fathom all of the eventual applications; they just knew the machines possessed great power. From that initial promise, classical computers have become indispensable in science and business, dominating daily life, with handheld smartphones becoming constant companions (SN: 4/1/17, p. 18).
Since the 1980s, when the idea of a quantum computer first attracted interest, progress has come in fits and starts. Without the ability to create real quantum computers, the work remained theoretical, and it wasn’t clear when — or if — quantum computations would be achievable. Now, with the small quantum computers at hand, and new developments coming swiftly, scientists and corporations are preparing for a new technology that finally seems within reach.

“Companies are really paying attention,” Microsoft’s Krysta Svore said March 13 in New Orleans during a packed session at a meeting of the American Physical Society. Enthusiastic physicists filled the room and huddled at the doorways, straining to hear as she spoke. Svore and her team are exploring what these nascent quantum computers might eventually be capable of. “We’re very excited about the potential to really revolutionize … what we can compute.”

Anatomy of a qubit
Quantum computing’s promise is rooted in quantum mechanics, the counterintuitive physics that governs tiny entities such as atoms, electrons and molecules. The basic element of a quantum computer is the qubit (pronounced “CUE-bit”). Unlike a standard computer bit, which can take on a value of 0 or 1, a qubit can be 0, 1 or a combination of the two — a sort of purgatory between 0 and 1 known as a quantum superposition. When a qubit is measured, there’s some chance of getting 0 and some chance of getting 1. But before it’s measured, it’s both 0 and 1.

Because qubits can represent 0 and 1 simultaneously, they can encode a wealth of information. In computations, both possibilities — 0 and 1 — are operated on at the same time, allowing for a sort of parallel computation that speeds up solutions.
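The amplitude picture behind this can be shown with a few lines of plain Python. This is a toy model, not real quantum software: a qubit's state is represented as two complex numbers, and the squared magnitude of each gives the chance of measuring 0 or 1.

```python
import math

# Toy model of a single qubit: a pair of complex amplitudes (amp0, amp1).
# The squared magnitudes are the probabilities of measuring 0 or 1.
s = 1 / math.sqrt(2)
qubit = (s + 0j, s + 0j)   # equal superposition: "both 0 and 1" until measured

p0 = abs(qubit[0]) ** 2
p1 = abs(qubit[1]) ** 2
print(round(p0, 2), round(p1, 2))  # each outcome has a 50 percent chance
```

A definite 0 would instead be the pair `(1, 0)`, with measurement guaranteed to return 0.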

Another qubit quirk: Their properties can be intertwined through the quantum phenomenon of entanglement (SN: 4/29/17, p. 8). A measurement of one qubit in an entangled pair instantly reveals the value of its partner, even if they are far apart — what Albert Einstein called “spooky action at a distance.”
Such weird quantum properties can make for superefficient calculations. But the approach won’t speed up solutions for every problem thrown at it. Quantum calculators are particularly suited to certain types of puzzles, the kind for which correct answers can be selected by a process called quantum interference. Through quantum interference, the correct answer is amplified while others are canceled out, like sets of ripples meeting one another in a lake, causing some peaks to become larger and others to disappear.
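The cancellation at the heart of quantum interference can be seen in a toy calculation: applying the Hadamard transform twice sends the two paths to the "1" outcome into perfect cancellation, leaving "0" with certainty. A minimal sketch in plain Python (the `hadamard` helper is illustrative, not a real quantum library):

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a qubit stored as (amp0, amp1)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

state = (1 + 0j, 0 + 0j)   # a definite 0
state = hadamard(state)    # 50/50 superposition of 0 and 1
state = hadamard(state)    # the two paths to "1" cancel each other

p0 = abs(state[0]) ** 2
p1 = abs(state[1]) ** 2
print(round(p0, 3), round(p1, 3))  # → 1.0 0.0
```

The amplitude for 1 arrives via two routes with opposite signs, like ripples meeting peak-to-trough, while the two routes to 0 reinforce each other.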

One of the most famous potential uses for quantum computers is breaking up large integers into their prime factors. For classical computers, this task is so difficult that credit card data and other sensitive information are secured via encryption based on factoring numbers. Eventually, a large enough quantum computer could break this type of encryption, factoring numbers that would take millions of years for a classical computer to crack.
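A sketch of why factoring defeats classical machines: trial division, the simplest approach, needs up to roughly the square root of n steps, a count that grows exponentially with the number's digit length. (This is only a scaling illustration with small primes; real encryption uses numbers hundreds of digits long, and the `factor` helper here is hypothetical.)

```python
# Naive trial division: fine for small numbers, hopeless for the
# hundreds-of-digits numbers used in encryption.

def factor(n):
    """Return the smallest factor pair of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(factor(15))             # (3, 5)
print(factor(10007 * 10009))  # (10007, 10009)
```

A quantum computer running Shor's algorithm could, in principle, factor such numbers in a number of steps that grows only polynomially with the digit count.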

Quantum computers also promise to speed up searches, using qubits to more efficiently pick out an information needle in a data haystack.

Qubits can be made using a variety of materials, including ions, silicon or superconductors, which conduct electricity without resistance. Unfortunately, none of these technologies allow for a computer that will fit easily on a desktop. Though the computer chips themselves are tiny, they depend on large cooling systems, vacuum chambers or other bulky equipment to maintain the delicate quantum properties of the qubits. Quantum computers will probably be confined to specialized laboratories for the foreseeable future, to be accessed remotely via the internet.

Going supreme
That vision of Web-connected quantum computers has already begun to materialize. In 2016, IBM unveiled the Quantum Experience, a quantum computer that anyone around the world can access online for free.
With only five qubits, the Quantum Experience is “limited in what you can do,” says Jerry Chow, who manages IBM’s experimental quantum computing group. (IBM’s 16-qubit computer is in beta testing, so Quantum Experience users are just beginning to get their hands on it.) Despite its limitations, the Quantum Experience has allowed scientists, computer programmers and the public to become familiar with programming quantum computers — which follow different rules than standard computers and therefore require new ways of thinking about problems. “Quantum computing is exciting. It’s coming, and we want a lot more people to be well-versed in it,” Chow says. “That’ll make the development and the advancement even faster.”

But to fully jump-start quantum computing, scientists will need to prove that their machines can outperform the best standard computers. “This step is important to convince the community that you’re building an actual quantum computer,” says quantum physicist Simon Devitt of Macquarie University in Sydney. A demonstration of such quantum supremacy could come by the end of the year or in 2018, Devitt predicts.

Researchers from Google set out a strategy to demonstrate quantum supremacy, posted online at arXiv.org in 2016. They proposed an algorithm that, if run on a large enough quantum computer, would produce results that couldn’t be replicated by the world’s most powerful supercomputers.

The method involves performing random operations on the qubits, and measuring the distribution of answers that are spit out. Getting the same distribution on a classical supercomputer would require simulating the complex inner workings of a quantum computer. Simulating a quantum computer with more than about 45 qubits becomes unmanageable. Supercomputers haven’t been able to reach these quantum wilds.
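The roughly 45-qubit wall comes from memory: a classical simulator must store an amplitude for every one of the 2^n possible qubit configurations, so the state vector doubles with each added qubit. A back-of-the-envelope sketch (the function name is ours, not from any simulator):

```python
# Memory needed just to hold a quantum state vector classically:
# 2**n complex amplitudes, each two 8-byte floats (real + imaginary).

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (16, 30, 45, 49):
    tib = state_vector_bytes(n) / 2 ** 40
    print(f"{n} qubits: {tib:,.4g} TiB")
```

At 45 qubits the state vector alone is 512 TiB, and each additional qubit doubles that, which is why 49-qubit sampling experiments sit beyond today's supercomputers.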

To enter this hinterland, Google, which has a nine-qubit computer, has aggressive plans to scale up to 49 qubits. “We’re pretty optimistic,” says Google’s John Martinis, also a physicist at the University of California, Santa Barbara.

Martinis and colleagues plan to proceed in stages, working out the kinks along the way. “You build something, and then if it’s not working exquisitely well, then you don’t do the next one — you fix what’s going on,” he says. The researchers are currently developing quantum computers of 15 and 22 qubits.

IBM, like Google, also plans to go big. In March, the company announced it would build a 50-qubit computer in the next few years and make it available to businesses eager to be among the first adopters of the burgeoning technology. Just two months later, in May, IBM announced that its scientists had created the 16-qubit quantum computer, as well as a 17-qubit prototype that will be a technological jumping-off point for the company’s future line of commercial computers.
But a quantum computer is much more than the sum of its qubits. “One of the real key aspects about scaling up is not simply … qubit number, but really improving the device performance,” Chow says. So IBM researchers are focusing on a standard they call “quantum volume,” which takes into account several factors. These include the number of qubits, how each qubit is connected to its neighbors, how quickly errors slip into calculations and how many operations can be performed at once. “These are all factors that really give your quantum processor its power,” Chow says.

Errors are a major obstacle to boosting quantum volume. With their delicate quantum properties, qubits can accumulate glitches with each operation. Qubits must resist these errors or calculations quickly become unreliable. Eventually, quantum computers with many qubits will be able to fix errors that crop up, through a procedure known as error correction. Still, to boost the complexity of calculations quantum computers can take on, qubit reliability will need to keep improving.

Different technologies for forming qubits have various strengths and weaknesses, which affect quantum volume. IBM and Google build their qubits out of superconducting materials, as do many academic scientists. In superconductors cooled to extremely low temperatures, electrons flow unimpeded. To fashion superconducting qubits, scientists form circuits in which current flows inside a loop of wire made of aluminum or another superconducting material.

Several teams of academic researchers create qubits from single ions, trapped in place and probed with lasers. Intel and others are working with qubits fabricated from tiny bits of silicon known as quantum dots (SN: 7/11/15, p. 22). Microsoft is studying what are known as topological qubits, which would be extra-resistant to errors creeping into calculations. Qubits can even be forged from diamond, using defects in the crystal that isolate a single electron. Photonic quantum computers, meanwhile, make calculations using particles of light. A Chinese-led team demonstrated in a paper published May 1 in Nature Photonics that a light-based quantum computer could outperform the earliest electronic computers on a particular problem.

One company, D-Wave, claims to have a quantum computer that can perform serious calculations, albeit using a more limited strategy than other quantum computers (SN: 7/26/14, p. 6). But many scientists are skeptical about the approach. “The general consensus at the moment is that something quantum is happening, but it’s still very unclear what it is,” says Devitt.

Identical ions
While superconducting qubits have received the most attention from giants like IBM and Google, underdogs taking different approaches could eventually pass these companies by. One potential upstart is Chris Monroe, who crafts ion-based quantum computers.

On a walkway near his office on the University of Maryland campus in College Park, a banner featuring a larger-than-life portrait of Monroe adorns a fence. The message: Monroe’s quantum computers are a “fearless idea.” The banner is part of an advertising campaign featuring several of the university’s researchers, but Monroe seems an apt choice, because his research bucks the trend of working with superconducting qubits.

Monroe and his small army of researchers arrange ions in neat lines, manipulating them with lasers. In a paper published in Nature in 2016, Monroe and colleagues debuted a five-qubit quantum computer, made of ytterbium ions, allowing scientists to carry out various quantum computations. A 32-ion computer is in the works, he says.

Monroe’s labs — he has half a dozen of them on campus — don’t resemble anything normally associated with computers. Tables hold an indecipherable mess of lenses and mirrors, surrounding a vacuum chamber that houses the ions. As with IBM’s computer, although the full package is bulky, the quantum part is minuscule: The chain of ions spans just hundredths of a millimeter.

Scientists in laser goggles tend to the whole setup. The foreign nature of the equipment explains why ion technology for quantum computing hasn’t taken off yet, Monroe says. So he and colleagues took matters into their own hands, creating a start-up called IonQ, which plans to refine ion computers to make them easier to work with.

Monroe points out a few advantages of his technology. In particular, ions of the same type are identical. In other systems, tiny differences between qubits can muck up a quantum computer’s operations. As quantum computers scale up, Monroe says, there will be a big price to pay for those small differences. “Having qubits that are identical, over millions of them, is going to be really important.”

In a paper published in March in Proceedings of the National Academy of Sciences, Monroe and colleagues compared their quantum computer with IBM’s Quantum Experience. The ion computer performed operations more slowly than IBM’s superconducting one, but it benefited from being more interconnected — each ion can be entangled with any other ion, whereas IBM’s qubits can be entangled only with adjacent qubits. That interconnectedness means that calculations can be performed in fewer steps, helping to make up for the slower operation speed, and minimizing the opportunity for errors.
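The advantage of full connectivity comes down to a back-of-the-envelope count. In this hypothetical sketch (the layouts are illustrative stand-ins, not either machine's real gate set), distant qubits on a nearest-neighbor chain must first be brought together with swap operations, while a fully connected ion device entangles any pair directly:

```python
def entangling_steps(i, j, fully_connected):
    """Count operations needed to entangle qubits i and j on a toy device.

    fully_connected=True models an ion-trap-style machine where any
    pair interacts directly; False models a linear nearest-neighbor
    chain, where distant qubits must first be moved adjacent by swaps.
    """
    if fully_connected:
        return 1                 # one direct entangling gate
    distance = abs(i - j)
    return (distance - 1) + 1    # (distance - 1) swaps, then one gate

# Entangling the endpoints of a 5-qubit register:
print(entangling_steps(0, 4, fully_connected=True))   # 1
print(entangling_steps(0, 4, fully_connected=False))  # 4
```

Fewer steps means fewer chances for an error to creep in, which is the tradeoff the comparison highlights.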

Early applications
Computers like Monroe’s are still far from unlocking the full power of quantum computing. To perform increasingly complex tasks, scientists will have to correct the errors that slip into calculations, fixing problems on the fly by spreading information out among many qubits. Unfortunately, such error correction multiplies the number of qubits required by a factor of 10, 100 or even thousands, depending on the quality of the qubits. Fully error-corrected quantum computers will require millions of qubits. That’s still a long way off.
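The scale of that overhead is easy to work out. Using the article's round numbers, and a hypothetical target of 1,000 logical qubits chosen purely for illustration:

```python
def physical_qubits_needed(logical_qubits, overhead):
    """Error correction spreads each logical qubit across many
    physical ones; the overhead factor depends on qubit quality."""
    return logical_qubits * overhead

# A hypothetical 1,000-logical-qubit machine at the article's
# low, middle and high overhead estimates:
for overhead in (10, 100, 1000):
    print(physical_qubits_needed(1000, overhead))
# 10,000 up to 1,000,000 physical qubits: roughly a million
# at the high end, matching the "millions" scale in the text
```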

So scientists are sketching out some simple problems that quantum computers could dig into without error correction. One of the most important early applications will be to study the chemistry of small molecules or simple reactions, by using quantum computers to simulate the quantum mechanics of chemical systems. In 2016, scientists from Google, Harvard University and other institutions performed such a quantum simulation of a hydrogen molecule. Hydrogen can already be simulated on classical computers, with similar results, but more complex molecules could follow as quantum computers scale up.

Once error-corrected quantum computers appear, many quantum physicists have their eye on one chemistry problem in particular: making fertilizer. Though it seems an unlikely mission for quantum physicists, the task illustrates the game-changing potential of quantum computers.

The Haber-Bosch process, which is used to create nitrogen-rich fertilizers, is hugely energy intensive, demanding high temperatures and pressures. The process, essential for modern farming, consumes around 1 percent of the world’s energy supply. There may be a better way. Nitrogen-fixing bacteria easily extract nitrogen from the air, thanks to the enzyme nitrogenase. Quantum computers could help simulate this enzyme and reveal its properties, perhaps allowing scientists “to design a catalyst to improve the nitrogen fixation reaction, make it more efficient, and save on the world’s energy,” says Microsoft’s Svore. “That’s the kind of thing we want to do on a quantum computer. And for that problem it looks like we’ll need error correction.”

Pinpointing applications that don’t require error correction is difficult, and the possibilities are not fully mapped out. “It’s not because they don’t exist; I think it’s because physicists are not the right people to be finding them,” says Devitt, of Macquarie. Once the hardware is available, the thinking goes, computer scientists will come up with new ideas.

That’s why companies like IBM are pushing their quantum computers to users via the Web. “A lot of these companies are realizing that they need people to start playing around with these things,” Devitt says.

Quantum scientists are trekking into a new, uncharted realm of computation, bringing computer programmers along for the ride. The capabilities of these fledgling systems could reshape the way society uses computers.

Eventually, quantum computers may become part of the fabric of our technological society. Quantum computers could become integrated into a quantum internet, for example, which would be more secure than what exists today (SN: 10/15/16, p. 13).

“Quantum computers and quantum communication effectively allow you to do things in a much more private way,” says physicist Seth Lloyd of MIT, who envisions Web searches that not even the search engine can spy on.

There are probably plenty more uses for quantum computers that nobody has thought up yet.

“We’re not sure exactly what these are going to be used for. That makes it a little weird,” Monroe says. But, he maintains, the computers will find their niches. “Build it and they will come.”

Drinking sugary beverages in pregnancy linked to kids’ later weight gain

An expectant mom might want to think twice about quenching her thirst with soda.

The more sugary beverages a mom drank during mid-pregnancy, the heavier her kids were in elementary school compared with kids whose mothers consumed less of the drinks, a new study finds. At age 8, boys and girls weighed approximately 0.25 kilograms more — about half a pound — with each serving mom added per day while pregnant, researchers report online July 10 in Pediatrics.

“What happens in early development really has a long-term impact,” says Meghan Azad, an epidemiologist at the University of Manitoba in Canada, who was not involved in the study. A fetus’s metabolism develops in response to the surrounding environment, including the maternal diet, she says.

The new findings come out of a larger project that studies the impact of pregnant moms’ diets on their kids’ health. “We know that what mothers eat during pregnancy may affect their children’s health and later obesity,” says biostatistician Sheryl Rifas-Shiman of Harvard Medical School and Harvard Pilgrim Health Care Institute in Boston. “We decided to look at sugar-sweetened beverages as one of these factors.” Sugary drinks are associated with excessive weight gain and obesity in studies of adults and children.

Rifas-Shiman and colleagues included 1,078 mother-child pairs in the study. Moms filled out a questionnaire in the first and second trimesters of their pregnancy about what they were drinking — soda, fruit drinks, 100 percent fruit juice, diet soda or water — and how often. Soda and fruit drinks were considered sugar-sweetened beverages. A serving was defined as a can, glass or bottle of a beverage.

When the children of these moms were in elementary school, the researchers assessed the kids using several different measurements of obesity. They took kids’ height and weight to calculate body mass index and performed a scanning technique to determine total fat mass, among other methods.

Of the 1,078 kids in the study, 272, or 25 percent, were considered overweight or obese based on their BMI. Moms who drank at least two servings of sugar-sweetened beverages per day during the second trimester had children most likely to fall in this group. Other measurements of obesity were also highest for these kids. Children’s own sugary beverage drinking habits did not alter the results, the scientists say.

The research can’t show that moms’ soda sips directly caused the weight gain in their kids. But based on this study and other work, limiting sugary drinks during pregnancy “is probably a good idea,” Azad says. There’s no harm in avoiding them, “and it looks like there may be a benefit.” Her advice is to drink water instead.