Dogged genetic detective work has led scientists to a hybrid red blood cell protein that offers some protection against malaria.
Reporting online May 18 in Science, researchers describe a genetic variant that apparently is responsible for the fusion of two proteins that protrude from the membranes of red blood cells. In its hybrid form, the protein somehow makes it more difficult for the malaria parasite to invade the blood cells.
Successful invasion by the parasite can cause flulike illness, and in severe cases, death. In 2015, 212 million cases of malaria occurred worldwide, according to the World Health Organization, and 429,000 people died, mostly young children. People carrying the protective genetic variant are 30 to 50 percent less likely to develop severe malaria than those without, the researchers report. The genetic change was found largely in people from Kenya, Malawi and Tanzania, suggesting that it occurred relatively recently in East Africa.
Discovering any genetic changes that protect against malaria is of great interest, says hematologist and malaria specialist Dave Roberts of the University of Oxford, who was not involved with the study. Understanding such changes, he says, “may help us understand the pathological pathways by which the parasite causes so much disease.”
Previous research had hinted that genetic changes to a particular stretch of DNA on chromosome 4 offered some protection against malaria. But the research team, an international collaboration that included researchers and clinicians from across Africa, had to do substantial legwork spanning 10 years to unmask the changes. Databases that gather the genetic instruction books, or genomes, of individuals are biased toward European populations, while African samples are underrepresented. And human genetic diversity is particularly high in sub-Saharan Africa, so genomes with rare genetic changes can be easily missed.
To overcome these hurdles, the researchers analyzed the genomes of more than 12,000 people, sampling widely in Africa. They surveyed 765 individuals from 10 ethnic groups in Gambia, Burkina Faso, Cameroon and Tanzania, as well as more than 2,000 genomes from the 1000 Genomes Project, a public catalog of genetic data. The team also examined genomes of nearly 10,000 people from Gambia, Kenya and Malawi, about half of whom had been hospitalized with severe malaria.
The team discovered that the stretch of DNA in question has undergone major changes: chunks of genes have been deleted, other chunks duplicated or even triplicated. One result stood out in the DNA of the people who were less at risk for malaria: Two genes that provide instructions for two proteins called glycophorin A and glycophorin B were snipped, fused together and duplicated. These proteins are known red blood cell proteins that the malaria parasite Plasmodium falciparum can use to gain access to the cells.
This genetic mash-up seems to lead to a protein mash-up: The arm sticking inside the red blood cell is made up of protein A, while the arm sticking out of the cell is made up of protein B.
This hybrid protein turns out to have been first described in 1984. Called the Dantu antigen, it’s found on red blood cells of only a small percentage of people outside of Africa and is part of a rare blood group called MNS.
It isn’t clear why the hybrid protein makes it harder for the malaria parasite to breach a blood cell. “It might just make the cell more squishy so it feels different to the parasite,” says study coauthor Chris Spencer, a statistical geneticist at Oxford.
The new research suggests that there may be other stretches of DNA in the human genome that could reveal the diversity of responses to the parasite. Those spots are worth looking for, even if the search is difficult, Spencer says.
Typically, genome analysis studies look primarily for single changes — one altered unit of DNA — not wholesale duplication or deletion of genes. And because researchers break apart and then reassemble the 3-billion-letter-long genetic instruction book in order to analyze it, sections that contain duplicated genes are harder to put in the right order and thus harder to study. That was the case with the region containing the red blood cell protein DNA.
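To see why duplication trips up reassembly, consider a toy example: a short sequencing read drawn from a duplicated stretch matches the genome in more than one place, so software cannot tell which copy it came from. The sketch below uses made-up sequences, not the actual glycophorin DNA.

```python
# Minimal sketch of why duplicated DNA frustrates genome reassembly.
# Sequences here are hypothetical toy examples, not real glycophorin genes.

genome = "ATCGGTACGGTAC" + "TTAGC" + "ATCGGTACGGTAC"  # one block, duplicated

read = "GGTACGGTAC"  # a short read drawn from inside the duplicated block

# Find every position where the read matches the genome.
positions = [i for i in range(len(genome) - len(read) + 1)
             if genome[i:i + len(read)] == read]

print(positions)  # [3, 21] -- two hits, so the read's true origin is ambiguous
```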
“The genome is a big place and it’s natural to look at the things that are easiest,” Spencer says. “But it could be that the most interesting parts of the genome we just haven’t looked at yet.”
The Zika virus probably arrived in the Western Hemisphere from somewhere in the Pacific more than a year before it was detected, a new genetic analysis of the epidemic shows. Researchers also found that as Zika fanned outward from Brazil, it entered neighboring countries and South Florida multiple times without being noticed.
Although Zika quietly took root in northeastern Brazil in late 2013 or early 2014, many months passed before Brazilian health authorities received reports of unexplained fever and skin rashes. Zika was finally confirmed as the culprit in May 2015. The World Health Organization did not declare the epidemic a public health emergency until February 2016, after babies of Zika-infected mothers began to be born with severe neurological problems. Zika, which is carried by mosquitoes, infected an estimated 1 million people in Brazil alone in 2015, and is now thought to be transmitted in 84 countries, territories and regions.
Although Zika’s path was documented starting in 2015 through records of human cases, less was known about how the virus spread so silently before detection, or how outbreaks in different parts of Central and South America were connected. Now two groups working independently, reporting online May 24 in Nature, have compared samples from different times and locations to read the history recorded in random mutations of the virus’s 10 genes.
One team, led by scientists in the United Kingdom and Brazil, drove more than 1,200 miles across Brazil — “a Top Gear–style road trip,” one scientist quipped — with a portable device that could produce a complete catalog of the virus’s genes in less than a day. A second team, led by researchers at the Broad Institute of MIT and Harvard, analyzed more than 100 Zika genomes from infected patients and mosquitoes in nine countries and Puerto Rico. Based on where the cases originated, and the estimated rate at which genetic changes appear, the scientists re-created Zika’s evolutionary timeline.
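The timeline rests on molecular-clock arithmetic: count the mutations separating two sampled genomes, divide by the rate at which mutations accumulate, and you get a rough date for their common ancestor. Here is a minimal sketch of that calculation; the genome length and substitution rate are round, assumed figures for illustration, not the values fitted in the Nature studies.

```python
# Back-of-the-envelope molecular clock, the kind of arithmetic behind
# dating an outbreak. Numbers are rough assumptions for illustration.

genome_length = 10_700          # Zika genome, in RNA bases (approximate)
subs_per_site_per_year = 1e-3   # assumed substitution rate for the virus

# Suppose two sampled genomes differ at 12 positions. Each lineage has been
# accumulating mutations since their common ancestor, so divide by 2.
differences = 12
years_since_common_ancestor = (differences / 2) / (
    genome_length * subs_per_site_per_year)

print(f"~{years_since_common_ancestor:.2f} years")  # ~0.56 years
```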
Together, the studies revealed an epidemic that was silently churning long before anyone knew. “We found that in each of the regions we could analyze, Zika virus circulated undetected for many months, up to a year or longer, before the first locally transmitted cases were reported,” says Bronwyn MacInnis, an infectious disease geneticist at the Broad Institute, in Cambridge, Mass. “This means the outbreak in these regions was under way much earlier than previously thought.”
Although the epidemic exploded out of Brazil, the scientists also found hints that the virus may have stopped first in the Caribbean. “It’s not immediately clear whether Zika stopped off somewhere else in the Americas before it got to northeast Brazil,” says Oliver Pybus, who studies evolution and infectious disease at the University of Oxford in England.
In a third study reported in Nature, researchers from more than two dozen institutions followed a trail of genetic clues to determine when and how Zika made its way to Florida. Those researchers concluded that Zika was introduced multiple times into the Miami area, most likely from the Caribbean, before local mosquitoes picked it up. The number of human cases increased in step with the rise in mosquito populations, says Kristian Andersen, an infectious disease researcher at the Scripps Research Institute in La Jolla, Calif. “Focusing on getting rid of mosquitoes is an effective way of preventing human cases,” he says.
Stealth spread (map caption): An analysis of more than 100 Zika genomes revealed that the virus showed up in nine countries 4.5 to 9 months earlier than the first confirmed cases of Zika virus infection; in the original graphic, colors indicated groups of closely related strains of the virus.
Previous studies have found traces of the virus’s footprints across the Americas, but none included so many different samples, says Young-Min Lee of Utah State University, who has also studied Zika’s genes. The current studies provide a higher-resolution look at the timing of the epidemic’s spread, he says, but in terms of Zika’s origins and progression from country to country, “overall the big picture is consistent with what we suspected.”
In addition to revealing Zika’s history, genetic studies are also valuable in fighting current and future disease outbreaks. Since diagnostic tests and even vaccine development are based on Zika’s genetics, it’s important to monitor mutations during an outbreak. Researchers developed quick-turnaround genomic analyses for Ebola in recent years, for example, that could aid a faster response during the next outbreak.
In the future, faster analysis of viral threats in the field might improve the odds of stopping the next epidemic, Lee says. It’s possible for a single infected traveler stepping off a plane to spark an epidemic long before doctors notice. “If one introduction [of a virus] can cause an outbreak, you have a very narrow window to try to contain it.”
Last year, Joan Peay slipped on her garage steps and smashed her knee on the welcome mat. Peay, 77, is no stranger to pain. The Tennessee retiree has had 17 surgeries in the last 35 years — knee replacements, hip replacements, back surgery. She even survived a 2012 fungal meningitis outbreak that sickened her and hundreds of others, and killed 64. This knee injury, though, “hurt like the dickens.”
When she asked her longtime doctor for something stronger than ibuprofen to manage the pain, he treated her like a criminal, Peay says. His response was frustrating: “He’s known me for nine years, and I’ve never asked him for pain medicine other than what’s needed after surgery,” she says. She received nothing stronger than over-the-counter remedies. A year after the fall, she still lives in constant pain.
Just five years ago, Peay might have been handed a bottle of opioid painkillers for her knee. After all, opioids — including codeine, morphine and oxycodone — are some of the most powerful tools available to stop pain. But an opioid addiction epidemic spreading across the United States has soured some doctors on the drugs. Many are justifiably concerned that patients will get hooked or share their pain pills with friends and family. And even short-term users risk dangerous side effects: The drugs slow breathing and can cause constipation, nausea and vomiting.
A newfound restraint in prescribing opioids is in many cases warranted, but it’s putting people like Peay in a tough spot: Opioids have become harder to get. Even though the drugs are far from perfect, patients have few other options. Many drugs that have been heralded as improvements over existing opioids are just old opioids repackaged in new ways, says Nora Volkow, director of the National Institute on Drug Abuse. Companies will formulate a pill that is harder to crush, for instance, or mix in another drug that prevents an opioid pill from working if it’s crushed up and snorted for a quick high. Addicts, however, can still sidestep these safeguards. And the newly packaged drugs have the same fundamental risks as the old ones.
The need for new pain medicines is “urgent,” says Volkow.
Scientists have been searching for effective alternatives for years without success. But a better understanding of the way the brain sends and receives specific chemical messages may finally boost progress.
Scientists are designing new, more targeted molecules that might kill pain as well as today’s opioids do — with fewer side effects. Others are exploring the potential of tweaking existing opioid molecules to skip the negative effects. And some researchers are steering clear of opioids entirely, testing molecules in marijuana to ease chronic pain.
Opioid action
Humans recognized the potential power of opioids long before they understood how to control it. Ancient Sumerians cultivated opium-containing poppy plants more than 5,000 years ago, calling their crop the “joy plant.” Other civilizations followed suit, using the plant to treat aches and pains. But the addictive power of opium-derived morphine wasn’t recognized until the 1800s, and scientists have only recently begun to piece together exactly how opioids get such a strong hold on the brain.
Opioids mimic the body’s natural painkillers — molecules like endorphins. Both endorphins and opioids latch on to proteins called opioid receptors on the surface of nerve cells. When an opioid binds to a receptor in the peripheral nervous system, the nerve cells outside the brain, the receptor changes shape and sets in motion a cellular game of telephone that stops pain messages from reaching the brain.
The danger comes because opioid receptors scattered throughout the body and in crucial parts of the brain can cause far-reaching side effects when drugs latch on. For starters, many opioid receptors are located near the base of the brain — the part that controls breathing and heart rate. When a drug like morphine binds to one of these receptors in the brain stem, breathing and heart rate slow down. At low doses, the drug just makes people feel relaxed. At high doses, though, it can be deadly — most opioid overdose deaths occur when a person stops breathing. And high numbers of opioid receptors in the gut — thanks in part to all the nerve endings there — can trigger constipation and sometimes nausea.
Plus, opioids are highly addictive. These drugs mess with the brain’s reward system, triggering release of dopamine at levels higher than what the brain is used to. Gradually, the opioid receptors in the brain become less sensitive to the drugs, so the body demands higher and higher doses to get the same feel-good benefit. Such tolerance can reset the system so the body’s natural opioids no longer have the same effect either. If a person tries to go without the drugs, withdrawal symptoms like intense sweating and muscle cramps kick in — the body is physically dependent on the drugs.
Addiction is a more complex phenomenon than dependence, involving physical cravings so strong that a person will go to extreme lengths to get the next dose. Long-term users of prescription opioids might be dependent on the drugs, but not necessarily addicted. But dependence and addiction often go together.
Despite their risks, opioids are still widely used because they work so well, particularly for moderate to severe short-term pain.
“No matter how much I say I want to avoid opioids, half of my patients will get some kind of opioid. It’s just unavoidable,” says Christopher Wu, an anesthesiologist at Johns Hopkins Medicine.
In the late 1990s and early 2000s, more doctors began doling out the drugs for long-term pain, too. Aggressive marketing campaigns from Purdue Pharma, the maker of OxyContin, promised that the drug was safe — and doctors listened. Deaths from opioid overdoses nearly quadrupled between 2000 and 2015, with almost half of those deaths involving opioids prescribed by a doctor, according to data from the U.S. Centers for Disease Control and Prevention.
Opioid prescriptions have dipped a bit since 2012, thanks in part to stricter prescription laws and prescription registration databases. U.S. doctors wrote about 30 million fewer opioid prescriptions in 2015 than in 2012, data from IMS Health show. But restricting access doesn’t make pain disappear or curb addiction. Some people have turned to more dangerous street alternatives like heroin. And those drugs are sometimes spiked with more potent opioids such as fentanyl (SN: 9/3/16, p. 14) or even carfentanil, a synthetic opioid that’s used to tranquilize elephants. Overdose deaths from fentanyl and heroin have both spiked since 2012, CDC data reveal.
A sharper target
For close to a hundred years, scientists have been searching, with no luck, for a drug that kills pain as successfully as opioids but without the side effects, says Sam Ananthan, a medicinal chemist at Southern Research in Birmingham, Ala. He is newly optimistic.
“Right now, we have more biological tools, more information regarding the biochemical pathways,” Ananthan says. “Even though prior efforts were not successful, we now have some rational hypotheses.”
Scientists used to think opioid receptors were simple switches: If a molecule latched on, the receptor fired off a specific message. But more recent studies suggest that the same receptor can send multiple missives to different recipients.
The quest for better opioids got a much-needed jolt in 1999, when researchers at Duke University showed that mice lacking a protein called beta-arrestin 2 got more pain relief from morphine than normal mice did. And in a follow-up study, negative effects were less likely. “If we took out beta-arrestin 2, we saw improved pain relief, but less tolerance development,” says Laura Bohn, now a pharmacologist at the Scripps Research Institute in Jupiter, Fla. Bohn and colleagues figured out that mu opioid receptors — the type of opioid receptor targeted by most drugs — send two different streams of messages. One stops pain. The other, which needs beta-arrestin 2, drives many of the negatives of opioids, including the need for more and more drug and the dangerous slowdown of breathing.
Since that work, Bohn’s lab and many others have been trying to create molecules that bind to mu opioid receptors without triggering beta-arrestin 2 activity. The approach, called biased agonism, “has been around some time, but now it’s bearing the fruit,” says Susruta Majumdar, a chemist at Memorial Sloan Kettering Cancer Center in New York City. Scientists have identified dozens of molecules that seem to avoid beta-arrestin 2 in mice. But only a few might make good drugs. One, called PZM21, was described in Nature last year.
Another molecule has shown promise in humans — a much higher bar. The pharmaceutical company Trevena, headquartered in King of Prussia, Pa., has been working its way through the U.S. Food and Drug Administration’s drug approval process with a molecule called oliceridine. In studies reported in April in San Francisco at the Annual Regional Anesthesiology and Acute Pain Medicine Meeting, oliceridine was as effective as morphine in patients recovering from bunion removal and others who had tummy tuck surgeries. Over the short term, people taking a moderate dose of the drug got pain relief comparable to that of morphine, but reported fewer side effects, such as vomiting and breathing problems.
Oliceridine is an intravenous opioid, not an oral one. That means it would be administered in the short term in hospitals, during and after surgeries. It’s not a replacement for the pills people can go home with, says Jonathan Violin, Trevena’s cofounder. And it’s not perfect: More side effects cropped up at higher doses. But it’s the first opioid using this targeted approach to get this far in human studies. The company hopes to submit an application for FDA approval by the end of 2017, Violin says.
Avoiding the beta-arrestin 2 pathway isn’t the only approach to targeted opioids — just one of the best studied. Ananthan’s lab is taking a different tack. His team showed that mice lacking a different opioid receptor, the delta receptor, tended not to show negative effects in response to the drugs. Now, the researchers are trying to find molecules that can activate mu opioid receptors while blocking delta receptors.
There may also be a way to direct pain-killing messages specifically to the parts of a person’s body that are feeling pain. In one recent study, scientists described a molecule that bound to opioid receptors only when the area around the receptors was more acidic than normal. Inflammation from pain and injury raises acidity, so this molecule could quash pain where necessary, but wouldn’t bind to receptors elsewhere in the body, reducing the likelihood of side effects. Rats in the study, published in the March 3 Science, didn’t find the new molecule as rewarding as fentanyl, so it may be less addictive. And they were less likely to have constipation and slowed breathing.
Drugs face a long uphill climb from even the most promising animal studies to FDA approval for use in humans. Very few make it that far. It’s too soon to tell whether PZM21 and other molecules being studied in mice will ever end up as treatments for patients.
Unwilling to wait, some people in pain are turning to substances that are already available — without a doctor’s order. And scientists are trying to catch up.
Kratom crackdown
In August 2016, the Drug Enforcement Administration announced that it was cracking down on a supplement called kratom. Officials wanted to put the herb in the same regulatory category as heroin and LSD, labeling it a dangerous substance with no medical value. Members of the public vehemently disagreed. More than 23,000 comments poured in from veterans, cancer survivors, factory workers, lawyers and teachers. Almost all of them said the same thing: Kratom freed them from pain.
Made from the leaves of the tropical plant Mitragyna speciosa, kratom is sold in corner convenience stores and through online retailers. Its pain-killing abilities come mainly from two different molecules in the plant’s leaves: mitragynine and the structurally similar 7-hydroxymitragynine. Both have a structure that’s very different from morphine, but they bind to opioid receptors. That technically makes them opioids, even though they don’t look like morphine or oxycodone, Majumdar says. And that’s what concerned the DEA.
But just like some of the new opioids that scientists are developing, kratom’s active ingredients appear — anecdotally, at least — to deliver pain relief with fewer problems and less risk of tolerance. Some chronic opioid users switch to kratom to wean themselves off of pain pills and ease withdrawal symptoms, says Oliver Grundmann, a medicinal chemist at the University of Florida in Gainesville. Other users have never habitually used opioids but are seeking relief from chronic pain or mental health problems, according to a survey he published online May 10 in Drug and Alcohol Dependence. Grundmann hopes the survey results will help guide research into the substance’s efficacy for specific medical concerns.
The safety and efficacy of kratom are still up for debate. There’s a lack of controlled clinical studies about the leaf’s impact on the body, Grundmann says. Plus, the way kratom is regulated — as a supplement — means that people buying it have no guarantee of what they’re actually getting.
While kratom has its fans, its active compounds aren’t very potent, says Majumdar. He thinks he could make a better drug by modifying these molecules.
Majumdar, Sloan Kettering collaborator András Váradi and colleagues tested a structural cousin of 7-hydroxymitragynine: mitragynine pseudoindoxyl. It binds to mu opioid receptors about 200 times as effectively as mitragynine in mice, the researchers reported in August in the Journal of Medicinal Chemistry. Just like Trevena’s oliceridine, the new molecule does not activate beta-arrestin 2. The pseudoindoxyl version also blocks the delta opioid receptor, further impeding nonpain-related activities.
Majumdar hopes a DEA ban on kratom won’t happen; it would severely restrict access, making research much harder to do. For now, there is no ban — but scientists are wary, he says.
Mix it up
Despite the potential for new, better opioids, other researchers are focused on an altogether different set of pain-killing drugs: the cannabinoids (made famous by marijuana, the dried leaves and other parts of the hemp plant, Cannabis sativa).
The active molecules in marijuana don’t have the same fast-acting pain-quenching abilities that opioids do. “If I go into an emergency room with acute pain, give me morphine,” says Yasmin Hurd, a pharmacologist at Mount Sinai in New York City. But with medical marijuana legal in 29 states plus the District of Columbia, the plant is getting more attention as a potential pain reliever, especially for chronic pain (SN: 6/14/14, p. 16).
Doctors in states where marijuana is legal write fewer prescriptions for opioid painkillers, a 2016 study in Health Affairs showed. Those states also had about a 25 percent lower rate of opioid overdose deaths compared with states that didn’t legalize marijuana, according to a 2014 study in JAMA Internal Medicine. When marijuana becomes legally available, some people might choose it instead of opioids. There might be some merit to that choice. There are plenty of cannabinoid receptors in parts of the brain that process pain messages. But unlike opioid receptors, few exist in the brain stem. That means cannabinoids are far less likely to influence breathing than opioids, says Joseph Cheer, a neurobiologist at the University of Maryland School of Medicine in Baltimore. Fatal overdoses are nearly unheard of.
As with kratom, though, there’s a glut of anecdotal evidence suggesting marijuana’s power to cure everything from pain to anxiety to ulcers — but not many controlled clinical trials to back up the assertions (SN Online: 1/12/17). The knowledge gap is made even wider by the fact that marijuana has wildly different effects depending on how it’s ingested and the relative ratios of certain active molecules in each strain of the plant.
“People think they know how marijuana affects the brain,” Hurd says. In reality, “there’s been very little evidence-based structural scientific studies done with marijuana.”
Aron Lichtman, a pharmacologist at Virginia Commonwealth University in Richmond, agrees. “There’s definitely medicine in that plant — that’s been proven,” he says. “The challenge is that it may not work for everybody and every type of pain.”
Scientists who are serious about figuring out marijuana are breaking it down, looking at the plant’s active molecules — cannabinoids — one by one. Cannabidiol, or CBD, has garnered particular attention. Because of the way it indirectly interacts with cannabinoid receptors, it doesn’t give people the high that’s characteristic of tetrahydrocannabinol, or THC, the mind-altering chemical in marijuana. That makes CBD less rewarding and better suited to longer-term use. The molecule can influence signals sent by a number of other receptors in the brain, many involved in pain and inflammation.
But THC might have merit, too. It’s already used in a couple of FDA-approved drugs to treat nausea and vomiting from chemotherapy. There’s some evidence that those medications might also help relieve pain, though Lichtman calls those studies a “mixed bag.”
Alone, cannabinoids might be fairly weak painkillers. But combined with opioids, he’s shown, they can amplify the pain relief and reduce the opioid dose needed in mice.
Drugs that might amp up the power of the body’s natural cannabinoids are another option. That’s what Ruth Ross of the University of Toronto is studying. A few years ago, her team identified a region on a cannabinoid receptor called CB1 that has an interesting property: Small molecules that bind to it act like volume knobs for the body’s natural cannabinoids, called endocannabinoids. When a molecule of the right shape locks on to CB1, it makes endocannabinoids naturally present in the body more likely to latch on. That boosts pain relief in a targeted way — when endocannabinoids are already being released by the body, such as after injury or stress.
“You magnify the already existing effects of the compound,” Ross says. Her team has identified and patented several of these volume-knob molecules, and is working on improving them.
“For various reasons they wouldn’t be good as drugs,” she says. They have too many effects on the body beyond their intended one. But she’s making slight tweaks to their chemical structures to try to reduce those off-target effects, with the hope that one day the molecules could be studied in patients.
Safer opioids or alternative painkillers would help people deal with their pain without risking addiction or death. Peay has gotten to know people — as a member of social media groups for those living with chronic pain — who are experiencing the crushing results of poorly managed pain. People lose their jobs, she says, or move to Colorado just to get access to legal marijuana. As for her? “I still have my sense of humor, and that helps me get through all the pain.” But she’s holding out for something better.
For the first time, astronomers have seen a star outside of the solar system bend the light from another star. The measurement, reported June 7 in Austin, Texas, at a meeting of the American Astronomical Society, vindicates both Einstein’s most famous theory and astronomers’ understanding of the inner lives of stellar corpses.
Astronomers using the Hubble Space Telescope watched as a white dwarf passed in front of a more distant star. That star seemed to move in a small loop, its apparent position deflected by the white dwarf’s gravity. More than a century ago, Albert Einstein predicted that the way spacetime bends around a massive object — the sun, say — should shift the apparent position of stars that appear behind that object. The measurement of this effect during a solar eclipse in 1919 confirmed Einstein’s general theory of relativity: Mass warps spacetime and bends the path of light rays (SN: 10/17/15, p. 16).
The New York Times hailed it as “one of the greatest — perhaps the greatest — of achievements in the history of human thought.” But even Einstein doubted the light-bending effect could be detected for more distant stars than the sun.
Now, in a study published in the June 9 issue of Science, Kailash Sahu of the Space Telescope Science Institute in Baltimore and his colleagues have shown that it can.
“This is an elegant outcome,” says Terry Oswalt at Embry-Riddle Aeronautical University in Daytona Beach, Fla., who was not involved in the new work. “Einstein would be very proud.” While the stars literally aligned to make the measurement possible, this was no lucky accident. Sahu and colleagues scoured a catalog of 5,000 stellar motions to find a pair of stars likely to pass close enough on the sky that Hubble could sense the shift.
There were a few possible candidates, and one of them, called Stein 2051 B, was already a mysterious character.
Located about 18 light-years from Earth, Stein 2051 B is a white dwarf, a common end-of-life state for a sunlike star. When low-mass stars run out of fuel, they puff up into a red giant while fusing helium into carbon and oxygen. Eventually, they slough off outer layers of gas, leaving this carbon-oxygen core — the white dwarf — behind. About 97 percent of the stars in the Milky Way, including the sun, are or someday will be white dwarfs.
White dwarfs are extremely dense. They are prevented from collapsing into a black hole only by the pressure their electrons produce in trying not to be in the same quantum state as each other. This bizarre situation sets strict limits on their sizes and masses: For a given radius, a white dwarf can be only so massive, and only so large for a given mass.
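For a non-relativistic degenerate electron gas, that inverse relationship follows a simple scaling, R proportional to M^(-1/3): pile on more mass and the star shrinks. Below is a minimal sketch of the scaling; the normalization is set to an assumed round value of about one Earth radius at one solar mass, for illustration only.

```python
# Sketch of the non-relativistic white dwarf mass-radius scaling,
# R ~ M**(-1/3): heavier white dwarfs are SMALLER.
# The normalization is an assumed round number for illustration.

def wd_radius_earth_radii(mass_solar):
    """Approximate white dwarf radius, in Earth radii."""
    return 1.0 * mass_solar ** (-1 / 3)  # ~1 Earth radius at 1 solar mass

for m in (0.5, 0.675, 1.0, 1.3):
    print(f"{m:5.3f} M_sun -> {wd_radius_earth_radii(m):.2f} R_Earth")
```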
This mass-radius relation was laid out in Nobel prize‒winning work by Subrahmanyan Chandrasekhar in the 1930s, but it has been difficult to prove. The only white dwarfs weighed so far share their orbits with other stars whose mutual motions help astronomers calculate their masses. But some astronomers worry that those companions could have added mass to the white dwarfs, throwing off this precise relationship.
Stein 2051 B also has a companion, but it is so far away that the two stars almost certainly evolved independently. That distance also means it would take hundreds of years to precisely measure the white dwarf’s mass through orbital motion alone. The best rough estimates so far created a conundrum: Stein 2051 B appeared to be much lighter than expected, so light that an exotic iron core would be needed to explain it.
Measuring the shift of a background star provides a way to measure the white dwarf’s mass directly. The more massive the foreground star — in this case, the white dwarf — the greater the deflection of light from the background star.
“This is the most direct method of measuring the mass,” Sahu says. “It’s almost like putting somebody on a scale and reading off their weight.”
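The “scale” is Einstein’s deflection formula, alpha = 4GM/(c²b): the apparent shift grows with the lens mass M and shrinks with the light ray’s closest approach b. Here is a rough sketch of the arithmetic; the impact parameter is an assumed illustrative value, not the measured geometry of the Stein 2051 B event.

```python
# Einstein's light-bending formula, alpha = 4*G*M / (c**2 * b), which links
# the background star's apparent shift to the foreground star's mass.
# The impact parameter below is an assumed illustrative value.

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8               # speed of light, m/s
M_SUN = 1.989e30          # solar mass, kg
AU = 1.496e11             # astronomical unit, m
RAD_TO_MAS = 206_265_000  # radians to milliarcseconds

mass = 0.675 * M_SUN      # the mass the team ultimately inferred
b = 0.5 * AU              # assumed closest approach of the light ray

alpha = 4 * G * mass / (c**2 * b)
print(f"deflection ~ {alpha * RAD_TO_MAS:.1f} milliarcseconds")  # ~11 mas
```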
The white dwarf was scheduled to pass near a background star on March 5, 2014. Sahu’s team made eight observations of the two stars’ positions between October 2013 and October 2015.
The team found that the background star appeared to move in a small ellipse as the white dwarf approached and then moved away from it, exactly as predicted by Einstein’s equations. The size of that deflection suggests the white dwarf’s mass is 0.675 times the mass of the sun — well within the normal range for its size.
This first measurement won’t be the last, Oswalt says. Several new star surveys are coming online in the next few years that will track the motions of billions of stars at once. That means that even though light-bending alignments are rare, astronomers should catch several more soon.
The woman in her 70s was in trouble. What started as a broken leg led to an infection in her hip that hung on for two years and several hospital stays. At a Nevada hospital, doctors gave the woman seven different antibiotics, one after the other. The drugs did little to help her. Lab results showed that none of the 14 antibiotics available at the hospital could fight the infection, caused by the bacterium Klebsiella pneumoniae.
Epidemiologist Lei Chen of the Washoe County Health District sent a bacterial sample to the U.S. Centers for Disease Control and Prevention. The bacteria, CDC scientists found, produced a nasty enzyme called New Delhi metallo-beta-lactamase, known for disabling many antibiotics. The enzyme was first seen in a patient from India, which is where the Nevada woman broke her leg and received treatment before returning to the United States. The enzyme is worrisome because it arms bacteria against carbapenems, a group of last-resort antibiotics, says Alexander Kallen, a CDC medical epidemiologist based in Atlanta, who calls the drugs “our biggest guns for our sickest patients.”
The CDC’s final report revealed startling news: The bacteria raging in the woman’s body were resistant to all 26 antibiotics available in the United States. She died from septic shock; the infection shut down her organs.
Kallen estimates that there have been fewer than 10 cases of completely resistant bacterial infections in the United States. Such absolute resistance to all available drugs, though incredibly rare, was a “nightmare scenario,” says Daniel Kadouri, a microbiologist at Rutgers School of Dental Medicine in Newark, N.J.
Antibiotic-resistant bacteria infect more than 2 million people in the United States every year, and at least 23,000 die, according to 2013 data, the most recent available from the CDC.
It’s time to flip the nightmare scenario and send a killer after the killer bacteria, say a handful of scientists with a new approach for fighting infection. The strategy, referred to as a “living antibiotic,” would pit one group of bacteria — given as a drug and dubbed “the predators” — against the bacteria that are wreaking havoc among humans. The approach sounds extreme, but it might be necessary. Antimicrobial resistance “is something that we really, really have to take seriously,” says Elizabeth Tayler, senior technical officer for antimicrobial resistance at the World Health Organization in Geneva. “The ability of future generations to manage infection is at risk. It’s a global problem.”
The number of resistant strains has exploded, in part because doctors prescribe antibiotics too often. At least 30 percent of antibiotic prescriptions in the United States are not necessary, according to the CDC. When more people are exposed to more antibiotics, resistance is likely to build faster. And new alternatives are scarce, Kallen says, as the pace of developing novel antibiotics has slowed.
In search of new ideas, DARPA, a Department of Defense agency that invests in breakthrough technologies, is supporting work on predatory bacteria by Kadouri, as well as Robert Mitchell of Ulsan National Institute of Science and Technology in South Korea, Liz Sockett of the University of Nottingham in England and Edouard Jurkevitch of the Hebrew University of Jerusalem. This work, the agency says, represents “a significant departure from conventional antibiotic therapies.”
The approach is so unusual, people have called Kadouri and his lab crazy. “Probably, we are,” he jokes.
A movie-worthy killer
The notion of predatory bacteria sounds a bit scary, especially when Kadouri likens the most thoroughly studied of the predators, Bdellovibrio bacteriovorus, to the vicious space creatures in the Alien movies.
B. bacteriovorus, called gram-negative because of how it is stained for microscope viewing, dines on other gram-negative bacteria. All gram-negative bacteria have an inner and an outer membrane. The predators don’t go after the other main type of bacteria, gram-positives, which have just one membrane. When it encounters a gram-negative bacterium, the predator appears to latch on with grappling hook–like appendages. Then, like a classic cat burglar cutting a hole in glass, B. bacteriovorus forces its way through the outer membrane and seems to seal the hole behind it. Once within the space between the outer and inner membranes, the predator secretes enzymes — as damaging as the movie aliens’ acid spit — that chew its prey’s nutrients and DNA into bite-sized pieces.
B. bacteriovorus then uses the broken-down genetic building blocks to make its own DNA and begin replicating. The invader and its progeny eventually emerge from the shell of the prey in a way reminiscent of a cinematic chest-bursting scene.
“It’s a very efficient killing machine,” Kadouri says. That’s good news because many of the most dangerous pathogens that are resistant to antibiotics are gram-negative (SN: 6/10/17, p. 8), according to a list released by the WHO in February.
It’s the predator’s hunger for the bad-guy bacteria, the ones that current drugs have become useless against, that Kadouri and other researchers hope to harness.
Pitting predatory against pathogenic bacteria sounds risky. But, from what researchers can tell, these killer bacteria appear safe. “We know that [B. bacteriovorus] doesn’t target mammalian cells,” Kadouri says.
Saving the see-through fish
To find out whether enlisting predatory bacteria might be crazy good and not just plain crazy, Kadouri’s lab group tested B. bacteriovorus’ killing ability against an array of bacteria in lab dishes in 2010. The microbe significantly reduced levels of 68 of the 83 bacteria tested.
Since then, Kadouri and others have looked at the predator’s ability to devour dangerous pathogens in animals. In rats and chickens, B. bacteriovorus reduced the number of bad bacteria. But the animals were always given nonlethal doses of pathogens, leaving open the question of whether the predator could save the animals’ lives.
Sockett needed to see evidence of survival improvement. “If we’re going to have Bdellovibrio as a medicine, we have to cure something,” she says. “We can count changes in numbers of bacteria, but if that doesn’t change the outcome of the infection — change the number of [animals] that die — it’s not worth it.”
So she teamed up with cell biologist Serge Mostowy of Imperial College London for a study in zebrafish. The aim was to see how many animals predatory bacteria could save from a deadly infection. The team also tested how the host’s immune system interacted with the predators.
The researchers gave zebrafish larvae fatal doses of an antibiotic-resistant strain of Shigella flexneri, which causes dysentery in humans. Before infecting the fish, the researchers divided them into four groups. Two groups had their immune systems altered to produce fewer macrophages, the white blood cells that attack pathogens. Immune systems in the other two groups remained intact. B. bacteriovorus was injected into an unchanged group and a macrophage-deficient group, while two groups received no treatment.
All of the untreated fish with fewer macrophages died within 72 hours of receiving S. flexneri, the researchers reported in December in Current Biology. Of the fish with a normal immune system, 65 percent that received predator treatment survived compared with 35 percent with no predator treatment. Even in the fish with impaired immune systems, the predators saved about a quarter of the lot. “This is the first time that Bdellovibrio has ever been used as an injected therapy in live organisms,” Sockett says. “And the important thing is the injection improved the survival of the zebrafish.”
The study also pulled off another first. In previous work, researchers had been unable to see predation as it happened within an animal. Because zebrafish larvae are transparent, study coauthor Alexandra Willis captured images of B. bacteriovorus gobbling up S. flexneri.
“We were literally having to run to the microscope because the process was just happening so fast,” says Willis, a graduate student in Mostowy’s lab. After the predator invades, its rod-shaped prey become round. Willis saw Bdellovibrio “rounding” its prey within 15 minutes. From start to finish, the predatory cycle took about three to four hours.
The predator’s speed may be what gave it the edge over the infection, Mostowy says. B. bacteriovorus attacks fast, chipping away at the pathogens until the infection is reduced to a level that the immune system can handle. “Otherwise there are too many bacteria and the immune system would be overwhelmed,” he says. “We’re putting a shocking amount of Shigella, 50,000 bacteria, into the fish.”
Within 48 hours, S. flexneri levels dropped 98 percent in the surviving fish, from 50,000 to 1,000.
The immune cells also cleared nearly all the B. bacteriovorus predators from the fish. The predators had enough time to attack the infection before being targeted by the immune system themselves, creating an ideal treatment window. Even if the host’s immune system hadn’t attacked the predators, once the bacteria are gone, Willis says, the predators are out of food. Unable to replicate, they eventually die off.
A clean sweep
Predatory bacteria are efficient in more ways than one. They’re not just good killers — they eliminate the evidence too.
Typical antibiotic treatments don’t target a bacterium’s DNA, so they are likely to leave pieces of the bacterial body behind. That’s like killing a few bandits, but leaving their weapons so the next invaders can easily arm themselves for a new attack. This could be one way that multidrug resistance evolves, Mitchell says. For example, penicillin will kill all bacteria that aren’t resistant to the drug. The surviving bacteria can swim through the aftermath of the antibiotic attack and grab genes from their fallen comrades to incorporate into their own genomes. The destroyed bacteria may have had a resistance gene to a different antibiotic, say, vancomycin. Now you have bacteria that are resistant to both penicillin and vancomycin. Not good.
Predatory bacteria, on the other hand, “decimate the genome” of their prey, Mitchell says. They don’t just kill the bandit, they melt down all the DNA weapons so no pathogens can use them. In one experiment that has yet to be published, B. bacteriovorus almost completely ate up the genetic material of a bacterial colony within two hours — showing itself as a fast-acting predator that could prevent bacterial genes from falling into the wrong hands.
On top of that, even if pathogenic bacteria mutate, a common way they pick up new forms of resistance, they aren’t protected from predation. Resistance to predation hasn’t been reported in lab experiments since B. bacteriovorus was discovered in 1962, Mitchell says. Researchers don’t think there’s a single pathway or gene in a prey bacterium that the predator targets. Instead, B. bacteriovorus seem to use sheer force to break in. “It’s kind of like cracking an egg with a hammer,” Kadouri says. That’s not exactly something bacteria can mutate to protect themselves against.
Some bacteria manage to band together and cover themselves with a kind of built-in biological shield, which offers protection against antibiotics. But for predatory bacteria, the shield is more of a welcome mat.
Going after the gram-positives
When bacteria cluster together on a surface, whether in your body, on a countertop or on a medical instrument, they can form a biofilm. The thick, slimy shield helps microbes withstand antibiotic attacks because the drugs have difficulty penetrating the slime. Antibiotics usually act on fast-growing bacteria, but within a biofilm, bacteria are sluggish and dormant, making antibiotics less effective, Kadouri says.
But to predatory bacteria, a biofilm is like Jell-O — a tasty snack that’s easy to swallow. Once inside, B. bacteriovorus spreads like wildfire because its prey are now huddled together as confined targets. “It’s like putting zebras and a lion in a restaurant and closing the door and seeing what happens,” Kadouri says. For the zebras, “it can’t end well.”
Kadouri’s lab has shown repeatedly that predatory bacteria effectively eat away biofilms that protect gram-negative bacteria, and are in fact more efficient at killing bacteria within those biofilms.
Gram-positive bacteria cloak themselves in biofilms too. In 2014 in Scientific Reports, Mitchell and his team reported finding a way to use Bdellovibrio to weaken gram-positive bacteria, turning their protective shield against them and perhaps helping antibiotics do their job.
The discovery comes from studies of one naturally occurring B. bacteriovorus mutant with extra-scary spit. The mutant isn’t predatory. Instead of eating a prey’s DNA to make its own, it can grow and replicate like a normal bacterial colony. As it grows, it produces especially destructive enzymes. Among the mix of enzymes are proteases, which break down proteins.
Mitchell and his team tested the strength of the mutant’s secretions against the gram-positive Staphylococcus aureus. A cocktail of the enzymes applied to an S. aureus biofilm degraded the slime shield and reduced the bacterium’s virulence. Biofilms can make bacteria up to 1,000 times more resistant to antibiotics, Mitchell says. The next step, he adds, is to see if degrading a biofilm resensitizes a gram-positive bacterium to antibiotics.
Mitchell and his team also treated S. aureus cells that didn’t have a biofilm with the mutant’s enzyme mix and then exposed them to human cells. Eighty percent of the bacteria were no longer able to invade human cells, Mitchell says. The “acid spit” chewed up surface proteins that the pathogen uses to attach to and invade human cells. The enzymes didn’t kill the bacteria but did make them less virulent.
No downsides yet
Predatory bacteria can efficiently eat other gram-negative bacteria, munch through biofilms and even save zebrafish from the jaws of an infectious death. But are they safe? Kadouri and the other researchers have done many studies, though none in humans yet, to try to answer that question.
In a 2016 study published in Scientific Reports, Kadouri and colleagues applied B. bacteriovorus to the eyes of rabbits and compared the effect with that of a common antibiotic eye drop, vancomycin. The vancomycin visibly inflamed the eyes, while the predatory bacteria had little to no effect. The eyes treated with predatory bacteria were indistinguishable from eyes treated with a saline solution, used as the control treatment. Other studies looking for potential toxic effects of B. bacteriovorus have so far found none.
In 2011, Sockett’s team gave chickens an oral dose of predatory bacteria. At 28 days, the researchers saw no difference in health between treated and untreated chickens. The makeup of the birds’ gut bacteria was altered, but not in a way that was harmful, she and her team reported in Applied and Environmental Microbiology.
Kadouri analyzed rats’ gut microbes after a treatment of predatory bacteria, reporting the results in a study published March 6 in Scientific Reports. Here too, the rodents’ guts showed little to no inflammation. When they sequenced the bacterial contents of the rats’ feces, the researchers saw small differences between the treated and untreated rats. But none of the changes appeared harmful, and the animals grew and acted normally.
If the rats had taken common antibiotics, it would have been a different story, Kadouri points out. Those drugs would have given the animals diarrhea, reduced their appetites and altered their gut flora in a big way. “When you take antibiotics, you’re basically throwing an atomic bomb” into your gut, Kadouri says. “You’re wiping everything out.”
Both Mitchell and Kadouri tested B. bacteriovorus on human cells and found that the predatory bacteria didn’t harm the cells or prompt an immune response. The researchers separately reported their findings in late 2016 in Scientific Reports and PLOS ONE.
Microbiologist Elizabeth Emmert of Salisbury University in Maryland studies B. bacteriovorus as a means to protect crops — carrots and potatoes — from bacterial soft rot diseases. For humans, she calls the microbes a “promising” therapy for bacterial infections. “It seems most feasible as a topical treatment for wounds, since it would not have to survive passage through the digestive tract.”
There are plenty of questions that need answering first. Mitchell guesses that there will probably be 10 more years of rigorous testing in animals before moving on to human clinical studies. But pursuing these alternatives is worth the effort.
“The drugs that we’re taking are not benign and cuddly and nice,” Kadouri says. “We need them, but they don’t come without side effects.” Even though a living antibiotic sounds a bit crazy, it might be the best option in this dangerous era of antibiotic resistance.
Small worlds come in two flavors. The complete dataset from the original mission of the planet-hunting Kepler space telescope reveals a split in the exoplanet family tree, setting super-Earths apart from mini-Neptunes.
Kepler’s final exoplanet catalog, released in a news conference June 19, now consists of 4,034 exoplanet candidates. Of those, 49 are rocky worlds in their stars’ habitable zones, including 10 newly discovered ones. So far, 2,335 candidates have been confirmed as planets and they include about 30 temperate, terrestrial worlds. Careful measurements of the candidates’ stars revealed a surprising gap between planets about 1.5 and two times the size of Earth, Benjamin Fulton of the University of Hawaii at Manoa and Caltech and his colleagues found. A few planets fall within the gap, but most sit on one side or the other.
That splits the population of small planets into those that are rocky like Earth — 1.5 Earth radii or less — and those that are gassy like Neptune, between 2 and 3.5 Earth radii.
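Expressed as a rule, the split amounts to a simple cut on radius. The sketch below applies the approximate boundaries quoted above to a handful of made-up planets; it illustrates the classification, not the team’s actual analysis code.

```python
# Toy classifier for the small-planet split Fulton's team reported.
# Boundaries follow the approximate radii quoted above; radii are in
# Earth radii, and the example planets are invented for illustration.

def classify(radius):
    if radius <= 1.5:
        return "super-Earth (rocky)"
    if 2.0 <= radius <= 3.5:
        return "mini-Neptune (gassy)"
    if radius < 2.0:
        return "in the sparsely populated gap"
    return "larger than this scheme covers"

for r in (1.0, 1.4, 1.7, 2.4, 3.0):
    print(f"{r:.1f} R_Earth: {classify(r)}")
```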
“This is a major new division in the family tree of exoplanets, somewhat analogous to the discovery that mammals and lizards are separate branches on the tree of life,” Fulton said.
The Kepler space telescope launched in 2009 and stared at a single patch of sky in the constellation Cygnus for four years. (Its stabilizing reaction wheels later broke and it began a new mission called K2 (SN Online: 5/15/13).) Kepler watched sunlike stars for telltale dips in brightness that would reveal a passing planet. Its ultimate goal was to come up with a single number: The fraction of stars like the sun that host planets like Earth. The Kepler team has still not calculated that number, but astronomers are confident that they have enough data to do so, said Susan Thompson of the SETI Institute in Mountain View, Calif. She presented the results during the Kepler/K2 Science Conference IV being held at NASA’s Ames Research Center in Moffett Field, Calif.
Thompson and her colleagues ran the Kepler dataset through “Robovetter” software, which acted like a sieve to catch all the potential planets it contained. Running fake planet data through the software pinpointed how likely it was to confuse other signals for a planet or miss true planets.
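That validation strategy is known as injection and recovery: feed the vetter signals whose true nature is known, then tally its hits and misses. A schematic sketch follows; the vetting function here is a random stand-in for illustration, not the real Robovetter.

```python
import random

# Schematic of injection-recovery testing: feed a vetter known fake
# planets and known non-planets, then count its mistakes.

def toy_vetter(signal):
    """Pretend classifier: 90% chance of passing a true planet,
    5% chance of wrongly passing a non-planet."""
    if signal == "planet":
        return random.random() < 0.90
    return random.random() < 0.05

random.seed(1)
injected_planets = ["planet"] * 1000
false_signals = ["noise"] * 1000

completeness = sum(map(toy_vetter, injected_planets)) / 1000
false_pass_rate = sum(map(toy_vetter, false_signals)) / 1000

print(f"recovered {completeness:.1%} of injected planets")
print(f"passed {false_pass_rate:.1%} of false signals")
```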
“This is the first time we have a population that’s really well-characterized so we can do a statistical study and understand Earth analogs out there,” Thompson said.
Astronomers’ knowledge of these planets is only as good as their knowledge of their stars. So Fulton and his colleagues used the Keck telescope in Hawaii to precisely measure the sizes of 1,300 planet-hosting stars in the Kepler field of view. Those sizes in turn helped pin down the sizes of the planets with four times more precision than before.
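The dependence is direct because a transit only measures the fractional dip in starlight, which is roughly (R_planet/R_star)², so any revision to the star’s radius shifts the planet’s radius by the same factor. A minimal sketch of that relation, with made-up numbers:

```python
import math

# The transit relation behind the Keck follow-up: the fractional dip in
# starlight is roughly (R_planet / R_star)**2, so any error in the star's
# radius feeds straight into the planet's. Numbers here are invented.

depth = 0.0004        # 0.04% dip in brightness, an assumed example
r_star_old = 1.10     # stellar radius, in solar radii (rough earlier value)
r_star_new = 0.95     # the same star after a more precise measurement

SUN_IN_EARTH_RADII = 109.1

for r_star in (r_star_old, r_star_new):
    r_planet = math.sqrt(depth) * r_star * SUN_IN_EARTH_RADII
    print(f"R_star = {r_star:.2f} R_sun -> planet ~ {r_planet:.2f} R_Earth")
```

With these numbers the same dip implies a planet of about 2.4 or 2.1 Earth radii, enough to move a world from one side of the gap toward the other.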
The split in planet types they found could come from small differences in the planets’ sizes, compositions and distances from their stars. Young stars blow powerful winds of charged particles, which can blowtorch a growing planet’s atmosphere away. If a planet was too close to its star or too small to have a thick atmosphere — less than 75 percent larger than Earth — it would lose its atmosphere and end up in the smaller group. The planets that look more like Neptune today either had more gas to begin with or grew up in a gentler environment, Fulton said.
That divergence could have implications for the abundance of life in the galaxy. The surfaces of mini-Neptunes — if they exist — would suffer under the crushing pressure of such a thick atmosphere.
“These would not be nice places to live,” Fulton said. “Our result sharpens up the dividing line between potentially habitable planets and those that are inhospitable.”
Upcoming missions, like the Transiting Exoplanet Survey Satellite due to launch in 2018, will fill in the details of the exoplanet landscape with more observations of planets around bright stars. Later, telescopes like the James Webb Space Telescope, also scheduled to launch in 2018, will be able to check the atmospheres of those planets for signs of life.
“We can now really ask the question, ‘Is our planetary system unique in the galaxy?’” exoplanet astronomer Courtney Dressing of Caltech says. “My guess is the answer’s no. We’re not that special.”
Although the term “quantum computer” might suggest a miniature, sleek device, the latest incarnations are a far cry from anything available in the Apple Store. In a laboratory just 60 kilometers north of New York City, scientists are running a fledgling quantum computer through its paces — and the whole package looks like something that might be found in a dark corner of a basement. The cooling system that envelops the computer is about the size and shape of a household water heater.
Beneath that clunky exterior sits the heart of the computer, the quantum processor, a tiny, precisely engineered chip about a centimeter on each side. Chilled to temperatures just above absolute zero, the computer — made by IBM and housed at the company’s Thomas J. Watson Research Center in Yorktown Heights, N.Y. — comprises 16 quantum bits, or qubits, enough for only simple calculations.
If this computer can be scaled up, though, it could transcend current limits of computation. Computers based on the physics of the supersmall can solve puzzles no other computer can — at least in theory — because quantum entities behave unlike anything in a larger realm.
Quantum computers aren’t putting standard computers to shame just yet. The most advanced computers are working with fewer than two dozen qubits. But teams from industry and academia are working on expanding their own versions of quantum computers to 50 or 100 qubits, enough to perform certain calculations that the most powerful supercomputers can’t pull off.
The race is on to reach that milestone, known as “quantum supremacy.” Scientists should meet this goal within a couple of years, says quantum physicist David Schuster of the University of Chicago. “There’s no reason that I see that it won’t work.”
But supremacy is only an initial step, a symbolic marker akin to sticking a flagpole into the ground of an unexplored landscape. The first tasks where quantum computers prevail will be contrived problems set up to be difficult for a standard computer but easy for a quantum one. Eventually, the hope is, the computers will become prized tools of scientists and businesses.
Attention-getting ideas
Some of the first useful tasks quantum computers will probably take on will be simulating small molecules or chemical reactions. From there, the machines could go on to speed the search for new drugs or kick-start the development of energy-saving catalysts to accelerate chemical reactions. To find the best material for a particular job, quantum computers could search through millions of possibilities to pinpoint the ideal choice, for example, ultrastrong polymers for use in airplane wings. Advertisers could use a quantum algorithm to improve their product recommendations — dishing out an ad for that new cell phone just when you’re on the verge of purchasing one.
Quantum computers could provide a boost to machine learning, too, allowing for nearly flawless handwriting recognition or helping self-driving cars assess the flood of data pouring in from their sensors to swerve away from a child running into the street. And scientists might use quantum computers to explore exotic realms of physics, simulating what might happen deep inside a black hole, for example.
But quantum computers won’t reach their real potential — which will require harnessing the power of millions of qubits — for more than a decade. Exactly what possibilities exist for the long-term future of quantum computers is still up in the air.
The outlook is similar to the patchy vision that surrounded the development of standard computers — which quantum scientists refer to as “classical” computers — in the middle of the 20th century. When they began to tinker with electronic computers, scientists couldn’t fathom all of the eventual applications; they just knew the machines possessed great power. From that initial promise, classical computers have become indispensable in science and business, dominating daily life, with handheld smartphones becoming constant companions (SN: 4/1/17, p. 18). Since the 1980s, when the idea of a quantum computer first attracted interest, progress has come in fits and starts. Without the ability to create real quantum computers, the work remained theoretical, and it wasn’t clear when — or if — quantum computations would be achievable. Now, with the small quantum computers at hand, and new developments coming swiftly, scientists and corporations are preparing for a new technology that finally seems within reach.
“Companies are really paying attention,” Microsoft’s Krysta Svore said March 13 in New Orleans during a packed session at a meeting of the American Physical Society. Enthusiastic physicists filled the room and huddled at the doorways, straining to hear as she spoke. Svore and her team are exploring what these nascent quantum computers might eventually be capable of. “We’re very excited about the potential to really revolutionize … what we can compute.”
Anatomy of a qubit
Quantum computing’s promise is rooted in quantum mechanics, the counterintuitive physics that governs tiny entities such as atoms, electrons and molecules. The basic element of a quantum computer is the qubit (pronounced “CUE-bit”). Unlike a standard computer bit, which can take on a value of 0 or 1, a qubit can be 0, 1 or a combination of the two — a sort of purgatory between 0 and 1 known as a quantum superposition. When a qubit is measured, there’s some chance of getting 0 and some chance of getting 1. But before it’s measured, it’s both 0 and 1.
Because qubits can represent 0 and 1 simultaneously, they can encode a wealth of information. In computations, both possibilities — 0 and 1 — are operated on at the same time, allowing for a sort of parallel computation that speeds up solutions.
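To make that concrete, here is a minimal sketch — not code from IBM, Google or any group in this story, just the standard textbook state-vector math, written in Python with NumPy assumed — of a single qubit being put into superposition and then measured:

```python
import numpy as np

# A qubit's state is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1:
# |a|^2 is the chance of measuring 0, |b|^2 the chance of measuring 1.
state = np.array([1.0, 0.0], dtype=complex)  # starts out as a definite 0

# The Hadamard gate rotates the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ state

probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] -- before measurement, both outcomes are live

# Measuring collapses the superposition to a single definite outcome.
print("measured:", np.random.choice([0, 1], p=probs))
```

The point of the sketch: until the final measurement, both amplitudes are in play at once, which is what lets a quantum operation act on 0 and 1 simultaneously.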
Another qubit quirk: Their properties can be intertwined through the quantum phenomenon of entanglement (SN: 4/29/17, p. 8). A measurement of one qubit in an entangled pair instantly reveals the value of its partner, even if they are far apart — what Albert Einstein called “spooky action at a distance.” Such weird quantum properties can make for superefficient calculations. But the approach won’t speed up solutions for every problem thrown at it. Quantum calculators are particularly suited to certain types of puzzles, the kind for which correct answers can be selected by a process called quantum interference. Through quantum interference, the correct answer is amplified while others are canceled out, like sets of ripples meeting one another in a lake, causing some peaks to become larger and others to disappear.
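Both quirks can be captured in the same toy formalism — again an illustration under textbook assumptions, not any team’s code: a Bell state whose measured qubits always agree, and a cancellation of amplitudes that mirrors those ripples on the lake.

```python
import numpy as np

# Entanglement: two qubits share one list of four amplitudes, for outcomes
# 00, 01, 10 and 11. The Bell state below puts all its weight on 00 and 11,
# so reading one qubit instantly reveals its partner's value.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2
print(np.random.choice(["00", "01", "10", "11"], size=8, p=probs))
# only '00' and '11' ever appear -- the qubits always agree

# Interference: applying the Hadamard gate twice returns a qubit to 0,
# because the two paths to outcome 1 carry opposite signs and cancel,
# like a ripple's peak meeting another ripple's trough.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print(np.abs(H @ H @ np.array([1, 0], dtype=complex)) ** 2)  # [1. 0.]
```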
One of the most famous potential uses for quantum computers is breaking up large integers into their prime factors. For classical computers, this task is so difficult that credit card data and other sensitive information are secured via encryption based on factoring numbers. Eventually, a large enough quantum computer could break this type of encryption, factoring numbers that would take millions of years for a classical computer to crack.
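A toy example shows why factoring makes such a sturdy lock. The sketch below is the slow classical attack, trial division; the quantum shortcut the encryption threat rests on is Shor’s algorithm, which would find factors in dramatically fewer steps on a large enough machine.

```python
def smallest_factor(n: int) -> int:
    """Trial division, the simplest classical attack on factoring."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor found: n is prime

print(smallest_factor(10403))  # 101, since 10403 = 101 * 103

# The loop runs up to sqrt(n), so each extra digit in n multiplies the work.
# RSA moduli run to hundreds of digits; even the best classical algorithms
# would need millions of years, which is exactly what the encryption banks on.
```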
Quantum computers also promise to speed up searches, using qubits to more efficiently pick out an information needle in a data haystack.
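That search speedup comes from Grover’s algorithm, which needs only about the square root of N queries to find one marked item among N. A back-of-the-envelope comparison, showing the scaling only and assuming an idealized quantum machine:

```python
import math

# One marked item hidden among N, with no structure to exploit: classically
# you check entries one by one (about N tries in the worst case), while
# Grover's algorithm needs roughly (pi / 4) * sqrt(N) quantum queries.
for N in (10**6, 10**9, 10**12):
    grover = round(math.pi / 4 * math.sqrt(N))
    print(f"N = {N:>16,d}: classical ~{N:,d} checks, quantum ~{grover:,d}")
```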
Qubits can be made using a variety of materials, including ions, silicon or superconductors, which conduct electricity without resistance. Unfortunately, none of these technologies allow for a computer that will fit easily on a desktop. Though the computer chips themselves are tiny, they depend on large cooling systems, vacuum chambers or other bulky equipment to maintain the delicate quantum properties of the qubits. Quantum computers will probably be confined to specialized laboratories for the foreseeable future, to be accessed remotely via the internet.
Going supreme
That vision of Web-connected quantum computers has already begun to materialize. In 2016, IBM unveiled the Quantum Experience, a quantum computer that anyone around the world can access online for free. With only five qubits, the Quantum Experience is “limited in what you can do,” says Jerry Chow, who manages IBM’s experimental quantum computing group. (IBM’s 16-qubit computer is in beta testing, so Quantum Experience users are just beginning to get their hands on it.) Despite its limitations, the Quantum Experience has allowed scientists, computer programmers and the public to become familiar with programming quantum computers — which follow different rules than standard computers and therefore require new ways of thinking about problems. “Quantum computing is exciting. It’s coming, and we want a lot more people to be well-versed in it,” Chow says. “That’ll make the development and the advancement even faster.”
But to fully jump-start quantum computing, scientists will need to prove that their machines can outperform the best standard computers. “This step is important to convince the community that you’re building an actual quantum computer,” says quantum physicist Simon Devitt of Macquarie University in Sydney. A demonstration of such quantum supremacy could come by the end of the year or in 2018, Devitt predicts.
Researchers from Google set out a strategy to demonstrate quantum supremacy in a paper posted online at arXiv.org in 2016. They proposed an algorithm that, if run on a large enough quantum computer, would produce results that couldn’t be replicated by the world’s most powerful supercomputers.
The method involves performing random operations on the qubits, and measuring the distribution of answers that are spit out. Getting the same distribution on a classical supercomputer would require simulating the complex inner workings of a quantum computer. Simulating a quantum computer with more than about 45 qubits becomes unmanageable. Supercomputers haven’t been able to reach these quantum wilds.
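The roughly-45-qubit wall falls out of simple arithmetic: a brute-force simulation stores one complex amplitude per possible outcome, and the number of outcomes doubles with every added qubit. A sketch of the tally, assuming a typical 16 bytes per double-precision complex amplitude (our assumption, not a figure from the Google proposal):

```python
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number (assumed)

for n in (30, 40, 45, 50):
    gib = 2**n * BYTES_PER_AMPLITUDE / 2**30  # gibibytes of memory needed
    print(f"{n} qubits: {gib:>12,.0f} GiB")

# 30 qubits:           16 GiB -- fits on a laptop
# 40 qubits:       16,384 GiB -- a large cluster
# 45 qubits:      524,288 GiB -- about half a pebibyte, supercomputer scale
# 50 qubits:   16,777,216 GiB -- beyond any machine on Earth
```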
To enter this hinterland, Google, which has a nine-qubit computer, has aggressive plans to scale up to 49 qubits. “We’re pretty optimistic,” says Google’s John Martinis, also a physicist at the University of California, Santa Barbara.
Martinis and colleagues plan to proceed in stages, working out the kinks along the way. “You build something, and then if it’s not working exquisitely well, then you don’t do the next one — you fix what’s going on,” he says. The researchers are currently developing quantum computers of 15 and 22 qubits.
IBM, like Google, also plans to go big. In March, the company announced it would build a 50-qubit computer in the next few years and make it available to businesses eager to be among the first adopters of the burgeoning technology. Just two months later, in May, IBM announced that its scientists had created the 16-qubit quantum computer, as well as a 17-qubit prototype that will be a technological jumping-off point for the company’s future line of commercial computers. But a quantum computer is much more than the sum of its qubits. “One of the real key aspects about scaling up is not simply … qubit number, but really improving the device performance,” Chow says. So IBM researchers are focusing on a standard they call “quantum volume,” which takes into account several factors. These include the number of qubits, how each qubit is connected to its neighbors, how quickly errors slip into calculations and how many operations can be performed at once. “These are all factors that really give your quantum processor its power,” Chow says.
Errors are a major obstacle to boosting quantum volume. With their delicate quantum properties, qubits can accumulate glitches with each operation. Qubits must resist these errors or calculations quickly become unreliable. Eventually, quantum computers with many qubits will be able to fix errors that crop up, through a procedure known as error correction. Still, to boost the complexity of calculations quantum computers can take on, qubit reliability will need to keep improving.
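The redundancy idea behind error correction can be sketched with a classical analogy. Real quantum error correction is subtler — unknown quantum states can’t simply be copied — but the intuition of spreading one logical value across many physical carriers, then outvoting occasional flips, carries over:

```python
import random
from collections import Counter

def noisy_copies(bit, flip_prob=0.1):
    """Store one bit three times; each copy flips independently with flip_prob."""
    return [bit ^ (random.random() < flip_prob) for _ in range(3)]

def majority(copies):
    return Counter(copies).most_common(1)[0][0]

random.seed(1)
trials = 100_000
failures = sum(majority(noisy_copies(0)) != 0 for _ in range(trials))
print(f"raw error rate: 10.0%, after majority vote: {failures / trials:.1%}")
# voting cuts the error rate to about 2.8% (2 or 3 of the 3 copies must flip)
```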
Different technologies for forming qubits have various strengths and weaknesses, which affect quantum volume. IBM and Google build their qubits out of superconducting materials, as do many academic scientists. In superconductors cooled to extremely low temperatures, electrons flow unimpeded. To fashion superconducting qubits, scientists form circuits in which current flows inside a loop of wire made of aluminum or another superconducting material.
Several teams of academic researchers create qubits from single ions, trapped in place and probed with lasers. Intel and others are working with qubits fabricated from tiny bits of silicon known as quantum dots (SN: 7/11/15, p. 22). Microsoft is studying what are known as topological qubits, which would be extra-resistant to errors creeping into calculations. Qubits can even be forged from diamond, using defects in the crystal that isolate a single electron. Photonic quantum computers, meanwhile, make calculations using particles of light. A Chinese-led team demonstrated in a paper published May 1 in Nature Photonics that a light-based quantum computer could outperform the earliest electronic computers on a particular problem.
One company, D-Wave, claims to have a quantum computer that can perform serious calculations, albeit using a more limited strategy than other quantum computers (SN: 7/26/14, p. 6). But many scientists are skeptical about the approach. “The general consensus at the moment is that something quantum is happening, but it’s still very unclear what it is,” says Devitt.
Identical ions
While superconducting qubits have received the most attention from giants like IBM and Google, underdogs taking different approaches could eventually pass these companies by. One potential upstart is Chris Monroe, who crafts ion-based quantum computers. On a walkway near his office on the University of Maryland campus in College Park, a banner featuring a larger-than-life portrait of Monroe adorns a fence. The message: Monroe’s quantum computers are a “fearless idea.” The banner is part of an advertising campaign featuring several of the university’s researchers, but Monroe seems an apt choice, because his research bucks the trend of working with superconducting qubits.
Monroe and his small army of researchers arrange ions in neat lines, manipulating them with lasers. In a paper published in Nature in 2016, Monroe and colleagues debuted a five-qubit quantum computer, made of ytterbium ions, allowing scientists to carry out various quantum computations. A 32-ion computer is in the works, he says.
Monroe’s labs — he has half a dozen of them on campus — don’t resemble anything normally associated with computers. Tables hold an indecipherable mess of lenses and mirrors, surrounding a vacuum chamber that houses the ions. As with IBM’s computer, although the full package is bulky, the quantum part is minuscule: The chain of ions spans just hundredths of a millimeter.
Scientists in laser goggles tend to the whole setup. The foreign nature of the equipment explains why ion technology for quantum computing hasn’t taken off yet, Monroe says. So he and colleagues took matters into their own hands, creating a start-up called IonQ, which plans to refine ion computers to make them easier to work with.
Monroe points out a few advantages of his technology. In particular, ions of the same type are identical. In other systems, tiny differences between qubits can muck up a quantum computer’s operations. As quantum computers scale up, Monroe says, there will be a big price to pay for those small differences. “Having qubits that are identical, over millions of them, is going to be really important.”
In a paper published in March in Proceedings of the National Academy of Sciences, Monroe and colleagues compared their quantum computer with IBM’s Quantum Experience. The ion computer performed operations more slowly than IBM’s superconducting one, but it benefited from being more interconnected — each ion can be entangled with any other ion, whereas IBM’s qubits can be entangled only with adjacent qubits. That interconnectedness means that calculations can be performed in fewer steps, helping to make up for the slower operation speed, and minimizing the opportunity for errors.
Early applications
Computers like Monroe’s are still far from unlocking the full power of quantum computing. To perform increasingly complex tasks, scientists will have to correct the errors that slip into calculations, fixing problems on the fly by spreading information out among many qubits. Unfortunately, such error correction multiplies the number of qubits required by a factor of 10, 100 or even thousands, depending on the quality of the qubits. Fully error-corrected quantum computers will require millions of qubits. That’s still a long way off.
So scientists are sketching out some simple problems that quantum computers could dig into without error correction. One of the most important early applications will be to study the chemistry of small molecules or simple reactions, by using quantum computers to simulate the quantum mechanics of chemical systems. In 2016, scientists from Google, Harvard University and other institutions performed such a quantum simulation of a hydrogen molecule. Hydrogen has already been simulated on classical computers, with similar results, but more complex molecules could follow as quantum computers scale up.
Once error-corrected quantum computers appear, many quantum physicists have their eye on one chemistry problem in particular: making fertilizer. Though it seems an unlikely mission for quantum physicists, the task illustrates the game-changing potential of quantum computers.
The Haber-Bosch process, which is used to create nitrogen-rich fertilizers, is hugely energy intensive, demanding high temperatures and pressures. The process, essential for modern farming, consumes around 1 percent of the world’s energy supply. There may be a better way. Nitrogen-fixing bacteria easily extract nitrogen from the air, thanks to the enzyme nitrogenase. Quantum computers could help simulate this enzyme and reveal its properties, perhaps allowing scientists “to design a catalyst to improve the nitrogen fixation reaction, make it more efficient, and save on the world’s energy,” says Microsoft’s Svore. “That’s the kind of thing we want to do on a quantum computer. And for that problem it looks like we’ll need error correction.”
Pinpointing applications that don’t require error correction is difficult, and the possibilities are not fully mapped out. “It’s not because they don’t exist; I think it’s because physicists are not the right people to be finding them,” says Devitt, of Macquarie. Once the hardware is available, the thinking goes, computer scientists will come up with new ideas.
That’s why companies like IBM are pushing their quantum computers to users via the Web. “A lot of these companies are realizing that they need people to start playing around with these things,” Devitt says.
Quantum scientists are trekking into a new, uncharted realm of computation, bringing computer programmers along for the ride. The capabilities of these fledgling systems could reshape the way society uses computers.
Eventually, quantum computers may become part of the fabric of our technological society. Quantum computers could become integrated into a quantum internet, for example, which would be more secure than what exists today (SN: 10/15/16, p. 13).
“Quantum computers and quantum communication effectively allow you to do things in a much more private way,” says physicist Seth Lloyd of MIT, who envisions Web searches that not even the search engine can spy on.
There are probably plenty more uses for quantum computers that nobody has thought up yet.
“We’re not sure exactly what these are going to be used for. That makes it a little weird,” Monroe says. But, he maintains, the computers will find their niches. “Build it and they will come.”
An expectant mom might want to think twice about quenching her thirst with soda.
The more sugary beverages a mom drank during mid-pregnancy, the heavier her kids were in elementary school compared with kids whose mothers consumed less of the drinks, a new study finds. At age 8, boys and girls weighed approximately 0.25 kilograms more — about half a pound — with each serving mom added per day while pregnant, researchers report online July 10 in Pediatrics. “What happens in early development really has a long-term impact,” says Meghan Azad, an epidemiologist at the University of Manitoba in Canada, who was not involved in the study. A fetus’s metabolism develops in response to the surrounding environment, including the maternal diet, she says.
The new findings come out of a larger project that studies the impact of pregnant moms’ diets on their kids’ health. “We know that what mothers eat during pregnancy may affect their children’s health and later obesity,” says biostatistician Sheryl Rifas-Shiman of Harvard Medical School and Harvard Pilgrim Health Care Institute in Boston. “We decided to look at sugar-sweetened beverages as one of these factors.” Sugary drinks are associated with excessive weight gain and obesity in studies of adults and children.
Rifas-Shiman and colleagues included 1,078 mother-child pairs in the study. Moms filled out a questionnaire in the first and second trimesters of their pregnancy about what they were drinking — soda, fruit drinks, 100 percent fruit juice, diet soda or water — and how often. Soda and fruit drinks were considered sugar-sweetened beverages. A serving was defined as a can, glass or bottle of a beverage.
When the children of these moms were in elementary school, the researchers assessed the kids using several different measurements of obesity. They took kids’ height and weight to calculate body mass index and used a scanning technique to determine total fat mass, among other methods.
Of the 1,078 kids in the study, 272, or 25 percent, were considered overweight or obese based on their BMI. Moms who drank at least two servings of sugar-sweetened beverages per day during the second trimester had children most likely to fall in this group. Other measurements of obesity were also highest for these kids. Children’s own sugary beverage drinking habits did not alter the results, the scientists say.
The research can’t say moms’ soda sips directly caused the weight gain in their kids. But based on this study and other work, limiting sugary drinks during pregnancy “is probably a good idea,” Azad says. There’s no harm in avoiding them, “and it looks like there may be a benefit.” Her advice is to drink water instead.
DNA might reveal how dogs became man’s best friend.
A new study shows that some of the same genes linked to the behavior of extremely social people can also make dogs friendlier. The result, published July 19 in Science Advances, suggests that dogs’ domestication may be the result of just a few genetic changes rather than hundreds or thousands of them.
“It is great to see initial genetic evidence supporting the self-domestication hypothesis or ‘survival of the friendliest,’” says evolutionary anthropologist Brian Hare of Duke University, who studies how dogs think and learn. “This is another piece of the puzzle suggesting that humans did not create dogs intentionally, but instead wolves that were friendliest toward humans were at an evolutionary advantage as our two species began to interact.”
Not much is known about the underlying genetics of how dogs became domesticated. In 2010, evolutionary geneticist Bridgett vonHoldt of Princeton University and colleagues published a study comparing dogs’ and wolves’ DNA. The biggest genetic differences gave clues to why dogs and wolves don’t look the same. But major differences were also found in WBSCR17, a gene linked to Williams-Beuren syndrome in humans. Williams-Beuren syndrome leads to delayed development, impaired thinking ability and hypersociability. VonHoldt and colleagues wondered if changes to the same gene in dogs would make the animals more social than wolves, and whether that might have influenced dogs’ domestication.
In the new study, vonHoldt and colleagues compared the sociability of domestic dogs with that of wolves raised by humans. Dogs typically spent more time than wolves staring at and interacting with a human stranger nearby, showing the dogs were more social than the wolves. Analyzing the genetic blueprint of those dogs and wolves, along with DNA data of other wolves and dogs, showed variations in three genes associated with the social behaviors directed at humans: WBSCR17, GTF2I and GTF2IRD1. All three are tied to Williams-Beuren syndrome in humans. “It’s fascinating that a handful of genetic changes could be so influential on social behavior,” vonHoldt says.
She and colleagues propose that such changes may be closely intertwined with dog domestication. Previous hypotheses have suggested that domestication was linked to dogs’ development of advanced ways of analyzing and applying information about social situations, a way of thinking assumed to be unique to humans. “Instead of developing a more complex form of cognition, dogs appear to be engaging in excessively friendly behavior that increases the amount of time they spend near us and watching us,” says study coauthor Monique Udell, who studies animal behavior at Oregon State University in Corvallis. In turn, she says, that gives dogs “the opportunities necessary for them to learn about our behavior and what maximizes their success when living with us.”
The team notes, for instance, that in addition to contributing to sociability, the variations in WBSCR17 may represent an adaptation in dogs to living with humans. A previous study revealed that variations in WBSCR17 were tied to the ability to digest carbohydrates — a source of energy wolves rarely consumed. The variations seen in domestic dogs suggest those changes would have helped the animals thrive on the starch-rich diets of humans. Links between another gene related to starch digestion in dogs and domestication, however, have recently been called into question (SN Online: 7/18/17).
The other variations, the team argues, would have predisposed the dogs to be hypersocial with humans, a trait that humans would then have selected for as dogs were bred over generations.
Robots are branching out. A new prototype soft robot takes inspiration from plants by growing to explore its environment.
Vines and some fungi extend from their tips to explore their surroundings. Elliot Hawkes of the University of California, Santa Barbara and his colleagues designed a bot that works on similar principles. Its mechanical body sits inside a plastic tube reel that extends through pressurized inflation, a method that some invertebrates like peanut worms (Sipunculus nudus) also use to extend their appendages. The plastic tubing has two compartments, and inflating one side or the other changes the extension direction. A camera sensor at the tip alerts the bot when it’s about to run into something.
In the lab, Hawkes and his colleagues programmed the robot to form 3-D structures such as a radio antenna, turn off a valve, navigate a maze, swim through glue, act as a fire extinguisher, squeeze through tight gaps, shimmy through flypaper and slither across a bed of nails. The soft bot can extend up to 72 meters, and unlike plants, it can grow at speeds of up to 10 meters per second, the team reports July 19 in Science Robotics. The design could serve as a model for building robots that can traverse constrained environments.
This isn’t the first robot to take inspiration from plants. One plantlike predecessor was a robot modeled on roots.