How scientists are hunting for a safer opioid painkiller

An opioid epidemic is upon us. Prescription painkillers such as fentanyl and morphine can ease terrible pain, but they can also cause addiction and death. The Centers for Disease Control and Prevention estimates that nearly 2 million Americans are abusing or addicted to prescription opiates. Politicians are attempting to stem the tide at state and national levels, with bills to change and monitor how physicians prescribe painkillers and to increase access to addiction treatment programs.

Those efforts may make access to painkillers more difficult for some. But pain comes to everyone eventually, and opioids are one of the best ways to make it go away.

Morphine is the king of pain treatment. “For hundreds of years people have used morphine,” says Lakshmi Devi, a pharmacologist at the Icahn School of Medicine at Mount Sinai in New York City. “It works, it’s a good drug, that’s why we want it. The problem is the bad stuff.”

The “bad stuff” includes tolerance — patients have to take higher and higher doses to relieve their pain. Drugs such as morphine depress breathing, an effect that can prove deadly. They also cause constipation, drowsiness and vomiting. But “for certain types of pain, there are no medications that are as effective,” says Bryan Roth, a pharmacologist and physician at the University of North Carolina at Chapel Hill. The trick is constructing a drug with all the benefits of an opioid painkiller, and few to none of the side effects. Here are three ways that scientists are searching for the next big pain buster, and three of the chemicals they’ve turned up.

Raid the chemical library
To find promising new drug candidates, scientists often look to chemical libraries of known molecules. “A pharmaceutical company will have libraries of a few million compounds,” Roth explains. Researchers comb through these libraries to find compounds that bind to specific molecules in the body and brain.

When drugs such as morphine enter the brain, they bind to receptors on the outside of cells and cause cascades of chemical activity inside. Opiate drugs bind to three types of opiate receptors: mu, kappa and delta. The mu receptor type is the one associated with the pain-killing — and pleasure-causing — activities of opiates. Activation of this receptor type spawns two cascades of chemical activity. One, the Gi pathway, is associated with pain relief. The other — known as the beta-arrestin pathway — is associated with slowed breathing rate and constipation. So a winning candidate molecule would be one that triggered only the Gi pathway, without triggering beta-arrestin.
Roth and colleagues set out to find a molecule that fit those specifications. But instead of the intense, months-long process of experimentally screening molecules in a chemical library, Roth’s group chose a computational approach, screening more than 3 million compounds in a matter of days. The screen narrowed the candidates down to 23 molecules to test the old-fashioned way — both chemically and in mice. Each of these potential painkillers went through even more tests to find those with the strongest bond to the receptor and the highest potency.
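
The last step of such a screen boils down to scoring and ranking. Here is a minimal sketch of that ranking logic in Python, with made-up compound names and scores; the actual study used dedicated molecular docking software to estimate how snugly each of the 3 million molecules would sit in the mu receptor’s binding pocket.

    # Minimal score-and-rank sketch of a virtual screen (hypothetical data;
    # a real screen docks each compound against a 3-D model of the receptor).
    def virtual_screen(compounds, score_fn, keep=23):
        """Score every compound and keep the best-ranked candidates."""
        ranked = sorted(compounds, key=score_fn)  # more negative score = tighter predicted fit
        return ranked[:keep]

    # Made-up docking scores standing in for real results:
    library = {"compound_A": -9.1, "compound_B": -6.3, "compound_C": -10.4}
    hits = virtual_screen(library, score_fn=library.get, keep=2)
    print(hits)  # ['compound_C', 'compound_A']

The hard part in practice is the scoring function itself; once each candidate has a score, the cutoff that yields a short list for lab testing is as simple as shown.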

In the end, the team focused on a chemical called PZM21. It activates only the pathway associated with pain relief, and is an effective painkiller in mice. It does not depress breathing rate, and it might even avoid some of the addictive potential of other opiates, though Roth notes that further studies need to be done. He and his colleagues published their findings September 8 in Nature.

Letting the computer handle the initial screen is “a smart way of going about it,” notes Nathaniel Jeske, a neuropharmacologist at the University of Texas Health Science Center in San Antonio. But mice are only the first step. “I’m interested to see if the efficacy applies to different animals.”

Making an opiate 2.0
Screening millions of compounds is one way to find a new drug. But why buy new when you can give a chemical makeover to something you already have? This is a “standard medicinal chemistry approach,” Roth says: “Pick a known drug and make analogs [slightly tweaked structures], and that can work.”

That was the approach that Mei-Chuan Ko and his group at Wake Forest University School of Medicine in Winston-Salem, N.C., decided to take with the common opioid painkiller buprenorphine. “Compared to morphine or fentanyl, buprenorphine is safer,” Ko explains, “but it has abuse liability. Physicians still have concerns about the abuse and won’t prescribe it.” Buprenorphine is what’s called a partial agonist at the mu receptor — it can’t fully activate the receptor, even at the highest doses. So it’s an effective painkiller that is harder to overdose on — so much so that it’s used to treat addiction to other opiates. But it can still cause a high, so doctors still worry about people abusing the drug.

So to make a version of buprenorphine with lower addictive potential, Ko and his colleagues focused on a chemical known as BU08028. It’s structurally similar to buprenorphine, but it also hits another type of opioid receptor called the nociceptin-orphanin FQ peptide (or NOP) receptor.

The NOP receptor is not a traditional target. This is partially because its effect in rodents — usually the first recipients of a new drug — is “complicated,” says Ko. “It does kill pain at high doses but not at low doses.” In primates, however, it’s another matter. In tests in four monkeys, BU08028 killed pain effectively at low doses and didn’t suppress breathing. The monkeys also showed little interest in taking the drug voluntarily, which suggests it might not be as addictive as classic opioid drugs. Ko and his colleagues published their results in the Sept. 13 Proceedings of the National Academy of Sciences.*

Off the beaten path
Combing through chemical libraries or tweaking drugs that are already on the market takes advantage of systems that are already well-established. But sometimes, a tough question requires an entirely new approach. “You can either target the receptors you know and love … or you can do the complete opposite and see if there’s a new receptor system,” Devi says.

Jeske and his group chose the latter option. Of the three opiate receptor types — mu, kappa and delta — most drugs (and drug studies) focus on the mu receptor. Jeske’s group chose to investigate delta instead. They were especially interested in targeting delta receptors in the body — far from the brain, where many of opioids’ side effects arise.

The delta receptor has an unfortunate quirk. When activated by a drug, it can help kill pain. But most of the time, it can’t be activated at all. The receptor is protected — bound up tight by another molecule — and only released when an area is injured. So Jeske’s goal was to find out what was binding up the delta receptor, and figure out how to get rid of it.

Working in rat neurons, Jeske and his group found that when a molecule called GRK2 was around, the delta receptor was inactive. “Knock down GRK2 and the receptor works just fine,” Jeske says. By genetically knocking out GRK2 in rats, Jeske and his group left the delta receptor free to respond to a drug — and to prevent pain. The group published their results September 6 in Cell Reports.

It’s “a completely new target and that’s great,” says Devi. “But that new target with a drug is a tall order.” A single drug is unlikely to be able to both push away GRK2 and then activate the delta receptor to stop pain.

Jeske agrees that a single molecule probably couldn’t take on both roles. Instead, one drug to get rid of GRK2 would be given first, followed by another to activate the delta receptors.

Each drug development method has unearthed drug candidates with early promise. “We’ve solved these problems in mice and rats many times,” Devi notes. But whether sifting through libraries, tweaking older drugs or coming up with entirely new ones, the journey to the clinic has only just begun.

*Paul Czoty and Michael Nader, two authors on the PNAS paper, were on my Ph.D. dissertation committee. I have had neither direct nor indirect involvement with this research.

These acorn worms have a head for swimming

Certain marine worms spend their larval phase as little more than a tiny, transparent “swimming head.” A new study explores the genes involved in that headfirst approach to life.

A mud flat in Morro Bay, Calif., is the only known place where this one species of acorn worm, Schizocardium californicum, is found. After digging up the creatures, Paul Gonzalez, an evolutionary developmental biologist at Stanford University, raised hordes of the larvae at Stanford’s Hopkins Marine Station in Pacific Grove, Calif.
Because a larva and an adult worm look so different, scientists wondered if the same genes and molecular machinery were involved in both phases of development. To find out, Gonzalez and colleagues analyzed the worm’s genetic blueprint during each phase, they report online December 8 in Current Biology.

Genes linked to trunk development were switched off during the larval phase until just before metamorphosis. Instead, most of the genes switched on were associated with head development, Gonzalez says.

The larvae hatch from eggs laid on the mud. When tides flood the area, the squishy, gel-filled animals use hairlike cilia to swim upwards to devour bits of algae. “They’re feeding machines,” Gonzalez says. He speculates that being balloon-shaped noggins, rather than wriggling noodles, may help the organisms float and feed more efficiently.

After about two months of gorging at the algae buffet, the larvae, which grow to roughly 2 millimeters across, transform and sink back into the muck. There, they eventually grow a body that can stretch up to about 40 centimeters.

Evidence falls into place for once and future supercontinents

Look at any map of the Atlantic Ocean, and you might feel the urge to slide South America and Africa together. The two continents just beg to nestle next to each other, with Brazil’s bulge locking into West Africa’s dimple. That visible clue, along with several others, prompted Alfred Wegener to propose over a century ago that the continents had once been joined in a single enormous landmass. He called it Pangaea, or “all lands.”

Today, geologists know that Pangaea was just the most recent in a series of mighty supercontinents. Over hundreds of millions of years, enormous plates of Earth’s crust have drifted together and then apart. Pangaea ruled from roughly 400 million to about 200 million years ago. But wind the clock further back, and other supercontinents emerge. Between 1.3 billion and 750 million years ago, all the continents amassed in a great land known as Rodinia. Go back even further, about 1.4 billion years or more, and the crustal shards had arranged themselves into a supercontinent called Nuna.

Using powerful computer programs and geologic clues from rocks around the world, researchers are painting a picture of these long-lost worlds. New studies of magnetic minerals in rock from Brazil, for instance, are helping pin the ancient Amazon to a spot it once occupied in Nuna. Other recent research reveals the geologic stresses that finally pulled Rodinia apart, some 750 million years ago. Scientists have even predicted the formation of the next supercontinent — an amalgam of North America and Asia, evocatively named Amasia — some 250 million years from now.
Reconstructing supercontinents is like trying to assemble a 1,000-piece jigsaw puzzle after you’ve lost a bunch of the pieces and your dog has chewed up others. Still, by figuring out which puzzle pieces went where, geologists have been able to illuminate some of earth science’s most fundamental questions.
For one thing, continental drift, that gradual movement of landmasses across Earth’s surface, profoundly affected life by allowing species to move into different parts of the world depending on what particular landmasses happened to be joined. (The global distribution of dinosaur fossils is dictated by how continents were assembled when those great animals roamed.)

Supercontinents can also help geologists hunting for mineral deposits — imagine discovering gold ore of a certain age in the Amazon and using it to find another gold deposit in a distant landmass that was once joined to the Amazon. More broadly, shifting landmasses have reshaped the face of the planet — as they form, supercontinents push up mountains like the Appalachians, and as they break apart, they create oceans like the Atlantic.

“The assembly and breakup of these continents have profoundly influenced the evolution of the whole Earth,” says Johanna Salminen, a geophysicist at the University of Helsinki in Finland.

Push or pull
For centuries, geologists, biogeographers and explorers have tried to explain various features of the natural world by invoking lost continents. Some of the wilder concepts included Lemuria, a sunken realm between Madagascar and India that offered an out-there rationale for the presence of lemurs and lemurlike fossils in both places, and Mu, an underwater land supposedly described in ancient Mayan manuscripts. While those fantastic notions have fallen out of favor, scientists are exploring the equally mind-bending story of the supercontinents that actually existed.
Earth’s constantly shifting jigsaw puzzle of continents and oceans traces back to the fundamental forces of plate tectonics. The story begins in the centers of oceans, where hot molten rock wells up from deep inside the Earth along underwater mountain chains. The lava cools and solidifies into newborn ocean crust, which moves continually away from either side of the mountain ridge as if carried outward on a conveyor belt. Eventually, the moving ocean crust bumps into a continent, where it either stalls or begins diving beneath that continental crust in a process called subduction.

Those competing forces — pushing newborn crust away from the mid-ocean mountains and pulling older crust down through subduction — are constantly rearranging Earth’s crustal plates. That’s why North America and Europe are getting farther away from each other by a few centimeters each year as the Atlantic widens, and why the Pacific Ocean is shrinking, its seafloor sucked down by subduction along the Ring of Fire — looping from New Zealand to Japan, Alaska and Chile.

By running the process backward in time, geologists can begin to see how oceans and continents have jockeyed for position over millions of years. Computers calculate how plate positions shifted over time, based on the movements of today’s plates as well as geologic data that hint at their past locations.

Those geologic clues — such as magnetic minerals in ancient rocks — are few and far between. But enough remain for researchers to start to cobble together the story of which crustal piece went where.

“To solve a jigsaw puzzle, you don’t necessarily need 100 percent of the pieces before you can look at it and say it’s the Mona Lisa,” says Brendan Murphy, a geophysicist at St. Francis Xavier University in Antigonish, Nova Scotia. “But you need some key pieces.” He adds: “With the eyes and nose, you have a chance.”

No place like Nuna
For ancient Nuna, scientists are starting to find the first of those key pieces. They may not reveal the Mona Lisa’s enigmatic smile, but they are at least starting to fill in a portrait of a long-vanished supercontinent.

Nuna came together starting around 2 billion years ago, with its heart a mash-up of Baltica (the landmass that today contains Scandinavia), Laurentia (which is now much of North America) and Siberia. Geologists argue over many things involving this first supercontinent, starting with its name. “Nuna” is from the Inuktitut language of the Arctic. It means lands bordering the northern oceans, so dubbed for the supercontinent’s Arctic-fringing components. But some researchers prefer to call it Columbia after the Columbia region of North America’s Pacific Northwest.

Whatever its moniker, Nuna/Columbia is an exercise in trying to get all the puzzle pieces to fit. Because Nuna existed so long ago, subduction has recycled many rocks of that age back into the deep Earth, erasing any record of what they were doing at the time. Geologists turn to the rocks that do remain, in places like India, South America and North China, analyzing them for clues to where those landmasses were at the time of Nuna.

One of the most promising techniques targets magnetic minerals. Paleomagnetic studies use the minerals as tiny time capsule compasses, which recorded the direction of the magnetic field at the time the rocks formed. The minerals can reveal information about where those rocks used to be, including their latitude relative to where the Earth’s north magnetic pole was at the time.
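
In the simplest picture, in which Earth’s field is treated as a bar magnet aligned with the planet’s spin axis, the steepness of that frozen-in field translates directly into ancient latitude: tan(I) = 2 tan(λ), where I is the magnetic inclination recorded by the rock and λ is the latitude at which it formed. A rock that recorded a nearly horizontal field formed near the equator; one that recorded an inclination of 45 degrees, for example, formed at a latitude of about 27 degrees. Longitude, unfortunately, is not recorded at all, which is one reason these reconstructions stay hard.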

Salminen has been gathering paleomagnetic data from Nuna-age rocks in Brazil and western Africa. Not surprisingly, given their current lock-and-key configuration, these two chunks were once united as a single ancient continental block, known as the Congo/São Francisco craton. For millions of years, it shuffled around as a single geologic unit, occasionally merging with other blocks and then later splitting away.

Salminen has now figured out where the Congo/São Francisco puzzle piece fit in the jigsaw that made up Nuna. In 1.5-billion-year-old rocks in Brazil, she unearthed magnetic clues that placed the Congo/São Francisco craton at the southeastern tip of Baltica all those years ago. She and her colleagues reported the findings in November in Precambrian Research.

It is the first time scientists have gotten paleomagnetic information about where the craton may have been as far back as Nuna. “This is quite remarkable — it was really needed,” she says. “Now we can say Congo could have been there.” Like building out a jigsaw puzzle from its center, the work essentially expands Nuna’s core.

Rodinia’s radioactive decay
By around 1.3 billion years ago, Nuna was breaking apart, the pieces of the Mona Lisa face shattering and drifting away from each other. It took another 200 million years before they rejoined in the configuration known as Rodinia.

Recent research suggests that Rodinia may not have looked much different from Nuna, though. The Mona Lisa in its second incarnation may still have looked like the portrait of a woman — just maybe with a set of earrings dangling from her lobes.
Richard Ernst of Carleton University in Ottawa, Canada, recently explored the relative positions of Laurentia and Siberia between 1.9 billion and 720 million years ago, a period that spans both Nuna and Rodinia. Ernst’s group specializes in studying “large igneous provinces” — the huge outpourings of lava that build up over millions of years. Often the molten rock flows along sheetlike structures known as dikes, which funnel magma from deep in the Earth upward.

By using the radioactive decay of elements in the dike rock, such as uranium decaying to lead, scientists can precisely date when a dike formed. With enough dates on a particular dike, researchers can produce a sort of bar code that is unique to each dike. Later, when the dikes are broken apart and shifted over time, geologists can pinpoint the bar codes that match and thus line up parts of the crust that used to be together.
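
In its simplest form, that uranium-lead clock reads t = (1/λ) × ln(1 + [lead-206]/[uranium-238]), where λ is the decay constant of uranium-238 (about 1.55 × 10^-10 per year) and t is the time since the dike crystallized, assuming the rock started with essentially no lead. A measured lead-to-uranium ratio of 0.12, for instance, works out to an age of roughly 730 million years. In practice, geochronologists cross-check two uranium decay chains and correct for any initial lead, which is what makes the dates precise enough to serve as bar codes.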

Ernst’s team found that dikes from Laurentia and Siberia matched during four periods between 1.87 billion and 720 million years ago — suggesting they were connected for that entire span, the team reported in June in Nature Geoscience. Such a long-term relationship suggests that Siberia and Laurentia may have stuck together through the Nuna-Rodinia transition, Ernst says.

Other parts of the puzzle tend to end up in the same relative locations as well, says Joseph Meert, a paleomagnetist at the University of Florida in Gainesville. In each supercontinent, Laurentia, Siberia and Baltica knit themselves together in roughly the same arrangement: Siberia and Baltica nestle like two opposing knobs on one end of Laurentia’s elongated blob. Meert calls these three continental fragments “strange attractors,” since they appear conjoined time after time.

It’s the outer edges of the jigsaw puzzle that change. Fragments like north China and southern Africa end up in different locations around the supercontinent core. “I call those bits the lonely wanderers,” Meert says.

Getting to know Pangaea
While some puzzle-makers try to sort out the reconstructions of past supercontinents, other geologists are exploring deeper questions about why big landmasses come together in the first place. And one place to look is Pangaea.

“Most people would accept what Pangaea looks like,” Murphy says. “But as soon as you start asking why it formed, how it formed and what processes are involved — then all of a sudden you run into problems.”
Around 550 million years ago, subduction zones around the edges of an ancient ocean began dragging that oceanic crust under continental crust. But around 400 million years ago, that subduction suddenly stopped. In a major shift, a different, much younger seafloor began to subduct instead beneath the continents. That young ocean crust kept getting sucked up until it all disappeared, and the continents were left merged in the giant mass of Pangaea.

Imagine in today’s world, if the Pacific stopped shrinking and all of a sudden the Atlantic started shrinking instead. “That’s quite a significant problem,” Murphy says. In unpublished work, he has been exploring the physics of how plates of oceanic and continental crust — which have different densities, buoyancies and other physical characteristics — could have interacted with one another in the run-up to Pangaea.

Supercontinent breakups are similarly complicated. Once all the land amasses in a single big chunk, it cannot stay together forever. In one scenario, its sheer bulk acts as an electric blanket, allowing heat from the deep Earth to pond up beneath it until things get too hot and the supercontinent splinters (SN: 1/21/17, p. 14). In another, physical stressors pull the supercontinent apart.

Peter Cawood, a geologist at the University of St. Andrews in Fife, Scotland, likes the second option. He has been studying mountain ranges that arose when the crustal plates that made up Rodinia collided, pushing up soaring peaks where they met. These include the Grenville mountain-building event of about 1 billion years ago, traces of which linger today in the eroded peaks of the Appalachians. Cawood and his colleagues analyzed the times at which such mountains appeared and put together a detailed timeline of what happened as Rodinia began to break apart.

They note that crustal plates began subducting around the edges of Rodinia right around the time of its breakup. That sucking down of crust caused the supercontinent to be pulled from all directions and eventually break apart, Cawood and his colleagues wrote in Earth and Planetary Science Letters in September. “The timing of major breakup corresponds with this timing of opposing subduction zones,” he says.

The future is Amasia
That stressful situation is similar to what the Pacific Ocean finds itself in today. Because it is flanked by subduction zones around the Ring of Fire, the Pacific Plate is shrinking over time. Some geologists predict that it will vanish entirely in the future, leaving North America and Asia to merge into the next supercontinent, Amasia. Others have devised different possible paths to Amasia, such as closing the Arctic Ocean rather than the Pacific.

“Speculation about the future supercontinent Amasia is exactly that, speculation,” says geologist Ross Mitchell of Curtin University in Perth, Australia, who in 2012 helped describe the mechanics of how Amasia might arise. “But there’s hard science behind the conjecture.”

For instance, Masaki Yoshida of the Japan Agency for Marine-Earth Science and Technology in Yokosuka recently used sophisticated computer models to analyze how today’s continents would continue to move atop the flowing heat of the deep Earth. He combined modern-day plate motions with information on how that internal planetary heat churns in three dimensions, then ran the whole scenario into the future. In a paper in the September Geology, Yoshida describes how North America, Eurasia, Australia and Africa will end up merged in the Northern Hemisphere.

No matter where the continents are headed, they are destined to reassemble. Plate tectonics says it will happen — and a new supercontinent will shape the face of the Earth. It might not look like the Mona Lisa, but it might just be another masterpiece.

Promise and perils of marijuana deserve more scientific scrutiny

Marijuana’s medical promise deserves closer, better-funded scientific scrutiny, a new state-of-the-science report concludes.

The report, released January 12 by the National Academies of Sciences, Engineering and Medicine in Washington, D.C., calls for expanding research on potential medical applications of cannabis and its products, including marijuana and chemical components called cannabinoids.

Big gaps in knowledge remain about health effects of cannabis use, for good or ill. Efforts to study these effects are hampered by federal classification of cannabis as a Schedule 1 drug, meaning it has no accepted medical use and a high potential for abuse. Schedule 1 status makes it difficult for researchers to access cannabis. The new report recommends reclassifying the substance to make it easier to study.
Recommendations from the 16-member committee that authored the report come at a time of heightened acceptance of marijuana and related substances. Cannabis is a legal medical treatment in 28 states and the District of Columbia. Recreational pot use is legal in eight of those states and the District.

“The legalization and commercialization of cannabis has allowed marketing to get ahead of science,” says Raul Gonzalez, a psychologist at Florida International University in Miami who reviewed the report before publication. While the report highlights possible medical benefits, Gonzalez notes that it also underscores negative consequences of regular cannabis use. These include certain respiratory and psychological problems.

A 2015 survey indicated that around 22 million people in the United States ages 12 and older ingested some form of cannabis in the previous month, mainly as a recreational drug. Roughly 10 percent of those people reported using cannabis solely for medical reasons and 36 percent reported a mix of recreational and medical use.

“This growing acceptance, accessibility and use of cannabis and its derivatives have raised important public health concerns,” says committee chair Marie McCormick, a Harvard T.H. Chan School of Public Health pediatrician.

She and her committee colleagues considered more than 10,700 abstracts of studies on cannabis’s health effects published between January 1, 1999, and August 1, 2016. The committee gave special weight to research reviews published since 2011.
Cannabis and cannabinoids show medical potential, the report concludes. Evidence indicates that these substances substantially reduce chronic pain in adults. Cannabis derivatives ingested in pills by multiple sclerosis patients temporarily reduce self-reported muscle spasms (SN: 6/19/10, p. 16). Cannabinoids also help to prevent and lessen chemotherapy-induced nausea and vomiting in adults.

Less conclusive evidence suggests cannabis and cannabinoids improve sleep for adults with sleep apnea, fibromyalgia, chronic pain and multiple sclerosis, the report says.

“If cannabis was to be classified as a medicine, then it needs to be rigorously tested like all other medicines,” says pharmacologist Karen Wright of Lancaster University in England. She hopes the new report spurs researchers to develop standards for the chemical composition of cannabis products tested as possible medical treatments. Despite cannabis’s medical promise, scientists have more questions than answers about how its use influences physical and mental health.

Encouragingly, studies reviewed by the committee suggest that smoking marijuana, unlike smoking cigarettes, does not increase the chances of developing lung, head and neck cancers. But pot’s relationship to other cancers — as well as to heart attacks, strokes and diabetes — is unclear. And few or no findings support the use of cannabis to treat Tourette’s syndrome, post-traumatic stress disorder, cancer, epilepsy (SN Online: 4/13/15) or other medical ailments.

Evidence does not conclusively link marijuana smoking to respiratory diseases such as asthma. But regular pot use tends to accompany increased chronic bronchitis episodes and an intensified cough and phlegm production, at least until smoking stops.

Cannabis smoke may deter infection-related inflammation in the body. But data are sparse on whether cannabis or its derivatives influence immune responses in healthy people or those with HIV.

There are some clear downsides to consuming marijuana and related substances, the new report adds. Solid scientific support exists for a link between cannabis use and later development of psychotic disorders such as schizophrenia. A moderate relationship exists between cannabis use and the development of addictions to alcohol, tobacco and illegal drugs.

Fairly strong evidence points to learning, memory and attention problems immediately after smoking marijuana. Limited data, however, tie pot use to academic problems, dropping out of school, unemployment or lowered income in adulthood.

Asteroid barrage, ancient marine life boom not linked

An asteroid bombardment that some say triggered an explosion of marine animal diversity around 471 million years ago actually had nothing to do with it.

Precisely dating meteorites from the salvo, researchers found that the space rock barrage began at least 2 million years after the start of the Great Ordovician Biodiversification Event. So the two phenomena are unrelated, the researchers conclude January 24 in Nature Communications.

Some scientists had previously proposed a causal link between the two events: Raining debris from an asteroid breakup (SN: 7/23/16, p. 4) drove evolution by upsetting ecosystems and opening new ecological niches. The relative timing of the impacts and biodiversification was uncertain, though.
Geologist Anders Lindskog of Lund University in Sweden and colleagues examined 17 crystals buried alongside meteorite fragments. Gradual radioactive decay of uranium atoms inside the crystals allowed the researchers to accurately date the sediment layer to around 467.5 million years ago. Based in part on this age, the researchers estimate that the asteroid breakup took place around 468 million years ago. That’s well after fossil evidence suggests that the diversification event kicked off.

Other forces such as climate change and shifting continents instead promoted biodiversity, the researchers propose.

LSD’s grip on brain protein could explain drug’s long-lasting effects

Locked inside a human brain protein, the hallucinogenic drug LSD takes an extra-long trip.

New X-ray crystallography images reveal how an LSD molecule gets trapped within a protein that senses serotonin, a key chemical messenger in the brain. The protein, called a serotonin receptor, belongs to a family of proteins involved in everything from perception to mood.

The work is the first to decipher the structure of such a receptor bound to LSD, which gets snared in the protein for hours. That could explain why “acid trips” last so long, study coauthor Bryan Roth and colleagues report January 26 in Cell. It’s “the first snapshot of LSD in action,” he says. “Until now, we had no idea how it worked at the molecular level.”
But the results might not be that relevant to people, warns Cornell University biophysicist Harel Weinstein.

Roth’s group didn’t capture the main target of LSD, a serotonin receptor called 5-HT2A, instead imaging the related receptor 5-HT2B. That receptor is “important in rodents, but not that important in humans,” Weinstein says.

Roth’s team has devoted decades to working on 5-HT2A, but the receptor has “thus far been impossible to crystallize,” he says. Predictions of 5-HT2A’s structure, though, are very similar to that of 5-HT2B, he says.

LSD, or lysergic acid diethylamide, was first cooked up in a chemist’s lab in 1938. It was popular (and legal) for recreational use in the early 1960s, but the United States later banned the drug (also known as blotter, boomer, Purple Haze and electric Kool-Aid).

It’s known for altering perception and mood — and for its unusually long-lasting effects. An acid trip can run some 15 hours, and at high doses, effects can linger for days. “It’s an extraordinarily potent drug,” says Roth, a psychiatrist and pharmacologist at the University of North Carolina School of Medicine in Chapel Hill.
Scientists have known for decades that LSD targets serotonin receptors in the brain. These proteins, which are also found in the intestine and elsewhere in the body, lodge within the outer membranes of nerve cells and relay chemical signals to the cells’ interiors. But no one knew exactly how LSD fit into the receptor, or why the drug was so powerful.

Roth and colleagues’ work shows the drug hunkered deep inside a pocket of the receptor, grabbing onto an amino acid that acts like a handle to pull down a lid. It’s like a person holding the door of a storm cellar closed during a tornado, Roth says.

When the team did additional molecular experiments, tweaking the lid’s handle so that LSD could no longer hang on, the drug slipped out of the pocket faster than when the handle was intact. That was true whether the team used receptor 5-HT2B or 5-HT2A, Roth says. (Though the researchers couldn’t crystallize 5-HT2A, they were able to grow the protein inside cells in the lab for use in their other experiments.) The results suggest that LSD’s grip on the receptor is what keeps it trapped inside. “That explains to a great extent why LSD is so potent and why it’s so long-lasting,” Roth says.

David Nutt, a neuropsychopharmacologist at Imperial College London, agrees. He calls the work an “elegant use of molecular science.”

Weinstein remains skeptical. The 5-HT2A receptor is the interesting one, he maintains. A structure of that protein “has been needed for a very long time.” That’s what would really help explain the hallucinogenic effects of LSD, he says.

Mysteries of time still stump scientists

The topic of time is both excruciatingly complicated and slippery. The combination makes it easy to get bogged down. But instead of an exhaustive review, journalist Alan Burdick lets curiosity be his guide in Why Time Flies, an approach that leads to a light yet supremely satisfying story about time as it runs through — and is perceived by — the human body.

Burdick doesn’t restrict himself to any one aspect of his question. He spends time excavating what he calls the “existential caverns,” where philosophical questions, such as the shifting concept of now, dwell. He describes the circadian clocks that keep bodies running efficiently, making sure our bodies are primed to digest food at mealtimes, for instance. He even covers the intriguing and slightly insane self-experimentation by the French scientist Michel Siffre, who crawled into caves in 1962 and 1972 to see how his body responded in places without any time cues.
In the service of his exploration, Burdick lived in constant daylight in the Alaskan Arctic for two summery weeks, visited the master timekeepers at the International Bureau of Weights and Measures in Paris to see how they precisely mete out the seconds and plunged off a giant platform to see if time felt slower during moments of stress. The book not only deals with fascinating temporal science but also how time is largely a social construct. “Time is what everybody agrees the time is,” one researcher told Burdick.
That subjective truth also applies to the brain. Time, in a sense, is created by the mind. “Our experience of time is not a cave shadow to some true and absolute thing; time is our perception,” Burdick writes. That subjective experience becomes obvious when Burdick recounts how easily our brains’ clocks can be swayed. Emotions, attention (SN: 12/10/16, p. 10) and even fever can distort our time perception, scientists have found.

Burdick delves deep into several neuroscientific theories of how time runs through the brain (SN: 7/25/15, p. 20). Here, the story narrows somewhat in an effort to thoroughly explain a few key ideas. But even amid these details, Burdick doesn’t lose the overarching truth  — that for the most part, scientists simply don’t know the answers. That may be because there is no one answer; instead, the brain may create time by stitching together a multitude of neural clocks.
After reading Why Time Flies, readers will be convinced that no matter how much time passes, the mystery of time will endure.

Germanium computer chips gain ground on silicon — again

First germanium integrated circuits

Integrated circuits made of germanium instead of silicon have been reported … by researchers at International Business Machines Corp. Even though the experimental devices are about three times as large as the smallest silicon circuits, they reportedly offer faster overall switching speed. Germanium … has inherently greater mobility than silicon, which means that electrons move through it faster when a current is applied. — Science News, February 25, 1967

UPDATE:
Silicon circuits still dominate computing. But demand for smaller, high-speed electronics is pushing silicon to its physical limits, sending engineers back for a fresh look at germanium. Researchers built the first compact, high-performance germanium circuit in 2014, and scientists continue to fiddle with its physical properties to make smaller, faster circuits. Although not yet widely used, germanium circuits and those made from other materials, such as carbon nanotubes, could help engineers make more energy-efficient electronics.

Helium’s inertness defied by high-pressure compound

Helium — the recluse of the periodic table — is reluctant to react with other elements. But squeeze the element hard enough, and it will form a chemical compound with sodium, scientists report.

Helium, a noble gas, is one of the periodic table’s least reactive elements. Originally, the noble gases were believed incapable of forming any chemical compounds at all. But after scientists created xenon compounds in the early 1960s, a slew of other noble gas compounds followed. Helium, however, has largely been a holdout.
Although helium was known to hook up with certain elements, the bonds in those compounds were weak, or the compounds were short-lived or electrically charged. But the new compound, called sodium helide or Na2He, is stable at high pressure, and its bonds are strong, an international team of scientists reports February 6 in Nature Chemistry.

As a robust helium compound, “this is really the first that people ever observed,” says chemist Maosheng Miao of California State University, Northridge, who was not involved with the research.

The material’s properties are still poorly understood, but it is unlikely to have immediate practical applications — scientists can create it only in tiny amounts at very high pressures, says study coauthor Alexander Goncharov, a physicist at the Carnegie Institution for Science in Washington, D.C. Instead, the oddball compound serves as inspiration for scientists who hope to produce weird new materials at lower pressures. “I would say that it’s not totally impossible,” says Goncharov. Scientists may be able to tweak the compound, for example, by adding or switching out elements, to decrease the pressure needed.

To coerce helium to link up with another element, the scientists, led by Artem Oganov of Stony Brook University in New York, first performed computer calculations to see which compounds might be possible. Sodium, calculations predicted, would form a compound with helium if crushed under enormously high pressure. Under such conditions, the typical rules of chemistry change — elements that refuse to react at atmospheric pressure can sometimes become bosom buddies when given a squeeze.

So Goncharov and colleagues pinched small amounts of helium and sodium between a pair of diamonds, reaching pressures more than a million times that of Earth’s atmosphere, and heated the material with lasers to temperatures above 1,500 kelvins (about 1200° Celsius). By scattering X-rays off the compound, the scientists could deduce its structure, which matched the one predicted by calculations.
“I think this is really the triumph of computation,” says Miao. In the search for new compounds, computers now allow scientists to skip expensive trial-and-error experiments and zero in on the best candidates to create in a laboratory.

Na2He is an unusual type of compound known as an electride, in which pairs of electrons are cloistered off, away from any atoms. But despite the compound’s bizarre nature, it behaves somewhat like a commonplace compound such as table salt, in which negatively charged chloride ions alternate with positively charged sodium ions. In Na2He, the isolated electron pairs act like negative ions in such a compound, and the eight sodium atoms surrounding each helium atom are the positive ions.

“The idea that you can make compounds with things like helium which don’t react at all, I think it’s pretty interesting,” says physicist Eugene Gregoryanz of the University of Edinburgh. But, he adds, “I would like to see more experiments” to confirm the result.

The scientists’ calculations also predicted that a compound of helium, sodium and oxygen, called Na2HeO, should form at even lower pressures, though that one has yet to be created in the lab. So the oddball new helium compound may soon have a confirmed cousin.

New, greener catalysts are built for speed

Platinum, one of the rarest and most expensive metals on Earth, may soon find itself out of a job. Known for its allure in engagement rings, platinum is also treasured for its ability to jump-start chemical reactions. It’s an excellent catalyst, able to turn standoffish molecules into fast friends. But Earth’s supply of the metal is limited, so scientists are trying to coax materials that aren’t platinum — aren’t even metals — into acting like they are.

For years, platinum has been offering behind-the-scenes hustle in catalytic converters, which remove harmful pollutants from auto exhaust. It’s also one of a handful of rare metals that move along chemical reactions in many well-established industries. And now, clean energy technology opens a new and growing market for the metal. Energy-converting devices like fuel cells being developed to power some types of electric vehicles rely on platinum’s catalytic properties to transform hydrogen into electricity. Even generating the hydrogen fuel itself depends on platinum.

Without a cheaper substitute for platinum, these clean energy technologies won’t be able to compete against fossil fuels, says Liming Dai, a materials scientist at Case Western Reserve University in Cleveland.

To reduce the pressure on platinum, Dai and others are engineering new materials that have the same catalytic powers as platinum and other metals — without the high price tag. Some researchers are replacing expensive metals with cheaper, more abundant building blocks, like carbon. Others are turning to biology, using catalysts perfected by years of evolution as inspiration. And when platinum really is best for a job, researchers are retooling how it is used to get more bang for the buck.

Moving right along
Catalysts are the unsung heroes of the chemical reactions that make human society tick. These molecular matchmakers are used in manufacturing plastics and pharmaceuticals, petroleum and coal processing and now clean energy technology. Catalysts are even inside our bodies, in the form of enzymes that break food into nutrients and help cells make energy.
During any chemical reaction, molecules break chemical bonds between their atomic building blocks and then make new bonds with different atoms — like swapping partners at a square dance. Sometimes, those partnerships are easy to break: A molecule has certain properties that let it lure away atoms from another molecule. But in stable partnerships, the molecules are content as they are. Left together for a very long period of time, a few might eventually switch partners. But there’s no mass frenzy of bond breaking and rebuilding.

Catalysts make this breaking and rebuilding happen more efficiently by lowering the activation energy — the threshold amount of energy needed to make a chemical reaction go. Starting and ending products stay the same; the catalyst just changes the path, building a paved highway to bypass a bumpy dirt road. With an easier route, molecules that might take years to react can do so in seconds instead. A catalyst doesn’t get used up in the reaction, though. Like a wingman, it incentivizes other molecules to react, and then it bows out.
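
One standard way to put numbers on that payoff is the Arrhenius relation from introductory chemistry: a reaction’s rate constant scales as exp(-Ea/RT), where Ea is the activation energy, T is the temperature and R is the gas constant. Because the activation energy sits in an exponent, a modest drop goes a long way. At room temperature, RT is about 2.5 kilojoules per mole, so a catalyst that shaves 20 kilojoules per mole off the barrier speeds a reaction up by a factor of roughly 3,000.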

A hydrogen fuel cell, for example, works by reacting hydrogen gas (H2) with oxygen gas (O2) to make water (H2O) and electricity. The fuel cell needs to break apart the atoms of the hydrogen and oxygen molecules and reshuffle them into new molecules. Without some assistance, the reshuffling happens very slowly. Platinum propels those reactions along.
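
Written out, the two half-reactions the catalyst has to shepherd in an acid-type fuel cell are hydrogen splitting at one electrode, H2 → 2 H+ + 2 e-, and oxygen reduction at the other, O2 + 4 H+ + 4 e- → 2 H2O, for an overall reaction of 2 H2 + O2 → 2 H2O. The oxygen side is the sluggish one, which is why so much catalyst research focuses on breaking apart O2.
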
Platinum works well in fuel cell reactions because it interacts just the right amount with both hydrogen and oxygen. That is, the platinum surface attracts the gas molecules, pulling them close together to speed along the reaction. But then it lets its handiwork float free. Chemists call that “turnover” — how efficiently a catalyst can draw in molecules, help them react, then send them back out into the world.

Platinum isn’t the only superstar catalyst. Other metals with similar chemical properties also get the job done — palladium, ruthenium and iridium, for example. But those elements are also expensive and hard to get. They are so good at what they do that it’s hard to find a substitute. But promising new options are in the works.

Carbon is key
Carbon is a particularly attractive alternative to precious metals like platinum because it’s cheap, abundant and can be assembled into many different structures.

Carbon atoms can arrange themselves into flat sheets of orderly hexagonal rings, like chicken wire. Rolling these chicken wire sheets — known as graphene — into hollow tubes makes carbon nanotubes, which are stronger than steel for their weight. But carbon-only structures don’t make great catalysts.

“Really pure graphene isn’t catalytically active,” says Huixin He, a chemist at Rutgers University in Newark, N.J. But replacing some of the carbon atoms in the framework with nitrogen, phosphorus or other atoms changes the way electric charge is distributed throughout the material. And that can make carbon behave more like a metal. For example, nitrogen atoms sprinkled like chocolate chips into the carbon structure draw negatively charged electrons away from the carbon atoms. The carbon atoms are left with a more positive charge, making them more attractive to the reaction that needs a nudge.

That movement of electrical charge is a prerequisite for a material to act as a catalyst, says Dai, who has pioneered the development of carbon-based, metal-free catalysts. His lab group demonstrated in 2009 in Science that clumps of nitrogen-containing carbon nanotubes aligned vertically — like a fistful of uncooked spaghetti — could stand in for platinum to help break apart oxygen inside fuel cells.
To perfect the technology, which he has patented, Dai has been swapping in different atoms in different combinations and experimenting with various carbon structures. Should the catalyst be a flat sheet of graphene or a forest of rolled up nanotubes, or some hybrid of both? Should it contain just nitrogen and carbon, or a smorgasbord of other elements, too? The answer depends on the specific application.

In 2015 in Science Advances, Dai demonstrated that nitrogen-studded nanotubes worked in acid-containing fuel cells, one of the most promising designs for electric vehicles.

Other researchers are playing their own riffs on the carbon concept. Producing graphene’s orderly structure requires just the right temperature and specific reaction conditions. Amorphous carbon materials — in which the atoms are randomly clumped together — can be easier to make, Rutgers’ He says.

In one experiment, He’s team started with liquid phytic acid, a substance made of carbon, oxygen and phosphorus. Microwaving the liquid for less than a minute transformed it into a sooty black powder that she describes as a sticky sort of sand.

“Phytic acid strongly absorbs microwave energy and changes it to heat so fast,” she says. The heat rearranges the atoms into a jumbled carbon structure studded with phosphorus atoms. Like the nitrogen atoms in Dai’s nanotubes, the phosphorus atoms changed the movement of electric charge through the material and made it catalytically active, He and colleagues reported last year in ACS Nano.

The sooty phytic acid–based catalyst could help move along a different form of clean energy: It sped up a reaction that turns a big, hard-to-use molecule found in cellulose — a tough, woody component of plants — into something that can react with other molecules. That product could then be used to make fuel or other chemicals. He is still tweaking the catalyst to make it work better.

He’s catalyst particles get mixed into the chemical reaction (and later need to be strained out). These more jumbled carbon structures with nitrogen or phosphorus sprinkled in can work in fuel cells, too — and, she says, they’re easier to make than graphene.

Enzyme-inspired energy
Rather than design new materials from the bottom up, some scientists are repurposing catalysts already used in nature: enzymes. Inside living things, enzymes are involved in everything from copying genetic material to breaking down food and nutrients.

Enzymes have a few advantages as catalysts, says M.G. Finn, a chemist at Georgia Tech. They tend to be very specific for a particular reaction, so they won’t waste much energy propelling undesired side reactions. And because they can evolve, enzymes can be tailored to meet different needs.

On their own, enzymes can be too fragile to use in industrial manufacturing, says Trevor Douglas, a chemist at Indiana University in Bloomington. For a solution, his team looked to viruses, which already package enzymes and other proteins inside protective cases.

“We can use these compartments to stabilize the enzymes, to protect them from things that might chew them up in the environment,” Douglas says. The researchers are engineering bacteria to churn out virus-inspired capsules that can be used as catalysts in a variety of applications.
His team mostly uses enzymes called hydrogenases, but other enzymes can work, too. The researchers put the genetic instructions for making the enzymes and for building a protective coating into Escherichia coli bacteria. The bacteria go into production mode, pumping out particles with the hydrogenase enzymes protected inside, Douglas and colleagues reported last year in Nature Chemistry. The protective coating keeps chunky enzymes contained, but lets the molecules they assist get in and out.

“What we’ve done is co-opt the biological processes,” Douglas says. “All we have to do is grow the bacteria and turn on these genes.” Bacteria, he points out, tend to grow quite easily. It’s a sustainable system, and one that’s easily tailored to different reactions by swapping out one enzyme for another.

The enzyme-containing particles can speed along generation of the hydrogen fuel, he has found. But there are still technical challenges: These catalysts last only a couple of days, and figuring out how to replace them inside a consumer device is hard.

Other scientists are using existing enzymes as templates for catalysts of their own design. The same family of hydrogenase enzymes that Douglas is packaging into capsules can be a launching point for lab-built catalysts that are even more efficient than their natural counterparts.

One of these hydrogenases has an iron core plus an amine — a nitrogen-containing string of atoms — hanging off. Just as the nitrogen worked into Dai’s carbon nanotubes affected the way electrons were distributed throughout the material, the amine changes the way the rest of the molecule acts as a catalyst.

Morris Bullock, a researcher at Pacific Northwest National Laboratory in Richland, Wash., is trying to figure out exactly how that interaction plays out. He and colleagues are building catalysts with cheap and abundant metals like iron and nickel at their core, paired with different types of amines. By systematically varying the metal core and the structure and position of the amine, they’re testing which combinations work best.

These amine-containing catalysts aren’t ready for prime time yet — Bullock’s team is focused on understanding how the catalysts work rather than on perfecting them for industry. But the findings provide a springboard for other scientists to push these catalysts toward commercialization.

Sticking with the metals
These new types of catalysts are promising — many of them can speed up reactions almost as well as a traditional platinum catalyst. But even researchers working on platinum alternatives agree that making sustainable and low-cost catalysts isn’t always as simple as removing the expensive and rare metals.

“The calculation of sustainability is not completely straightforward,” Finn says. Though he works with enzymes in his lab, he says, “a platinum-based catalyst that lasts for years is probably going to be more sustainable than an enzyme that degrades.” It might end up being cheaper in the long run, too. That’s why researchers working on these alternative catalysts are pushing to make their products more stable and longer-lasting.
“If you think about a catalyst, it’s really the atoms on the surface that participate in the reaction. Those in the bulk may just provide mechanical support or are just wasted,” says Younan Xia, a chemist at Georgia Tech. Xia is working on minimizing that waste.
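
A back-of-the-envelope calculation shows how much metal a solid particle wastes. Platinum atoms are about 0.28 nanometers across, so a solid cube 10 nanometers on a side holds roughly 36 atoms along each edge, or about 47,000 atoms in all; only around 7,000 of them, about 16 percent, sit on the surface where the chemistry happens. The rest is, catalytically speaking, dead weight.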

One promising approach is to shape platinum into what Xia dubs “nanocages” — instead of a solid cube of metal, just the edges remain, like a frame.

Such efficiency gains are also why many scientists haven’t given up on metal. “I don’t think you can say, ‘Let’s do without metals,’ ” says James Clark, a chemist at the University of York in England. “Certain metals have a certain functionality that’s going to be very hard to replace.” But, he adds, there are ways to use metals more efficiently, such as using nanoparticle-sized pieces that have a higher surface area than a flat sheet, or strategically combining small amounts of a rare metal with cheaper, more abundant nickel or iron. Changing the structure of the material on a nanoscale level also can make a difference.

In one experiment, Xia started with cubes of a different rare metal, palladium. He coated the palladium cubes with a thin layer of platinum just a few atoms thick — a pretty straightforward process. Then, a chemical etched away the palladium inside, leaving a hollow platinum skeleton. Because the palladium is removed from the final product, it can be used again and again. And the nanocage structure leaves less unused metal buried inside than a large flat sheet or a solid cube, Xia reported in 2015 in Science.

Since then, Xia’s team has been developing more complex shapes for the nanocages. An icosahedron, a ball with 20 triangular faces, worked especially well. The slight disorder to the structure — the atoms don’t crystallize quite perfectly — helped make it four times as active as a commercial platinum catalyst. He has made similar cages out of other rare metals like rhodium that could work as catalysts for other reactions.

It’ll take more work before any of these new catalysts fully dethrone platinum and other precious metals. But once they do, that’ll leave more precious metals to use in places where they can truly shine.