Nail-biting and thumb-sucking may not be all bad

There are plenty of reasons to tell kids not to bite their nails or suck their thumbs. Raw fingernail areas pick up infection, and thumbs can eventually move teeth into the wrong place. Not to mention these habits slop spit everywhere. But these bad habits might actually be good for something: Kids who sucked their thumbs or chewed their nails had lower rates of allergic reactions in lab tests, a new study finds.

The results come from a group of more than 1,000 children in New Zealand. When the kids were ages 5, 7, 9 and 11, their parents were asked if the kids sucked their thumbs or bit their nails. At age 13, the kids came into a clinic for an allergen skin prick test. That’s a procedure in which small drops of common allergens such as pet dander, wool, dust mites and fungus are put into a scratch on the skin to see if they elicit a reaction.

Kids whose parents said “certainly” to the question of thumb-sucking or nail-biting were less likely to react to allergens in the skin prick test, respiratory doctor Robert Hancox of the University of Otago in New Zealand and colleagues report July 11 in Pediatrics. And this benefit seemed to last. The childhood thumb-suckers and nail-biters still had fewer allergic reactions at age 32.

The results fit with other examples of the benefits of germs. Babies whose parents cleaned dirty pacifiers by popping them into their own mouths were more protected against allergies. And urban babies exposed to roaches, mice and cats had fewer allergies, too. These scenarios all get more germs in and on kids’ bodies. And that may be a good thing. An idea called the hygiene hypothesis holds that exposure to germs early in life can train the immune system to behave itself, preventing overreactions that may lead to allergies and asthma.

It might be the case that germy mouths bring benefits, but only when kids are young. Hancox and his colleagues don’t know when the kids in their study first started sucking thumbs or biting nails, but having spent time around little babies, I’m guessing it was pretty early.

So does this result mean that parents shouldn’t discourage — or even encourage — these habits? Hancox demurs. “We don’t have enough evidence to suggest that parents change what they do,” he says. Still, the results may offer some psychological soothing, he says. “Perhaps if children have habits that are difficult to break, there is some consolation for parents that there might be a reduced risk of developing allergy.”

Tabby’s star drama continues

A star that made headlines for its bizarre behavior has got one more mystery for astronomers to ponder.

Tabby’s star, also known as KIC 8462852, has been inexplicably flickering and fading. The Kepler Space Telescope caught two dramatic drops in light — by up to 22 percent — spaced nearly two years apart. Photographs from other telescopes dating back to 1890 show that the star also faded by roughly 20 percent over much of the last century. Possible explanations for the behavior range from mundane comet swarms to fantastical alien engineering projects (SN Online: 2/2/16).
A new analysis of data from Kepler, NASA’s premier planet hunter, shows that Tabby’s star steadily darkened throughout the telescope’s primary four-year mission. That’s in addition to the abrupt flickers already seen during the same time period. Over the first 1,100 days, the star dimmed by nearly 1 percent. Then the light dropped another 2.5 percent over the following six months before leveling off during the mission’s final 200 days.
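
For a rough sense of how uneven that fading was, here is a back-of-envelope calculation using only the figures quoted above; the day counts and percentages are approximate.

```python
# Approximate dimming rates implied by the Kepler numbers quoted above
# (a rough sketch; the intervals and percentages are rounded).
intervals = {
    "first 1,100 days": (1.0, 1100),   # percent dimmed, days elapsed
    "next ~6 months":   (2.5, 180),
}
for label, (percent, days) in intervals.items():
    rate = percent / days * 365.25     # percent per year
    print(f"{label}: {rate:.2f} percent per year")
# -> roughly 0.3 percent per year at first, then about 5 percent per year
```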

Astronomers Benjamin Montet of Caltech and Josh Simon of the Observatories of the Carnegie Institution of Washington in Pasadena, Calif., report the findings online August 4 at arXiv.org.

The new data support a previous claim that the star faded between 1890 and 1989, a claim that some researchers questioned. “It’s just getting stranger,” says Jason Wright, an astronomer at Penn State University. “This is a third way in which the star is weird. Not only is it getting dimmer, it’s doing so at different rates.”
The slow fading hadn’t been noticed before because data from Kepler are processed to remove long-term trends that might confuse planet-finding algorithms. To find the dimming, Montet and Simon analyzed images from the telescope that are typically used only to calibrate data.
“Their analysis is very thorough,” says Tabetha Boyajian, an astronomer at Yale University who in 2015 reported the two precipitous drops in light (and for whom the star is nicknamed). “I see no flaws in that at all.”

While the analysis is an important clue, it doesn’t yet explain the star’s erratic behavior. “It doesn’t push us in any direction because it’s nothing that we’ve ever encountered before,” says Boyajian. “I’ve said ‘I don’t know’ so many times at this point.”

An object (or objects) moving in front of the star and blocking some of the light is still the favored explanation — though no one has figured out what that object is. The drop in light roughly 1,100 days into Kepler’s mission is reminiscent of a planet crossing in front of a star, Montet says. But given how slowly the light dropped, such a planet (or dim star) would have to live on an orbit more than 60 light-years across. The odds of catching a body on such a wide, slow orbit as it passed in front of the star are so low, says Montet, that you would need 10,000 Kepler missions to see just one. “We figure that’s pretty unlikely.”

An interstellar cloud wandering between Earth and KIC 8462852 is also unlikely, Wright says. “If the interstellar medium had these sorts of clumps and knots, it should be a ubiquitous phenomenon. We would have known about this for decades.” While some quasars and pulsars appear to flicker because of intervening material, the variations are minute and nothing like the 20 percent dips seen in Tabby’s star.

A clump of gas and dust orbiting the star — possibly produced by a collision between comets — is a more likely candidate, although that doesn’t explain the century-long dimming. “Nothing explains all the effects we see,” says Montet.

Given the star’s unpredictable nature, astronomers need constant vigilance to solve this mystery. The American Association of Variable Star Observers is working with amateur astronomers to gather continuous data from backyard telescopes around the globe. Boyajian and colleagues are preparing to monitor KIC 8462852 with the Las Cumbres Observatory Global Telescope Network, a worldwide web of telescopes that can keep an incessant eye on the star. “At this point, that’s the only thing that’s going to help us figure out what it is,” she says.

Trio wins physics Nobel for math underlying exotic states of matter

The 2016 Nobel Prize in physics is awarded for discoveries of exotic states of matter known as topological phases that can help explain phenomena such as superconductivity.

The prize is shared among three researchers: David J. Thouless, of the University of Washington in Seattle, F. Duncan M. Haldane of Princeton University and J. Michael Kosterlitz of Brown University. The Royal Swedish Academy of Sciences announced the prize October 4.

At the heart of their work is topology, a branch of mathematics that describes steplike changes in a property. An object can have zero, one or two holes, for example, but not half a hole. This year’s Nobel laureates found that topological effects could explain behaviors seen in superconductors and superfluids. “Like most discoveries, you stumble onto them and you just come to realize there is something really interesting there,” Haldane said in a phone call during the announcement.

In some ways, hawks hunt like humans

A hunter’s gaze betrays its strategy. And tracking where an animal looks when it’s hunting for prey has revealed foraging patterns in humans, other primates — and now, birds.

Suzanne Amador Kane of Haverford College in Pennsylvania and her colleagues watched archival footage of three raptor species hunting: northern goshawks (Accipiter gentilis), Cooper’s hawks (A. cooperii) and red-tailed hawks (Buteo jamaicensis). They also mounted a video camera to the head of a goshawk to record the bird’s perspective (a technique that’s proved useful in previous studies of attack behavior). The team noted how long birds spent fixating on specific points before giving up, moving their head and, thus, shifting their gaze.

When searching for prey, raptors don’t turn their heads in a predictable pattern. Instead, they appear to scan and fixate randomly based on what they see in their environment, Kane and her colleagues report November 16 in The Auk. In primates, a buildup of sensory information drives foraging animals to move their eyes in similar patterns.

Though the new study only examines three species and focuses on head tracking rather than eye tracking, Kane and her colleagues suggest that the same basic neural processes may drive search decisions of human and hawk hunters.

Ice gave Pluto a heavy heart

Pluto’s heart might carry a heavy burden.

Weight from massive deposits of frozen nitrogen, methane and carbon monoxide, built up billions of years ago, could have carved out the left half of the dwarf planet’s heart-shaped landscape, researchers report online November 30 in Nature.

The roughly 1,000-kilometer-wide frozen basin dubbed Sputnik Planitia was on display when the New Horizons spacecraft tore past in July 2015 (SN: 12/26/15, p. 16). Previous studies have proposed that the region could be a scar left by an impact with interplanetary debris (SN: 12/12/15, p. 10).

Sputnik Planitia sits in a cold zone, a prime location for ice to build up, planetary scientist Douglas Hamilton of the University of Maryland in College Park and colleagues calculate. Excess ice deposited early in the planet’s history would have led to a surplus of mass. Gravitational interactions between Pluto and its largest moon, Charon, slowed the planet’s rotation until that mass faced in the opposite direction from Charon. Once Charon became synced to Pluto’s rotation — it’s always over the same spot on Pluto — gravity would have held Sputnik Planitia in Pluto’s cold zone, attracting even more ice. As the ice cap grew, the weight could have depressed Pluto’s surface, creating the basin that exists today.

How scientists are hunting for a safer opioid painkiller

An opioid epidemic is upon us. Prescription painkillers such as fentanyl and morphine can ease terrible pain, but they can also cause addiction and death. The Centers for Disease Control and Prevention estimates that nearly 2 million Americans are abusing or addicted to prescription opiates. Politicians are attempting to stem the tide at state and national levels, with bills to change and monitor how physicians prescribe painkillers and to increase access to addiction treatment programs.

Those efforts may make access to painkillers more difficult for some. But pain comes to everyone eventually, and opioids are one of the best ways to make it go away.

Morphine is the king of pain treatment. “For hundreds of years people have used morphine,” says Lakshmi Devi, a pharmacologist at the Icahn School of Medicine at Mount Sinai in New York City. “It works, it’s a good drug, that’s why we want it. The problem is the bad stuff.”

The “bad stuff” includes tolerance — patients have to take higher and higher doses to relieve their pain. Drugs such as morphine depress breathing, an effect that can prove deadly. They also cause constipation, drowsiness and vomiting. But “for certain types of pain, there are no medications that are as effective,” says Bryan Roth, a pharmacologist and physician at the University of North Carolina at Chapel Hill. The trick is constructing a drug with all the benefits of an opioid painkiller, and few to none of the side effects. Here are three ways that scientists are searching for the next big pain buster, and three of the chemicals they’ve turned up.

Raid the chemical library
To find promising new drug candidates, scientists often look to chemical libraries of known molecules. “A pharmaceutical company will have libraries of a few million compounds,” Roth explains. Researchers comb through these libraries trying to find compounds that bind to specific molecules in the body and brain.

When drugs such as morphine enter the brain, they bind to receptors on the outside of cells and cause cascades of chemical activity inside. Opiate drugs bind to three types of opiate receptors: mu, kappa and delta. The mu receptor type is the one associated with the pain-killing — and pleasure-causing — activities of opiates. Activation of this receptor type spawns two cascades of chemical activity. One, the Gi pathway, is associated with pain relief. The other — known as the beta-arrestin pathway — is associated with slowed breathing rate and constipation. So a winning candidate molecule would be one that triggered only the Gi pathway, without triggering beta-arrestin.
Roth and colleagues set out to find a molecule that fit those specifications. But instead of the intense, months-long process of experimentally screening molecules in a chemical library, Roth’s group chose a computational approach, screening more than 3 million compounds in a matter of days. The screen narrowed the candidates down to 23 molecules to test the old-fashioned way — both chemically and in mice. Each of these potential painkillers went through even more tests to find those with the strongest bond to the receptor and the highest potency.
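
The logic of that screen can be pictured with a small, hypothetical sketch. The compound names, scores and thresholds below are invented for illustration; this is not the team's actual pipeline, just the shape of a bias-aware filter.

```python
# Hypothetical filter for "biased" mu-opioid candidates: keep compounds predicted
# to trigger the Gi (pain-relief) pathway while avoiding the beta-arrestin
# (slowed-breathing, constipation) pathway. All names and scores are made up.
from dataclasses import dataclass

@dataclass
class Compound:
    name: str
    score_gi: float        # predicted Gi-pathway activation (higher is better)
    score_arrestin: float  # predicted beta-arrestin recruitment (lower is better)

def screen(library, gi_min=0.7, arrestin_max=0.2):
    """Return candidates predicted to favor Gi signaling over beta-arrestin."""
    return [c for c in library
            if c.score_gi >= gi_min and c.score_arrestin <= arrestin_max]

library = [
    Compound("cmpd-001", 0.91, 0.05),
    Compound("cmpd-002", 0.45, 0.60),
    Compound("cmpd-003", 0.82, 0.15),
]
print([c.name for c in screen(library)])  # -> ['cmpd-001', 'cmpd-003']
```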

In the end, the team focused on a chemical called PZM21. It activates only the pathway associated with pain relief, and is an effective painkiller in mice. It does not depress breathing rate, and it might even avoid some of the addictive potential of other opiates, though Roth notes that further studies need to be done. He and his colleagues published their findings September 8 in Nature.

Letting the computer handle the initial screen is “a smart way of going about it,” notes Nathaniel Jeske, a neuropharmacologist at the University of Texas Health Science Center in San Antonio. But mice are only the first step. “I’m interested to see if the efficacy applies to different animals.”

Making an opiate 2.0
Screening millions of compounds is one way to find a new drug. But why buy new when you can give a chemical makeover to something you already have? This is a “standard medicinal chemistry approach,” Roth says: “Pick a known drug and make analogs [slightly tweaked structures], and that can work.”

That was the approach that Mei-Chuan Ko and his group at Wake Forest University School of Medicine in Winston-Salem, N.C., decided to take with the common opioid painkiller buprenorphine. “Compared to morphine or fentanyl, buprenorphine is safer,” Ko explains, “but it has abuse liability. Physicians still have concerns about the abuse and won’t prescribe it.” Buprenorphine is what’s called a partial agonist at the mu receptor — it can’t fully activate the receptor, even at the highest doses. So it’s an effective painkiller that is harder to overdose on — so much so that it’s used to treat addiction to other opiates. But it can still cause a high, so doctors still worry about people abusing the drug.

So to make a version of buprenorphine with lower addictive potential, Ko and his colleagues focused on a chemical known as BU08028. It’s structurally similar to buprenorphine, but it also hits another type of opioid receptor called the nociceptin-orphanin FQ peptide (or NOP) receptor.

The NOP receptor is not a traditional target. This is partially because its effect in rodents — usually the first recipients of a new drug — is “complicated,” says Ko. “It does kill pain at high doses but not at low doses.” In primates, however, it’s another matter. In tests in four monkeys, BU08028 killed pain effectively at low doses and didn’t suppress breathing. The monkeys also showed little interest in taking the drug voluntarily, which suggests it might not be as addictive as classic opioid drugs. Ko and his colleagues published their results in the Sept. 13 Proceedings of the National Academy of Sciences.*

Off the beaten path
Combing through chemical libraries or tweaking drugs that are already on the market takes advantage of systems that are already well-established. But sometimes, a tough question requires an entirely new approach. “You can either target the receptors you know and love … or you can do the complete opposite and see if there’s a new receptor system,” Devi says.

Jeske and his group chose the latter option. Of the three opiate receptor types — mu, kappa and delta — most drugs (and drug studies) focus on the mu receptor. Jeske’s group chose to investigate delta instead. They were especially interested in targeting delta receptors in the body — far away from the brain and its side effects.

The delta receptor has an unfortunate quirk. When activated by a drug, it can help kill pain. But most of the time, it can’t be activated at all. The receptor is protected — bound up tight by another molecule — and only released when an area is injured. So Jeske’s goal was to find out what was binding up the delta receptor, and figure out how to get rid of it.

Working in rat neurons, Jeske and his group found that when a molecule called GRK2 was around, the delta receptor was inactive. “Knock down GRK2 and the receptor works just fine,” Jeske says. By genetically knocking out GRK2 in rats, Jeske and his group left the delta receptor free to respond to a drug — and to prevent pain. The group published their results September 6 in Cell Reports.

It’s “a completely new target and that’s great,” says Devi. “But that new target with a drug is a tall order.” A single drug is unlikely to be able to both push away GRK2 and then activate the delta receptor to stop pain.

Jeske agrees that a single molecule probably couldn’t take on both roles. Instead, one drug to get rid of GRK2 would be given first, followed by another to activate the delta receptors.

Each drug development method has unearthed drug candidates with early promise. “We’ve solved these problems in mice and rats many times,” Devi notes. But whether sifting through libraries, tweaking older drugs or coming up with entirely new ones, the journey to the clinic has only just begun.

*Paul Czoty and Michael Nader, two authors on the PNAS paper, were on my Ph.D. dissertation committee. I have had neither direct nor indirect involvement with this research.

Evidence falls into place for once and future supercontinents

Look at any map of the Atlantic Ocean, and you might feel the urge to slide South America and Africa together. The two continents just beg to nestle next to each other, with Brazil’s bulge locking into West Africa’s dimple. That visible clue, along with several others, prompted Alfred Wegener to propose over a century ago that the continents had once been joined in a single enormous landmass. He called it Pangaea, or “all lands.”

Today, geologists know that Pangaea was just the most recent in a series of mighty supercontinents. Over hundreds of millions of years, enormous plates of Earth’s crust have drifted together and then apart. Pangaea ruled from roughly 400 million to about 200 million years ago. But wind the clock further back, and other supercontinents emerge. Between 1.3 billion and 750 million years ago, all the continents amassed in a great land known as Rodinia. Go back even further, about 1.4 billion years or more, and the crustal shards had arranged themselves into a supercontinent called Nuna.

Using powerful computer programs and geologic clues from rocks around the world, researchers are painting a picture of these long-lost worlds. New studies of magnetic minerals in rock from Brazil, for instance, are helping pin the ancient Amazon to a spot it once occupied in Nuna. Other recent research reveals the geologic stresses that finally pulled Rodinia apart, some 750 million years ago. Scientists have even predicted the formation of the next supercontinent — an amalgam of North America and Asia, evocatively named Amasia — some 250 million years from now.
Reconstructing supercontinents is like trying to assemble a 1,000-piece jigsaw puzzle after you’ve lost a bunch of the pieces and your dog has chewed up others. Still, by figuring out which puzzle pieces went where, geologists have been able to illuminate some of earth science’s most fundamental questions.
For one thing, continental drift, that gradual movement of landmasses across Earth’s surface, profoundly affected life by allowing species to move into different parts of the world depending on what particular landmasses happened to be joined. (The global distribution of dinosaur fossils is dictated by how continents were assembled when those great animals roamed.)

Supercontinents can also help geologists hunting for mineral deposits — imagine discovering gold ore of a certain age in the Amazon and using it to find another gold deposit in a distant landmass that was once joined to the Amazon. More broadly, shifting landmasses have reshaped the face of the planet — as they form, supercontinents push up mountains like the Appalachians, and as they break apart, they create oceans like the Atlantic.

“The assembly and breakup of these continents have profoundly influenced the evolution of the whole Earth,” says Johanna Salminen, a geophysicist at the University of Helsinki in Finland.

Push or pull
For centuries, geologists, biogeographers and explorers have tried to explain various features of the natural world by invoking lost continents. Some of the wilder concepts included Lemuria, a sunken realm between Madagascar and India that offered an out-there rationale for the presence of lemurs and lemurlike fossils in both places, and Mu, an underwater land supposedly described in ancient Mayan manuscripts. While those fantastic notions have fallen out of favor, scientists are exploring the equally mind-bending story of the supercontinents that actually existed.
Earth’s constantly shifting jigsaw puzzle of continents and oceans traces back to the fundamental forces of plate tectonics. The story begins in the centers of oceans, where hot molten rock wells up from deep inside the Earth along underwater mountain chains. The lava cools and solidifies into newborn ocean crust, which moves continually away from either side of the mountain ridge as if carried outward on a conveyor belt. Eventually, the moving ocean crust bumps into a continent, where it either stalls or begins diving beneath that continental crust in a process called subduction.

Those competing forces — pushing newborn crust away from the mid-ocean mountains and pulling older crust down through subduction — are constantly rearranging Earth’s crustal plates. That’s why North America and Europe are getting farther away from each other by a few centimeters each year as the Atlantic widens, and why the Pacific Ocean is shrinking, its seafloor sucked down by subduction along the Ring of Fire — looping from New Zealand to Japan, Alaska and Chile.

By running the process backward in time, geologists can begin to see how oceans and continents have jockeyed for position over millions of years. Computers calculate how plate positions shifted over time, based on the movements of today’s plates as well as geologic data that hint at their past locations.

Those geologic clues — such as magnetic minerals in ancient rocks — are few and far between. But enough remain for researchers to start to cobble together the story of which crustal piece went where.

“To solve a jigsaw puzzle, you don’t necessarily need 100 percent of the pieces before you can look at it and say it’s the Mona Lisa,” says Brendan Murphy, a geophysicist at St. Francis Xavier University in Antigonish, Nova Scotia. “But you need some key pieces.” He adds: “With the eyes and nose, you have a chance.”

No place like Nuna
For ancient Nuna, scientists are starting to find the first of those key pieces. They may not reveal the Mona Lisa’s enigmatic smile, but they are at least starting to fill in a portrait of a long-vanished supercontinent.

Nuna came together starting around 2 billion years ago, with its heart a mash-up of Baltica (the landmass that today contains Scandinavia), Laurentia (which is now much of North America) and Siberia. Geologists argue over many things involving this first supercontinent, starting with its name. “Nuna” is from the Inuktitut language of the Arctic. It means lands bordering the northern oceans, so dubbed for the supercontinent’s Arctic-fringing components. But some researchers prefer to call it Columbia after the Columbia region of North America’s Pacific Northwest.

Whatever its moniker, Nuna/Columbia is an exercise in trying to get all the puzzle pieces to fit. Because Nuna existed so long ago, subduction has recycled many rocks of that age back into the deep Earth, erasing any record of what they were doing at the time. Geologists travel to rocks that remain in places like India, South America and North China, analyzing them for clues to where they were at the time of Nuna.

One of the most promising techniques targets magnetic minerals. Paleomagnetic studies use the minerals as tiny time capsule compasses, which recorded the direction of the magnetic field at the time the rocks formed. The minerals can reveal information about where those rocks used to be, including their latitude relative to where the Earth’s north magnetic pole was at the time.
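
The standard relation behind that latitude estimate, assuming the ancient field was a geocentric axial dipole, ties the inclination I recorded by the minerals to the paleolatitude λ of the site:

```latex
% Geocentric axial dipole assumption: the magnetic inclination I frozen into
% a rock fixes the latitude \lambda at which the rock formed.
\tan I = 2\tan\lambda
\quad\Longrightarrow\quad
\lambda = \arctan\!\left(\tfrac{1}{2}\tan I\right)
```

A steep inclination thus points to a rock that formed near a pole, and a shallow one to a rock that formed near the equator; longitude, however, is not recoverable this way, which is part of what makes the puzzle so hard.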

Salminen has been gathering paleomagnetic data from Nuna-age rocks in Brazil and western Africa. Not surprisingly, given their current lock-and-key configuration, these two chunks were once united as a single ancient continental block, known as the Congo/São Francisco craton. For millions of years, it shuffled around as a single geologic unit, occasionally merging with other blocks and then later splitting away.

Salminen has now figured out where the Congo/São Francisco puzzle piece fit in the jigsaw that made up Nuna. In 1.5-billion-year-old rocks in Brazil, she unearthed magnetic clues that placed the Congo/São Francisco craton at the southeastern tip of Baltica all those years ago. She and her colleagues reported the findings in November in Precambrian Research.

It is the first time scientists have gotten paleomagnetic information about where the craton may have been as far back as Nuna. “This is quite remarkable — it was really needed,” she says. “Now we can say Congo could have been there.” Like building out a jigsaw puzzle from its center, the work essentially expands Nuna’s core.

Rodinia’s radioactive decay
By around 1.3 billion years ago, Nuna was breaking apart, the pieces of the Mona Lisa face shattering and drifting away from each other. It took another 200 million years before they rejoined in the configuration known as Rodinia.

Recent research suggests that Rodinia may not have looked much different than Nuna, though. The Mona Lisa in its second incarnation may still have looked like the portrait of a woman — just maybe with a set of earrings dangling from her lobes.

Richard Ernst of Carleton University in Ottawa, Canada, recently explored the relative positions of Laurentia and Siberia between 1.9 billion and 720 million years ago, a period that spans both Nuna and Rodinia. Ernst’s group specializes in studying “large igneous provinces” — the huge outpourings of lava that build up over millions of years. Often the molten rock flows along sheetlike structures known as dikes, which funnel magma from deep in the Earth upward.

By using the radioactive decay of elements in the dike rock, such as uranium decaying to lead, scientists can precisely date when a dike formed. With enough dates on a particular dike, researchers can produce a sort of bar code that is unique to each dike. Later, when the dikes are broken apart and shifted over time, geologists can pinpoint the bar codes that match and thus line up parts of the crust that used to be together.
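
The dates behind those bar codes rest on basic decay-clock arithmetic. Here is a minimal sketch using the uranium-238 to lead-206 decay mentioned above; real U-Pb dating adds more bookkeeping, such as a second decay chain and concordia checks.

```python
import math

# Decay-clock age from a measured daughter-to-parent ratio (illustrative only).
# Uranium-238 decays to lead-206 with a half-life of about 4.468 billion years.
HALF_LIFE_U238_YEARS = 4.468e9

def age_in_years(daughter_to_parent, half_life=HALF_LIFE_U238_YEARS):
    decay_constant = math.log(2) / half_life
    return math.log(1 + daughter_to_parent) / decay_constant

# A hypothetical dike mineral with a Pb-206/U-238 ratio of 0.30:
print(f"{age_in_years(0.30) / 1e9:.2f} billion years")  # about 1.69 billion years
```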

Ernst’s team found that dikes from Laurentia and Siberia matched during four periods between 1.87 billion and 720 million years ago — suggesting they were connected for that entire span, the team reported in June in Nature Geoscience. Such a long-term relationship suggests that Siberia and Laurentia may have stuck together through the Nuna-Rodinia transition, Ernst says.

Other parts of the puzzle tend to end up in the same relative locations as well, says Joseph Meert, a paleomagnetist at the University of Florida in Gainesville. In each supercontinent, Laurentia, Siberia and Baltica knit themselves together in roughly the same arrangement: Siberia and Baltica nestle like two opposing knobs on one end of Laurentia’s elongated blob. Meert calls these three continental fragments “strange attractors,” since they appear conjoined time after time.

It’s the outer edges of the jigsaw puzzle that change. Fragments like north China and southern Africa end up in different locations around the supercontinent core. “I call those bits the lonely wanderers,” Meert says.

Getting to know Pangaea
While some puzzle-makers try to sort out the reconstructions of past supercontinents, other geologists are exploring deeper questions about why big landmasses come together in the first place. And one place to look is Pangaea.

“Most people would accept what Pangaea looks like,” Murphy says. “But as soon as you start asking why it formed, how it formed and what processes are involved — then all of a sudden you run into problems.”
Around 550 million years ago, subduction zones around the edges of an ancient ocean began dragging that oceanic crust under continental crust. But around 400 million years ago, that subduction suddenly stopped. In a major shift, a different, much younger seafloor began to subduct instead beneath the continents. That young ocean crust kept getting sucked up until it all disappeared, and the continents were left merged in the giant mass of Pangaea.

Imagine if, in today’s world, the Pacific stopped shrinking and all of a sudden the Atlantic started shrinking instead. “That’s quite a significant problem,” Murphy says. In unpublished work, he has been exploring the physics of how plates of oceanic and continental crust — which have different densities, buoyancies and other physical characteristics — could have interacted with one another in the run-up to Pangaea.

Supercontinent breakups are similarly complicated. Once all the land amasses in a single big chunk, it cannot stay together forever. In one scenario, its sheer bulk acts as an electric blanket, allowing heat from the deep Earth to pond up beneath it until things get too hot and the supercontinent splinters (SN: 1/21/17, p. 14). In another, physical stressors pull the supercontinent apart.

Peter Cawood, a geologist at the University of St. Andrews in Fife, Scotland, likes the second option. He has been studying mountain ranges that arose when the crustal plates that made up Rodinia collided, pushing up soaring peaks where they met. These include the Grenville mountain-building event of about 1 billion years ago, traces of which linger today in the eroded peaks of the Appalachians. Cawood and his colleagues analyzed the times at which such mountains appeared and put together a detailed timeline of what happened as Rodinia began to break apart.

They note that crustal plates began subducting around the edges of Rodinia right around the time of its breakup. That sucking down of crust caused the supercontinent to be pulled from all directions and eventually break apart, Cawood and his colleagues wrote in Earth and Planetary Science Letters in September. “The timing of major breakup corresponds with this timing of opposing subduction zones,” he says.

The future is Amasia
That stressful situation is similar to what the Pacific Ocean finds itself in today. Because it is flanked by subduction zones around the Ring of Fire, the Pacific Plate is shrinking over time. Some geologists predict that it will vanish entirely in the future, leaving North America and Asia to merge into the next supercontinent, Amasia. Others have devised different possible paths to Amasia, such as closing the Arctic Ocean rather than the Pacific.

“Speculation about the future supercontinent Amasia is exactly that, speculation,” says geologist Ross Mitchell of Curtin University in Perth, Australia, who in 2012 helped describe the mechanics of how Amasia might arise. “But there’s hard science behind the conjecture.”

For instance, Masaki Yoshida of the Japan Agency for Marine-Earth Science and Technology in Yokosuka recently used sophisticated computer models to analyze how today’s continents would continue to move atop the flowing heat of the deep Earth. He combined modern-day plate motions with information on how that internal planetary heat churns in three dimensions, then ran the whole scenario into the future. In a paper in the September Geology, Yoshida describes how North America, Eurasia, Australia and Africa will end up merged in the Northern Hemisphere.

No matter where the continents are headed, they are destined to reassemble. Plate tectonics says it will happen — and a new supercontinent will shape the face of the Earth. It might not look like the Mona Lisa, but it might just be another masterpiece.

There’s still a lot we don’t know about the proton

Nuclear physicist Evangeline Downie hadn’t planned to study one of the thorniest puzzles of the proton.

But when opportunity knocked, Downie couldn’t say no. “It’s the proton,” she exclaims. The mysteries that still swirl around this jewel of the subatomic realm were too tantalizing to resist. The plentiful particles make up much of the visible matter in the universe. “We’re made of them, and we don’t understand them fully,” she says.

Many physicists delving deep into the heart of matter in recent decades have been lured to the more exotic and unfamiliar subatomic particles: mesons, neutrinos and the famous Higgs boson — not the humble proton.
But rather than chasing the rarest of the rare, scientists like Downie are painstakingly scrutinizing the proton itself with ever-higher precision. In the process, some of these proton enthusiasts have stumbled upon problems in areas of physics that scientists thought they had figured out.

Surprisingly, some of the particle’s most basic characteristics are not fully pinned down. The latest measurements of its radius disagree with one another by a wide margin, for example, a fact that captivated Downie. Likewise, scientists can’t yet explain the source of the proton’s spin, a basic quantum property. And some physicists have a deep but unconfirmed suspicion that the seemingly eternal particles don’t live forever — protons may decay. Such a decay is predicted by theories that unite disparate forces of nature under one grand umbrella. But decay has not yet been witnessed.

Like the base of a pyramid, the physics of the proton serves as a foundation for much of what scientists know about the behavior of matter. To understand the intricacies of the universe, says Downie, of George Washington University in Washington, D.C., “we have to start with, in a sense, the simplest system.”

Sizing things up
For most of the universe’s history, protons have been VIPs — very important particles. They formed just millionths of a second after the Big Bang, once the cosmos cooled enough for the positively charged particles to take shape. But protons didn’t step into the spotlight until about 100 years ago, when Ernest Rutherford bombarded nitrogen with radioactively produced particles, breaking up the nuclei and releasing protons.

A single proton in concert with a single electron makes up hydrogen — the most plentiful element in the universe. One or more protons are present in the nucleus of every atom. Each element has a unique number of protons, signified by an element’s atomic number. In the core of the sun, fusing protons generate heat and light needed for life to flourish. Lone protons are also found as cosmic rays, whizzing through space at breakneck speeds, colliding with Earth’s atmosphere and producing showers of other particles, such as electrons, muons and neutrinos.

In short, protons are everywhere. Even minor tweaks to scientists’ understanding of the minuscule particle, therefore, could have far-reaching implications. So any nagging questions, however small in scale, can get proton researchers riled up.

A disagreement of a few percent in measurements of the proton’s radius has attracted intense interest, for example. Until several years ago, scientists agreed: The proton’s radius was about 0.88 femtometers, or 0.88 millionths of a billionth of a meter — about a trillionth the width of a poppy seed.
But that neat picture was upended in the span of a few hours, in May 2010, at the Precision Physics of Simple Atomic Systems conference in Les Houches, France. Two teams of scientists presented new, more precise measurements, unveiling what they thought would be the definitive size of the proton. Instead the figures disagreed by about 4 percent (SN: 7/31/10, p. 7). “We both expected that we would get the same number, so we were both surprised,” says physicist Jan Bernauer of MIT.

By itself, a slight revision of the proton’s radius wouldn’t upend physics. But despite extensive efforts, the groups can’t explain why they get different numbers. As researchers have eliminated simple explanations for the impasse, they’ve begun wondering if the mismatch could be the first hint of a breakdown that could shatter accepted tenets of physics.

The two groups each used different methods to size up the proton. In an experiment at the MAMI particle accelerator in Mainz, Germany, Bernauer and colleagues estimated the proton’s girth by measuring how much electrons’ trajectories were deflected when fired at protons. That test found the expected radius of about 0.88 femtometers (SN Online: 12/17/10).

But a team led by physicist Randolf Pohl of the Max Planck Institute of Quantum Optics in Garching, Germany, used a new, more precise method. The researchers created muonic hydrogen, a proton that is accompanied not by an electron but by a heftier cousin — a muon.

In an experiment at the Paul Scherrer Institute in Villigen, Switzerland, Pohl and collaborators used lasers to bump the muons to higher energy levels. The amount of energy required depends on the size of the proton. Because the more massive muon hugs closer to the proton than electrons do, the energy levels of muonic hydrogen are more sensitive to the proton’s size than those of ordinary hydrogen, allowing for measurements 10 times as precise as electron-scattering measurements.
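
The source of that extra sensitivity can be sketched with the leading-order finite-size shift of an S-state energy level, which grows as the cube of the orbiting particle's reduced mass m_r:

```latex
% Leading finite-size shift of an S state (natural units): the shift scales with
% the wave function density at the proton, and hence with the reduced mass cubed.
\Delta E_{\mathrm{size}} \;\propto\; |\psi(0)|^{2}\, r_p^{2}
  \;\propto\; m_r^{3}\,(Z\alpha)^{4}\, r_p^{2},
\qquad
\frac{m_r(\mu p)}{m_r(e p)} \approx 186
```

Since the muon-proton reduced mass is roughly 186 times the electron-proton one, the proton-size contribution to muonic hydrogen's energy levels is larger by a factor of about 186 cubed, or several million, so even modest spectroscopic precision pins down the radius sharply.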

Pohl’s results suggested a smaller proton radius, about 0.841 femtometers, a stark difference from the other measurement. Follow-up measurements of muonic deuterium — which has a proton and a neutron in its nucleus — also revealed a smaller than expected size, he and collaborators reported last year in Science. Physicists have racked their brains to explain why the two measurements don’t agree. Experimental error could be to blame, but no one can pinpoint its source. And the theoretical physics used to calculate the radius from the experimental data seems solid.

Now, more outlandish possibilities are being tossed around. An unexpected new particle that interacts with muons but not electrons could explain the difference (SN: 2/23/13, p. 8). That would be revolutionary: Physicists believe that electrons and muons should behave identically in particle interactions. “It’s a very sacred principle in theoretical physics,” says John Negele, a theoretical particle physicist at MIT. “If there’s unambiguous evidence that it’s been broken, that’s really a fundamental discovery.”

But established physics theories die hard. Shaking the foundations of physics, Pohl says, is “what I dream of, but I think that’s not going to happen.” Instead, he suspects, the discrepancy is more likely to be explained through minor tweaks to the experiments or the theory.

The alluring mystery of the proton radius reeled Downie in. During conversations in the lab with some fellow physicists, she learned of an upcoming experiment that could help settle the issue. The experiment’s founders were looking for collaborators, and Downie leaped on the bandwagon. The Muon Proton Scattering Experiment, or MUSE, to take place at the Paul Scherrer Institute beginning in 2018, will scatter both electrons and muons off of protons and compare the results. It offers a way to test whether the two particles behave differently, says Downie, who is now a spokesperson for MUSE.

A host of other experiments are in progress or planning stages. Scientists with the Proton Radius Experiment, or PRad, located at Jefferson Lab in Newport News, Va., hope to improve on Bernauer and colleagues’ electron-scattering measurements. PRad researchers are analyzing their data and should have a new number for the proton radius soon.

But for now, the proton’s identity crisis, at least regarding its size, remains. That poses problems for ultrasensitive tests of one of physicists’ most essential theories. Quantum electrodynamics, or QED, the theory that unites quantum mechanics and Albert Einstein’s special theory of relativity, describes the physics of electromagnetism on small scales. Using this theory, scientists can calculate the properties of quantum systems, such as hydrogen atoms, in exquisite detail — and so far the predictions match reality. But such calculations require some input — including the proton’s radius. Therefore, to subject the theory to even more stringent tests, gauging the proton’s size is a must-do task.

Spin doctors
Even if scientists eventually sort out the proton’s size snags, there’s much left to understand. Dig deep into the proton’s guts, and the seemingly simple particle becomes a kaleidoscope of complexity. Rattling around inside each proton is a trio of particles called quarks: one negatively charged “down” quark and two positively charged “up” quarks. Neutrons, on the flip side, comprise two down quarks and one up quark.

Yet even the quark-trio picture is too simplistic. In addition to the three quarks that are always present, a chaotic swarm of transient particles churns within the proton. Evanescent throngs of additional quarks and their antimatter partners, antiquarks, continually swirl into existence, then annihilate each other. Gluons, the particle “glue” that holds the proton together, careen between particles. Gluons are the messengers of the strong nuclear force, an interaction that causes quarks to fervently attract one another.
As a result of this chaos, the properties of protons — and neutrons as well — are difficult to get a handle on. One property, spin, has taken decades of careful investigation, and it’s still not sorted out. Quantum particles almost seem to be whirling at blistering speed, like the Earth rotating about its axis. This spin produces angular momentum — a quality of a rotating object that, for example, keeps a top revolving until friction slows it. The spin also makes protons behave like tiny magnets, because a rotating electric charge produces a magnetic field. This property is the key to the medical imaging procedure called magnetic resonance imaging, or MRI.

But, like nearly everything quantum, there’s some weirdness mixed in: There’s no actual spinning going on. Because fundamental particles like quarks don’t have a finite physical size — as far as scientists know — they can’t twirl. Despite the lack of spinning, the particles still behave like they have a spin, which can take on only certain values: integer multiples of 1/2.

Quarks have a spin of 1/2, and gluons a spin of 1. These spins combine to help yield the proton’s total spin. In addition, just as the Earth is both spinning about its own axis and orbiting the sun, quarks and gluons may also circle about the proton’s center, producing additional angular momentum that can contribute to the proton’s total spin.

Somehow, the spin and orbital motion of quarks and gluons within the proton combine to produce its spin of 1/2. Originally, physicists expected that the explanation would be simple. The only particles that mattered, they thought, were the proton’s three main quarks, each with a spin of 1/2. If two of those spins were oriented in opposite directions, they could cancel one another out to produce a total spin of 1/2. But experiments beginning in the 1980s showed that “this picture was very far from true,” says theoretical high-energy physicist Juan Rojo of Vrije University Amsterdam. Surprisingly, only a small fraction of the spin seemed to be coming from the quarks, befuddling scientists with what became known as the “spin crisis” (SN: 9/6/97, p. 158). Neutron spin was likewise enigmatic.

Scientists’ next hunch was that gluons contribute to the proton’s spin. “Verifying this hypothesis was very difficult,” Rojo says. It required experimental studies at the Relativistic Heavy Ion Collider, RHIC, a particle accelerator at Brookhaven National Laboratory in Upton, N.Y.

In these experiments, scientists collided protons that were polarized: The two protons’ spins were either aligned or pointed in opposite directions. Researchers counted the products of those collisions and compared the results for aligned and opposing spins. The results revealed how much of the spin comes from gluons. According to an analysis by Rojo and colleagues, published in Nuclear Physics B in 2014, gluons make up about 35 percent of the proton’s spin. Since the quarks make up about 25 percent, that leaves another 40 percent still unaccounted for.
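
In the usual bookkeeping (the Jaffe-Manohar decomposition), those pieces must add up to the proton's total spin of 1/2:

```latex
% Proton spin sum rule: quark spin, gluon spin and the orbital angular momentum
% of quarks and gluons together account for the total spin of 1/2.
\frac{1}{2} \;=\; \underbrace{\tfrac{1}{2}\Delta\Sigma}_{\text{quark spin}}
  \;+\; \underbrace{\Delta G}_{\text{gluon spin}}
  \;+\; \underbrace{L_q + L_g}_{\text{orbital motion}}
```

Plugging in the fractions quoted above, quark spin supplies about 25 percent of the total and gluon spin about 35 percent, leaving roughly 40 percent to come from the orbital terms.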

“We have absolutely no idea how the entire spin is made up,” says nuclear physicist Elke-Caroline Aschenauer of Brookhaven. “We maybe have understood a small fraction of it.” That’s because each quark or gluon carries a certain fraction of the proton’s energy, and the lowest energy quarks and gluons cannot be spotted at RHIC. A proposed collider, called the Electron-Ion Collider (location to be determined), could help scientists investigate the neglected territory.

The Electron-Ion Collider could also allow scientists to map the still-unmeasured orbital motion of quarks and gluons, which may contribute to the proton’s spin as well.

An unruly force
Experimental physicists get little help from theoretical physics when attempting to unravel the proton’s spin and its other perplexities. “The proton is not something you can calculate from first principles,” Aschenauer says. Quantum chromodynamics, or QCD — the theory of the quark-corralling strong force transmitted by gluons — is an unruly beast. It is so complex that scientists can’t directly solve the theory’s equations.

The difficulty lies with the behavior of the strong force. As long as quarks and their companions stick relatively close, they are happy and can mill about the proton at will. But absence makes the heart grow fonder: The farther apart the quarks get, the more insistently the strong force pulls them back together, containing them within the proton. This behavior explains why no one has found a single quark in isolation. It also makes the proton’s properties especially difficult to calculate. Without accurate theoretical calculations, scientists can’t predict what the proton’s radius should be, or how the spin should be divvied up.
To simplify the math of the proton, physicists use a technique called lattice QCD, in which they imagine that the world is made of a grid of points in space and time (SN: 8/7/04, p. 90). A quark can sit at one point or another in the grid, but not in the spaces in between. Time, likewise, proceeds in jumps. In such a situation, QCD becomes more manageable, though calculations still require powerful supercomputers.

Lattice QCD calculations of the proton’s spin are making progress, but there’s still plenty of uncertainty. In 2015, theoretical particle and nuclear physicist Keh-Fei Liu and colleagues calculated the spin contributions from the gluons, the quarks and the quarks’ angular momentum, reporting the results in Physical Review D. By their calculation, about half of the spin comes from the quarks’ motion within the proton, about a quarter from the quarks’ spin, with the last quarter or so from the gluons. The numbers don’t exactly match the experimental measurements, but that’s understandable — the lattice QCD numbers are still fuzzy. The calculation relies on various approximations, so it “is not cast in stone,” says Liu, of the University of Kentucky in Lexington.

Death of a proton
Although protons seem to live forever, scientists have long questioned that immortality. Some popular theories predict that protons decay, disintegrating into other particles over long timescales. Yet despite extensive searches, no hint of this demise has materialized.

A class of ideas known as grand unified theories predict that protons eventually succumb. These theories unite three of the forces of nature, creating a single framework that could explain electromagnetism, the strong nuclear force and the weak nuclear force, which is responsible for certain types of radioactive decay. (Nature’s fourth force, gravity, is not yet incorporated into these models.) Under such unified theories, the three forces reach equal strengths at extremely high energies. Such energetic conditions were present in the early universe — well before protons formed — just a trillionth of a trillionth of a trillionth of a second after the Big Bang. As the cosmos cooled, those forces would have separated into three different facets that scientists now observe.
“We have a lot of circumstantial evidence that something like unification must be happening,” says theoretical high-energy physicist Kaladi Babu of Oklahoma State University in Stillwater. Beyond the appeal of uniting the forces, grand unified theories could explain some curious coincidences of physics, such as the fact that the proton’s electric charge precisely balances the electron’s charge. Another bonus is that the particles in grand unified theories fill out a family tree, with quarks becoming the kin of electrons, for example.

Under these theories, a decaying proton would disintegrate into other particles, such as a positron (the antimatter version of an electron) and a particle called a pion, composed of a quark and an antiquark, which itself eventually decays. If such a grand unified theory is correct and protons do decay, the process must be extremely rare — protons must live a very long time, on average, before they break down. If most protons decayed rapidly, atoms wouldn’t stick around long either, and the matter that makes up stars, planets — even human bodies — would be falling apart left and right.

Protons have existed for 13.8 billion years, since just after the Big Bang. So they must live exceedingly long lives, on average. But the particles could perish at even longer timescales. If they do, scientists should be able to monitor many particles at once to see a few protons bite the dust ahead of the curve (SN: 12/15/79, p. 405). But searches for decaying protons have so far come up empty.

Still, the search continues. To hunt for decaying protons, scientists go deep underground, for example, to a mine in Hida, Japan. There, at the Super-Kamiokande experiment (SN: 2/18/17, p. 24), they monitor a giant tank of water — 50,000 metric tons’ worth — waiting for a single proton to wink out of existence. After watching that water tank for nearly two decades, the scientists reported in the Jan. 1 Physical Review D that protons must live longer than 1.6 × 10³⁴ years on average, assuming they decay predominantly into a positron and a pion.
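
To see why such an enormous tank is needed, here is a rough scale-of-the-search estimate. It ignores the fiducial volume, detection efficiency and branching fractions that go into the real limit, so treat it as an order-of-magnitude sketch.

```python
# Order-of-magnitude sketch: how many protons sit in 50,000 metric tons of water,
# and how many decays per year the quoted lifetime limit would allow.
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0
PROTONS_PER_WATER_MOLECULE = 10     # 2 hydrogen nuclei plus 8 protons in oxygen

mass_grams = 50_000 * 1e6           # 50,000 metric tons, in grams
n_protons = mass_grams / WATER_MOLAR_MASS_G * AVOGADRO * PROTONS_PER_WATER_MOLECULE

lifetime_limit_years = 1.6e34       # Super-K lower limit quoted above
print(f"protons monitored: {n_protons:.1e}")                                     # ~1.7e34
print(f"decays per year allowed at the limit: {n_protons / lifetime_limit_years:.1f}")  # ~1
```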

Experimental limits on the proton lifetime “are sort of painting the theorists into a corner,” says Ed Kearns of Boston University, who searches for proton decay with Super-K. If a new theory predicts a proton lifetime shorter than what Super-K has measured, it’s wrong. Physicists must go back to the drawing board until they come up with a theory that agrees with Super-K’s proton-decay drought.

Many grand unified theories that remain standing in the wake of Super-K’s measurements incorporate supersymmetry, the idea that each known particle has another, more massive partner. In such theories, those new particles are additional pieces in the puzzle, fitting into an even larger family tree of interconnected particles. But theories that rely on supersymmetry may be in trouble. “We would have preferred to see supersymmetry at the Large Hadron Collider by now,” Babu says, referring to the particle accelerator located at the European particle physics lab, CERN, in Geneva, which has consistently come up empty in supersymmetry searches since it turned on in 2009 (SN: 10/1/16, p. 12).

But supersymmetric particles could simply be too massive for the LHC to find. And some grand unified theories that don’t require supersymmetry still remain viable. Versions of these theories predict proton lifetimes within reach of an upcoming generation of experiments. Scientists plan to follow up Super-K with Hyper-K, with an even bigger tank of water. And DUNE, the Deep Underground Neutrino Experiment, planned for installation in a former gold mine in Lead, S.D., will use liquid argon to detect protons decaying into particles that the water detectors might miss.
If protons do decay, the universe will become frail in its old age. According to Super-K, sometime well after its 10³⁴th birthday, the cosmos will become a barren sea of light. Stars, planets and life will disappear. If seemingly dependable protons give in, it could spell the death of the universe as we know it.

Although protons may eventually become extinct, proton research isn’t going out of style anytime soon. Even if scientists resolve the dilemmas of radius, spin and lifetime, more questions will pile up — it’s part of the labyrinthine task of studying quantum particles that multiply in complexity the closer scientists look. These deeper studies are worthwhile, says Downie. The inscrutable proton is “the most fundamental building block of everything, and until we understand that, we can’t say we understand anything else.”

Kepler shows small exoplanets are either super-Earths or mini-Neptunes

Small worlds come in two flavors. The complete dataset from the original mission of the planet-hunting Kepler space telescope reveals a split in the exoplanet family tree, setting super-Earths apart from mini-Neptunes.

Kepler’s final exoplanet catalog, released in a news conference June 19, now consists of 4,034 exoplanet candidates. Of those, 49 are rocky worlds in their stars’ habitable zones, including 10 newly discovered ones. So far, 2,335 candidates have been confirmed as planets and they include about 30 temperate, terrestrial worlds.
Careful measurements of the candidates’ stars revealed a surprising gap between planets about 1.5 and two times the size of Earth, Benjamin Fulton of the University of Hawaii at Manoa and Caltech and his colleagues found. A few planets fall within the gap, but most lie to either side of it.

That splits the population of small planets into those that are rocky like Earth — 1.5 Earth radii or less — and those that are gassy like Neptune, between 2 and 3.5 Earth radii.

“This is a major new division in the family tree of exoplanets, somewhat analogous to the discovery that mammals and lizards are separate branches on the tree of life,” Fulton said.

The Kepler space telescope launched in 2009 and stared at a single patch of sky in the constellation Cygnus for four years. (Its stabilizing reaction wheels later broke and it began a new mission called K2 (SN Online: 5/15/13).) Kepler watched sunlike stars for telltale dips in brightness that would reveal a passing planet. Its ultimate goal was to come up with a single number: The fraction of stars like the sun that host planets like Earth.
The Kepler team has still not calculated that number, but astronomers are confident that they have enough data to do so, said Susan Thompson of the SETI Institute in Mountain View, Calif. She presented the results during the Kepler/K2 Science Conference IV being held at NASA’s Ames Research Center in Moffett Field, Calif.

Thompson and her colleagues ran the Kepler dataset through “Robovetter” software, which acted like a sieve to catch all the potential planets it contained. Running fake planet data through the software pinpointed how likely it was to confuse other signals for a planet or miss true planets.
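
The bookkeeping behind that check looks roughly like the sketch below; the counts are invented placeholders, not Kepler numbers.

```python
# Injection-recovery bookkeeping in miniature: feed the vetting software known
# fakes and known non-planets, then measure how it scores. Counts are made up.
injected_transits = 1000      # synthetic planet signals fed through the software
recovered_transits = 920      # synthetic signals it correctly kept as candidates
false_signals = 500           # noise and binary-star signals fed through
passed_false_signals = 25     # false signals it wrongly kept

completeness = recovered_transits / injected_transits     # fraction of planet-like signals kept
false_alarm_rate = passed_false_signals / false_signals   # fraction of junk that slips through
print(f"completeness: {completeness:.0%}, false-alarm rate: {false_alarm_rate:.0%}")
```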

“This is the first time we have a population that’s really well-characterized so we can do a statistical study and understand Earth analogs out there,” Thompson said.

Astronomers’ knowledge of these planets is only as good as their knowledge of their stars. So Fulton and his colleagues used the Keck telescope in Hawaii to precisely measure the sizes of 1,300 planet-hosting stars in the Kepler field of view. Those sizes in turn helped pin down the sizes of the planets with four times more precision than before.
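The reason stellar radii matter so much: from the transit depth, the planet radius follows as Rp = Rs × sqrt(depth), so a fractional error in the stellar radius feeds almost one-for-one into the planet radius. The uncertainty numbers in the sketch below are illustrative assumptions, not the team's actual figures.

# How better stellar radii translate into better planet radii.
# (sigma_Rp / Rp)^2 = (sigma_Rs / Rs)^2 + (0.5 * sigma_depth / depth)^2
import math

depth_frac_err = 0.02   # assume the transit depth is measured to 2 percent

for star_frac_err in (0.25, 0.06):   # assumed uncertainties before/after the Keck spectra
    planet_frac_err = math.hypot(star_frac_err, 0.5 * depth_frac_err)
    print(f"stellar radius to {star_frac_err:.0%} -> planet radius to about {planet_frac_err:.0%}")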

The split in planet types they found could come from small differences in the planets’ sizes, compositions and distances from their stars. Young stars blow powerful winds of charged particles, which can blowtorch a growing planet’s atmosphere away. If a planet was too close to its star or too small to have a thick atmosphere — less than 75 percent larger than Earth — it would lose its atmosphere and end up in the smaller group. The planets that look more like Neptune today either had more gas to begin with or grew up in a gentler environment, Fulton said.

That divergence could have implications for the abundance of life in the galaxy. The surfaces of mini-Neptunes — if they exist — would suffer under the crushing pressure of such a thick atmosphere.

“These would not be nice places to live,” Fulton said. “Our result sharpens up the dividing line between potentially habitable planets and those that are inhospitable.”

Upcoming missions, like the Transiting Exoplanet Survey Satellite due to launch in 2018, will fill in the details of the exoplanet landscape with more observations of planets around bright stars. Later, telescopes like the James Webb Space Telescope, also scheduled to launch in 2018, will be able to check the atmospheres of those planets for signs of life.

“We can now really ask the question, ‘Is our planetary system unique in the galaxy?’” exoplanet astronomer Courtney Dressing of Caltech says. “My guess is the answer’s no. We’re not that special.”

What channel is Formula 1 on today? TV schedule, start time for 2021 Qatar Grand Prix

And then there were three.

Just three races in the 2021 Formula 1 world championship remain, and it looks like Red Bull's Max Verstappen is in the driver's seat to secure his first world driver's championship.
But hot on his tail is still Lewis Hamilton, who took home the victory in the Brazilian Grand Prix to once again tighten the gap at the top between him and Verstappen entering the final three races of the season.
To say "hot on his tail" would maybe be a bit of an undersell. Hamilton put together a fantastic trio of drives during the weekend, from qualifying to sprint qualifying to the race, starting in 10th and ending up first, even after taking a five-spot grid penalty for a violation.

It doesn't get much hotter than Qatar — or the 2021 F1 championship.

Here's what you need to know about this weekend's F1 race:

What channel is the F1 race on today?
Race: Qatar Grand Prix
Date: Sunday, Nov. 21
TV channel: ESPN2
Live stream: fuboTV
The ESPN family of networks will broadcast all 2021 F1 races in the United States using Sky Sports' feed, with select races heading to ABC later in the season.

ESPN Deportes serves as the exclusive Spanish-language home for all 2021 F1 races in the U.S.

What time does the F1 race start today?
Date: Sunday, Nov. 21
Start time: 9 a.m. ET
The 9 a.m. ET start time for Sunday's race means the 2021 Qatar Grand Prix will start at 5 p.m. local time. Lights out will likely take place just after 9 a.m. ET. ESPN's prerace show usually airs in the hour before the start of the race.
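The conversion is just the eight-hour offset between U.S. Eastern Time and Qatar's time zone; for those who want to double-check, here is a quick sketch using Python's standard zoneinfo module.

# Convert the listed 9 a.m. ET start to Doha local time.
from datetime import datetime
from zoneinfo import ZoneInfo

start_et = datetime(2021, 11, 21, 9, 0, tzinfo=ZoneInfo("America/New_York"))
start_doha = start_et.astimezone(ZoneInfo("Asia/Qatar"))

print(start_et.strftime("%I:%M %p %Z"))    # 09:00 AM EST
print(start_doha.strftime("%I:%M %p %Z"))  # 05:00 PM +03 (5 p.m. local)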

Below is the complete TV schedule for the weekend's F1 events at the Qatar Grand Prix. All times are Eastern.

Date Event Time TV channel
Friday, Nov. 19 Practice 1 5:30 a.m. ESPN2
Friday, Nov. 19 Practice 2 9 a.m. ESPN2
Saturday, Nov. 20 Practice 3 6 a.m. ESPN2
Saturday, Nov. 20 Qualifying 9 a.m. ESPN2
Sunday, Nov. 21 Race 9 a.m. ESPN2
Formula 1 live stream for Qatar Grand Prix
For those who don't have a cable or satellite subscription, there are five major OTT TV streaming options that carry ESPN — fuboTV, Sling, Hulu, YouTubeTV and AT&T Now. Of the five, Hulu, fuboTV and YouTubeTV offer free-trial options.

For those who do have a cable or satellite subscription but are not in front of a TV, Formula 1 races in 2021 can be streamed live via phones, tablets and other devices on the ESPN app with authentication.

Formula 1 schedule 2021
In all, there are 23 scheduled races in the 2021 F1 season, with the Portuguese Grand Prix sliding onto the docket the first week in March. The originally scheduled Vietnam Grand Prix was removed after the arrest of Nguyen Duc Chung, while the Chinese Grand Prix is up in the air. It was originally scheduled for April 11 but will likely not take place this season.

The Singapore Grand Prix was also removed from the schedule, with the Turkish Grand Prix returning to the schedule in its stead.

All races will be broadcast in the U.S. on the ESPN family of networks, with the United States Grand Prix and Mexico City Grand Prix both airing on ABC.

Please note: The on-the-hour start times do not include the broadcast start time, which is typically five minutes before the start of the race. Times do not include ESPN's customary prerace shows.


Here's the latest schedule:

Date Race Course Start time (ET) TV channel Winner
March 28 Bahrain Grand Prix Bahrain International Circuit 11 a.m. ESPN2 Lewis Hamilton (Mercedes)
April 18 Emilia Romagna Grand Prix Autodromo Internazionale Enzo e Dino Ferrari 9 a.m. ESPN Max Verstappen (Red Bull)
May 2 Portuguese Grand Prix Algarve International Circuit 10 a.m. ESPN Lewis Hamilton (Mercedes)
May 9 Spanish Grand Prix Circuit de Barcelona-Catalunya 9 a.m. ESPN Lewis Hamilton (Mercedes)
May 23 Monaco Grand Prix Circuit de Monaco 9 a.m. ESPN2 Max Verstappen (Red Bull)
June 6 Azerbaijan Grand Prix Baku City Circuit 8 a.m. ESPN Sergio Perez (Red Bull)
June 20 French Grand Prix Circuit Paul Ricard 9 a.m. ESPN Max Verstappen (Red Bull)
June 27 Styrian Grand Prix Red Bull Ring 9 a.m. ESPN Max Verstappen (Red Bull)
July 4 Austrian Grand Prix Red Bull Ring 9 a.m. ESPN Max Verstappen (Red Bull)
July 18 British Grand Prix Silverstone Circuit 10 a.m. ESPN Lewis Hamilton (Mercedes)
Aug. 1 Hungarian Grand Prix Hungaroring 9 a.m. ESPN Esteban Ocon (Alpine)
Aug. 29 Belgian Grand Prix Circuit de Spa-Francorchamps 9 a.m. ESPN2 Max Verstappen (Red Bull)
Sept. 5 Dutch Grand Prix Circuit Zandvoort 9 a.m. ESPN2 Max Verstappen (Red Bull)
Sept. 12 Italian Grand Prix Autodromo Nazionale di Monza 9 a.m. ESPN2 Daniel Ricciardo (McLaren)
Sept. 26 Russian Grand Prix Sochi Autodrom 8 a.m. ESPN2 Lewis Hamilton (Mercedes)
Oct. 10 Turkish Grand Prix Intercity Istanbul Park 8 a.m. ESPN2 Valtteri Bottas (Mercedes)
Oct. 24 United States Grand Prix Circuit of the Americas 3 p.m. ABC Max Verstappen (Red Bull)
Nov. 7 Mexico City Grand Prix Autodromo Hermanos Rodriguez 2 p.m. ABC Max Verstappen (Red Bull)
Nov. 14 São Paulo Grand Prix Autodromo Jose Carlos Pace Noon ESPN2 Lewis Hamilton (Mercedes)
Nov. 21 Qatar Grand Prix Losail International Circuit 9 a.m. ESPN2 TBD
Dec. 5 Saudi Arabian Grand Prix Jeddah Street Circuit 11 p.m. ESPN2 TBD
Dec. 12 Abu Dhabi Grand Prix Yas Marina Circuit 8 a.m. ESPN2 TBD