Earth

Under climate stress, human innovation set stage for population surge

Climate alone does not drive human behavior. The choices that people make in the face of changing conditions take place in a larger human context. And studies that combine insights from archaeologists and environmental scientists can offer more nuanced lessons about how people have responded -- sometimes successfully -- to long-term environmental changes.

One such study, from researchers at Washington University in St. Louis and the Chinese Academy of Sciences, shows that aridification in the central plains of China during the early Bronze Age did not cause population collapse, a result that highlights the importance of social resilience to climate change.

Instead of a collapse amid dry conditions, development of agriculture and increasingly complex human social structures set the stage for a dramatic increase in human population around 3,900 to 3,500 years ago.

"In China, especially, there has been a relatively simplistic view of the effects of climate," said Tristram R. "T.R." Kidder, the Edward S. and Tedi Macias Professor of Anthropology in Arts & Sciences. The new study was posted online in Environmental Research Letters.

"Our work shows that we need to have a nuanced appreciation of human resilience as we consider the effects of climate and its effects on human societies," Kidder said. "We have remarkable capacity to adapt. But part of the lesson here is that our social, political and technological systems have to be flexible.

"People in the past were able to overcome climate adversity because they were willing to change," he said.

The new study is one of the first attempts to quantify the types and rates of demographic and subsistence changes over the course of thousands of years in the central plains of China.

By combining information about climate, archaeology and vegetation, the authors mapped out an ambitious story about what changed, when it changed and how those changes were related to human social structures at the time.

Researchers used pollen data from a lake sediment core collected in Henan Province to interpret historical climate conditions. In this area, they found that a warm and wet climate about 9,000 to 4,000 years ago shifted to a cool and dry climate during the Neolithic-Bronze Age transition (about 4,000 to 3,700 years ago). The researchers then used radiocarbon dating and other archaeological data to determine what people were growing and eating during periods of significant population surges and declines in this timeframe.

Confronted with the fluctuation and limitation of resources caused by episodes of climatic aridification, people expanded the number of plants they cultivated for food, the researchers found. They embraced new diversity in agriculture -- including foxtail millet, broomcorn millet, wheat, soybean and rice -- all of which reduced the risks of food production.

This also was a time marked by innovations in water management approaches for irrigation, as well as new metal tools. Social structures also shifted to accommodate and accelerate these examples of human adaptive ingenuity.

"Certainly, by 4,000 years ago, which is when we see this change in the overall environmental condition, this is a society with complicated political, social and economic institutions," Kidder said. "And what I think we are seeing is the capacity of these institutions to buffer and to deal with the climatic variation. When we talk about changes in subsistence strategies, these changes didn't happen automatically. These are human choices."

With this and other related research work, Kidder has argued that early Chinese cities provide an important context that closely resembles modern cities, where high-density urbanism is supported by intensive agriculture. They provide a better historical analog than the lower-density cities of the Maya world or southeast Asia, notably Angkor Wat and the Khmer Kingdom, where food production did not put the same sorts of demands on the physical environment.

Lead author Ren Xiaolin, assistant professor at the Institute for the History of Natural Sciences at the Chinese Academy of Sciences in Beijing, worked closely with Kidder and others in his laboratory to develop the theory and framework for how to think about environmental changes and urbanism in China.

"Climate change does not always equal collapse -- and this is an important point in both a prehistoric and modern context," said Michael Storozum, another co-author and research fellow at The Hebrew University of Jerusalem. Storozum is a PhD graduate of Washington University, where he studied under Kidder.

"Humans have been heavily modifying their environments for thousands of years, often in the pursuit of increasing food production which grants societies a higher degree of social resilience," Storozum said.

He draws connections between the findings from this paper and his current research as part of The Wall project, a study of people and ecology in medieval Mongolia and China.

"As more environmental scientists and archaeologists work together, I expect that our understanding of what makes a society resilient to climate change in prehistoric and historical times will grow as well," Storozum said.

Kidder added: "We need to think carefully about how we understand the capacity of people to change their world."

Credit: 
Washington University in St. Louis

Agents of food-borne zoonoses confirmed to parasitise snails newly recorded in Thailand

image: Shells of the studied thiarid snails (genus Stenomelania) found in Thailand.

Image: 
Apiraksena K, Namchote S, Komsuwan J, and Krailas D.

Parasitic flatworms known as agents of food-borne zoonoses were confirmed to use several species of thiarid snails, commonly found in freshwater and brackish environments in southeast Asia, as their first intermediate host. These parasites can cause severe ocular infections in humans who consume raw or improperly cooked fish that have fed on infected snails. The study, conducted in South Thailand by Thai and German researchers and led by Kitja Apiraksena, Silpakorn University, is published in the peer-reviewed open-access journal Zoosystematics and Evolution.

"Trematode infections are major public health problems affecting humans in southeast Asia," explain the scientists. "Trematode infections depend not only on the habit of people, but also on the presence of first and second intermediate host species, resulting in the endemic spread of parasites, such as intestinal and liver flukes in Thailand".

The snails of concern belong to the genus Stenomelania; they have elongated, pointed shells and can be found near and in the brackish water environments of estuaries in the Oriental Region, from India to the Western Pacific islands. Worryingly, little else is known about these snails to date. Further, these species are hard to distinguish from related trumpet snails because of the similarities in their shell morphology.

In order to provide some basic knowledge about the parasitic worms in Thailand and neighbouring countries, the research team collected a total of 1,551 Stenomelania snails, identified as four species, from streams and rivers near the coastline of the south of Thailand in Krabi, Trang and Satun Provinces. Of them, ten were infected with trematodes. The parasites were found at seven of the studied localities and belonged to three different species. In Krabi Province, the researchers observed all three species.

Speculating on their presence, the scientists suspect that it could be related to the circulation of sea currents, as the flow of water along the Andaman coast is affected by the monsoon season.

In conclusion, the researchers note that it is a matter of public health for further research to look into the biodiversity and biology of these snails, in order to improve our knowledge about the susceptibility of Stenomelania snails to agents of food-borne zoonoses.

"This finding indicated that the resulting parasitic diseases are still largely neglected in tropical medicine, so further studies should be performed on the prevalence of various trematode-borne diseases in locations with snail occurrences in Thailand," they say.

Credit: 
Pensoft Publishers

Atherosclerosis can accelerate the development of clonal hematopoiesis, study finds

BOSTON -- Billions of peripheral white blood cells are produced every day by the regular divisions of hematopoietic stem cells and their descendants in the bone marrow. Under normal circumstances, thousands of stem cells contribute progeny to the blood at any given time, making white blood cells a group with diverse ancestry.

Clonal hematopoiesis is a common age-related condition in which the descendants of one of these hematopoietic stem cells begin to dominate substantial portions of the blood. Genome-wide analyses have determined that clonal hematopoiesis is frequently driven by recurrent genetic alterations that confer a competitive advantage to specific hematopoietic stem cells, thus giving them the ability to expand disproportionately.

Multiple independent studies have shown that clonal hematopoiesis often goes hand in hand with atherosclerosis and cardiovascular disease. Since its discovery, this surprising association has been the subject of intense interest from clinicians and researchers alike.

Cardiovascular disease is the main cause of morbidity and mortality in Western countries and represents a massive public health burden. Do clonal expansions in the blood contribute to the progression of atherosclerosis, and if so, how?

Subsequent work showed that indeed, atherosclerotic plaque formation can be exacerbated by immune cells with clonal hematopoiesis-related mutations, thus raising the question whether clonal expansions in the blood should be targeted therapeutically for the prevention of cardiovascular disease.

In a new study published in Cell, researchers at Massachusetts General Hospital and Harvard Medical School now suggest a different, additional possibility: Atherosclerosis causes clonal hematopoiesis. Patients with atherosclerosis suffer from hyperlipidemia and inflammation, two conditions that are known to chronically boost hematopoietic stem cell division rates. In the new study, the researchers now demonstrate that this increased division accelerates the development of clonal hematopoiesis.

Kamila Naxerova, PhD, a principal investigator in MGH's Center for Systems Biology and senior author of the study, says: "Patients with atherosclerosis essentially experience 'accelerated time.' This is because the speed with which genetic alterations arise and spread through the hematopoietic system is determined by the underlying rate of stem cell division. From a genetic point of view, you could say that atherosclerosis accelerates aging of the blood. Since clonal hematopoiesis is an age-related condition, atherosclerosis patients are prone to developing it earlier than healthy individuals," says Naxerova, who is also an assistant professor of Radiology at Harvard Medical School.
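The quantitative intuition behind "accelerated time" can be sketched with a toy calculation. This is not the study's actual model; the fitness advantage, starting clone size and detection threshold below are all hypothetical, chosen only to show how the calendar time to clonal hematopoiesis scales inversely with the stem cell division rate.

```python
# Illustrative sketch (not the study's model): a mutant hematopoietic clone
# with a fixed per-division fitness advantage expands faster in calendar time
# when the underlying stem cell division rate is higher, e.g. under the
# hyperlipidemia and inflammation seen in atherosclerosis.
# All parameter values below are hypothetical.

import math

def years_to_detectable_clone(divisions_per_year, fitness_advantage=0.05,
                              start_fraction=1e-5, detect_fraction=0.02):
    """Years until a clone growing by `fitness_advantage` per division cycle
    rises from `start_fraction` to `detect_fraction` of the blood."""
    growth_per_division = 1.0 + fitness_advantage
    divisions_needed = math.log(detect_fraction / start_fraction) / math.log(growth_per_division)
    return divisions_needed / divisions_per_year

# A higher division rate compresses the time to detectable clonal hematopoiesis.
print(years_to_detectable_clone(divisions_per_year=1.0))   # ~156 years
print(years_to_detectable_clone(divisions_per_year=2.5))   # ~62 years
```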

Naxerova says that her team's findings may potentially be good news for patients with clonal hematopoiesis: "There is no doubt that more research is needed to carefully dissect the connection between clonal hematopoiesis and cardiovascular disease. But our results indicate that clonal hematopoiesis might in some cases only be a relatively harmless sign of an overactive hematopoietic system, and not a danger in itself."

"What makes this study unique is that the interdisciplinary team incorporated mathematical modeling to discover a new paradigm in the atherosclerosis field and further elucidated the interplay between cardiovascular disease and clonal hematopoiesis," says Michelle Olive, PhD, Program Officer in the Division of Cardiovascular Sciences at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health.

Credit: 
Massachusetts General Hospital

New machine learning tool facilitates analysis of health information, clinical forecasting

Clinical research requires that data be mined for insights. Machine learning, which develops algorithms to find patterns, has difficulty doing this with data related to health records because this type of information is neither static nor regularly collected. A new study developed a transparent and reproducible machine learning tool to facilitate analysis of health information. The tool can be used in clinical forecasting, which can predict trends as well as outcomes in individual patients.

The study, by a researcher at Carnegie Mellon University (CMU), appears in Proceedings of Machine Learning Research.

"Temporal Learning Lite, or TL-Lite, is a visualization and forecasting tool to bridge the gap between clinical visualization and machine learning analysis," explains Jeremy Weiss, assistant professor of health informatics at CMU's Heinz College, who authored the study. "While the individual elements of this tool are well known, their integration into an interactive clinical research tool is new and useful for health professionals. With familiarization, users can conduct preliminary analyses in minutes."

Time is a key part of clinical data that are collected in health care delivery. For example, during discussions of patients on rounds, in which doctors visit hospital patients to determine how they are doing, medical staff use visual aids that depict measurements of progression and recovery. Since electronic health records have been widely adopted, significant advances have been made in visualizing clinical data as well as in clinical forecasting. Yet a gap remains between the two.

TL-Lite begins with visualizations of information from databases and ends with visual risk assessments of a temporal model. Along the way, users can see the effects of their design choices through visual summaries at the levels of individuals as well as groups. This allows users to understand their data more completely and adjust machine learning settings for their analysis.
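A minimal sketch of the general idea of temporal clinical forecasting may help here. This is not TL-Lite's code or interface; the file names, column names and the 24-hour cutoff are hypothetical. The point is only that measurements observed before a chosen prediction time are summarized into features, and a model is then fit to predict an outcome that occurs afterwards.

```python
# Minimal sketch of temporal clinical forecasting (not TL-Lite itself):
# summarize each patient's history up to a prediction time, then learn to
# predict a later outcome. File and column names are hypothetical.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

events = pd.read_csv("labs.csv")          # columns: patient_id, time_hours, platelets
outcomes = pd.read_csv("outcomes.csv")    # columns: patient_id, severe_thrombocytopenia

PREDICTION_TIME = 24  # only use data observed in the first 24 hours of the stay

history = events[events.time_hours <= PREDICTION_TIME]
features = history.groupby("patient_id").platelets.agg(["last", "min", "mean"])
data = features.join(outcomes.set_index("patient_id"), how="inner").dropna()

X_train, X_test, y_train, y_test = train_test_split(
    data[["last", "min", "mean"]], data.severe_thrombocytopenia, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```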

To show how the tool can be used, Weiss demonstrated it on three electronic health record datasets pertaining to three health matters: predicting severe thrombocytopenia (abnormally low levels of platelets in the blood) during stays in the intensive care unit (ICU) among patients with sepsis, predicting survival of patients admitted to the ICU one day after admission, and predicting microvascular complications of type 2 diabetes among patients with the illness.

"The central goal of TL-Lite is to facilitate well-specified and well-crafted predictive forecasting, and this visualization tool is meant to ease the process," says Weiss. "At the same time, organizing the clinical data stream into meaningful visualizations can be aided by introducing machine learning elements. These approaches are complementary, so leveraging the benefits of one where another hits roadblocks results in a better overall solution."

Credit: 
Carnegie Mellon University

Social dilemma follows 2018 eruption of Kilauea volcano

image: Fissure 8 erupts in lava hazard zone 1, back-lighting a front gate, a mailbox and utility lines. May 5, 2018.

Image: 
Bruce Houghton

The unprecedented cost of the 2018 Kilauea eruption in Hawai'i reflects the intersection of distinct physical and social phenomena: infrequent, highly destructive eruptions, and atypically high population growth, according to a new study published in Nature Communications and led by University of Hawai'i at Mānoa researchers.

It has long been recognized that areas in Puna, Hawai'i, are at high risk from lava flows. This ensured that land values were lower in Puna--which lies within the three highest-risk lava hazard zones 1, 2 and 3--and those low values actively promoted rapid population growth.

"Low prices on beautiful land and a scarcity of recent eruptions led to unavoidable consequences--more people and more development," said Bruce Houghton, the lead author of the study and Gordan Macdonald Professor of Volcanology in the UH Mānoa School of Ocean and Earth Science and Technology (SOEST). "Ultimately this drastically increased the value of what was at risk in 2018, relative to earlier eruptions of Ki?lauea."

Kilauea is one of the most active volcanoes on Earth and has one of the earliest, most comprehensive volcanic monitoring systems. Its recent history has been dominated by activity at the summit caldera and from one of two lines of vents called the Eastern Rift Zone. Between 1967 and 2018, volcanic activity was dominated by eruptions from the upper part of the Eastern Rift Zone. In contrast, no damaging eruptions occurred after 1961 in the more heavily populated Puna district from the vents within the lower portion of the Eastern Rift Zone.

The UH team assessed trends in population growth in Pāhoa-Kalapana, Hilo and Puna using census data, and compared median cost of land and household income in these areas.

Valuable lessons regarding the complex interplay of science, policy, and public behavior emerged from the 2018 disaster.

"Steep population growth occurred during the absence of any locally sourced eruptions between 1961 and 2018, and set the scene for the unprecedented levels of infra-structural damage during the 2018 Lower Eastern Rift Zone eruption," said Wendy Cockshell, co-author on the paper and technical assistant at the National Disaster Preparedness Training Center (NDPTC) at UH Mānoa.

If population growth resumes in lava hazard zones 1 and 2, risk will again increase in the most dangerous areas of this exceptionally active volcano, translating into a high cost of damage in future eruptions.

"Our funded research supports the principle of the initiatives by local and federal government to provide buy-out funding to land owners affected by the 2018 eruption to able them to relocate outside of these hazardous areas," said Houghton.

Credit: 
University of Hawaii at Manoa

Signal transduction without signal -- receptor clusters can direct cell movement

video: Upon irradiation with laser light (white rings), receptors cluster in the cell (light green circles). Thereupon, the cell moves in the direction of the receptor clusters.

Image: 
Copyright: M. Florencia Sánchez & Robert Tampé, Goethe University Frankfurt. Reprinted with permission from M. F. Sánchez et al., Science 10.1126/science.abb7657(2021)

Our body consists of 100 trillion cells that communicate with each other, receive signals from the outside world and react to them. A central role in this communication network is attributed to receiver proteins, called receptors, which are anchored at the cell membrane. There, they receive and transmit signals to the inside of the cell, where a cell reaction is triggered.

In humans, G protein-coupled receptors (GPC receptors) represent the largest group of these receptor molecules, with around 700 different types. The research of the Frankfurt and Leipzig scientists focused on a GPC receptor that serves as a receptor for the neuropeptide Y in cells and is accordingly called the Y2 receptor. Neuropeptide Y is a messenger substance that primarily mediates signals between nerve cells, which is why Y2 receptors are mainly present in nerve cells and among other activities trigger the formation of new cell connections.

In the laboratory, the researchers engineered cells, which had approx. 300,000 Y2 receptors on their surface and were grown on specifically developed, light-sensitive matrices. Each of the Y2 receptors was provided with a small molecular "label". Once the scientists created a spot of light with a fine laser beam on the cell surface, the Y2 receptors under this spot were trapped via the molecular label to the exposed matrix in such a way that they moved closely together to form an assembly known as a cluster. The whole reaction could be immediately observed at the defined spot and within a few seconds.

Professor Robert Tampé from the Institute of Biochemistry at Goethe University Frankfurt explains: "The serendipity about this experiment is that the clustering of receptors triggers a signal that is similar to that of neuropeptide Y. Solely by the clustering, we were able to trigger cell movement as a reaction of the cell. The laser spots even allowed us to control the direction of the cell movement." As the light-sensitive lock-and-key pairs utilized are very small compared to the receptors, the organization of the receptors in the cell membrane can be controlled with high precision using the laser spot. "This non-invasive method is thus particularly well suited to study the effects of receptor clustering in living cells," Tampé continues. "Our method can be used to investigate exciting scientific questions, such as how receptors are organized in networks and how new circuits are formed in the brain."

Credit: 
Goethe University Frankfurt

Farmers in developing countries can protect both profits and endangered species

HOUSTON - (Feb. 25, 2021) - Low-income livestock farmers in developing countries are often faced with a difficult dilemma: protect their animals from endangered predators, or spare the threatened species at the expense of their livestock and livelihood.

A new paper by Rice University economist Ted Loch-Temzelides examines such circumstances faced by farmers in Pakistan. "Conservation, risk aversion, and livestock insurance: The case of the snow leopard" outlines a plan under which farmers can protect themselves from crippling financial losses while preserving and possibly benefiting from the lives of endangered predators.

"These livestock owners often have very low incomes," Loch-Temzelides said. "The loss of even one animal can be financially devastating. They're faced with the difficult task of weighing conservation efforts against economic losses due to attacks on their herds. And this situation isn't limited to snow leopards -- it applies anywhere large predators live near livestock."

Loch-Temzelides proposes establishing community livestock insurance contracts for farmers in developing countries who don't have access to the types of policies available in more developed nations. Under these contracts, farmers would agree to share the cost of lost animals with other farmers in their community. For example: If one farmer in a community of 10 lost an animal valued at $100, each community member would lose the equivalent of about $10.

By aiding conservation efforts, he added, farmers may stand to reap additional benefits.

"Tourists around the world are willing to pay to see endangered species such as snow leopards in their natural habitats," Loch-Temzelides said. "And revenue from ecotourism can benefit communities and their residents significantly."

While Loch-Temzelides' study focuses on Pakistan, he hopes community livestock insurance can be useful around the world.

Credit: 
Rice University

Could a common barnacle help find missing persons lost at sea?

image: Lepas is a genus of goose barnacles that are abundantly found on flotsam all around the world.

Image: 
Shutterstock

A common barnacle could be used to help trace missing persons lost at sea, according to research by UNSW Science.

Researchers from the Centre for Marine Science and Innovation have developed an equation that can estimate the minimum time an object has spent drifting at sea by counting the number of Lepas anserifera attached to the object.

They also developed an equation which can help plot possible drift paths of a missing boat.

"We saw this opportunity that Lepas could possibly fill in this gap in that marine forensic process and possibly contribute (to finding missing people)," study lead author Thomas Mesaglio said.

The study, published in Marine Biology, looked at the ecology of the understudied Lepas, a genus of goose barnacles, which are abundantly found on flotsam all around the world.

Goose barnacles are unique as they only attach to floating objects, such as boats.

The Centre for Marine Science and Innovation scientists monitored the abundance of other species of Lepas, and two amphipod species (crustaceans), on both fixed moorings and free-drifting debris over six months.

They also monitored other biofouling organisms such as crabs and sea slugs that were also settling on the mooring to understand how that community evolved over time.

From this information, they developed a new equation.

"Let's say a fisherman out on his boat goes missing, but we don't know exactly where or when his boat sank," Mr Mesaglio said.

"Two weeks later, debris from his boat washes up ashore. We can measure and count the Lepas (as well as counting those other amphipods if present) to give a minimum estimate for how long that debris was drifting. This would give us a smaller and more accurate time window of when he may have sunk, therefore also narrowing down the options for where he may have sunk."

The scientists found the average growth rate for Lepas anserifera was 1.05mm a day.
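As a rough illustration of how such an estimate works -- this is a back-of-the-envelope sketch under a simple linear-growth assumption, not the equation published in the paper, and the 21 mm shell in the example is hypothetical:

```python
# Back-of-the-envelope sketch, not the paper's exact equation: if Lepas
# anserifera grows roughly linearly at the reported average rate, the largest
# shell on a piece of debris gives a minimum estimate of its drift time.

AVERAGE_GROWTH_MM_PER_DAY = 1.05  # average growth rate reported in the study

def minimum_drift_days(largest_shell_mm, growth_rate=AVERAGE_GROWTH_MM_PER_DAY):
    """Minimum days adrift implied by the largest barnacle found on the debris."""
    return largest_shell_mm / growth_rate

# Hypothetical example: a 21 mm barnacle implies at least ~20 days adrift.
print(round(minimum_drift_days(21.0), 1))
```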

The researchers also discovered the fastest daily growth rate of Lepas anserifera was around 1.45mm a day, significantly greater than the previously recorded rate of about 1mm a day, last measured in the 1940s.

They also developed a new equation from oxygen isotope analyses of Lepas shells, which can estimate sea surface temperature history, and therefore help plot possible drift paths of a missing boat.

"We can conduct an isotopic analysis of the Lepas shells attached to a missing boat and reconstruct the sea surface temperature they experienced while attached to the debris," he said.

"We can then compare this to actual measurements of sea surface temperature (from satellites or moorings) from the broad area the missing fisherman could have been in, and try to match times and temperatures to understand the path the debris may have taken."

Study co-author Professor Iain Suthers said the new data can also be used to rule out debris that isn't related to the missing fisherman.

"Say some debris washes up, but we don't know if it's actually from the fisherman's boat, or another boat," Professor Suthers said.

"Given we know a maximum possible time since he launched the boat, we can then compare that with the size of any Lepas on the debris. If the Lepas are so big that the time required for them to grow to that length exceeds that maximum time he was missing then of course we can say the debris doesn't belong to his boat."

Prof. Suthers said Lepas are forensically useful for flotsam that was adrift for one to three months.

"Unfortunately for crash investigators, the new, faster Lepas growth rates suggest that the large (36 mm) Lepas found on the missing Malaysian Airline flight MH370 wreckage at Reunion Island - 16 months after the aircraft was believed to have crashed in 2014 - were much younger than previously realised," he said.

"These Lepas probably settled on wreckage at least halfway across the Indian Ocean, and nowhere near the crash site."

Mr Mesaglio said the next steps in this research would be to conduct further studies on a bigger sample size and in other oceanic regions, such as off the coast of Queensland and Western Australia.

Credit: 
University of New South Wales

Nature's funhouse mirror: understanding asymmetry in the proton

image: Graphical representation of the proton. The large spheres represent the three valence quarks, the small spheres represent the other quarks that make up the proton, and the springs represent the nuclear force holding them together.

Image: 
(Image by Brookhaven National Laboratory.)

Asymmetry in the proton confounds physicists, but a new discovery may bring back old theories to explain it.

Symmetry — displayed in areas ranging from mathematics and art, to living organisms and galaxies — is an important underlying structure in nature. It characterizes our universe and enables it to be studied and understood.

Because symmetry is such a pervasive theme in nature, physicists are especially intrigued when an object seems like it should be symmetric, but it isn’t. When scientists are confronted with these broken symmetries, it’s as if they’ve found an object with a strange reflection in the mirror.

“Nature is leading the way for concepts in older models of the proton to get a second look.” — Argonne physicist Don Geesaman

The proton, a positively charged particle that exists at the center of every atom, displays asymmetry in its makeup. Physicists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and their collaborators recently investigated the intricacies of this known broken symmetry through an experiment conducted at DOE’s Fermi National Accelerator Laboratory. The results of the experiment could shift research of the proton by reviving previously discarded theories of its inner workings.

The outcome of this experiment contradicts the conclusion of a study from the late 90s, also performed at Fermilab. Scientists can now revisit theories to describe asymmetry in the proton that were ruled out by the old experiment.

Understanding the properties of the proton helps physicists answer some of the most fundamental questions in all of science, and by investigating the world at the smallest level, scientists are advancing technology we use every day. Studies of the proton have led to the development of proton therapy for cancer treatment, measurement of proton radiation during space travel and even understanding of star formation and the early universe.

“We were able to look at the puzzling dynamics within the proton,” said Argonne physicist Don Geesaman, “and through this experiment, nature is leading the way for concepts in older models of the proton to get a second look.”

Mismatched matter

Just as shapes can have symmetry, particles can, too. A perfect circle consists of two semicircles of the same size facing opposite directions, and each type of particle in the universe has an antiparticle of the same mass with opposite electric charge.

The building blocks of the proton include particles called quarks, and their antiparticles, called antiquarks. They come in “flavors”, such as up, down, anti-up and anti-down. Quarks and antiquarks are bound together inside the proton by a strong nuclear force. The strength of this force can pull pairs of quarks and antiquarks out of nothing, and these pairs exist for a short time before annihilating each other. This “sea” of quarks and antiquarks popping in and out of existence is ever-present inside the proton.

Curiously, at any given time, there are three more quarks than antiquarks: two more up quarks than anti-up quarks, and one more down quark than anti-down quarks. In other words, these mismatched quarks have no antimatter counterparts. This asymmetry is the reason protons are positively charged, allowing atoms — and therefore all matter — to exist.

“We still have an incomplete understanding of quarks in a proton and how they give rise to the proton’s properties,” said Paul Reimer, an Argonne physicist on the study. “The fleeting nature of the quark-antiquark pairs makes their presence in the protons difficult to study, but in this experiment, we detected the annihilations of the antiquarks, which gave us insight into the asymmetry.”

The experiment determined that there are always more anti-down quarks in the proton than anti-up quarks, no matter the quarks’ momentums. The significance of this result is its contradiction with the conclusion of the Fermilab experiment in the late 90s, which suggested that at high momentums, the proton’s asymmetry reverses, meaning the anti-up quarks begin to dominate anti-down quarks.

“We designed the new experiment to look at these high momentums to determine if this change really occurs,” said Reimer. “We showed that there is a smooth asymmetry with no flip of the ratio between anti-up and anti-down quarks.”

Reconstructing annihilation

To probe the quarks and antiquarks in the proton, the scientists shot beams of protons at targets and studied the aftermath of the particle collisions. Specifically, they studied what happens after a proton from the beam hits a proton in the target.

When protons collide, quarks and antiquarks from the protons annihilate each other. Then, two new fundamental particles called muons come out of the annihilation, acting as the interaction’s signature. From these interactions, the scientists determined the ratio of anti-up quarks to anti-down quarks at a range of high momentums.
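The article does not spell out the underlying relation, but as general background, fixed-target Drell-Yan experiments of this kind conventionally compare muon-pair yields from hydrogen and deuterium targets. At leading order, and in the kinematic region where the beam proton's up quarks dominate, the cross-section ratio tracks the antiquark ratio in the target proton:

$$\frac{\sigma^{pd}}{2\,\sigma^{pp}} \;\approx\; \frac{1}{2}\left[\,1 + \frac{\bar{d}(x)}{\bar{u}(x)}\,\right],$$

so measuring this ratio as a function of the momentum fraction $x$ gives the anti-down to anti-up ratio directly. This is a standard textbook relation, not a formula quoted from the paper itself.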

“We chose to measure muons because they can pass through material better than most of the other collision fragments,” said Reimer. In between the targets and their measurement devices, the team placed a five-meter-thick iron wall to stop other particles from passing through and clouding their signals.

When the muons hit the measurement devices at the end of their journey, the scientists reconstructed the quark-antiquark annihilations from the measurements, enabling them to confirm the smooth, consistent ratio of anti-up quarks to anti-down quarks.

A second look

“What we thought we saw in the previous experiment isn’t what happens,” said Geesaman, who was part of both the present and previous studies. “Why, though? That’s the next step.”

Theories that were rejected after they contradicted the previous experiment’s results now give a great description of the new data, and scientists can revisit them with greater confidence because of this experiment. These theories will inform further experiments on asymmetry in the proton and other particles, adding to our understanding of the theory surrounding quarks.

Clues about the nature of quarks in the proton ultimately lead to better understanding of the atomic nucleus. Understanding the nucleus can demystify properties of the atom and how different chemical elements react with each other. Proton research touches upon fields including chemistry, astronomy, cosmology and biology, leading to advances in medicine, materials science and more.

“You need experiment to lead the thinking and constrain theory, and here, we were looking for nature to give us insight into the proton’s dynamics,” said Geesaman. “It’s an interlacing cycle of experiment and theory that leads to impactful research.”

A paper on the study, “The asymmetry of antimatter in the proton”, was published in Nature on Feb. 24.

The work was performed by the SeaQuest Collaboration, which is supported in part by DOE’s Office of Nuclear Physics and the National Science Foundation.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

Credit: 
DOE/Argonne National Laboratory

Using a multipronged approach to investigate the diet of ancient dogs

image: Coprolites are found at archeological sites and they provide dietary insight

Image: 
Malhi lab

Coprolites, or fossilized dog feces, are often used to understand the dietary preferences of ancient civilizations. However, the samples are often contaminated, making the analysis difficult. A new study, published in Scientific Reports, uses different techniques to improve the investigation of coprolites.

"We have been interested in analyzing coprolites for many years. We have attempted to extract DNA and look at the microbiome before, but the tools were not as robust," said Ripan Malhi (GNDP/GSP/IGOH), a professor of anthropology. "As far as I know, this is the first time anyone has used multiple approaches to provide a snapshot of the daily diet, health, and the long-term trends in ancient dogs of the Americas, all in one study."

The samples were recovered from Cahokia, near modern St. Louis, Missouri. At its peak, Cahokia was a large urban center with a population greater than London or Paris. Several other investigations have shown that there is an overlap between the diet of dogs and humans, either because the dogs were fed the same food or because they ate human food scraps. Therefore, investigating coprolites also provides an insight into human health and diet.

"Initially, the residents were growing crops such as squash and sunflowers. As the city got bigger, it is believed that the diet shifted to maize. Our analysis suggests the same since we saw that some of the dogs were also eating maize," said Kelsey Witt, a postdoctoral researcher at Brown University and former PhD student in the Malhi lab.

The maize samples were examined using stable isotope analysis, which measures different forms (isotopes) of carbon in a sample. Depending on the relative abundance of these isotopes, one can identify what kind of plant was consumed. The researchers also investigated the animal and plant remains in the coprolites to show that walnuts, grapes, a variety of fish, and duck were a part of the dogs' diet.
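For readers unfamiliar with the method, stable carbon isotope results are conventionally reported in the delta notation -- a standard definition from isotope chemistry, not a formula taken from this study:

$$\delta^{13}\mathrm{C} \;=\; \left(\frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1\right)\times 1000\ \text{per mil}.$$

Maize is a C4 plant, and C4 plants leave a markedly less negative $\delta^{13}\mathrm{C}$ signature than C3 plants such as squash, which is what allows maize consumption to be distinguished in samples like these.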

The researchers also used DNA sequencing to determine the microbiome--the community of microbes--of the coprolites. "The technique we used came out in 2020. It helped us verify whether the samples were from dogs or humans, as well as confirm general aspects of diet which can only be done by comparing the microbiomes," said Karthik Yarlagadda, a PhD student in the Malhi lab.

Although the techniques are novel and more sensitive, coprolites are still challenging to study for a number of reasons. The DNA has already passed through the digestive process in the dogs and has therefore been broken down. Furthermore, since the samples are ancient, the extracted DNA is degraded to a large extent due to weathering.

"One of the biggest challenges we faced was dealing with sample contamination," Yarlagadda said. "These samples were deposited a thousand years ago. After that, the environment changed, certain microbes died off, and new microbes took over. All these factors complicate the analysis."

The researchers are working with the Indigenous communities to further understand what the diets looked like in their ancestors. "Since there are a lot of limitations to our research, talking to community members about what their ancestors ate and how they interacted with dogs helps us understand our results better," Witt said.

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Scientists begin building highly accurate digital twin of our planet

To become climate neutral by 2050, the European Union launched two ambitious programmes: the "Green Deal" and the "Digital Strategy". As a key component of their successful implementation, climate scientists and computer scientists launched the "Destination Earth" initiative, which will start in mid-2021 and is expected to run for up to ten years. During this period, a highly accurate digital model of the Earth is to be created, a digital twin of the Earth, to map climate development and extreme events as accurately as possible in space and time.

Observational data will be continuously incorporated into the digital twin in order to make the digital Earth model more accurate for monitoring the evolution of the system and predicting possible future trajectories. But in addition to the observational data conventionally used for weather and climate simulations, the researchers also want to integrate new data on relevant human activities into the model. The new "Earth system model" will represent virtually all processes on the Earth's surface as realistically as possible, including the influence of humans on water, food and energy management, and the processes in the physical Earth system.

Information system for decision-making

The digital twin of the Earth is intended to be an information system that develops and tests scenarios that show more sustainable development and thus better inform policies. "If you are planning a two-metre high dike in The Netherlands, for example, I can run through the data in my digital twin and check whether the dike will in all likelihood still protect against expected extreme events in 2050," says Peter Bauer, deputy director for Research at the European Centre for Medium-Range Weather Forecasts (ECMWF) and co-initiator of Destination Earth. The digital twin will also be used for strategic planning of fresh water and food supplies or wind farms and solar plants.

The driving forces behind Destination Earth are the ECMWF, the European Space Agency (ESA), and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). Together with other scientists, Bauer is driving the climate science and meteorological aspects of the Earth's digital twin, but they also rely on the know-how of computer scientists from ETH Zurich and the Swiss National Supercomputing Centre (CSCS), namely ETH professors Torsten Hoefler, from the Institute for High Performance Computing Systems, and Thomas Schulthess, Director of CSCS.

In order to take this big step in the digital revolution, Bauer emphasises the need for earth sciences to be married to the computer sciences. In a recent publication in Nature Computational Science, the team of researchers from the earth and computer sciences discusses which concrete measures they would like to use to advance this "digital revolution of earth-system sciences", where they see the challenges and what possible solutions can be found.

Weather and climate models as a basis

In their paper, the researchers look back on the steady development of weather models since the 1940s, a success story that took place quietly. Meteorologists pioneered, so to speak, simulations of physical processes on the world's largest computers. As a physicist and computer scientist, CSCS's Schulthess is therefore convinced that today's weather and climate models are ideally suited to show many other scientific disciplines completely new ways to use supercomputers efficiently.

In the past, weather and climate modelling used different approaches to simulate the Earth system. Whereas climate models represent a very broad set of physical processes, they typically neglect the small-scale processes that are essential for the more precise weather forecasts, which in turn focus on a smaller number of processes. The digital twin will bring both areas together and enable high-resolution simulations that depict the complex processes of the entire Earth system. But in order to achieve this, the codes of the simulation programmes must be adapted to new technologies promising much enhanced computing power.

With the computers and algorithms available today, the highly complex simulations can hardly be carried out at the planned extremely high resolution of one kilometre because, for decades, code development stagnated from a computer science perspective. Climate research benefited from being able to gain higher performance by way of new generations of processors without having to fundamentally change its programmes. This free performance gain with each new processor generation stopped about 10 years ago. As a result, today's programmes can often only utilise 5 per cent of the peak performance of conventional processors (CPU).

For achieving the necessary improvements, the authors emphasise the need for co-design, i.e. developing hardware and algorithms together and simultaneously, as CSCS successfully demonstrated during the last ten years. They suggest paying particular attention to generic data structures, optimised spatial discretisation of the grid to be calculated and optimisation of the time step lengths. The scientists further propose to separate the codes for solving the scientific problem from the codes that optimally perform the computation on the respective system architecture. This more flexible programme structure would allow a faster and more efficient switch to future architectures.

Profiting from artificial intelligence

The authors also see great potential in artificial intelligence (AI). It can be used, for example, for data assimilation or the processing of observation data, the representation of uncertain physical processes in the models, and data compression. AI thus makes it possible to speed up the simulations and filter out the most important information from large amounts of data. Additionally, the researchers assume that the use of machine learning not only makes the calculations more efficient, but can also help describe the physical processes more accurately.

The scientists see their strategy paper as a starting point on the path to a digital twin of the Earth. Among the computer architectures available today and those expected in the near future, supercomputers based on graphics processing units (GPU) appear to be the most promising option. The researchers estimate that operating a digital twin at full scale would require a system with about 20,000 GPUs, consuming an estimated 20MW of power. For both economic and ecological reasons, such a computer should be operated at a location where CO2-neutral generated electricity is available in sufficient quantities.
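For scale, treating the figures above as round numbers, the implied power budget works out to

$$\frac{20\ \mathrm{MW}}{20{,}000\ \mathrm{GPUs}} = 1\ \mathrm{kW\ per\ GPU},$$

i.e. roughly a kilowatt per accelerator including its share of the surrounding system (an interpretation, not a figure stated by the authors), which helps explain the emphasis on siting such a machine where CO2-neutral electricity is plentiful.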

Credit: 
ETH Zurich

Cellular seafood

Meat alternatives are officially mainstream. To wit, Burger King added the plant-based Impossible Burger to its menu nationwide in 2019, and McDonald's plans to unveil its own McPlant in 2021. Alongside these vegetarian options, many companies are also working to culture meat from animal cell lines, outside of the animals themselves. Proponents highlight a range of potential environmental and health benefits offered by this emerging industry, and several companies believe that these benefits could also play out with seafood.

A multidisciplinary team of researchers has taken a good, hard look at what it would take for cell-based seafood to deliver conservation benefits. They have compiled their findings into a paper in the journal Fish & Fisheries in which they lay out the road map to change, comprising nine distinct steps. The authors contend that cell-based seafood faces a long, narrow path toward recovering fish stocks in the ocean, with success ultimately determined by the complex interplay of behavioral, economic and ecological factors.

"The core question of our work was, can this new technology -- cell-based seafood -- have a conservation benefit in the ocean?" said lead author Ben Halpern(link is external), a professor at UC Santa Barbara's Bren School of Environmental Science & Management and executive director of the National Center for Ecological Analysis and Synthesis (NCEAS).

A team of 12 researchers from UC Santa Barbara converged to answer this question, including economists, ecologists and data scientists as well as experts on fisheries, aquaculture and cell-based meat technology. They brought their expertise and the scientific literature to bear in order to flesh out the key steps along this pathway. Eventually they distilled it to nine significant phases.

The journey begins by developing a viable product and introducing it to the market, where it must then drop to a price competitive with existing seafood. At this point, a significant proportion of consumers have to adopt the new product as a substitute for traditional seafood. This is a key step, the authors said, and particularly tricky to pull off.

The first four steps may be sufficient for the success of a new product, but achieving conservation outcomes is a much longer process. The new product must drive down demand for wild-caught seafood, and the decline in price must pass through a complicated supply chain to fishermen. The drop in price then needs to decrease fishing efforts, which may or may not enable fish stocks to recover. Finally, the ecological impacts of producing cellular seafood can't be greater than those of fishing, the researchers said.

Each of these steps brings with it a variety of hurdles, perhaps none harder than getting consumers to adopt the cultured seafood instead of buying wild-caught fish. Convincing people to take on something new and leave behind something old is a huge challenge, Halpern explained. It's also an understudied part of this process, he added.

Of course, millions of dollars have gone toward studying product adoption and diffusion, coauthor Jason Maier pointed out. Researchers pore over the factors that influence a consumer's willingness to try, and ultimately take up, a new product. The ability to sample the product, its relative advantage over the item it intends to replace, and how well it suits consumer habits and values all affect the likelihood it will be adopted, he explained.

"So why do we say it is understudied?" Maier asks. "Well, because previous research has primarily focused on only the adoption process." But when it comes to environmental outcomes, substitution for the existing product is as important as adoption of the new one. For instance, many believed that farm-raised fish could release pressure on wild stocks. Instead, what researchers have seen are massive increases in seafood consumption with little direct evidence that aquaculture has reduced fishing pressure.

"The take home is that the pathway to get from creating this technology to more fish in the ocean is long and narrow," said Halpern. "There are a lot of steps that have to happen and the path gets narrower and narrower as you go along. So it's not impossible, but it is difficult for many reasons to get a conservation outcome, in terms of more fish in the ocean, from this cell-based seafood."

Most of these hurdles apply to any consumer-driven intervention in the ocean. It's challenging to harness people's preferences, their buying habits, to drive change. "Trying to use consumer behavior as a way to influence the ocean requires a lot more steps than top-down approaches like regulations," said coauthor Heather Lahr, the cell-based seafood project manager at UC Santa Barbara's Environmental Market Solutions Lab (emLab).

Society also needs to weigh the costs and impacts of other conservation measures against new technologies like cultured seafood, she added. Strategies like fishery management and marine reserves have already proven their worth.

And while the technologies for culturing beef and swordfish may be similar, the context could scarcely be less alike. Seafood comes from hundreds of species, with different life histories, habitats and diets, Lahr explained. What's more, consumers tend to group many species under a single culinary experience. "For instance, when consumers eat a fish taco they are expecting a white flaky fish which could be anything from farm raised tilapia to locally sourced halibut," Lahr said. Compare that with beef, which primarily comes from one species: Bos taurus, the European, or "taurine," cattle.

And, unlike terrestrial meats, seafood still comes primarily from the wild. Humans have less control over fish stocks than livestock, and fishing activity responds to consumer, economic and environmental changes differently than ranching. Fishermen also fall under different regulations than farmers.

There's also a mismatch between the fish that could benefit most from this technology and the species that the industry is focusing on. Financially important stocks and popular seafood items, like tuna and salmon, are typically already well managed, Lahr said.

"The stocks where the need is greatest are not actually where the clean seafood technology, the cell-based seafood companies, are focusing their efforts," Halpern added, "because there's not much money in those species." For instance, fish like anchovies and sardines used for feed and oil may be able to benefit more from cell-based technologies, but currently the price point for these species is too low to make the investment worthwhile.

This paper is one of several upcoming studies exploring the conservation benefits of cell-based seafood. The team will further investigate the possibilities of demand-driven conservation interventions and review the impacts that the rise of aquaculture has had on fisheries and wild stocks. They also plan to dive further into understanding how and why consumers change their behavior when confronted with new products. The initiative is part of a joint project between NCEAS and emLab.

Halpern believes that, if society truly applies its resources toward developing technology to address a challenge, it will likely find one. "But whether the technology will actually achieve the intended outcome depends on so many other steps," he said. "So we need to think carefully through all those steps before counting on any particular solution to deliver the outcome we hope for."

Credit: 
University of California - Santa Barbara

Scientists suggest using 'defective' diamonds in X-ray optics

X-rays are used to study the atomic and microstructural properties of matter. Such studies are conducted at special accelerator complexes called synchrotrons. A synchrotron source generates powerful electromagnetic radiation with a wavelength equal to fractions of a nanometer. Some X-rays are reflected from the atomic planes of a crystal and some pass through it, so the crystal plays the role of a beam-splitter (a so-called semitransparent mirror). If the radiation passes through monochromators -- optical devices that consist of two or more ideal crystals -- its exit wavelength can be tuned. The parameters of the electromagnetic radiation depend on the material that the optical element is made of. By improving the properties of optical devices, one can increase the quality and efficiency of X-ray research methods and use unique modern megascience facilities to their full potential.

Most modern-day X-ray optical elements are based on silicon and germanium crystals. However, they get heated under the X-ray radiation from a synchrotron source, and high temperatures cause their crystal lattice to change, leading to distortion of the reflected beam. Optical elements made of artificial diamonds provide better beam quality, as their thermal conductivity is higher and their coefficient of thermal expansion lower than those of silicon elements. However, lab-grown diamonds contain not only carbon but also nitrogen. This impurity creates internal stress in the crystal and leads to uneven distances between the atoms. The cut of a crystal mainly depends on its internal structure, and the distribution of growth sectors (the areas that are formed when layers of substance grow on top of each other) correlates with the placement of nitrogen atoms. On the borders of these growth sectors, stress fields are formed. When a crystal is grown artificially, it is extremely difficult to control nitrogen level and distribution. Therefore, historically, the quality of plates made of nitrogen-bearing diamonds had been considered too low for them to be used in optical elements. A team from BFU, together with their foreign colleagues, managed to disprove this belief and to obtain plates with sufficiently large defect-free areas.

The team used BARS, a unique device for the manufacture of ultrahard materials, to grow two synthetic diamond crystals at 1,500°C and under a pressure of over 50 thousand atmospheres. The obtained crystals had almost perfect atomic lattices. Then, small bits were chipped off from the crystals, and thin plates were made from them. First, their quality was assessed using X-ray examination, and after that, the plates were studied using the high-resolution diffractometry method at a synchrotron source. After scanning the plates, the team obtained high-resolution rocking curves--the charts that helped them evaluate the structural perfection of the crystals.

"The deflection angle of a crystal towards radiation changes depending on the energy of the incoming beam and the plane that it reflects from. This angle is called the Bragg angle. We incline a crystal at this angle, reflected radiation hits a detector, and then we start rocking it. The rocking curve that we get shows the correlation between the intensity of the reflected radiation and the deflection angle of the crystal. Then we compare the rocking curve with a pre-calculated theoretical curve of a perfect crystal," said Anatoly Snigirev, the head of the International Science and Research Center "Coherent X-ray Optics for Megascience facilities", BFU.

Having analyzed the curves, the team concluded that although the crystal plates had many imperfections along their edges, there were large clean areas in their centers that accounted for over 50% of the total plate area. Given that defects usually appear during the cutting and polishing of diamonds, the prospects for using nitrogen-bearing diamonds in X-ray optics depend on improving these processes. Diamond crystals are needed for the manufacture of various optical elements, such as monochromators, beam-splitters, interferometers, and refractive lenses.

The study was carried out jointly with colleagues from the V.S. Sobolev Institute of Geology and Mineralogy SB RAS (Russia, Novosibirsk) and the German Electron Synchrotron DESY (Germany, Hamburg).

We are grateful to Nataliya Klimova, a scientific consultant and a junior researcher at the International Science and Research Center "Coherent X-ray Optics for Megascience facilities", BFU, for her assistance in preparing this article.

Credit: 
Immanuel Kant Baltic Federal University

Study finds human-caused North Atlantic right whale deaths are being undercounted

image: Catalog #3522 swims off the coast of Georgia in 2006 with fresh propeller cuts on his back.
He was never seen again and is presumed dead due to these injuries.

Image: 
New England Aquarium, collected under NOAA Permit #655-1652-01

A study co-authored by scientists at the New England Aquarium has found that known deaths of critically endangered North Atlantic right whales represent a fraction of the true death toll. This comes as the death of a calf and recent sightings of entangled right whales off the southeastern United States raise alarm.

The study, published this month in Conservation Science and Practice, analyzed cryptic mortality of right whales: deaths resulting from human activities that do not leave an observed carcass. The study's authors combined data on whale encounters, animal health, serious injuries, and necropsies from the North Atlantic Right Whale Consortium Identification Database, curated by the New England Aquarium, with the serious injury and mortality database held by the National Marine Fisheries Service. The scientists concluded that known deaths of the critically endangered species accounted for only 36% of all estimated deaths from 1990 to 2017.
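The 36% figure implies a simple scaling between the carcasses that are actually found and the true death toll. The sketch below is only back-of-envelope arithmetic with a hypothetical carcass count; the study itself relies on a far more detailed analysis of sighting and injury records.

```python
def estimated_total_deaths(observed_deaths, detection_fraction=0.36):
    """If observed deaths represent only a known fraction of true
    mortality, scale them up: total = observed / fraction."""
    return observed_deaths / detection_fraction

# Hypothetical example: 50 documented deaths would imply roughly
# 139 true deaths if only 36% of mortalities are ever detected.
print(round(estimated_total_deaths(50)))
```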

"Our work has shown that 83% of identified right whales have been entangled one or more times in fishing gear, and an increasing number of these events result in severe injuries or complex entanglements that the whales initially survive. But we know their health becomes compromised and they eventually succumb and sink upon death," said Amy Knowlton, senior scientist with the Aquarium's Anderson Cabot Center for Ocean Life.

The study--led by Richard Pace and Rob Williams and co-authored by Knowlton, New England Aquarium Associate Scientist Heather Pettis, and Aquarium Emeritus Scientist Scott Kraus--determined that several factors interact to cause undercounting of human-caused mortalities of marine mammals. First, in order for a human-caused mortality to be determined, a whale carcass must float or strand, be detected before decomposition or scavenging occurs, be evaluated to determine cause of death, and then have that result reported. In the absence of any of these steps, information about the cause of mortality can easily be lost.

Additionally, a number of right whales have been observed entangled or injured from vessel strikes and never seen again. This suggests they died and their carcasses were not discovered.

"We have long known that the number of detected right whale carcasses does not align with the number of whales that disappear from the sightings records," Pettis said. "Since 2013 alone, we have documented 40 individual right whales seen with severe injuries resulting from vessel strikes and entanglements that disappeared following their injury. This study allowed us to quantify just how underrepresented true right whale mortalities are when we rely on observed carcasses alone."

The North Atlantic right whale population is estimated at just over 350 individuals. Right whales are one of the most endangered large whale species in the world, facing serious ongoing threats from vessels and fishing gear. Just in the past month, a right whale calf died in an apparent vessel strike and two right whales have been spotted entangled in fishing gear. A sport fishing boat hit and killed the calf in the calving grounds off the Florida coast on February 12. The calf was the first born to Infinity (Catalog #3230), who also suffered injuries consistent with a vessel strike. Catalog #1803, a 33-year-old male, was seen badly entangled off the coast of Georgia and Florida in mid-January, and on February 18, Cottontail (Catalog #3920) was sighted entangled and emaciated off the Florida coast. Cottontail, an 11-year-old male, was first seen entangled in southern New England last fall. In both cases, disentanglement efforts were not successful and these whales will likely die.

"These serious entanglements are preventable with regulatory changes and a commitment from the fishing industry and the U.S. and Canadian governments to do more to address this threat," said Knowlton.

For 40 years, the Aquarium's Right Whale Research Program has extensively studied this critically endangered species. Scientists focus on solutions-based work, collaborating with fishermen on new techniques to reduce deadly entanglements in fishing gear, facilitating communication across the maritime industry to reduce vessel strikes, and working with lawmakers locally, nationally, and internationally to develop science-based protections for the whales.

Credit: 
New England Aquarium

Making a difference: comparative biologists tackle climate change

For many, 2020 was notorious for the COVID-19 pandemic, but for climate scientists, the year is also infamous for tying with 2016 as the hottest since records began. 'Nine of the warmest years on record have occurred since 2010', says JEB Editor-in-Chief, Craig Franklin. With the ice caps and glaciers melting, devastating bushfires scorching arid regions, and hurricanes and typhoons battering coastal communities, the impact on local ecosystems has been catastrophic. 'Physiologists can play a critical role in the conversation around climate change', says Franklin, explaining that knowledge of physiology ideally positions comparative physiologists to predict the impact of climate change on species and to inform conservation policy and action.

Franklin and Hans Hoppeler (JEB Editor-in-Chief, 2004-2020) have commissioned a series of review articles dedicated to strategies for, and predictions of, the impact of climate change on ecosystems across the globe. The collection discusses our current understanding of the physiological impact of the climate crisis and the lessons that will inform biodiversity management and conservation in the coming decade.

Looking to the future, Franklin is optimistic. 'Physiologists and experimental biologists can make valuable contributions to our understanding of threats to biodiversity', he says, adding that comparative physiologists can clearly make pivotal contributions to the dialogue surrounding conservation. Understanding the interactions between animals and their environments and how those will change as temperatures rise, climate patterns change and population distributions shift, is essential for building policies to protect keystone and vulnerable species alike. From Diamond and Martin's review detailing the lessons that we can learn from urban heat islands to Putnam's optimism for the future of corals, this collection of reviews provides a foundation for comparative physiologists to build on as the vanguard in the battle to protect biodiversity. 'We see this as an important special issue. We know we are imperilled by climate change, but this collection of articles talks about what we should be doing in the future to protect the planet', says Franklin.

Credit: 
The Company of Biologists