
Newly developed AI uses a combination of ECG and X-ray results to diagnose arrhythmic disorders

Image: Diagram of the developed AI's architecture (Credit: Kobe University)

Kobe University Hospital's Dr. NISHIMORI Makoto and Project Assistant Professor KIUCHI Kunihiko et al. (of the Division of Cardiovascular Medicine, Department of Internal Medicine) have developed an AI that uses multiple kinds of test data to predict the location of surplus electrical pathways in the heart called 'accessory pathways', which cause the heart to beat irregularly. In this study, the researchers improved diagnostic accuracy by having the AI learn from two completely different types of test results: electrocardiography (ECG) data and X-ray images. Based on these successful results, it is hoped that the methodology can be applied to other disorders as well.

These research results were published online in 'Scientific Reports' on April 13, 2021.

Research Background

Wolff-Parkinson-White (WPW) syndrome is an arrhythmic disorder. Patients with WPW syndrome are born with surplus pathways inside their hearts called 'accessory pathways', which can cause tachycardia episodes where the pulse speeds up. Catheter ablation involves using a catheter to selectively cauterize accessory pathways and can completely cure this disorder. However, the success rate of catheter ablation varies depending on the location of the accessory pathways. Conventionally, a 12-lead ECG (i.e. a standard electrocardiogram) has been used to predict accessory pathway location prior to treatment. However, this method, which relies solely on ECG, is insufficiently accurate, making it difficult to give patients a full explanation that includes the likely success rate of treatment. This study applied AI to that problem.

The researchers used a methodology for teaching AI called deep learning. Deep learning involves entering the data for each patient and the corresponding answers into a program. By repeating this learning process, the program automatically becomes smarter. Using this methodology, the research group was able to present a solution to a previously unresolved problem, thus further promoting the application of AI to modern medicine.

Research Methodology

Firstly, Dr. Nishimori's team developed an AI using only ECG data and compared its performance to previous methods. They conducted repeated learning in which they gave the AI each patient's ECG data together with the accessory pathway location (i.e. the answer) for each case, successfully creating an AI with a higher accuracy rate than previous methods. However, the AI was unable to make correct predictions every time from ECG data alone. The suspected cause was that ECG data is affected by differences in the size and position of each heart, so ECG readings did not match even when the accessory pathway location was the same. This problem was resolved by having the AI simultaneously learn information such as each heart's size from chest X-ray images (Figure 1). By learning both the pre-treatment ECG and X-ray image data at the same time, the AI obtained the missing information, and its diagnostic accuracy improved significantly (Figure 2) compared to when only ECG data was used.
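The fusion idea described above - encode each modality separately, then combine the features before the final prediction - can be sketched in a few lines. The following is a toy two-branch network with random weights and invented dimensions, intended only to illustrate the concatenation-based fusion of ECG and X-ray features; it is not the authors' actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: a 12-lead ECG flattened to 1200 samples, a
# chest X-ray flattened to 4096 pixels, 9 candidate pathway regions.
N_ECG, N_XRAY, N_HIDDEN, N_CLASSES = 1200, 4096, 64, 9

# Randomly initialized weights stand in for trained parameters.
W_ecg = rng.normal(0, 0.01, (N_ECG, N_HIDDEN))
W_xray = rng.normal(0, 0.01, (N_XRAY, N_HIDDEN))
W_out = rng.normal(0, 0.01, (2 * N_HIDDEN, N_CLASSES))

def predict(ecg, xray):
    """Encode each modality separately, then fuse by concatenation."""
    h_ecg = relu(ecg @ W_ecg)      # ECG branch: waveform morphology
    h_xray = relu(xray @ W_xray)   # X-ray branch: heart size/position
    fused = np.concatenate([h_ecg, h_xray], axis=-1)
    return softmax(fused @ W_out)  # probability per candidate location

probs = predict(rng.normal(size=(1, N_ECG)), rng.normal(size=(1, N_XRAY)))
```

Because the X-ray branch feeds the classifier alongside the ECG features, the final layer can condition its location estimate on heart size and position, which is the missing information the study describes.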

Further Developments

The advancement of AI technology in recent years has made it possible for AI to make highly accurate diagnoses based on various kinds of test data in the field of medicine. However, there are cases where data from a single test is insufficient for AI to perform an accurate diagnosis. This research study successfully increased the accuracy by having the AI learn not only from ECG results but also from chest X-ray images, which are a completely different type of data. AI-mediated accurate diagnoses will enable doctors to give pre-treatment patients a more accurate explanation of their condition, which will hopefully put patients at ease. In addition, this research could be applied to various other disorders and will hopefully lead to the implementation of AI diagnosis software.

Credit: 
Kobe University

Using floodwaters to weather droughts

Image: Flooding vineyards to increase groundwater recharge: Terranova Ranch, near Fresno, California, diverts water from a full flood-control channel. (Credit: Terranova Ranch Inc.)

Floodwaters are not what most people consider a blessing. But they could help remedy California's increasingly parched groundwater systems, according to a new Stanford-led study. The research, published in Science Advances, develops a framework to calculate future floodwater volumes under a changing climate and identifies areas where investments in California's aging water infrastructure could amplify groundwater recharge. As the state grapples with more intense storms and droughts, stowing away floodwaters would not only reduce flood risks but also build more water reserves for drier times.

"This is the first comprehensive assessment of floodwater recharge potential in California under climate change," said study lead author Xiaogang He, an assistant professor in civil and environmental engineering at the National University of Singapore who pursued the research as a postdoctoral fellow at Stanford's Program on Water in the West.

Whether it's rivers overflowing in the Central Valley flatlands, high-tide storms hitting lowland coastal areas, flash floods drenching southern deserts or impermeable concrete-laden cities pooling with water, California is susceptible to flooding. Conversely, looming droughts often raise concern about water supply: diminished groundwater causes land subsidence, contaminates drinking water and reduces surface supplies. These declining reserves also hamper climate resilience - during periods of drought, up to 60 percent of the state's water comes from groundwater, and 85 percent of Californians depend on the resource for at least a portion of their water supply.

Water banking

As climate change intensifies the severity and frequency of these extreme events, amplifying refill rates could help the state reach a more balanced groundwater budget. One practice, called water banking or managed aquifer recharge, involves augmenting surface infrastructure, such as reservoirs or pipelines, with underground infrastructure, such as aquifers and wells, to increase the transfer of floodwater for storage in groundwater basins.

A newer strategy for managing surface water than traditional methods like reservoirs and dams, water banking offers multiple benefits, including flood risk reduction and improved ecosystem services. While groundwater basins offer a vast network for water safekeeping, pinpointing areas prime for replenishment, gauging the infrastructure needed and estimating the amount of water available remain key, especially in a warming and uncertain climate.

"Integrating managed aquifer recharge with floodwaters into already complex water management infrastructure offers many benefits, but requires careful consideration of uncertainties and constraints. Our growing understanding of climate change makes this an opportune time to examine the potential for these benefits," said senior author David Freyberg, an associate professor of civil and environmental engineering at Stanford.

The researchers designed a framework to estimate future floodwater availability across the state. Developing a hybrid computer model using hydrologic and climate simulations and statistical tools, the team calculated water available for recharge under different climate change scenarios through 2090. They also identified areas where infrastructure investments should be prioritized to tap floodwater potential and increase recharge.
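One simple way to operationalize "water available for recharge" is streamflow in excess of what existing infrastructure can capture. The toy accounting below illustrates only that idea; the flow values, capacity threshold and units are invented, and the study's hybrid hydrologic-climate model is far more sophisticated.

```python
import numpy as np

def floodwater_volume(daily_flow, capacity):
    """Sum the flow that exceeds infrastructure capture capacity.

    daily_flow : array of daily streamflow (hypothetical units)
    capacity   : volume existing reservoirs/canals can capture per day
    """
    excess = np.clip(daily_flow - capacity, 0.0, None)
    return excess.sum()

# Invented wet-season hydrograph: baseline flow plus a few storm peaks.
flow = np.array([2.0, 2.5, 9.0, 14.0, 3.0, 2.0, 7.5, 2.2, 2.0, 11.0])

available = floodwater_volume(flow, capacity=5.0)
print(available)  # 21.5
```

Under this definition, expanding diversion capacity (raising `capacity`) directly reduces the volume lost as uncaptured flood flow, which is why the study emphasizes prioritizing infrastructure investments where future excess flows will be largest.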

Future floodwaters

The team found that California will experience increased floodwater from both heavier rain and earlier snowmelt due to warmer temperatures, with wet weather concentrated into a narrowing window. In particular, the Sacramento River and North Coast, along with the northern and central Sierra Nevada region, will see more substantial floodwater volumes. These deluges could overload current water infrastructure, such as reservoirs and aqueducts. However, if the region has additional floodwater diversion infrastructure in place, such as canals or pipelines, it could maximize recharge potential and transfer more floodwater toward arid Southern California.

Future projections find unchanging or in some cases even drier conditions in Southern California. This widening divide is bad news for the region, which currently has greater groundwater depletion and recharge needs than its northern counterpart. This mismatch of water abundance and need reveals a profound challenge for recharge practices, in terms of moving high volumes of water from where it will be available in Northern California to where it will be needed southward.

The researchers also found recharge estimates for the San Joaquin Valley - one of the world's most productive agricultural regions - could help restock a large portion of depleted groundwater aquifers. Sitting at the base of the Sierra Nevada mountains, this region will need to accommodate larger volumes of water both above and below the surface in order to maximize refill potential. Water managers will need to expand conveyance projects and reopen reservoirs there.

While climate impacts are the dominant influence, the researchers point out that other factors - including infrastructure capacity, policy constraints, and financial and environmental concerns - must be jointly considered during the planning process.

The study's framework is adaptable and scalable for managing drought, flood and depleted groundwater aquifers worldwide.

"At the global scale only 1 percent of groundwater recharge occurs from managed aquifer recharge," He said. "This work can be applied to help other depleted aquifers, such as the North China Plain or India's Upper Ganges, reach and maintain sustainable groundwater levels."

Credit: 
Stanford University

Chronic stress may reduce lifespan in wild baboons, according to new multi-decadal study

Addressing a much-debated question about the impact of stress on survival in wild, nonhuman primates, a new multi-decadal study involving 242 wild female baboons found evidence to support chronic stress as a significant factor affecting survival. The study found that a female baboon with a stress response - as reflected in fecal glucocorticoid concentrations, a biomarker of the stress response - in the 90th percentile for her age throughout adulthood was expected to lose 5.4 years of life compared to a female with glucocorticoid concentrations in the 10th percentile for her age group. The findings, which leveraged more than 14,000 fecal glucocorticoid measurements over a total of more than 1,600 years of female adult lifespan, support the idea that these stress response measurements may be strong predictors of survival in female baboons. The results could also help explain why some nonhuman primate individuals, as well as humans, live longer than others.

Scientists have debated whether differences in chronic activation of the hypothalamic-pituitary-adrenal (HPA) axis (the stress response) during adulthood are tied to individual differences in survival. Previous research on humans has shown that factors such as social isolation and low socioeconomic status are associated with elevated glucocorticoid levels from increased HPA axis activation, and further research has found that chronically elevated glucocorticoid levels can lead to health problems such as cardiovascular disease and diabetes. However, evidence for the link between chronic stress and detrimental health effects in nonhuman primates had not previously been identified in natural animal populations.

To close this research gap, Fernando Campos and colleagues measured and analyzed 14,173 fecal glucocorticoid measurements from adult female baboons living in the Amboseli ecosystem in Kenya, over the course of their entire adult (>5 years old) lives.
They chose fecal measurements because they are more stable than those from blood or plasma, reflect concentrations over several hours rather than at a single instant, and are not altered by restraint or handling. The researchers then developed models to test the links between longevity and lifelong high glucocorticoid levels, as opposed to increases in glucocorticoids associated with discrete events and specific conditions (such as dominance rank, group size, pregnancy, and lactation). The authors observed greater differences in life expectancy between individuals with high and low fecal glucocorticoid measurements in the cumulative effect model than in a current value model, the latter of which captured a spike in concentrations just before death caused by trauma or illness. The findings suggest that high stress levels throughout adult life affect survival, not just high stress levels close to the time of an individual's death. Campos et al. note that the effects of chronic stress on the survival of adult female baboons are robustly linked to those of social isolation and early-life adversity, as they are in humans.
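The difference between the two models can be seen in how the exposure covariate is built: a current-value covariate uses only the most recent measurement, while a cumulative covariate averages over the whole adult life to date. The numbers below are invented for illustration and are not Amboseli data.

```python
import numpy as np

# Hypothetical fecal glucocorticoid (fGC) series for one female,
# one value per sampling occasion across her adult life.
fgc = np.array([45.0, 50.0, 62.0, 58.0, 70.0, 66.0, 90.0])

# Current-value covariate: the most recent measurement only.
# This is dominated by any terminal spike before death.
current = fgc[-1]

# Cumulative covariate: running mean of all measurements to date,
# capturing lifelong HPA-axis activation rather than a final surge.
cumulative = np.cumsum(fgc) / np.arange(1, len(fgc) + 1)

print(current)          # 90.0
print(cumulative[-1])   # 63.0
```

A terminal illness can inflate `current` without much moving `cumulative`, which is why a stronger survival effect under the cumulative model points to lifelong chronic stress rather than end-of-life deterioration.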

Credit: 
American Association for the Advancement of Science (AAAS)

First-of-its-kind study links wildfire smoke to skin disease

Wildfire smoke can trigger a host of respiratory and cardiovascular symptoms, ranging from runny nose and cough to a potentially life-threatening heart attack or stroke. A new study suggests that the dangers posed by wildfire smoke may also extend to the largest organ in the human body, and our first line of defense against outside threats: the skin.

During the two weeks in November 2018 when wildfire smoke from the Camp Fire choked the San Francisco Bay Area, health clinics in San Francisco saw an uptick in the number of patients visiting with concerns of eczema, also known as atopic dermatitis, and general itch, compared to the same time of the year in 2015 and 2016, the study found.

The findings suggest that even short-term exposure to hazardous air quality from wildfire smoke can be damaging to skin health. The report, carried out by physician researchers at the University of California, San Francisco, in collaboration with researchers at the University of California, Berkeley, appears on April 21 in the journal JAMA Dermatology.

"Existing research on air pollution and health outcomes has focused primarily on cardiac and respiratory health outcomes, and understandably so. But there is a gap in the research connecting air pollution and skin health," said study lead author Raj Fadadu, a student in the UC Berkeley-UCSF Joint Medical Program. "Skin is the largest organ of the human body, and it's in constant interaction with the external environment. So, it makes sense that changes in the external environment, such as increases or decreases in air pollution, could affect our skin health."

Air Pollutants Can Slip through Skin Barriers

Air pollution from wildfires, which consists of fine particulate matter (PM2.5), polycyclic aromatic hydrocarbons (PAHs), and gases, can impact both normal and eczema-prone skin in a variety of ways. These pollutants often contain chemical compounds that act like keys, allowing them to slip past the skin's outer barrier and penetrate into cells, where they can disrupt gene transcription, trigger oxidative stress or cause inflammation.

Eczema, or atopic dermatitis, is a chronic condition which affects the skin's ability to serve as an effective barrier against environmental factors. Because the skin's barrier has been compromised, people with this condition are prone to flare-ups of red, itchy skin in response to irritants, and may be even more prone to harm from air pollution.

"Skin is a very excellent physical barrier that separates us and protects us from the environment," said study senior author Dr. Maria Wei, a dermatologist and melanoma specialist at UCSF. "However, there are certain skin disorders, such as atopic dermatitis, in which the barrier is not fully functional. It's not normal even when you don't have a rash. So, it would make sense that when exposed to significant air pollution, people with this condition might see an effect on the skin."

Even a Short Burst of Air Pollution During the Camp Fire Harms Skin Health

Earlier studies have found a link between atopic dermatitis and air pollution in cities with high background levels of air pollution from cars and industry. However, this is the first study to examine the impacts of a very short burst of extremely hazardous air from wildfires. Despite being located 175 miles away from the Camp Fire, San Francisco saw an approximately nine-fold increase over baseline PM2.5 levels during the blaze.

To conduct the study, the team examined data from more than 8,000 visits by adults and children to dermatology clinics during the October-to-February periods beginning in 2015, 2016 and 2018. They found that, during the Camp Fire, clinic visits for atopic dermatitis and general itch increased significantly in both adult and pediatric patients.

"Fully 89 percent of the patients that had itch during the time of the Camp Fire did not have a known diagnosis of atopic dermatitis, suggesting that folks with normal skin also experienced irritation and/or absorption of toxins within a very short period of time," Wei said.

While skin conditions like eczema and itch may not be as life-threatening as the respiratory and cardiovascular impacts of wildfire smoke, they can still severely impact people's lives, the researchers say. The study also documented increased rates of prescribed medications, such as steroids, during times of high air pollution, suggesting that patients can experience severe symptoms.

Individuals can protect their skin during wildfire season by staying indoors, wearing clothing that covers the skin if they do go outside, and using emollients, which can strengthen the skin's barrier function. A new medication to treat eczema, called Tapinarof, is now in clinical trials and could also be a useful tool during times of bad air.

"A lot of the conversations about the health implications of climate change and air pollution don't focus on skin health, but it's important to recognize that skin conditions do affect people's quality of life, their social interactions and how they feel psychologically," Fadadu said. "I hope that these health impacts can be more integrated into policies and discussions about the wide-ranging health effects of climate change and air pollution."

Credit: 
University of California - San Francisco

Scientists reveal origin of neuronal diversity in hypothalamus

Image: The cascade diversifying model for generating extreme neuronal diversity in the hypothalamus (Credit: IGDB)

A mechanistic understanding of brain development requires a systematic survey of neural progenitor cell types, their lineage specification, and the maturation of postmitotic neurons. Cumulative evidence from single-cell transcriptomic analysis has revealed the heterogeneity of cortical neural progenitors, their temporal patterning, and the developmental trajectories of excitatory and inhibitory neurons in the developing neocortex. Nevertheless, the developmental hierarchy of the hypothalamus, which contains an astounding diversity of neurons that regulate endocrine, autonomic and behavioral functions, has not been well understood.

Recently, however, Prof. WU Qingfeng's group from the Institute of Genetics and Developmental Biology of the Chinese Academy of Sciences (CAS) conducted a study focusing on the origin of this neuronal diversity. The researchers profiled the transcriptomes of 43,261 hypothalamic neural cells to map the developmental landscape of the mouse hypothalamus and traced the trajectory of radial glial cells (RGCs), intermediate progenitor cells (IPCs), nascent neurons and peptidergic neurons.

The researchers found that RGCs adopt a conserved strategy for multipotential differentiation, but generate both Ascl1+ and Neurog2+ IPCs. Ascl1+ IPCs differ from their telencephalic counterpart by displaying fate bifurcation whereby they can differentiate into both glutamatergic (excitatory) and GABAergic (inhibitory) neurons. Postmitotic nascent neurons derived from IPCs further resolve into multiple peptidergic neuronal subtypes. Clonal analysis also demonstrates that single RGCs can produce multiple neuronal subtypes.

This finding reveals that multiple cell types along the lineage hierarchy contribute to the fate diversification of hypothalamic neurons in a stepwise fashion, thus uncovering an effective strategy for neural progenitors to generate extreme neuronal diversity.

Furthermore, this study provides a developmental perspective for understanding hypothalamus plasticity and gaining valuable insights into hypothalamic diseases such as anorexia, narcolepsy and insomnia.

This work, entitled "Cascade diversification directs generation of neuronal diversity in the hypothalamus," was published in Cell Stem Cell on April 21.

Credit: 
Chinese Academy of Sciences Headquarters

'Ice cube tray' scaffold is next step in returning sight to injured retinas

Image: This electron microscope image of the new "scaffolding" for growing and implanting retinal cells shows the ice-cube-tray-shaped reservoirs that hold cells and cylinder-shaped holes in the bottom layer, which provide channels for maturing photoreceptors to make contact with a patient's retinal tissue. (Credit: Ma Lab)

MADISON, Wis. -- Tens of millions of people worldwide are affected by diseases like macular degeneration or have had accidents that permanently damage the light-sensitive photoreceptors within their retinas that enable vision.

The human body is not capable of regenerating those photoreceptors, but new advances by medical researchers and engineers at the University of Wisconsin-Madison may provide hope for those suffering from vision loss. They described their work today in the journal Science Advances.

Researchers at UW-Madison have made new photoreceptors from human pluripotent stem cells. However, it remains challenging to precisely deliver those photoreceptors within the diseased or damaged eye so that they can form appropriate connections, says David Gamm, director of the McPherson Eye Research Institute and professor of ophthalmology and visual sciences at the UW School of Medicine and Public Health.

"While it was a breakthrough to be able to make the spare parts -- these photoreceptors -- it's still necessary to get them to the right spot so they can effectively reconstruct the retina," he says. "So, we started thinking, 'How can we deliver these cells in a more intelligent way?' That's when we reached out to our world-class engineers at UW-Madison."

Gamm is collaborating with colleagues Shaoqin (Sarah) Gong, a professor of biomedical engineering, Wisconsin Institute for Discovery faculty member and an expert in biomaterials, and Zhenqiang (Jack) Ma, a professor of electrical and computer engineering and an expert in semiconductors whose lab is experienced in sophisticated micro- and nanofabrication. Together, their research groups have developed a micro-molded scaffolding photoreceptor "patch" designed to be implanted under a damaged or diseased retina.

In 2018, the team developed its first biodegradable polymer scaffolding with wine-glass-shaped pores to hold the photoreceptor cells in place. However, that design wasn't optimal since it could not fit many photoreceptors in each pore.

In this second-generation scaffold, the team opted for an "ice cube tray" design, which can hold three times as many cells while reducing the amount of biomaterial used for the scaffolding to facilitate faster degradation of the synthetic material within the eye.

Gong and her team, led by graduate student Ruosen (Alex) Xie, screened a long list of potential biomaterials before deciding on poly(glycerol-sebacate), or PGS, a material that is compatible with the retina and can be safely metabolized by the body after degradation. The Gong lab optimized the formulation and further developed a curing process to achieve desirable material properties for making the scaffolds.

"We wanted the material to be very strong," says graduate student and co-first author Allison Ludwig, who works in Gamm's lab, "and in the eye, it degrades pretty quickly over about two months. That's ideal for the human retina."

The process of crafting the scaffold with the desired mechanical strength and precise dimensions was performed by co-first author Inkyu Lee and graduate student Juhwan Lee, who work in Ma's lab. To achieve highly ordered 3D ice cube tray-shaped microstructures from the biodegradable and biocompatible PGS films with micron-sized features, they developed multi-step micro-molding techniques that can transfer patterns to flexible polymer films.

The final scaffold fabrication work was tedious and frustrating. Fractures and imperfections occurred on the soft scaffolds during demounting from the micro molds, rendering the micro molds inoperable for further use -- but Inkyu Lee ultimately discovered that soaking the scaffold in isopropyl alcohol allowed it to release cleanly.

"The fabrication processes creating a scaffold with micron-sized features involve a lot of person-dependent technical handling skills, which makes the production of scaffolds with a uniform quality difficult," he says. "I wanted to achieve something that is repeatable regardless of an operator's handling skills. I was enlightened by the fact that the PGS polymer swells in isopropyl alcohol. Exploiting this property ultimately facilitated the release of the scaffolds from the micro molds."

Using this approach, Ma's lab was able to reliably demount the scaffold from micro molds without surface defects and retain the mold's microstructures, maintaining the mold surface integrity for reuse. In the end, microscopy revealed the fabrication technique was a success, reliably reproducing a perfect ice cube tray-shaped scaffold capable of holding more than 300,000 photoreceptors in approximately the area of the human macula, the center of the retina.

"Overall, the results are very exciting and significant," says Ma. "Once we figured out the recipe, mass production became immediately possible, and commercialization will be very easy. The fabrication methods can be used to create many other types of soft scaffolds for various biomedical applications, such as complicated tissue engineering, etc."

The team has disclosed the scaffold structure and the fabrication method to the Wisconsin Alumni Research Foundation, which has filed a patent application.

The team plans to continue optimizing its scaffolding shape, fabrication technique and bio-resorbable materials for faster production to satisfy future surgical needs. In the meantime, the current iteration of the scaffolding patch is almost ready for surgical testing in large animals. If successful, the patch will eventually be tested in humans.

"We're hoping these early generation retinal patches will be safe and restore some vision. Then we'll be able to innovate and improve upon the technology and the outcomes over time," says Gamm. "We didn't start out with supercomputers on our wrists and we're not going to start out by completely erasing blindness in our first attempt. But we're very excited about taking a significant step in that direction."

Credit: 
University of Wisconsin-Madison

Scientists find CO2-rich liquid water in ancient meteorite

Image: (A) Inclusions in a calcite grain in the Sutter's Mill meteorite recognized by X-ray nanotomography. Fluids were not detected in relatively large inclusions, as they had already escaped. (B) TEM image of a nano-sized inclusion filled with CO2-bearing fluid (indicated by arrow). (C) H2O, CO2, and CO snow lines and Sutter's Mill parent body formation. The formation region can be estimated from the presence of the CO2-bearing fluid. The nebular accretion rate corresponds to the time axis for the evolution of the early solar system. (Credit: Dr. Akira Tsuchiyama, Ritsumeikan University)

Water is abundant in our solar system. Even outside of our own planet, scientists have detected ice on the moon, in Saturn's rings and in comets, liquid water on Mars and under the surface of Saturn's moon Enceladus, and traces of water vapor in the scorching atmosphere of Venus. Studies have shown that water played an important role in the early evolution and formation of the solar system. To learn more about this role, planetary scientists have searched for evidence of liquid water in extraterrestrial materials such as meteorites, most of which originate from asteroids that formed in the early history of the solar system.

Scientists have even found water in meteorites in the form of hydrous minerals, which are solids with hydroxyl groups or water molecules incorporated into their structure. Dr. Akira Tsuchiyama, Visiting Research Professor at Ritsumeikan University, says, "Scientists further expect that liquid water should remain as fluid inclusions in minerals that precipitated in aqueous fluid" (or, to put it simply, formed from drops of water containing various other substances dissolved within them). Scientists have found such liquid water inclusions inside salt crystals located within a class of meteorites known as ordinary chondrites, which represent the vast majority of all meteorites found on Earth, though the salt actually originated from other, more primitive parent objects.

Prof. Tsuchiyama and his colleagues wanted to know whether liquid water inclusions are present in a form of calcium carbonate known as calcite within a class of meteorites known as "carbonaceous chondrites", which come from asteroids that formed very early in the history of the solar system. They therefore examined samples of the Sutter's Mill meteorite, a carbonaceous chondrite originating in an asteroid that formed 4.6 billion years ago. The results of their investigation, led by Prof. Tsuchiyama, appear in an article recently published in the prestigious journal Science Advances.

The researchers used advanced microscopy techniques to examine the Sutter's Mill meteorite fragments, and they found a calcite crystal containing a nanoscale aqueous fluid inclusion that contains at least 15% carbon dioxide. This finding confirms that calcite crystals in ancient carbonaceous chondrites can indeed contain not only liquid water, but also carbon dioxide.

The presence of liquid water inclusions within the Sutter's Mill meteorite has interesting implications concerning the origins of the meteorite's parent asteroid and the early history of the solar system. The inclusions likely occurred due to the parent asteroid forming with bits of frozen water and carbon dioxide inside of it. This would require the asteroid to have formed in a part of the solar system cold enough for water and carbon dioxide to freeze, and these conditions would place the site of formation far outside of Earth's orbit, likely beyond even the orbit of Jupiter. The asteroid must then have been transported to the inner regions of the solar system where fragments could later collide with the planet Earth. This assumption is consistent with recent theoretical studies of the solar system's evolution that suggest that asteroids rich in small, volatile molecules like water and carbon dioxide formed beyond Jupiter's orbit before being transported to areas closer to the sun. The most likely cause of the asteroid's transportation into the inner solar system would be the gravitational effects of the planet Jupiter and its migration.

In conclusion, the discovery of water inclusions within a carbonaceous chondrite meteorite from the early history of the solar system is an important achievement for planetary science. Prof. Tsuchiyama proudly notes, "This achievement shows that our team could detect a tiny fluid trapped in a mineral 4.6 billion years ago."

By obtaining chemical snapshots of an ancient meteorite's contents, his team's work can provide important insights into processes at work in the solar system's early history.

Credit: 
Ritsumeikan University

In calculating the social cost of methane, equity matters

What is the cost of 1 ton of a greenhouse gas? When a climate-warming gas such as carbon dioxide or methane is emitted into the atmosphere, its impacts may be felt years and even decades into the future - in the form of rising sea levels, changes in agricultural productivity, or more extreme weather events, such as droughts, floods, and heat waves. Those impacts are quantified in a metric called the "social cost of carbon," considered a vital tool for making sound and efficient climate policies.

Now a new study by a team including researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley reports that the social cost of methane - a greenhouse gas that is 30 times as potent as carbon dioxide in its ability to trap heat - varies by as much as an order of magnitude between industrialized and developing regions of the world.

Published recently in the journal Nature, the study finds that by accounting for economic inequalities between countries and regions, the social cost of methane drops by almost a factor of 10 in sub-Saharan Africa and jumps by almost a factor of 10 for industrialized countries, such as the United States. The study calculated a global mean estimate of the social cost of methane of $922 per metric ton (not accounting for the inequity), decreasing to $130 per metric ton for sub-Saharan Africa and rising to $8,040 per metric ton for the U.S.

"The paper broadly supports the previous U.S. government estimates of the social cost of methane, but if you use the number the way it's typically used - as a global estimate, as if all countries are equal - then it doesn't account for the inequities," said Berkeley Lab scientist William Collins, one of the study's co-authors.

The lead authors of the study were David Anthoff, a professor in UC Berkeley's Energy and Resources Group, and Frank Errickson, a graduate student in the group at the time of the study. "The Biden administration's climate policy agenda calls for prioritizing environmental justice and equity. We provide a way for them to directly incorporate concerns for equity in methane emission regulations," said Errickson, now a postdoctoral fellow at Princeton University. "Our results capture that the same climate impact, when measured in dollars, causes a greater loss in well-being for low-income regions relative to wealthy ones."

Like the social cost of carbon, the social cost of methane is a metric that is not widely used by the public but is increasingly used by government agencies and corporations in making decisions around policies and capital investments. By properly accounting for future damages that may be caused by greenhouse gas emissions, policymakers can weigh present costs against future avoided harms. In fact, the recent White House executive order on the climate crisis established a working group to provide an accurate accounting of the social costs of carbon, methane, and nitrous oxide within a year.

"President Biden's action represents a much-needed return of science-based policy in the United States," said Anthoff. "Devastating weather events and wildfires have become more common, and the costs of climate impacts are mounting."

"The social costs of methane and carbon dioxide are used directly in cost-benefit analyses all the time," Collins said. "You have to figure out how to maximize the benefit from a dollar spent on mitigating methane emissions, as opposed to any of the other ways in which one might choose to spend that dollar. You want to make sure that you are not using a gold-plated band-aid."

Given the current estimate of global methane emissions of 300 million metric tons per year, that puts the annual social cost of methane at nearly $300 billion, said Collins, the head of Berkeley Lab's Climate and Ecosystem Sciences Division and also a professor in UC Berkeley's Earth and Planetary Science Department. "Wet areas will get wetter and dry areas drier, so there's an increase in severity of storms and droughts," he said. "The cost would include all the things that flow from that, such as damaged infrastructure, increased expenditures around keeping places cool, health risks associated with heat, and so on."
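The figures quoted in the article can be cross-checked with a quick back-of-the-envelope calculation (a sketch using only the numbers reported above; the rounding is ours):

```python
# Back-of-the-envelope check of the social-cost-of-methane figures
# quoted in the article.
GLOBAL_MEAN_COST = 922    # USD per metric ton of methane (global mean)
COST_SSA = 130            # USD per metric ton, sub-Saharan Africa
COST_US = 8_040           # USD per metric ton, United States
ANNUAL_EMISSIONS = 300e6  # metric tons of methane emitted per year

# Annual global social cost: about $277 billion, i.e. "nearly $300 billion"
annual_cost = GLOBAL_MEAN_COST * ANNUAL_EMISSIONS
print(f"Annual social cost: ${annual_cost / 1e9:.0f} billion")

# Regional adjustments relative to the global mean, each described
# in the study as "almost a factor of 10"
print(f"Sub-Saharan Africa: {GLOBAL_MEAN_COST / COST_SSA:.1f}x lower")
print(f"United States:      {COST_US / GLOBAL_MEAN_COST:.1f}x higher")
```

Multiplying the $922-per-ton global mean by 300 million tons of annual emissions gives roughly $277 billion, consistent with the "nearly $300 billion" figure, while the regional ratios work out to about 7x and 9x the global mean.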

While some methane comes from natural sources - mostly wetlands - about 60% of methane emissions come from human activity, including agriculture, fossil fuel production, landfills, and livestock production. It is considered a short-lived climate pollutant, staying in the atmosphere for only a decade or so, compared to more than 100 years for carbon dioxide.

"Given its potency as a greenhouse gas, regulating emissions of methane has long been recognized as a critical component for designing an economically efficient climate policy," said Anthoff. "Our study updates the social cost of methane estimates and fills a critical gap in determining social costs."

Under the Obama administration, the price was estimated at about $1,400 per metric ton. The Berkeley researchers made a technical correction in accounting for offsetting influences on the climate system, arriving at a global mean estimate of $922 per metric ton. "We're suggesting they slightly overestimated it," Collins said.

More importantly, the uncertainty around the social cost of methane now comes from the social side, not the physics. "As a climate scientist, we've been busy trying to improve our estimates of the warming caused by methane," Collins said. "But it turns out the physics side is no longer the major source of uncertainty in the social cost of methane. It's now moved to the socio-economic sector, accounting for the damages and inequities."

How societies choose to develop in the future - for example, by expanding cities along coastlines or in areas prone to flooding or wildfires, or by moving away from such areas - is a big unknown. "If we choose to mitigate climate change more aggressively, the social cost of methane drops drastically," Collins said.

"Continuing our work to further explore the relationship between climate change and socioeconomic uncertainties - not to mention the complex but important issues that arise when we account for equity - is a promising area for future research and policy exploration," said Anthoff.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Astronomers release new all-sky map of the Milky Way's outer reaches

video: The Large Magellanic Cloud (LMC) sailing through the Milky Way's galactic halo like a ship through water, its gravity creating a wake in the stars behind it.

Image: 
NASA/JPL-Caltech/NSF/R. Hurt/N. Garavito-Camargo & G. Besla

Astronomers using data from NASA and the ESA (European Space Agency) telescopes have released a new all-sky map of the outermost region of our galaxy. Known as the galactic halo, this area lies outside the swirling spiral arms that form the Milky Way's recognizable central disk and is sparsely populated with stars. Though the halo may appear mostly empty, it is also predicted to contain a massive reservoir of dark matter, a mysterious and invisible substance thought to make up the bulk of all the mass in the universe.

The data for the new map comes from ESA's Gaia mission and NASA's Near Earth Object Wide Field Infrared Survey Explorer, or NEOWISE, which operated from 2009 to 2013 under the moniker WISE. The study, led by astronomers at the Center for Astrophysics | Harvard & Smithsonian and published today in Nature, makes use of data collected by the spacecraft between 2009 and 2018.

The new map reveals how a small galaxy called the Large Magellanic Cloud (LMC) - so-named because it is the larger of two dwarf galaxies orbiting the Milky Way - has sailed through the Milky Way's galactic halo like a ship through water, its gravity creating a wake in the stars behind it. The LMC is located about 160,000 light-years from Earth, and is less than one quarter the mass of the Milky Way. Though the inner portions of the halo have been mapped with a high level of accuracy, this is the first map to provide a similar picture of the halo's outer regions, where the wake is found - about 200,000 light years to 325,000 light years from the galactic center. Previous studies have hinted at the wake's existence, but the all-sky map confirms its presence and offers a detailed view of its shape, size, and location.

This disturbance in the halo also provides astronomers with an opportunity to study something they can't observe directly: dark matter. Though it doesn't emit, reflect, or absorb light, the gravitational influence of dark matter has been observed across the universe. It is thought to create a scaffolding on which galaxies are built, such that without it, galaxies would fly apart as they spin. Dark matter is estimated to be five times more common in the universe than all the matter that emits or interacts with light, from stars to planets to gas clouds.

While there are multiple theories about the nature of dark matter, all of them indicate that it should be present in the Milky Way's halo. If that's the case, then as the LMC sails through this region, it should leave a wake in the dark matter as well. The wake observed in the new star map is thought to be the outline of this dark matter wake; the stars are like leaves on the surface of this invisible ocean, their position shifting with the dark matter.

The interaction between the dark matter and the Large Magellanic Cloud has big implications for our galaxy. As the LMC orbits the Milky Way, the dark matter's gravity drags on the LMC and slows it down. This will cause the dwarf galaxy's orbit to get smaller and smaller, until the galaxy finally collides with the Milky Way in about 2 billion years. These types of mergers might be a key driver in the growth of massive galaxies across the universe. In fact, astronomers think the Milky Way merged with another small galaxy about 10 billion years ago.

"This robbing of a smaller galaxy's energy is not only why the LMC is merging with the Milky Way but also why all galaxy mergers happen," said Rohan Naidu, a graduate student in astronomy at Harvard University and a co-author of the new paper. "The wake in our map is a really neat confirmation that our basic picture for how galaxies merge is on point!"

A Rare Opportunity

The authors of the paper also think the new map - along with additional data and theoretical analyses - may provide a test for different theories about the nature of dark matter, such as whether it consists of particles, like regular matter, and what the properties of those particles are.

"You can imagine that the wake behind a boat will be different if the boat is sailing through water or through honey," said study co-author Charlie Conroy, a professor at Harvard University and astronomer at the Center for Astrophysics. "In this case, the properties of the wake are determined by which dark matter theory we apply."

Conroy led the team that mapped the positions of over 1,300 stars in the halo. The challenge arose in trying to measure the exact distance from Earth to a large portion of those stars: It's often impossible to figure out if a star is faint and close by or bright and far away. The team used data from ESA's Gaia mission, which provides the location of many stars in the sky but cannot measure distances to the stars in the Milky Way's outer regions.

After identifying stars most likely located in the halo (because they were not obviously inside our galaxy or in the LMC), the team looked for stars that belong to a class of giant stars that have a specific light "signature" detectable by NEOWISE. Knowing the basic properties of the selected stars enabled the team to figure out their distance from Earth and create the new map. It charts a region starting about 200,000 light-years from the Milky Way's center, or about where the LMC's wake was predicted to begin, and extends about 125,000 light-years beyond that.

Conroy and his colleagues were inspired to hunt for the LMC's wake after learning about a team of astrophysicists at the University of Arizona in Tucson who make computer models predicting what dark matter in the galactic halo should look like. The two groups worked together on the new study. One of the Arizona team's models, included in the new study, predicted the general structure and specific location of the star wake revealed in the new map. Once the data had confirmed that the model was correct, the team was able to confirm what other investigations have also hinted at: that the LMC is likely on its first orbit around the Milky Way. If the smaller galaxy had already made multiple orbits, the shape and location of the wake would be significantly different from what has been observed. Astronomers think the LMC formed in the same environment as the Milky Way and another nearby galaxy, M31, and has been on a very long first orbit around our galaxy, lasting about 13 billion years. Its next orbit will be much shorter due to its interaction with the Milky Way.

"Confirming our theoretical prediction with observational data tells us that our understanding of the interaction between these two galaxies, including the dark matter, is on the right track," said University of Arizona doctoral student in astronomy Nicolás Garavito-Camargo, who led work on the model used in the paper.

The new map also provides astronomers with a rare opportunity to test the properties of the dark matter (the notional water or honey) in our own galaxy. In the new study, Garavito-Camargo and colleagues used a popular dark matter theory called cold dark matter that fits the observed star map relatively well. Now the University of Arizona team is running simulations that use different dark matter theories, to see which one best matches the wake observed in the stars.

"It's a really special set of circumstances that came together to create this scenario that lets us test our dark matter theories," said Gurtina Besla, a co-author of the study and an associate professor at the University of Arizona. "But we can only realize that test with the combination of this new map and the dark matter simulations that we built."

Credit: 
Center for Astrophysics | Harvard & Smithsonian

Mechanical engineers develop new high-performance artificial muscle technology

image: Cavatappi pasta (A) and the actuators developed (C-H) from the simple drawn polymer tubes (B)

Image: 
Northern Arizona University

In the field of robotics, researchers are continually looking for the fastest, strongest, most efficient and lowest-cost ways to actuate, or enable, robots to make the movements needed to carry out their intended functions.

The quest for new and better actuation technologies and 'soft' robotics is often based on principles of biomimetics, in which machine components are designed to mimic the movement of human muscles--and ideally, to outperform them. Despite the performance of actuators like electric motors and hydraulic pistons, their rigid form limits how they can be deployed. As robots transition to more biological forms and as people ask for more biomimetic prostheses, actuators need to evolve.

Associate professor (and alum) Michael Shafer and professor Heidi Feigenbaum of Northern Arizona University's Department of Mechanical Engineering, along with graduate student researcher Diego Higueras-Ruiz, published a paper in Science Robotics presenting a new, high-performance artificial muscle technology they developed in NAU's Dynamic Active Systems Laboratory. The paper, titled "Cavatappi artificial muscles from drawing, twisting, and coiling polymer tubes," details how the new technology enables more human-like motion due to its flexibility and adaptability, but outperforms human skeletal muscle in several metrics.

"We call these new linear actuators cavatappi artificial muscles based on their resemblance to the Italian pasta," Shafer said.

Because of their coiled, or helical, structure, the actuators can generate more power, making them an ideal technology for bioengineering and robotics applications. In the team's initial work, they demonstrated that cavatappi artificial muscles exhibit specific work and power metrics ten and five times higher than human skeletal muscles, respectively, and as they continue development, they expect to produce even higher levels of performance.

"The cavatappi artificial muscles are based on twisted polymer actuators (TPAs), which were pretty revolutionary when they first came out because they were powerful, lightweight and cheap. But they were very inefficient and slow to actuate because you had to heat and cool them. Additionally, their efficiency is only about two percent," Shafer said. "For the cavatappi, we get around this by using pressurized fluid to actuate, so we think these devices are far more likely to be adopted. These devices respond about as fast as we can pump the fluid. The big advantage is their efficiency. We have demonstrated contractile efficiency of up to about 45 percent, which is a very high number in the field of soft actuation."

The engineers think this technology could be used in soft robotics applications, conventional robotic actuators (for example, for walking robots), or even potentially in assistive technologies like exoskeletons or prostheses.

"We expect that future work will include the use of cavatappi artificial muscles in many applications due to their simplicity, low-cost, lightweight, flexibility, efficiency and strain energy recovery properties, among other benefits," Shafer said.

Technology is available for licensing, partnering opportunities

Working with the NAU Innovations team, the inventors have taken steps to protect their intellectual property. The technology has entered the protection and early commercialization stage and is available for licensing and partnering opportunities. For more information, please contact NAU Innovations.

Shafer joined NAU in 2013. His other research interests are related to energy harvesting, wildlife telemetry systems and unmanned aerial systems. Feigenbaum joined NAU in 2007, and her other research interests include ratcheting in metals and smart materials. The graduate student on this project, Diego Higueras-Ruiz, received his MS in Mechanical Engineering from NAU in 2018 and will be completing his PhD in Bioengineering in Fall 2021. This work has been supported through a grant from NAU's Research and Development Preliminary Studies program.

Credit: 
Northern Arizona University

Esophageal cancer: Discovery of the mechanisms involved

Metaplasia is defined as the replacement of one fully differentiated cell type by another. Among the classical examples of metaplasia, one of the most frequent is Barrett's oesophagus, characterized by the replacement of keratinocytes by columnar cells in the lower oesophagus upon chronic acid reflux. This metaplasia is considered a precancerous lesion that increases the risk of oesophageal adenocarcinoma by around 50 times. Nonetheless, the mechanisms involved in the development of metaplasia in the oesophagus remain only partially understood.

In a new study published in Cell Stem Cell, researchers led by Benjamin Beck (FNRS research associate and WELBIO investigator at the IRIBHM, Université libre de Bruxelles, Belgium) report the mechanisms involved in the transdifferentiation of oesophageal keratinocytes into columnar cells.

Alizée Vercauteren Drubbel and colleagues used state-of-the-art genetic tools and mouse models to dissect the molecular mechanisms by which cells from the oesophagus can contribute to metaplasia. In collaboration with Prof. Sachiyo Nomura (Tokyo Medical University, Japan), they demonstrated that the Hedgehog pathway is reactivated in epithelial cells upon chronic acid reflux. Reactivating this pathway alone in normal oesophageal cells is enough to make them resemble embryonic oesophageal cells. Subsequently, a subset of these cells is converted into columnar cells. "It was really surprising to see the cells from the oesophagus slowly changing over time and getting features of other tissues just by activating a signaling pathway in vivo," comments Alizée Vercauteren Drubbel, the first author of this study.

The authors demonstrate that the Hedgehog pathway alters the squamous differentiation program in virtually all the oesophageal cells but induces a full squamous-to-columnar conversion in a subset of progenitors only. Interestingly, an embryonic-like epigenetic and transcriptomic program precedes the columnar conversion, suggesting that keratinocytes need to be dedifferentiated before activating another differentiation program. Conditional knockout in vivo demonstrates that the transcription factor Sox9 plays a pivotal role in the process of columnar conversion.

In conclusion, this work highlights mechanisms modulating cellular plasticity that may constitute the very first step of transdifferentiation and metaplasia development in the oesophagus. Oesophageal adenocarcinoma incidence has dramatically increased over the past decades, an increase that appears to result from the growing prevalence of Barrett's oesophagus. Hence, "we hope that a better understanding of the processes involved in the development of metaplasia and their progression into cancer will help detect people at high risk of developing cancer," comments Benjamin Beck, the last and corresponding author of this study.

Credit: 
Université libre de Bruxelles

SARS-CoV-2: Infection induces antibodies capable of killing infected cells

image: Antibodies kill infected cells by activating NK cells. In green, an infected cell, cultured in the presence of antibodies (invisible) and NK cells (white arrow). The NK cell comes into contact with the infected cell, then destroys it (the dying cell turns red). These images were taken using video microscopy.

Image: 
© Virus & Immunity Unit, Institut Pasteur

Drawing on epidemiological field studies and the FrenchCOVID hospital cohort coordinated by Inserm, teams from the Institut Pasteur, the CNRS and the Vaccine Research Institute (VRI, Inserm/University Paris-Est Créteil) studied the antibodies induced in individuals with asymptomatic or symptomatic SARS-CoV-2 infection. The scientists demonstrated that infection induces polyfunctional antibodies. Beyond neutralization, these antibodies can activate NK (natural killer) cells or the complement system, leading to the destruction of infected cells. Antibody levels are slightly lower in asymptomatic as opposed to symptomatic individuals, but polyfunctional antibodies were found in all individuals. These findings show that infection induces antibodies capable of killing infected cells regardless of the severity of the disease. The research was published in the journal Cell Reports Medicine on April 21, 2021.

Nearly half of those infected with SARS-CoV-2 do not develop symptoms. Yet the immune response induced by asymptomatic forms of COVID-19 remains poorly characterized, as does the extent of the antiviral functions of SARS-CoV-2 antibodies. Antibodies are capable of both neutralizing the virus and activating "non-neutralizing" functions. The latter include antibody-dependent cellular cytotoxicity (ADCC) and complement activation, which are major components of the immune response and play a key role in the efficacy of some vaccines. ADCC is a two-stage process in which infected cells are first recognized by antibodies, then destroyed by NK cells. The complement system consists of a series of plasma proteins that also enable the elimination of cells targeted by antibodies. The ability of antibodies to activate these non-neutralizing functions had so far been little described for SARS-CoV-2 infection.

The teams from the Institut Pasteur, the CNRS and the VRI (Inserm/University Paris-Est Créteil) initially developed new assays to measure the various antibody functions. They produced assays to study cell death induced by NK cells or by complement in the presence of antibodies. By analyzing cultures in real time using video microscopy, the scientists showed that NK cells kill infected cells in the presence of antibodies, demonstrating new antiviral activity employed by SARS-CoV-2 antibodies.

The scientists then examined the serum of patients with symptomatic or asymptomatic forms of COVID-19 with their new assays. They also used methods previously developed at the Institut Pasteur, such as the S-Flow assay, to detect SARS-CoV-2 anti-spike antibodies, and the S-Fuse assay, to measure the neutralization capacity of these antibodies.

"This study demonstrated that individuals infected with SARS-CoV-2 have antibodies that are capable of attacking the virus in different ways, by preventing it from entering cells (neutralization) or by activating NK cells to kill infected cells (via ADCC). We therefore use the term polyfunctional antibodies," explains Timothée Bruel, co-last author of the study and a scientist in the Institut Pasteur's Virus & Immunity Unit and at the VRI.

By comparing different groups of patients, the scientists then showed that asymptomatic individuals also have polyfunctional antibodies and that their response is slightly weaker than that of patients with moderate forms of COVID-19.

"The study reveals new mechanisms of action of SARS-CoV-2 antibodies and suggests that the protection induced by an asymptomatic infection is very close to that observed after a symptomatic infection," concludes Olivier Schwartz, co-last author of the study, head of the Virus & Immunity Unit and at the VRI.

Credit: 
Institut Pasteur

Genes linked to creativity were the "secret weapon" in the survival of Homo sapiens

image: Comparison between Homo sapiens and Homo neanderthalensis

Image: 
University of Granada

Creativity--the "secret weapon" of Homo sapiens--constituted a major advantage over Neanderthals and played an important role in the survival of the human species. This is the finding of an international team of scientists, led by the University of Granada (UGR), which has identified for the first time a series of 267 genes linked to creativity that differentiate Homo sapiens from Neanderthals.

This important scientific finding, published today in the prestigious journal Molecular Psychiatry (Nature), suggests that it was these genetic differences linked to creativity that enabled Homo sapiens to eventually replace Neanderthals. It was creativity that gave Homo sapiens the edge, above and beyond the purely cognitive level, by facilitating superior adaptation to the environment compared to that of now-extinct hominids and providing greater resilience to ageing, injury, and disease.

The research team comprises Igor Zwir, Coral del Val, Rocío Romero, Javier Arnedo, and Alberto Mesa from the UGR's Department of Computer Science and Artificial Intelligence, the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), and the Biohealth Research Institute in Granada (ibs.GRANADA), together with Robert Cloninger of Washington University in St. Louis and colleagues from the Young Finns Study (Finland), the American Museum of Natural History (New York), and the Menninger Clinic (Houston, Texas).

Their findings are the result of an interdisciplinary study that brings together Artificial Intelligence (AI), Molecular Genetics, Neurosciences, Psychology, and Anthropology. This is the fifth consecutive paper published by this research team in one of the most prestigious scientific journals in the field of human personality research.

The 267 genes identified by these scientists as being unique to Homo sapiens are part of a larger group of 972 that are linked to personality in healthy adults and were also discovered by the same authors. In previous studies, they showed that these 972 genes are organized into three dissociable brain networks of personality traits that are responsible for learning and memory.

Evolution of genetic networks

"These networks evolved in stages. The most primitive network emerged among monkeys and apes about 40 million years ago, and is responsible for emotional reactivity--in other words, it regulates impulses, the learning of habits, social attachment, and conflict-resolution," explain the UGR researchers. Less than 2 million years ago, the second network emerged. This regulates intentional self-control: self-direction and social cooperation for mutual benefit. Finally, about 100,000 years ago, the network relating to creative self-awareness emerged.

The new study that is published this week reveals that the genes of the oldest network, that of emotional reactivity, were almost identical in Homo sapiens, Neanderthals, and chimpanzees. By contrast, the genes linked to self-control and self-awareness among Neanderthals were "halfway between" those of chimpanzees and Homo sapiens.

Most of these 267 genes that distinguish modern humans from Neanderthals and chimpanzees are RNA regulatory genes and not protein-coding genes. Almost all of the latter are the same across all three species, and this research shows that what distinguishes them is the regulation of expression of their proteins by genes found exclusively in humans. Using genetic markers, gene-expression data, and integrated brain magnetic resonance imaging based on AI techniques, the scientists were able to identify the regions of the brain in which those genes (and those with which they interacted) were overexpressed. These regions are involved in human self-awareness and creativity, and include the regions that are strongly associated with human well-being and that appeared relatively recently, phylogenetically speaking.

Superior resilience

Furthermore, the authors continue, "thanks to these genes, Homo sapiens enjoyed greater physical fitness than now-extinct hominids, providing them with a superior level of resilience to ageing, injury, and disease." Using genetic data, the researchers were able to estimate from these genes that the adaptability and well-being of Neanderthals were approximately 60%-70% those of Homo sapiens, meaning that the difference between them in terms of physical fitness was significant.

The findings have far-reaching implications in our understanding of the factors that ultimately enabled Homo sapiens to replace Neanderthals and other species in the geologically-recent past. The authors hypothesize that creativity may have given Homo sapiens selective advantages beyond the purely cognitive realm.

"Living longer and healthier lives may have prolonged the period of learning associated with youth and adolescence, which would facilitate the accumulation of knowledge. This is a remarkable characteristic of behaviourally-modern humans and an important factor in economic and social success," explain the researchers. Creativity may have encouraged cooperation between individuals in a bid to encourage success among their descendants and their community. This would have set the stage for technological innovation, behavioural flexibility, and openness to exploration, all of which were necessary for Homo sapiens to spread across the world more successfully than other human lineages.

In the five studies published to date by these researchers in Nature, they have found--and verified using multiple data sources--that human behaviour is neither entirely fixed nor solely determined by our genes, but rather is influenced also by multiple interactions with the environment. "We have the capacity to learn and adapt in light of our experience, even to the extent of modifying the expression of our genes. Human creativity, prosociality, and healthy longevity emerged as a response to the need to adapt to the harsh and diverse conditions that reigned between 400,000 and 100,000 years ago," note the UGR researchers.

This study is just one example of how the use of AI techniques and the entirely bias-free treatment of data can help to solve many puzzles about the evolution of human beings. The results obtained pave the way to the development of new lines of research that can ultimately promote human well-being and help us to adapt creatively in order to overcome critical situations.

Credit: 
University of Granada

First study into prevalence of COVID-19 symptoms amongst high-risk children

Children with weakened immune systems have not shown a higher risk of developing severe COVID-19 infection despite commonly displaying symptoms, a new study suggests.

During a 16-week period covering the first wave of the pandemic, researchers from Southampton carried out an observational study of nearly 1500 immunocompromised children - defined as those requiring annual influenza vaccinations due to underlying conditions or medication. The children, or their parents or guardians, completed weekly questionnaires to provide information about any symptoms they had experienced, COVID-19 test results and the impact of the pandemic on their daily life.

The results, published in BMJ Open, showed that symptoms of COVID-19 infection were common in many of the children - with over two thirds of participants reporting at least one symptom and one third experiencing three or more symptoms simultaneously. One hundred and ten patients with symptoms undertook viral PCR tests, none of whom tested positive.

Dr Hans de Graaf from the University of Southampton who led the research said, "Whilst we cannot be certain of the prevalence of COVID-19 amongst the children who took part, because testing was only done when patients were admitted and these children were told to adhere to strict shielding measures, we can assume that any infections would have been mild cases since none of these high risk patients required hospital admissions."

More than half of the patients or parents reported high levels of anxiety at the start of the study and despite the absence of severe symptoms, these scores remained consistently high throughout the study period.

The researchers believe that these results show that widespread symptom screening for the early detection of COVID-19 is not going to be useful in these cases, as the children may have frequent upper respiratory tract symptoms that are likely unrelated to COVID-19.

Dr de Graaf continued, "This study was the first to observe the impact of the pandemic on children with compromised immune systems. During the first wave of the pandemic, many may have been shielding so our results suggest that either the shielding measures were effective or that immunocompromised children are less affected by COVID-19 than adults, just like healthy children."

The report also concludes that the continuous high level of anxiety among participants highlights the need to clearly define and communicate the risk of COVID-19 in children and young people, particularly as lockdown restrictions ease.

Credit: 
University of Southampton

Early Neolithic farmers modified the reproductive cycle of sheep

image: The study of animal remains found at the Chaves cave site in Huesca, led by the Universitat Autònoma de Barcelona, provides new data on the control of breeding and feeding in the first domesticated sheep herds found in the western Mediterranean region during the Neolithic. The modification of their natural birthing cycles affected their physiology and resulted in prolonged periods of fertility.

Image: 
Alejandro Sierra

The results provide exceptional first-time evidence of how early flocks of domesticated sheep in the Iberian Peninsula were fed and bred, and currently constitute the first known example of the modification of sheep's seasonal reproductive rhythms with the aim of adapting them to human needs.

The project combines technical approaches based on stable isotope analysis and dental microwear of animal remains from more than 7,500 years ago found at the Neolithic Chaves cave site in Huesca, in the central Pyrenean region of Spain. The research was coordinated from the Archaeozoology Laboratory of the UAB Department of Antiquity, with the participation of researchers from the University of Zaragoza, the Museum of Natural History of Paris, and the Catalan Institute of Human Palaeoecology and Social Evolution (IPHES) in Tarragona.

"The alteration of seasonal breeding rhythms in livestock represented a huge milestone for prehistoric societies, making it possible to have access to meat and milk throughout the year, and this in turn had a huge impact on diet, on the economy and on the social organisation of the first farming communities, and set down the bases for farming strategies which continue to be carried out now. Until very recently, animal husbandry in the Neolithic period was thought to be in its initial stages, although new possibilities in biogeochemical analyses used in this study have revealed husbandry practices that were fully consolidated since the beginning of the Neolithic", says Dr Maria Saña, lecturer at the UAB Department of Prehistory and coordinator of the project.

The domestication of sheep did not occur in the Iberian Peninsula. Its agriotype, Ovis orientalis, is found in central and southwestern Asia. "What is surprising is the speed with which sheep were integrated into animal husbandry strategies, and their enormous economic importance in the earliest periods of the Neolithic. What we see is a rapid and successful adoption, which demonstrates that the mechanisms of their adaptation, both to the new environment and to their new economic role, were well known and controlled on the part of human communities. The selective pressures applied to the species were artificial: they pursued specific, well-defined objectives. This new evidence represents a turning point in research into animal domestication and the origins of animal husbandry. It was made possible by the new approach that we took with this study, focused on exploring the changes in breeding and feeding of these first flocks of sheep", states Alejandro Sierra, researcher at the UAB and at the University of Zaragoza, and first author of the article recently published in Journal of Archaeological Science: Reports.

The research focused on the study of sheep rearing in the Neolithic Chaves cave (5600-5300 BCE) in the Pyrenean foothills, a site that is "spectacular for the quality and number of remains recovered. In terms of Neolithic faunal levels, its 12,754 recognisable remains are at least three times what is found at other Neolithic sites on the peninsula, with domesticated sheep and goats being the most numerous species, and with the largest presence of pigs of all the Neolithic sites. All of this points to the stabling of animals and to the type of stable settlement known to be dedicated to animal husbandry, within a large cave that had 3,000 square metres of habitable space", affirms Pilar Utrilla, professor at the University of Zaragoza and director of the archaeological interventions.

The results obtained at the Chaves site show that in the Iberian Peninsula, lambs were also born in the autumn and winter seasons, what is now considered "out of optimal season" birthing. This contrasts significantly with the livestock regimes documented in other parts of Europe during the Neolithic, where births occurred mainly in spring. The modification of the natural birthing cycles of wild sheep affected the physiology of the animals of this species, prolonging their fertility period. This was the result of more intense and continuous human control, which altered interactions between females and males, a breeding strategy that sought greater predictability in livestock production. "Autumn birthing in the early Neolithic in the Chaves cave would confirm the antiquity of this practice in the Western Mediterranean basin, implying a combination of the biological capacity of sheep, the zootechnical skills of the farmers, and favourable environmental conditions", states Dr Marie Balasse, researcher at the Museum of Natural History in Paris.

The study also demonstrates that this greater control and selective pressure had an effect on the diet and movement of the species. By applying for the first time a combination of dental microwear and stable C-13 and O-18 isotope analyses on sequential samples of second and third molar enamel bioapatite, the scientists were able to detect that the diet of the Chaves flock varied little, either among individual sheep or throughout the year. The dental microwear results show that the Neolithic sheep had a more controlled diet than wild animals living in the same types of environment, which grazed on good plant cover still largely unaffected by human activity. The sheep would graze near the cave during most of the year, and were probably also fed forage; the verification of this supplementary foddering is itself a novelty. "The results of what the sheep from the Chaves cave ate are surprising compared with what we expected. We were able to document intensive diets and established differences between young and adult sheep, and these characteristics can be related to a tight control of livestock production during those earliest periods of the Neolithic", states Dr Florent Rivals, ICREA research professor at the IPHES.

"The results obtained on the breeding and feeding of sheep at the Chaves cave are key to understanding the economic systems of early farming societies in the Iberian Peninsula. The new methodology applied in this study will no doubt be fundamental to further studies of animal husbandry in prehistoric times", concludes Dr Alejandro Sierra.

Credit: 
Universitat Autònoma de Barcelona