Earth

In a warming world, could air conditioning make things worse?

As climate change continues to push summer temperatures ever higher, the increased use of air conditioning in buildings could add to the problems of a warming world by further degrading air quality and compounding the toll of air pollution on human health, according to a new study.

Writing today (July 3, 2018) in a special climate change issue of the journal Public Library of Science (PLOS) Medicine, a team of researchers from the University of Wisconsin-Madison forecasts as many as a thousand additional deaths annually in the Eastern United States alone due to elevated levels of air pollution driven by the increased use of fossil fuels to cool the buildings where humans live and work.

"What we found is that air pollution will get worse," explains David Abel, the lead author of the new report and a UW-Madison graduate student in the Nelson Institute for Environmental Studies' Center for Sustainability and the Global Environment. "There are consequences for adapting to future climate change."

The analysis combines projections from five different models to forecast increased summer energy use in a warmer world and how that would affect power consumption from fossil fuels, air quality and, consequently, human health just a few decades into the future.

In hot summer weather, and with heat waves projected to increase in frequency and intensity under climate change, there is no question that air conditioning does and will save lives, says Jonathan Patz, a senior author of the study and a UW-Madison professor of environmental studies and population health sciences.

However, he cautions that if the increased use of air conditioning due to climate change depends on power derived from fossil fuels, there will be an air quality and human health tradeoff. "We're trading problems," says Patz, an expert on climate change and human health. "Heat waves are increasing and increasing in intensity. We will have more cooling demand requiring more electricity. But if our nation continues to rely on coal-fired power plants for some of our electricity, each time we turn on the air conditioning we'll be fouling the air, causing more sickness and even deaths."

Another senior author of the new PLOS Medicine report, air quality expert Tracey Holloway, a UW-Madison professor of environmental studies as well as atmospheric and oceanic sciences, says the study adds to our understanding of the effects of adapting to climate change by simulating the scope of fossil fuel use to cool buildings under future climate change scenarios. Buildings, she notes, are the biggest energy sinks in the United States, responsible for more than 60 percent of power demand in the Eastern United States, the geographic scope of the study. Air conditioning, she says, is a significant component of that electrical demand.

"Air quality is a big issue for public health," she explains, noting that increases in ground-level ozone and fine particulate matter in the air - byproducts of burning fossil fuels and known hazards to human health - will be one result of adding to fossil-fuel power consumption.

The study forecasts an additional 13,000 human deaths annually caused by higher summer levels of fine particulate matter and 3,000 caused by ozone in the Eastern U.S. by mid-century. Most of those deaths will be attributable to natural processes like atmospheric chemistry and natural emissions, which are affected by rising temperatures. However, about 1,000 of those deaths each year would occur because of increased air conditioning powered by fossil fuel. "Climate change is here and we're going to need to adapt," says Abel. "But air conditioning and the way we use energy is going to provide a feedback that will exacerbate air pollution as temperatures continue to get warmer."

The results of the new study, according to the Wisconsin team, underscore the need to change to more sustainable sources of energy such as wind and solar power, and to deploy more energy-efficient air conditioning equipment. "The answer is clean energy," says Abel. "That is something we can control that will help both climate change and future air pollution. If we change nothing, both are going to get worse."

Credit: 
University of Wisconsin-Madison

UA forecast: Below-average hurricane activity

Hurricane season didn't officially start until June 1, but Subtropical Storm Alberto made an appearance early, causing more than $50 million in damage as it made its way inland and up the coast in late May. Twelve people -- seven in Cuba and five in the U.S. -- died as Alberto's fallout included flooding, landslides, tornadoes and mudslides.

Is Alberto's early-season appearance an indicator of another active Atlantic hurricane season? Not necessarily, according to predictions by researchers at the University of Arizona.

The UA forecasting model predicted a below-average number of hurricanes for the 2018 hurricane season, which runs through November 30. UA researchers are predicting four hurricanes, two of which will be major hurricanes, defined as those reaching Category 3, 4 or 5. That forecast falls below the median of seven hurricanes with two majors.

The UA prediction is among the lowest of all published forecasts, which include predictions by the National Oceanic and Atmospheric Administration, the London-based consortium Tropical Storm Risk and other universities.

Last year, the UA's forecast was among the highest -- 11 hurricanes with six majors -- and came closest to hitting the mark. The 2017 hurricane season ended with 10 hurricanes and six majors, making it the most active since 2005 and the seventh-most active in NOAA's historical records dating back to 1851. Irma (Florida) and Maria (Puerto Rico) were Category 5 hurricanes, and Harvey (Texas) and Jose (offshore Caribbean) were Category 4.

Xubin Zeng, his former graduate student Kyle Davis, and former UA professor Elizabeth Ritchie developed the UA's hurricane forecasting model, which has proved to be extremely accurate over the last four years.

"Since we began issuing our annual hurricane prediction in 2014, our average error is 1.5 hurricanes," said Zeng, director of the UA's Climate Dynamics and Hydrometeorology Center, a professor of atmospheric sciences and the Agnes N. Haury Endowed Chair in Environment in the Department of Hydrology and Atmospheric Sciences at the UA.

A main factor in this year's prediction is the low sea surface temperatures over the Atlantic, where little warming occurred from April to May. The sea surface temperatures are the lowest Zeng and his team have seen since 2014, but similar to long-term average temperatures. The May value of the Atlantic Multidecadal Oscillation index, which describes multidecadal climate variability in the North Atlantic, is zero, below the threshold at which it would affect hurricane activity in the UA model.

"These conditions imply an average year for hurricane activities; however, tropical Atlantic Ocean surface easterly wind -- from east to west, the so-called trade wind -- is stronger than in most years," Zeng said. "This implies a stronger wind shear, which usually reduces hurricane activities. Therefore, together, we predict a slightly below average year for hurricane activities."

If the 2018 UA hurricane forecast is as accurate as it has been over the last few years, the U.S. can expect smoother sailing as it continues to recover from an estimated $282.16 billion in damages caused last year during one of the most catastrophic hurricane seasons in history.

Credit: 
University of Arizona

Loss of cilia leads to melanoma

image: Pigment cells (blue/green) with cilium (light green with red base).

Image: 
Daniel Zingg, Netherlands Cancer Institute

Melanomas are one of the most aggressive types of tumors in humans. Despite remarkable success with new forms of treatment such as immunotherapies, there are still many melanoma patients who cannot be cured or who later suffer a recurrence of the disease following successful treatment. An in-depth understanding of the tumor's biology is thus essential for developing novel therapeutic approaches. The main question is which changes in a benign cell cause it to progress into a malignant tumor.

Formation and spread of melanoma also regulated epigenetically

A team of researchers led by Lukas Sommer, professor at the Institute of Anatomy at the University of Zurich (UZH), has now been able to show that in addition to genetic causes such as mutations in the DNA, epigenetic factors also play a role in the formation and spread of melanoma. While epigenetic factors don't directly influence the gene sequence, they do regulate how efficiently certain genes are transcribed in the cells. The UZH researchers focused on the EZH2 protein, which - unlike in benign cells - is very common in melanoma cells and plays a central role in melanoma formation.

EZH2 suppresses ciliary genes and leads to metastasis

To find out how epigenetic factors contribute to the melanoma's aggressive behavior, the scientists examined all the genes that are regulated by EZH2. "We were very surprised to find many genes that are jointly responsible for the formation of cilia," says study leader Sommer. It seems that cilia genes are suppressed by EZH2, which means that malignant melanoma cells have far fewer of these fine sensory hairs than the skin's benign pigment cells. With the help of human melanoma cells and mouse models, the researchers succeeded in demonstrating that loss of cilia in pigment cells activates carcinogenic signaling pathways, ultimately resulting in the formation of aggressive, metastatic melanoma.

Approach for novel tumor therapies

There are many types of cancers composed of cells that have lost their cilia. "The epigenetic regulation of cilia formation that we've now discovered in melanoma is, therefore, likely also relevant for the formation of other types of cancers, such as breast or brain tumors," remarks Lukas Sommer. Drugs that block EZH2 probably offer a promising strategy when it comes to treating melanoma, possibly in combination with immunotherapies, according to Sommer.

Credit: 
University of Zurich

An artificial ovary for fertility preservation without the risk of reintroducing malignancy

Important steps in the development of an artificial ovary have been successfully completed by one of the world's leading groups in fertility preservation. Researchers from the Rigshospitalet in Copenhagen, Denmark, report today that they have for the first time isolated and grown human follicles to a point of "biofunctionality" on a bioengineered ovarian scaffold made of "decellularised" ovarian tissue. The early-stage follicles were isolated from patients having ovarian tissue frozen for fertility preservation ahead of other medical treatments likely to compromise ovarian function.

Credit: 
European Society of Human Reproduction and Embryology

Air pollution contributes significantly to diabetes globally

New research from Washington University School of Medicine in St. Louis and the Veterans Affairs (VA) St. Louis Health Care System links outdoor air pollution -- even at levels deemed safe -- to an increased risk of diabetes globally.

The findings raise the possibility that reducing pollution may lead to a drop in diabetes cases in heavily polluted countries such as India and less polluted ones such as the United States.

Diabetes is one of the fastest growing diseases, affecting more than 420 million people worldwide and 30 million Americans. The main drivers of diabetes include eating an unhealthy diet, having a sedentary lifestyle, and obesity, but the new research indicates the extent to which outdoor air pollution plays a role.

"Our research shows a significant link between air pollution and diabetes globally," said Ziyad Al-Aly, MD, the study's senior author and an assistant professor of medicine at Washington University. "We found an increased risk, even at low levels of air pollution currently considered safe by the U.S. Environmental Protection Agency (EPA) and the World Health Organization (WHO). This is important because many industry lobbying groups argue that current levels are too stringent and should be relaxed. Evidence shows that current levels are still not sufficiently safe and need to be tightened."

The findings are published June 29 in The Lancet Planetary Health.

While growing evidence has suggested a link between air pollution and diabetes, researchers have not attempted to quantify that burden until now. "Over the past two decades, there have been bits of research about diabetes and pollution," Al-Aly said. "We wanted to thread together the pieces for a broader, more solid understanding."

To evaluate outdoor air pollution, the researchers looked at particulate matter, airborne microscopic pieces of dust, dirt, smoke, soot and liquid droplets. Previous studies have found that such particles can enter the lungs and invade the bloodstream, contributing to major health conditions such as heart disease, stroke, cancer and kidney disease. In diabetes, pollution is thought to reduce insulin production and trigger inflammation, preventing the body from converting blood glucose into the energy it needs to maintain health.

Overall, the researchers estimated that pollution contributed to 3.2 million new diabetes cases globally in 2016, which represents about 14 percent of all new diabetes cases globally that year. They also estimated that 8.2 million years of healthy life were lost in 2016 due to pollution-linked diabetes, representing about 14 percent of all years of healthy life lost due to diabetes from any cause. (The measure of how many years of healthy life are lost is often referred to as "disability-adjusted life years.")
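
As a rough consistency check on those percentages, the arithmetic can be reproduced in a few lines of Python. The inputs below are the article's figures; the implied worldwide totals are back-calculated here rather than quoted from the study.

    # Back-of-envelope check on the attributable-burden figures.
    # Inputs are the article's numbers; the totals are derived, not quoted.
    attributable_cases = 3.2e6    # new diabetes cases linked to pollution, 2016
    attributable_fraction = 0.14  # ~14 percent of all new cases that year

    total_new_cases = attributable_cases / attributable_fraction
    print(f"Implied total new diabetes cases in 2016: {total_new_cases:,.0f}")  # ~22.9 million

    attributable_dalys = 8.2e6    # healthy life-years lost to pollution-linked diabetes
    total_dalys = attributable_dalys / attributable_fraction
    print(f"Implied total DALYs from diabetes in 2016: {total_dalys:,.0f}")     # ~58.6 million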

In the United States, the study attributed 150,000 new cases of diabetes per year to air pollution and 350,000 years of healthy life lost annually.

The Washington University team, in collaboration with scientists at the Veterans Affairs' Clinical Epidemiology Center, examined the relationship between particulate matter and the risk of diabetes by first analyzing data from 1.7 million U.S. veterans who were followed for a median of 8.5 years. The veterans did not have histories of diabetes. The researchers linked that patient data with the EPA's land-based air monitoring systems as well as space-borne satellites operated by the National Aeronautics and Space Administration (NASA). They used several statistical models and tested their validity against negative controls (ambient air sodium concentrations, which have no link to diabetes, and lower limb fractures, which have no link to outdoor air pollution) and against a positive control, the risk of developing diabetes, which exhibited a strong link to air pollution. This exercise helped the researchers weed out spurious associations.

Then, they sifted through all research related to diabetes and outdoor air pollution and devised a model to evaluate diabetes risk across various pollution levels.

Finally, they analyzed data from the Global Burden of Disease study, which is conducted annually with contributions from researchers worldwide. The data helped to estimate annual cases of diabetes and healthy years of life lost due to pollution.

The researchers also found that the overall risk of pollution-related diabetes is tilted more toward lower-income countries such as India that lack the resources for environmental mitigation systems and clean-air policies. For instance, poverty-stricken countries facing a higher diabetes-pollution risk include Afghanistan, Papua New Guinea and Guyana, while richer countries such as France, Finland and Iceland experience a lower risk. The U.S. experiences a moderate risk of pollution-related diabetes.

In the U.S., the EPA's pollution threshold is 12 micrograms per cubic meter of air, the highest level of air pollution considered safe for the public, as set by the Clean Air Act of 1990 and updated in 2012. However, using mathematical models, Al-Aly's team established an increased diabetes risk at 2.4 micrograms per cubic meter of air. Based on VA data, among a sample of veterans exposed to pollution at levels between 5 and 10 micrograms per cubic meter of air, about 21 percent developed diabetes. When that exposure increased to 11.9 to 13.6 micrograms per cubic meter of air, about 24 percent of the group developed diabetes. A 3-percentage-point difference appears small, but it represents an increase of 5,000 to 6,000 new diabetes cases per 100,000 people in a given year.

In October 2017, The Lancet Commission on pollution and health published a report outlining knowledge gaps on pollution's harmful health effects. One of its recommendations was to define and quantify the relationship between pollution and diabetes.

"The team in St. Louis is doing important research to firm up links between pollution and health conditions such as diabetes," said commission member Philip J. Landrigan, MD, a pediatrician and epidemiologist who is the dean for global health at Mount Sinai School of Medicine in New York and chair of its Department of Preventive Medicine. "I believe their research will have a significant global impact."

Credit: 
Washington University in St. Louis

HKUST scientists discover autophagy inhibitory peptides from giant ankyrins

image: This visual abstract shows that super-strong Atg8-binding peptides can effectively inhibit autophagy.

Image: 
Division of Life Science, HKUST

Autophagy, meaning "self-eating" in Greek, is a general metabolic mechanism adopted by nearly all eukaryotic species, from single-celled yeast to humans. It is a process by which cells degrade unnecessary components, recycling materials and generating energy to survive stress or maintain homeostasis.

On the good side, autophagy can protect cells by eliminating harmful materials (e.g. amyloid aggregates in neurodegenerative diseases and invading pathogens), but defects in autophagy are often related to numerous diseases, such as Alzheimer's and Parkinson's disease, and in the case of tumors, the autophagy pathway can be hijacked to supply enough nutrients for their massive growth. As a result, either activating or inhibiting autophagy in a precisely spatiotemporally controlled manner could be a promising treatment against various kinds of diseases.

Recently, a research team led by structural biologist Prof. Mingjie Zhang from HKUST has discovered potent and specific inhibitory peptides that target the Atg8 family proteins (including LC3s and GABARAPs), central components of the autophagy pathway. These genetically encodable autophagy inhibitory peptides can be used to block autophagy at chosen times and places in living animals, making them useful in a wide variety of experimental designs.

Their findings were published on June 4, 2018 in the journal Nature Chemical Biology (doi: 10.1038/s41589-018-0082-8).

During their study of ankyrins, a long-term interest in their laboratory, the researchers first identified a GABARAP-selective inhibitory peptide naturally harbored in 270/480 kDa ankyrin-G and a super-potent pan-Atg8 inhibitory peptide from 440 kDa ankyrin-B. Based on the crystal structures they solved, they further optimized the ankyrin-G derived peptide to be even more GABARAP-selective. "The distinct function of LC3s and GABARAPs in the autophagy pathway is still a wide-open area. At the current stage, the late functions of these proteins are always masked by their early effect and/or redundancy. The peptides developed here probably will serve as a great tool to dissect the different roles of these two sub-families of Atg8 proteins in autophagy," said Prof. Hong Zhang, one of the senior co-authors of this paper from the Institute of Biophysics, Chinese Academy of Sciences.

The researchers also provided evidence that the peptides they developed can effectively block autophagy in cultured COS7 cells as well as in living animals (the nematode C. elegans) at a given time and a given location. "The super strong Atg8 binding peptides are genetically encodable and can be expressed in tissue- and temporal-specific manners in living animals as we have demonstrated, and thus are far better than any of the small molecule-based drugs existing in autophagy research in cell cultures and, more importantly, in living animals," Prof. Mingjie Zhang said.

"The inhibitory peptides can directly serve as leads to develop drugs for potential cancer treatments. They can also be indirectly used as a research tool to look for autophagy inducers for treating neurodegenerative diseases, " said Jianchao Li, one of the leading authors in Prof. Mingjie Zhang's laboratory.

Credit: 
Hong Kong University of Science and Technology

The culprit of some GaN defects could be nitrogen

image: As silicon-based semiconductors reach performance limits, gallium nitride is becoming the next go-to material for several technologies. Holding GaN back, however, is its high number of defects. Better understanding how GaN defects form at the atomic level could improve the performance of the devices made using this material. Researchers have taken a significant step by examining and determining six core configurations of the GaN lattice. They present their findings in the Journal of Applied Physics. This image shows the distribution of stresses per atom, (a) and (b), of a-edge dislocations along the direction in wurtzite GaN.

Image: 
Physics Department, Aristotle University of Thessaloniki

WASHINGTON, D.C., June 29, 2018 -- As silicon-based semiconductors reach their performance limits, gallium nitride (GaN) is becoming the next go-to material to advance light-emitting diode (LED) technologies, high-frequency transistors and photovoltaic devices. Holding GaN back, however, is its high number of defects.

This material degradation is due to dislocations -- when atoms become displaced in the crystal lattice structure. When multiple dislocations simultaneously move from shear force, bonds along the lattice planes stretch and eventually break. As the atoms rearrange themselves to reform their bonds, some planes stay intact while others become permanently deformed, with only half planes in place. If the shear force is great enough, the dislocation will end up along the edge of the material.

Layering GaN on substrates of different materials makes the problem that much worse because the lattice structures typically don't align. This is why expanding our understanding of how GaN defects form at the atomic level could improve the performance of the devices made using this material.

A team of researchers has taken a significant step toward this goal by examining and determining six core configurations of the GaN lattice. They presented their findings in the Journal of Applied Physics, from AIP Publishing.

"The goal is to identify, process and characterize these dislocations to fully understand the impact of defects in GaN so we can find specific ways to optimize this material," said Joseph Kioseoglou, a researcher at the Aristotle University of Thessaloniki and an author of the paper.

There are also problems intrinsic to the properties of GaN that result in unwanted effects, such as color shifts in the emission of GaN-based LEDs. According to Kioseoglou, this could potentially be addressed by exploiting different growth orientations.

The researchers used computational analysis via molecular dynamics and density functional theory simulations to determine the structural and electronic properties of a-type basal edge dislocations along the direction in GaN. Dislocations along this direction are common in semipolar growth orientations.

The study was based on three models with different core configurations. The first consisted of three nitrogen (N) atoms and one gallium (Ga) atom for the Ga polarity; the second had four N atoms and two Ga atoms; the third contained two N atoms and two Ga core-associated atoms. Molecular dynamics calculations were performed using approximately 15,000 atoms for each configuration.

The researchers found that the N polarity configurations exhibited significantly more states in the bandgap compared to the Ga polarity ones, with the N polar configurations presenting smaller bandgap values.

"There is a connection between the smaller bandgap values and the great number of states inside them," said Kioseoglou. "These findings potentially demonstrate the role of nitrogen as a major contributor to dislocation-related effects in GaN-based devices."

Credit: 
American Institute of Physics

The scent of a man: What odors do female blackbuck find enticing in a male?

image: Indian blackbuck.

Image: 
NCBS

At Tal Chhapar, a wildlife sanctuary in the heart of the Thar desert, a strange drama is staged twice every year. In the blistering heat of summer from March to April and the post-monsoon months of September and October, up to a hundred blackbuck males stake out territories on the flat land to entice females to mate with them in a unique assemblage called a lek.

Female blackbuck who visit the lek generally spend large amounts of time evaluating males before choosing one as a mate. A large part of this evaluation seems to be based on sniffing--even when being courted, females are so intent on inspecting odors from the dung piles that they are often oblivious to the males' antics.

What are these females nosing around for?

To answer this question, Jyothi Nair, a student from Uma Ramakrishnan's group at the National Centre for Biological Sciences (NCBS), Bangalore, collaborated with Shannon Olsson's team, also from NCBS, to develop a pipeline for investigating odors in a quick, efficient way. In a publication in the journal Ecology and Evolution, the researchers document their evaluation of different odor collection, identification, and analysis techniques, and describe a protocol optimized for large-scale sampling of odors. Using this protocol, the team has also found that dung piles of males with high mating success seem to be much richer in the chemical meta-cresol than those of less successful males.

"Collecting odor samples from a remote area like Tal Chhapar is an extremely difficult task," says Nair. This is because most collection methods require many hours to obtain enough amounts of odors for successful analysis. Furthermore, depending on collection methods, samples are often unstable and decompose very quickly even when stored at low temperatures. This is often impossible in remote field sites where refrigeration facilities are non-existent.

Through trial and error, however, the research team from NCBS found a solution--solid phase extraction. In this technique, odors from fecal samples were absorbed onto tiny tubes made of a silicone polymer called polydimethylsiloxane (PDMS). The odor samples in these PDMS tubes were found to be stable enough that they could then be transported safely without refrigeration to NCBS, Bangalore for chemical analyses.

In the laboratory, standard procedures such as thermal desorption (where the odor-laden PDMS tubes are heated to release trapped volatiles), gas chromatography, and mass spectrometry were used to separate and analyze the compounds making up each odor sample.

"During analysis, we faced a lot of problems in identifying compounds," says V.S. Pragadeesh, who helped Nair with this work. "Compared to plant volatiles, there are comparatively few studies on mammalian volatiles, so we had very little information to help us recognize chemicals in these odors."

Routine analysis for such data usually involves manual identification and documentation of the detected compounds to create a chemical profile of each odor sample. Conventionally, the process would have taken more than 8 months for all the data in this study. However, the analysis time was reduced to just two weeks through a collaboration with computational biologist, Snehal Karpe, who helped the team develop a semi-automated process that could quickly and efficiently analyze components of each sample with fairly low error rates.
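
The article describes the pipeline's effect rather than its code, but the core step, scoring each detected GC-MS peak against a library of reference spectra instead of identifying compounds one by one by hand, can be sketched roughly as follows. The compound names, spectra and match threshold here are hypothetical placeholders, not values from the study.

    import numpy as np

    def spectral_match(a, b):
        """Cosine similarity between two mass-spectral intensity vectors."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify_peaks(sample_spectra, library, threshold=0.85):
        """Assign each detected peak its best-matching library compound.
        Low-confidence matches come back as None, flagged for manual review."""
        assignments = {}
        for peak_id, spectrum in sample_spectra.items():
            best_name, best_score = None, 0.0
            for name, reference in library.items():
                score = spectral_match(spectrum, reference)
                if score > best_score:
                    best_name, best_score = name, score
            assignments[peak_id] = (best_name if best_score >= threshold else None,
                                    round(best_score, 3))
        return assignments

    # Hypothetical example: two detected peaks against a two-compound library
    library = {"meta-cresol": [0.1, 0.9, 0.4], "indole": [0.8, 0.1, 0.6]}
    sample = {"peak_1": [0.12, 0.88, 0.42], "peak_2": [0.3, 0.3, 0.3]}
    print(identify_peaks(sample, library))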

To make sense of all this information, Nair then used a statistical technique called 'Random Forests' to compare the chemical profiles of dung piles from different locations within the lek. What emerged was a strong spatial pattern--dung piles of males at the centre of the lek, where mating success was highest, had much higher levels of the chemical meta-cresol than those of males towards the periphery.
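
'Random Forests' here is the standard ensemble-of-decision-trees classifier. A minimal sketch of this kind of comparison, using scikit-learn and synthetic profiles in place of the real dung-pile data, might look like the following; the out-of-bag accuracy and the feature importances are what reveal whether, and through which compounds, the two zones differ.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the real data: rows are dung-pile odor profiles,
    # columns are compound abundances; central piles get an elevated column 0,
    # which plays the role of meta-cresol.
    n_per_zone, n_compounds = 40, 25
    center = rng.lognormal(0.0, 0.5, (n_per_zone, n_compounds))
    center[:, 0] *= 3.0
    periphery = rng.lognormal(0.0, 0.5, (n_per_zone, n_compounds))

    X = np.vstack([center, periphery])
    y = np.array([1] * n_per_zone + [0] * n_per_zone)  # 1 = lek centre

    forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    forest.fit(X, y)

    print("Out-of-bag accuracy:", round(forest.oob_score_, 2))
    # The most discriminative feature should be column 0, the meta-cresol stand-in
    print("Top compound index:", int(np.argmax(forest.feature_importances_)))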

Meta-cresol is a well-known chemical used for communication in many insects and a few mammals such as elephants and horses. The team is now busy testing different chemicals identified in this study, including meta-cresol, on the behavior of captive blackbuck at Mysore zoo.

"This research has been exciting on so many fronts. There are relatively few population studies on chemical communication, and this is the first field-based chemical ecology study for this amazing Indian mammal," says Dr. Shannon Olsson, who heads the NICE (Naturalist-Inspired Chemical Ecology) laboratory at NCBS, and has been a close collaborator in this study.

"It's amazing to think that we can map 'smells' on the blackbuck lek! This is the first step to better understand whether smells vary across the lek, and potentially, how successful males smell. All thanks to our collaboration with the NICE lab," says Dr. Uma Ramakrishnan, who is Nair's mentor at NCBS.

"We hope that our pipeline will inspire more large-scale studies in chemical ecology that can be used to understand the remarkable biodiversity of this country and the world," adds Olsson.

Credit: 
National Centre for Biological Sciences

Drinking changes young adults' metabolite profile

image: Drinking changes young adults' metabolite profile.

Image: 
UEF

Adolescent drinking is associated with changes in the metabolite profile, a new study from the University of Eastern Finland and Kuopio University Hospital shows. Some of these changes were found to correlate with reduced brain grey matter volume, especially in young women who are heavy drinkers. The findings shed new light on the biological implications of adolescent drinking, and could contribute to the development of new treatments.

"For instance, heavy-drinking adolescents showed increased concentrations of 1-methylhistamine, which, in turn, was associated with reduced brain grey matter volume," Researcher Noora Heikkinen from the University of Eastern Finland explains.

1-methylhistamine is formed in the brain from histamine produced by immune responses.

"Our findings suggest that the production of histamine is increased in the brains of heavy-drinking adolescents. This observation can help in the development of methods that make it possible to detect adverse effects caused by alcohol at a very early stage. Possibly, it could also contribute to the development of new treatments to mitigate these adverse effects."

The study was a 10-year follow-up of adolescents living in eastern Finland. The researchers determined the metabolite profiles of heavy- and light-drinking young adults, and used MRI to measure their brain grey matter volumes. These two methods have not been used in combination before, although previous studies have shown an association between heavy drinking and metabolite profile changes.

"What is new and significant about our study is the fact that we observed metabolite profile changes even in young people who consumed alcohol at a level that is socially acceptable. Moreover, none of the study participants had a diagnosis of alcohol dependence."

The findings indicate that even drinking not considered excessive has adverse effects on young people's metabolism and brain grey matter volume; the research group has already published findings on the latter.

"Although adolescent drinking is declining on average, we can see polarization: some adolescents are very heavy drinkers and they also use other substances," Heikkinen adds.

Credit: 
University of Eastern Finland

Regional Earth system modeling: Review and future directions

image: Schematic depiction of a coupled RESM framework and the interactions across its different components and global climate model drivers (CTM: chemical transport model). Arrows indicate the flow of information. Blue arrows: interaction with driving global models; red arrows: interaction inside the RESM.

Image: 
Filippo Giorgi

The regional climate modeling community is actively engaged in the development of regional earth system models (RESMs), applicable in different regional contexts. In a paper published in Atmospheric and Oceanic Science Letters, Prof. Filippo Giorgi from the Abdus Salam International Centre for Theoretical Physics in Italy, and Xuejie Gao from the Institute of Atmospheric Physics, Chinese Academy of Sciences in Beijing, review recent progress in, and future directions for, the field of regional earth system modeling.

But what is an RESM? Professor Gao explains: "Previously, RCMs [regional climate models] were atmosphere-only. However, the climate system consists of more than just the atmosphere. Hence, RESMs have emerged, which also take into account other key components of the earth system--for instance, the ocean, land, and sea ice".

The basic structure of a coupled RESM is illustrated in the Figure. Currently, several coupled RESMs exist, varying in their sets of components, and have been applied to a wide range of different regions, including East Asia and China. Some models include multiple earth system components, such as the atmosphere, ocean, sea ice, hydrology, land and/or marine biogeochemistry, while others adopt a simpler set of constituent models.
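
In code terms, the coupling that the schematic's red arrows depict is an exchange of boundary fields between component models at fixed intervals. The toy loop below, with invented two-line "components" standing in for a real atmosphere and ocean model, is meant only to show that control flow, not any actual RESM.

    class ToyAtmosphere:
        """Stand-in atmosphere: derives surface fluxes from the SST it is handed."""
        def step(self, sst):
            heat_flux = 0.1 * (20.0 - sst)  # toy bulk formula
            return {"heat_flux": heat_flux, "wind_stress": 0.05}

    class ToyOcean:
        """Stand-in ocean: updates its SST from the fluxes it receives."""
        def __init__(self, sst=18.0):
            self.sst = sst
        def step(self, fluxes):
            self.sst += 0.5 * fluxes["heat_flux"]  # toy mixed-layer response
            return self.sst

    # Coupler loop (the red arrows in the schematic). The blue arrows would be
    # lateral boundary conditions read in from a driving global model each step.
    atmosphere, ocean = ToyAtmosphere(), ToyOcean()
    sst = ocean.sst
    for step in range(5):
        fluxes = atmosphere.step(sst)  # atmosphere sees the current ocean state
        sst = ocean.step(fluxes)       # ocean sees the current atmospheric forcing
        print(f"coupling step {step}: SST = {sst:.3f}")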

"Clearly, the development of RESMs is still in its infancy, but will undoubtedly receive increasing attention in the coming years," adds Prof. Gao. Indeed, the paper highlights that more work and intercomparison studies are needed to assess the transferability of these coupled models. The inclusion of an interactive biosphere has been limited so far, but the interest in this aspect is growing and is especially important within the context of future climate change, which may lead to pronounced changes in natural ecosystems.

Aside from fully coupling the atmosphere, ocean, cryosphere, biosphere, and chemosphere, the next challenge in RESM modeling is the inclusion of the human factor, as concluded by the authors. Human activities are currently considered in most model experiments as external players in the climate system, either as forcings or as receptors (e.g., impacts). However, there is a two-way interaction between human societies and the natural environment. In an era where humans are now a key component of the climate system, these processes will have to be included in the next generation of earth system models.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Plant fossils provide new insight into the uplift history of SE Tibet

image: This is a fossil site in Kajun village, Markan Basin, SE Tibet (~3,900 m in present elevation).

Image: 
©Science China Press

The Tibetan Plateau, the highest and largest plateau in the world, is well known as 'The Third Pole'. Tibet has also been called 'Asia's water tower' because so many of Asia's major rivers, such as the Ganges, Indus, Tsangpo/Brahmaputra, Mekong, Yellow and Yangtze, originate there. Despite its importance, the uplift history of the plateau and the mechanisms underpinning its evolution are still unclear, largely because reliable measurements of past surface elevation are hard to obtain.

Plant fossils might seem an unlikely way of determining surface height and thus what is happening deep in the Earth to build mountains and plateaus. However, because plants live at the Earth's surface and have to constantly interact with the atmosphere, their leaves are very good at recording their surroundings, including properties of the atmosphere that are related to height. This approach has shown that the rise of the Himalaya was a relatively recent phenomenon, and took place after parts of Tibet were already above 4.5 km. However, well-dated plant fossils are rare in Tibet.

Recently, Tao Su and his colleagues from Xishuangbanna Tropical Botanical Garden, Chinese Academy of Sciences, assembled a large collection of plant fossils from the Lawula Formation in the Markam Basin in SE Tibet. Remarkably, the fossils were preserved between volcanic ash layers, which allowed them to be precisely dated using 40Ar/39Ar analysis. It turned out that the fossil assemblages were much older than their relatively modern appearance would suggest.

The team recorded several thousand fossil leaves from four different layers, two of which yielded the richest and best-preserved plant fossils. The lower layer (MK3) was deposited 34.6 million years ago (Ma) and the upper layer (MK1) at 33.4 Ma. As such, the layers span the Eocene-Oligocene Transition (33.9 Ma), a time when deep-sea sediments show significant cooling.

Interestingly, layer MK3 is dominated by leaves of the ring-cupped oak and members of the birch family, whereas MK1 consists almost exclusively of alpine taxa with small leaves. Assemblage composition and leaf form clearly show a transition from evergreen and deciduous broad-leaved mixed forest to alpine shrub. That climate change was quantified using the Climate-Leaf Analysis Multivariate Program (CLAMP), a proxy that uses leaf form to estimate a range of climate variables, such as temperature and moisture, as well as surface height, in the geological past.
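
For the surface-height part, CLAMP-based paleoaltimetry commonly works through moist enthalpy, a quantity that leaves record and that decreases predictably with elevation. The relation below is the standard published form of that argument, summarized from the general paleoaltimetry literature rather than taken from this paper.

    % Moist enthalpy per unit mass of air, estimated from leaf physiognomy:
    % (c_p: specific heat of air, T: temperature, L_v: latent heat of
    % vaporization, q: specific humidity). Comparing a fossil site with a
    % contemporaneous sea-level flora gives the site's paleoelevation:
    \[
      H = c_p T + L_v q, \qquad
      Z \approx \frac{H_{\text{sea level}} - H_{\text{site}}}{g}
    \]
    % Illustrative numbers: an enthalpy deficit of 30 kJ/kg implies
    % Z \approx 30000 / 9.8 \approx 3100 m, the order of the ~3 km estimate below.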

Using this approach, Tao Su and colleagues showed that at the E-O transition southeastern Tibet was ~3 km high and actively rising toward its present height. Their results clearly demonstrate the early onset of uplift in this region, rather than uplift beginning some 10 million years later, near the start of the Miocene. The results show that the elevation of southeastern Tibet took place largely in the Eocene, which has major implications for uplift mechanisms, landscape development and biotic evolution.

Furthermore, 40Ar/39Ar analysis of the volcanic ashes bounding the Markam fossil floras adds to a growing list of Paleogene sites in southeastern Tibet and Yunnan that are far older than previously thought based on biostratigraphy and lithostratigraphy. It is already clear that the evolution of the modern, highly diverse Asian biota is a Paleogene, not a Neogene, phenomenon, and took place before the E-O transition. This implies a modernisation deeply rooted in the Paleogene, possibly driven by a combination of complex Tibetan topography and climate change.

The Xishuangbanna group is continuing to collect spectacular plant fossils in different parts of the Tibetan Plateau. In the coming years, we can expect a revolution in the understanding of Tibetan uplift and its relationship to climate and biotic evolution in Asia.

Credit: 
Science China Press

URI drug study produces 'promising therapy' for alcohol abuse

KINGSTON, JUNE 28, 2018 -- Alcohol abuse is among the leading causes of preventable death in the United States, killing more than 88,000 people a year, according to the Centers for Disease Control. That total is higher than the combined death tolls of HIV/AIDS, gun violence and car crashes. Despite this, current medications are not highly effective in addressing alcohol abuse.

A University of Rhode Island College of Pharmacy professor is working to change that, and a new clinical trial is right around the corner. Fatemeh Akhlaghi, the Ernest Mario Distinguished Chair in Pharmaceutics, is part of a team working to develop a novel medication to treat alcohol use disorder, the term scientists and health practitioners use.

Funded by a $1.65 million grant from the National Center for Advancing Translational Sciences, a branch of the National Institutes of Health (NIH), the URI team is testing the safety and efficacy of a drug originally developed by Pfizer to treat obesity and diabetes. The grant formalizes a partnership among Akhlaghi, Pfizer and Dr. Lorenzo Leggio, chief of the Section on Clinical Psycho-neuroendocrinology and Neuro-psychopharmacology, an NIH laboratory funded by the National Institute on Alcohol Abuse and Alcoholism and the National Institute on Drug Abuse.

The drug targets ghrelin, a 28-amino-acid peptide that stimulates appetite and food intake. Known as "the hunger hormone," ghrelin rises in tandem with feelings of hunger. In those with alcohol use disorder, higher concentrations of ghrelin are associated with greater alcohol craving and consumption. The researchers believe that an oral medication that blocks ghrelin may help stave off cravings for alcohol. Initial findings have shown positive results in lab rats and in 12 patients who volunteered for a study at NIH. The results of this study were published last month in the journal Molecular Psychiatry.

"Addictions share similar pathways in the brain -- food addiction, alcohol addiction, drug addiction. If this drug can block the ghrelin receptor, even if you have high ghrelin level, your ghrelin receptors become numb, and do not respond to the hunger signal," said Akhlaghi, co-principal investigator on the study. "In 12 patients, there was a statistically significant reduction in alcohol craving and food craving. The main outcome was that the drug was safe and well-tolerated, did not affect alcohol pharmacokinetics, and that there was a significant dampening of the effect of ghrelin."

The researchers are working on a larger placebo-controlled clinical trial to further test the medication on patients who misuse alcohol. Researchers will study the impact of the medication on patients' alcohol cue response using functional magnetic resonance imaging to determine the drug's efficacy.

"The drugs that are available to treat alcohol use disorder either came from opioids or other drugs that make you have an aversive effect if you drink, and each of them has only small effects," Akhlaghi said. "The study with the 12 patients shows potential success, although the results are clearly very preliminary and in need for replication. In the new phase, we are looking at the efficacy of the drug. We cannot say this is a cure; we can say it is a promising therapy."

Credit: 
University of Rhode Island

Largest ever multimorbidity trial in primary care challenges current thinking

In the largest ever trial of an intervention to treat people with multiple long-term conditions (multimorbidity) in primary care, researchers at the Universities of Bristol, Manchester, Dundee and Glasgow found that the patient-centred approach taken improved patients' experience of their care but did not improve their health-related quality of life. This is a challenge to current thinking on which UK and international guidelines are based.

In a study involving 1,546 patients from England and Scotland, they found that by making health reviews more patient-centred, such as involving patients in the planning and delivery of their care, overall patient satisfaction improved significantly. However, their health-related quality of life, which included measures of mobility, self-care, pain and discomfort, and anxiety and depression, did not.

The findings, published in The Lancet today [Thursday 28 June], provide the best evidence to date of the effectiveness of a person-centred approach for multimorbidity, for which there is international consensus but little evidence.

One in four people in the UK and the US have two or more long-term health conditions, increasing to two-thirds for patients aged over 65, placing a major strain on health services. Conditions include diabetes, heart disease and asthma, and can include mental health conditions such as depression and dementia. Multimorbidity is associated with reduced quality of life, worse physical and mental health, and increased mortality. Treatment for multimorbidity places an additional burden on patients, who may have to take large numbers of drugs, make lifestyle changes and attend numerous appointments for health care.

The study, funded by the National Institute for Health Research (NIHR), tested a new approach to caring for people with three or more long-term conditions, which aimed to improve their health-related quality of life and experience of patient-centred care, and reduce their burden of illness and treatment compared with usual care. The '3D' approach, which encourages clinicians to think broadly about the different dimensions of health, simplify complex drug treatment and consider mental health (depression) as well as physical health, was designed to treat the whole person and overcome the disadvantages of treating individual conditions in isolation.

Professor Chris Salisbury, from the University of Bristol's Centre for Academic Primary Care and lead author of the study, said: "Existing treatment is based on guidelines for each separate condition meaning that patients often have to attend multiple appointments for each disease which can be repetitive, inconvenient and inefficient. They see different nurses and doctors who may give conflicting advice. Patients with multiple physical health problems frequently get depressed and they also sometimes complain that no-one treats them as a 'whole person' or takes their views into account.

"Internationally, there is broad consensus about the key components of an approach to improve care for people with multimorbidity but we found little evidence about their effectiveness. We incorporated these components in the 3D approach, including a regular review of patients' problems according to their individual circumstances. We were surprised to find no evidence of improved quality of life for patients as a result of the intervention but this was balanced by significant improvements in patients' experience of care.

"The question now is whether improved patient experience is sufficient justification for this approach. Given that improving patient experience is one of the triple arms of health care, alongside improving health and reducing costs, our view is that providing care that significantly improves patients' experience is justification in itself."

Patients from 33 primary care practices in Bristol, Greater Manchester and Ayrshire in Scotland took part in the study. Roughly half of the practices offered the 3D intervention (to 797 patients) and the other half offered usual care (to 749 patients). Patients were aged 18 and older. The 3D intervention replaced disease-focused reviews of each health condition with one comprehensive 'patient-centred' review every six months with a nurse and doctor. These reviews focused on discussing the problems that bothered the patients most, how to improve their quality of life and how to improve management of their health conditions. A pharmacist reviewed the patient's medication. A health care plan was then devised with each patient and reviewed six months later.

All measures of patient experience showed benefits after 15 months, with patients widely reporting that they felt their care was more joined up and attentive to their needs. However, there was no difference between the two groups in their reported quality of life at the end of the study period.

Credit: 
University of Bristol

The odds of living to 110-plus level out -- once you hit 105

Want to be a supercentenarian? Reaching the ripe old age of 110 is within reach - if you survive the perilous 90s and make it to 105, when death rates level out, according to a study of extremely old Italians led by the University of California, Berkeley, and Sapienza University of Rome.

Researchers tracked the death trajectories of nearly 4,000 residents of Italy who were aged 105 and older between 2009 and 2015. They found that the chances of survival for these longevity warriors plateaued once they made it past 105.

The findings, to be published in the June 29 issue of the journal Science, challenge previous research that claims the human lifespan has a final cut-off point. To date, the oldest human on record, Jeanne Calment of France, died in 1997 at age 122.

"Our data tell us that there is no fixed limit to the human lifespan yet in sight," said study senior author Kenneth Wachter, a UC Berkeley professor emeritus of demography and statistics. "Not only do we see mortality rates that stop getting worse with age, we see them getting slightly better over time."

Specifically, the results show that people between the ages of 105 and 109, known as semi-supercentenarians, had a 50/50 chance of dying within the year and an expected further life span of 1.5 years. That life expectancy was projected to be the same for 110-year-olds, or supercentenarians, hence the plateau.

The trajectory for nonagenarians is less forgiving. For example, the study found that Italian women born in 1904 who reached age 90 had a 15 percent chance of dying within the next year, and six years, on average, to live. If they made it to 95, their odds of dying within a year increased to 24 percent and their life expectancy from that point on dropped to 3.7 years.
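
Those pairs of one-year death probabilities and remaining life expectancies are mutually consistent under the simplest survival model, a constant hazard. The check below is back-of-envelope arithmetic, not the authors' estimator, and for nonagenarians, whose risk is still climbing year on year, it is only approximate.

    % Constant hazard \lambda: one-year death probability q and remaining life expectancy
    \[
      q = 1 - e^{-\lambda}
      \quad\Longrightarrow\quad
      E[T] = \frac{1}{\lambda} = \frac{-1}{\ln(1 - q)}
    \]
    % q = 0.50 (age 105+): E[T] = 1/ln 2      \approx 1.4 years (reported: 1.5)
    % q = 0.15 (age 90):   E[T] = -1/ln 0.85  \approx 6.2 years (reported: ~6)
    % q = 0.24 (age 95):   E[T] = -1/ln 0.76  \approx 3.6 years (reported: 3.7)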

Overall, Wachter and fellow researchers tracked the mortality rate of 3,836 Italians -- supercentenarians and semi-supercentenarians -- born between 1896 and 1910, using the latest data from the Italian National Institute of Statistics.

They credit the institute for reliably tracking extreme ages due to a national validation system that measures age at time of death to the nearest day: "These are the best data for extreme-age longevity yet assembled," Wachter said.

As humans live into their 80s and 90s, mortality rates surge due to frailty and a higher risk of such ailments as heart disease, dementia, stroke, cancer and pneumonia.

Evolutionary demographers like Wachter and study co-author James Vaupel theorize that those who survive do so because of demographic selection and/or natural selection. Frail people tend to die earlier while robust people, or those who are genetically blessed, can live to extreme ages, they say.

Wachter notes that similar lifecycle patterns have been found in other species, such as flies and worms.

"What do we have in common with flies and worms?" he asked. "One thing at least: We are all products of evolution."

Credit: 
University of California - Berkeley

Path to zero emissions starts out easy, but gets steep

image: Workers in a steel mill.

Image: 
Public domain

Washington, DC--Carbon dioxide emissions from human activities must approach zero within several decades to avoid risking grave damage from the effects of climate change. This will require creativity and innovation, because some types of industrial sources of atmospheric carbon lack affordable emissions-free substitutes, according to a new paper in Science from a team of experts led by the University of California, Irvine's Steven Davis and Carnegie's Ken Caldeira.

In addition to heating, cooling, lighting, and powering individual vehicles--subjects that are often the focus of the emissions discussion--there are other major contributors to atmospheric carbon that are much more challenging to address. These tough nuts to crack include air travel; long-distance freight by truck, train, or ship; and the manufacture of steel and cement.

"We wanted to look closely at the barriers and opportunities related to the most difficult-to-decarbonize services," said lead author Davis.

The barriers they analyzed included:

The expected increase in demand for air travel and freight shipping, sectors that already contribute about 6 percent of global emissions.

The manufacture of cement and steel, which release 1.3 and 1.7 billion tons of carbon dioxide into the atmosphere annually, respectively, and are also expected to grow as infrastructure demands increase, particularly in the developing world.

The necessity of generating and transmitting electricity with near 100 percent reliability, despite variability in renewable energy sources such as wind and solar.

"Taken together these 'tough-nut' sources account for a substantial fraction of global emissions," Caldeira said. "To effectively address them, we will need to develop new processes and systems. This will require both development of new technologies and coordination and integration across industries."

Possibilities that the team analyzed include, but aren't limited to, the synthesis of energy-dense hydrogen or ammonia-based fuels for aviation and shipping, new furnace technologies for manufacturing concrete and steel, and tools to capture and safely store hydrocarbon emissions.

But the costs of implementing and scaling up these technologies to overhaul the transportation, construction, and energy storage industries will present hurdles, they warn. Plus, it will be necessary to overcome the inertia of existing systems and policies to create something new and better.

"We don't have a crystal ball to foresee what technologies will exist a century from now," Caldeira continued. "But we know that people will want buildings, transportation, and other energy services and we can try to design our energy system so that it is able to take advantage of new inventions as they come along."

Credit: 
Carnegie Institution for Science