Tech

Cancer cells mediate immune suppression in the brain

Scientists have long believed that the brain protects itself from an aggressive immune response to keep down inflammation. However, that evolutionary control may work against it when a cancer cell attempts to spread to the brain, researchers at the University of Notre Dame have discovered.

In newly published research in the journal Cell, researchers showed that one type of cell important for immunity, called a myeloid cell, can suppress the immune response -- which has the effect of allowing breast cancer cells to metastasize to the brain and form secondary tumors there.

"We wanted to understand how the brain immune environment responds to the tumor, and there are so many different cells, and so many changes," said Siyuan Zhang, the Dee Associate Professor in the Department of Biological Sciences, a researcher for Harper Cancer Research Institute and a co-author on the paper. "The traditional belief was that the process described in this paper would be anti-tumor, but in our case, after a lot of experimenting, we discovered it is a proponent of metastasis."

Through single-cell sequencing -- not powerful enough even a few years ago for this type of work -- and an imaging technique, the researchers discovered that a myeloid cell type called microglia promotes the outgrowth of breast cancer cells that have spread to the brain through the expression of several proteins. The microglia release one protein -- an immune cell-attracting protein called CXCL10 -- to recruit more microglia to the metastasis. All these microglia express a protein named VISTA, which normally serves as protection against brain inflammation. But when faced with a cancer cell, this two-part process suppresses important T-cells, the cells that heighten the body's immune response and would usually prevent the spread of cancer throughout the body.

The activation of the VISTA checkpoint had not previously been known as a potential promoter of brain metastasis, said the paper's lead author, Ian Guldner, a graduate student in Zhang's lab. In addition to using a mouse model for the research, the team used data-mining techniques to validate how human brains would respond.

Clinically, the discovery is relevant because antibodies that block VISTA in humans have already been developed, Guldner said. However, significant additional work is needed to ensure the safe and effective use of VISTA-blocking antibodies in people with brain metastases.

Learning about the structures within cells in the brain will help researchers not only understand cancer, but also degenerative diseases such as Parkinson's, multiple sclerosis and Alzheimer's, Zhang said.

"The brain immune system is a very active field, since brain cells are dysregulated during the aging process," Zhang said. "There is so much to learn."

Credit: 
University of Notre Dame

A question of affinity

image: Dye molecules in modern organic solar cells have led to a two-fold improvement in efficiency compared with the widely used fullerenes.

Image: 
MPI-P

Most of us are familiar with silicon solar cells, which can be found on the rooftops of modern houses. These cells are made of two silicon layers, which contain different atoms such as boron and phosphorus. When combined, these layers direct charges generated by the absorbed sunlight towards the electrodes - this (photo)current can then be used to power electronic devices.

The situation is somewhat different in organic solar cells. Here, two organic materials are mixed together rather than arranged in a layered structure. They are blends of different types of molecules. One type, the acceptor, likes to take electrons from the other, the donor. To quantify how readily this "electron transfer" between the materials takes place, one measures the so-called "electron affinity" and "ionization energy" of each material. These quantities indicate how easy it is to add an electron to a molecule or to extract one from it. In addition to determining the efficiency of organic solar cells, electron affinity and ionization energy also control other material properties, such as color and transparency.

By pairing donor and acceptor materials, one creates a solar cell. In an organic solar cell, light-particles ("photons") transfer their energy to electrons. Excited electrons leave behind positive charges, called "holes". These electron-hole pairs are then separated at the interface between the two materials, driven by the differences in the electron affinity and ionization energy.
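
Written schematically (notation of my own choosing, not taken from the paper), with D for the donor and A for the acceptor, the two interfacial offsets that drive this separation are

\[
\Delta \mathrm{EA} = \mathrm{EA}_{A} - \mathrm{EA}_{D}, \qquad \Delta \mathrm{IE} = \mathrm{IE}_{A} - \mathrm{IE}_{D},
\]

where the first offset pulls the excited electron onto the acceptor and the second pulls the hole onto the donor.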

Until now, scientists assumed that both electron affinity and ionization energy are equally important for solar cell functionality. Researchers from KAUST and MPI-P have now discovered that in many donor-acceptor blends, it is mainly the difference in ionization energy between the two materials that determines the efficiency of the solar cell. The combination of optical spectroscopy experiments performed in the group of Frédéric Laquai at KAUST with computer simulations performed in the group of Denis Andrienko at MPI-P, in the department headed by Kurt Kremer, allowed precise design rules for molecular dyes to be derived, aimed at maximizing solar cell efficiency.

"In the future, for example, it would be conceivable to produce transparent solar cells that only absorb light outside the range visible to humans - but then with the maximum efficiency in this range," says Denis Andrienko, co-author of the study published in the journal "Nature Materials". "With such solar cells, whole fronts of houses could be used as active surface", Laquai adds.

The authors envision that these studies will allow them to reach 20 % solar cell efficiency, a target that industry has in mind for cost-effective application of organic photovoltaics.

Credit: 
Max Planck Institute for Polymer Research

Tailoring 2D materials to improve electronic and optical devices

image: Researchers led by Shengxi Huang, assistant professor of electrical engineering and biomedical engineering at Penn State, have altered 2D materials to enhance light emission and increase signal strength.

Image: 
Penn State College of Engineering

New possibilities for future developments in electronic and optical devices have been unlocked by recent advancements in two-dimensional (2D) materials, according to Penn State researchers.

The researchers, led by Shengxi Huang, assistant professor of electrical engineering and biomedical engineering at Penn State, recently published the results of two separate but related discoveries regarding their success with altering the thin 2D materials for applications in many optical and electronic devices. By altering the material in two different ways -- atomically and physically -- the researchers were able to enhance light emission and increase signal strength, expanding the bounds of what is possible with devices that rely on these materials.

In the first method, the researchers modified the atomic makeup of the materials. In commonly used 2D materials, researchers rely on the interaction between the thin layers, known as van der Waals interlayer coupling, to create charge transfer that is then used in devices. However, this interlayer coupling is limited because the charges are traditionally distributed evenly on the two sides of each layer.

In order to strengthen the coupling, the researchers created a new type of 2D material known as Janus transition metal dichalcogenides by replacing the atoms on one side of the layer with a different type of atom, creating an uneven distribution of charge.

"This [atomic change] means the charge can be distributed unevenly," Huang said. "That creates an electric field within the plane, and can attract different molecules because of that, which can enhance light emission."

Also, if the van der Waals interlayer coupling can be tuned to the right level by twisting the layers to a certain angle, it can induce superconductivity, carrying implications for advancements in electronic and optical devices.

In the second method of altering 2D materials to improve their capabilities, the researchers strengthened the signal that resulted from an energy up-conversion process by taking a layer of MoS2, a common 2D material that is usually flat and thin, and rolling it into a roughly cylindrical shape.

The energy conversion that takes place in the MoS2 material is a nonlinear optical effect in which light shined into the material is re-emitted at double its original frequency; this frequency doubling is the up-conversion.
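
The release does not name the effect, but frequency doubling of this kind is what nonlinear optics calls second-harmonic generation, whose converted intensity grows with the square of the input intensity:

\[
I(2\omega) \propto \left|\chi^{(2)}\right|^{2} I(\omega)^{2}.
\]

Because the second-order susceptibility is small in most materials, the converted signal is weak, which is why geometric enhancements of the kind described below matter.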

"We always want to double the frequency in this process," Huang said. "But the signal is usually very weak, so enhancing the signal is very important."

By rolling the material, the researchers achieved a more than 95-fold improvement in signal strength.

Now, Huang plans to put these two advances together.

"The next step for our research is answering how we can combine atomic engineering and shape engineering to create better optical devices," she said.

Credit: 
Penn State

Focused efforts needed to help health IT reach its promise

Despite significant investment in health information technology such as computerized health records and clinical decision support, leveraging the technology to improve the quality of care will require significant and sustained effort by health systems, according to a new RAND Corporation study.

In order to accelerate change, better mechanisms for creating and disseminating best practices are needed, in addition to providing advanced technical assistance to health systems, according to the analysis based on in-depth interviews with leaders from 24 health systems.

The study is published online by the journal Healthcare.

"Health systems are spending the most effort on foundational activities such as standardizing data and work processes that may not directly improve performance, but lay the groundwork for doing so," said Robert S. Rudin, the study's lead author and a senior information scientist at RAND, a nonprofit research organization. "Our study findings may help explain why the hoped-for IT-enabled transformation in health care has not yet occurred."

The study suggests the policy debate should move beyond federal incentives and requirements to adopt health IT. Instead, there should be a reorientation toward supporting efforts that create and disseminate best practices for how to optimally leverage the technology to improve performance. Payment reform is helping to introduce better incentives but may be insufficient.

"Leveraging IT to improve performance requires significant and sustained effort by
health systems, in addition to significant investments in hardware and software," Rudin said. "To accelerate change, better mechanisms for creating and disseminating best practices and providing advanced technical assistance are needed."

Health system executives said that standardizing data and analytics across an organization was necessary to use the data for future activities to achieve high performance, such as monitoring areas for improvement. Health systems that were less advanced in this area were just beginning to establish a data analytics department, and planned for it to be operational within a few years.

"Some health systems clearly had developed or adopted best practices for using health IT that others had not," Rudin said. "It is likely that many health systems are spending considerable effort rediscovering the same lessons that others have already mastered. If lessons could be disseminated better, it could make a huge difference."

Federal legislation passed in 2009 prompted a majority of physicians and hospitals to adopt electronic health records. The investment was expected to improve the health of Americans and the performance of the nation's health care system.

Despite the investment, the study notes that the performance of health systems across the U.S. continues to lag. The information technology revolution that has catalyzed transformations in industries such as finance and commerce has yet to yield the quality and cost improvements in health care that policymakers intended.

Researchers from the RAND Center of Excellence on Health System Performance collected information from 24 health systems across four states (California, Washington, Minnesota and Wisconsin).

A total of 162 executives from the health systems were interviewed about their experiences implementing health IT and whether IT is enabling them to make meaningful changes in quality and cost control.

The researchers found a series of IT-related activities that could lead to higher performance, which were sorted into two broad categories: laying the foundation for performance improvement and actually using IT to improve performance. While the types of activities described were similar across health systems, some health systems were more notably advanced than others in their progress within these activities.

For health IT to make a big change in performance, the researchers say health systems may need direct help to accelerate change, such as through the widespread dissemination of proven best practices, and more targeted technical and implementation assistance.

"The benefits of health information technology won't come overnight - there's no silver bullet," Rudin said "Most health systems are working on building the foundation. It suggests patience and sustained effort over years and even decades is needed to realize the major benefits of health IT."

Credit: 
RAND Corporation

Hurricanes pack a bigger punch for Florida's west coast

image: Dr. Joanne Muller (left) and Ilexxis Morales (right) using a hand coring technique, with a 3-meter core and core candles around the aluminum pipe, in Florida's Indian River Lagoon.

Image: 
James Javaruski.

Boulder, Colo., USA: Hurricanes, the United States' deadliest and most destructive weather disasters, are notoriously difficult to predict. With the average storm intensity as well as the proportion of storms that reach category 4 or 5 likely to increase, more accurate predictions of future hurricane impacts could help emergency officials and coastal populations better prepare for such storms--and ultimately, save lives.

Such predictions rely on historical records that reveal cyclic changes, such as the El Niño-Southern Oscillation, that can affect hurricane frequency. But the short observational records that exist for many regions, including Florida's East Coast, are inadequate for detecting climate patterns that fluctuate over longer timeframes.

Now new research presented Wednesday at the annual meeting of The Geological Society of America is extending Florida's hurricane record thousands of years back in time--and hinting at a surprise finding.

"There has been little to no research done on the hurricane record for Florida's East Coast," explains Ilexxis Morales, a graduate student in the Environmental Science program at Florida Gulf Coast University and the study's lead author. "The national hurricane database for this area currently only extends back to the 1850s," she says.

But what that record suggests, says Morales, is quite intriguing, especially with respect to intense (category 3-5) storms. "It shows that at least for the past 170 years, Florida's Atlantic Coast has been hit by fewer intense hurricanes than the state's Gulf Coast," she says.

To better understand this discrepancy, Morales and her Florida Gulf Coast University co-authors, Joanne Muller and James Javaruski, collected sediment cores from a series of lagoons tucked behind narrow barrier islands along the state's eastern coast. Their analysis shows that, in contrast to the dark organic matter that makes up most of the cores, hurricanes leave behind a coarser deposit distinctive enough to be called a "tempest deposit."

"When a large storm comes through the area," says Morales, "it picks up light-colored sand from the beach and deposits it in the lagoon." Because the grains of sand deposited by large storms are coarser than the organic-rich muds, the researchers can detect ancient tempest deposits using simple grain-size analyses.

After identifying the tempest deposits (called tempestites), the team used a variety of methods, including lead-210 dating (measured with a germanium detector) and radiocarbon dating, to determine their ages. While still preliminary, the results from the seven cores the researchers have analyzed to date suggest that there are fewer visible tempestites in the East Coast cores than in those analyzed from the West Coast.

The results hint that the pattern of more major hurricanes hitting Florida's Gulf Coast may extend thousands of years back in time. Morales speculates this difference could be due to the shifting position of the Bermuda High, a semi-permanent ridge of high pressure that can affect a hurricane's direction. "When the Bermuda High is in a more northeasterly position, hurricanes tend to track along Florida's East Coast and up to the Carolinas," says Morales. "When it shifts southwestward towards the U.S., the high tends to push storms into the Gulf of Mexico instead." Sea-surface temperatures can also help explain the difference, says Morales. "Normally the Atlantic is colder than the Gulf, and this colder water makes it harder for hurricanes to sustain their strength," she explains.

Similar "paleotempestology" studies have been conducted in other locations that are also susceptible to hurricanes, including Texas, Louisiana, New England, and even Australia, and the results have a number of practical applications. "This data will go to the national hurricane database, which will then help meteorologists better predict storm paths," Morales says. The data will also help show which areas are more susceptible to hurricane damage, enabling insurance companies to better adjust hurricane-insurance rates and developers to select building sites less susceptible to storm surge.

Once complete, says study co-author James Javaruski, the longer storm record could help researchers determine whether changes observed in it can be attributed to human-induced climate change. The findings can also offer insight into what could happen in the future. "If we see in other studies that sea surface temperatures were increasing over a certain time frame and find that hurricanes also increased over that same time frame," Javaruski says, "it can give us a good idea of what to expect as we artificially raise sea surface temperatures now."

Credit: 
Geological Society of America

USTC develops single crystalline quaternary sulfide nanobelts

Copper-based quaternary sulfide nanomaterials, especially Cu-Zn-In-S (CZIS) and Cu-Zn-Ga-S (CZGS), which consist of non-toxic elements, are attractive candidates for solar photocatalytic hydrogen production due to their tunable bandgaps, good chemical and thermal stability, environmental benignity, and facile synthesis from abundant and inexpensive starting materials. Unfortunately, their low electrical conductivity, the rapid recombination of photogenerated electrons and holes, and the scarcity of accessible surface active sites have greatly limited their photocatalytic performance.

Recently, the research group led by Prof. YU Shuhong at the University of Science and Technology of China designed a simple colloidal method, assisted by oleylamine and 1-dodecanethiol, to synthesize single crystalline wurtzite CZIS nanobelts as well as single crystalline wurtzite CZGS nanobelts. The research article, entitled "Single crystalline quaternary sulfide nanobelts for efficient solar-to-hydrogen conversion," was published in Nature Communications on Oct. 15.

The researchers first used first-principles density functional theory (DFT) calculations to explore the reaction Gibbs energy (ΔGH) of the (0001), (1010), and (1011) facets of wurtzite CZIS. The calculations showed that the (0001) facet had the smallest binding strength to atomic hydrogen. Following the Bell-Evans-Polanyi principle, the researchers expected the (0001) facet to be the most favorable surface for photocatalytic hydrogen production on CZIS.
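
The release gives only the symbol ΔGH; in DFT screens of this kind, the quantity is commonly evaluated with the standard expression

\[
\Delta G_{\mathrm{H}} = \Delta E_{\mathrm{H}} + \Delta E_{\mathrm{ZPE}} - T\,\Delta S_{\mathrm{H}},
\]

where ΔE_H is the computed adsorption energy of atomic hydrogen on the facet, ΔE_ZPE the zero-point-energy correction and ΔS_H the entropy change on adsorption; facets with ΔG_H close to zero, binding hydrogen neither too strongly nor too weakly, are generally the most active for hydrogen evolution. Whether the authors used exactly this form is an assumption on my part.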

The researchers then designed a simple colloidal method, assisted by oleylamine and 1-dodecanethiol, to synthesize single crystalline wurtzite CZIS nanobelts (NBs) exposing the (0001) facet, as well as single crystalline wurtzite CZGS NBs with the same exposed facet. The as-prepared nanobelt photocatalysts show excellent composition-dependent photocatalytic performance for both CZIS and CZGS nanobelts under visible-light irradiation (λ>420 nm) without a co-catalyst.

This work shows the significance of surface engineering of quaternary sulfide photocatalysts for achieving better performance. The design method can be extended to other semiconductor material systems, enabling novel photocatalysts that use low-cost elements to efficiently catalyze specific reactions.

Credit: 
University of Science and Technology of China

Researchers prove titanate nanotubes composites enhance photocatalysis of hydrogen

image: The titanate nanotubes (TNTs) composites enhanced the photocatalytic selectivity for H2 generation from formic acid better than Pt/TiO2. In addition, intensified electronic interactions occur between the components of TNTs and the Pt atoms in terms of the strong metal-support interaction, consequently influencing the behavior of photocatalysts. Therefore, the photocatalyst formed by Pt and TNTs has higher photocatalytic performance than TiO2 from a 20% v/v methanol solution under UV and visible light irradiation.

Image: 
Hsiu-Yu Chen

In a paper published in NANO, researchers from National Taiwan University examined the photocatalytic performance of titanate nanotubes (TNTs) against commonly used titanium dioxide (TiO2) and found that the TNTs performed better.

In the study, TiO2 was used as a reference support and compared with TNTs synthesized by a facile method. The results showed that platinum-loaded TNTs (Pt/TNTs) fabricated using a microwave heating process enhanced hydrogen evolution from methanol to a greater extent than Pt/TiO2. The high surface area of the TNTs improves the adsorption of methanol on the active sites and prevents the formation of agglomerated fine Pt particles.

Additionally, the high surface area led to an increased contact area between Pt and Ti atoms, which enhanced the strong metal-support interaction and increased H2 production. The absorption spectra of the TNTs also shifted further toward the visible-light region after Pt loading, improving the selectivity of formic acid decomposition toward CO2. Pt/TNTs therefore show considerably high photocatalytic efficiency and are viable in further applications as promising photocatalysts.

TNTs offer higher active surface area than TiO2 nanoparticles. The high surface area provides short diffusion paths for electrons and holes, prompting them to transfer to the surface and reducing the recombination of electrons and holes. Also, X-ray Photoelectron Spectroscopy (XPS) results of the paper showed negative shifts of the Pt binding energies and positive shifts of Ti binding energies due to the strong metal-support interaction between Pt and TNTs. Thus, the remarkably high photocatalytic efficiency of TNT composites facilitates their application as promising photocatalysts.

It is also worth noting that one mole of HCOOH decomposes into either one mole of CO2 and one mole of H2, or one mole of CO and one mole of H2O. It is therefore important to increase the selectivity of formic acid decomposition toward CO2 evolution. The results show that bare TNTs and Pt/TNTs generated less CO than bare TiO2 and Pt/TiO2. This may be attributed to the inability of CO to diffuse into the pores of the TNTs: the kinetic diameter of CO (0.38 nm) is larger than that of CO2 (0.33 nm).
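
Written out, the two competing channels (standard chemistry, merely restating the stoichiometry above) are

\[
\mathrm{HCOOH} \longrightarrow \mathrm{H_2} + \mathrm{CO_2} \quad \text{(dehydrogenation, the desired route)},
\]
\[
\mathrm{HCOOH} \longrightarrow \mathrm{H_2O} + \mathrm{CO} \quad \text{(dehydration)},
\]

so suppressing the dehydration channel is what raises the CO2, and hence H2, selectivity discussed above.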

Will a different photocatalyst structure promote the photocatalytic selectivity of formic acid toward H2? The researchers showed that the tubular TNT composites enhanced photocatalytic hydrogen generation more effectively than TiO2.

Credit: 
World Scientific

Water consumption for trees is calculated in order to design precision irrigation systems

image: Almond tree plantation

Image: 
Universidad de Córdoba / PxHere

In 1995, the severe drought that devastated Spain left some irrigated farms without water supplies. Though it has not happened again since, climate change increases the chance of this threat. For farmers growing annual crops, such an event would mean losing a year's work, but those with groves of trees risk losing not only a year's production but their long-term investment as well.

A research team from the University of Cordoba and the Institute for Sustainable Agriculture at the Spanish National Research Council in Cordoba has been working for years on several projects to improve water management and maximize the productivity of tree crops such as olives, almonds and citrus fruits. One of their lines of research is based on the fact that when there is a water shortage, trees transpire less, get warmer, and end up producing less.

In their latest research project, they studied how an indicator called Crop Water Stress Index (abbreviated to CWSI), based on detecting temperature increase in trees with water stress, is related to relative water consumption in an almond grove. Tree water consumption or transpiration is very difficult to measure whereas a tree's temperature is easily taken using remote sensors, similar to those used on a daily basis during the pandemic to detect people with fevers. In their latest work, this group experimentally demonstrated for the first time that there is a relationship between relative transpiration and the CWSI in almond trees. So, farmers can find out at any moment if the trees are consuming water at 80-90% of their capacity, meaning within optimal levels, or if they have high levels of stress and urgently need to be supplied with more water.
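
For reference, one widely used empirical form of the index (the classic formulation of Idso and colleagues; the study may use a variant) compares the measured canopy-air temperature difference with its bounds for a well-watered and a fully stressed crop:

\[
\mathrm{CWSI} = \frac{(T_c - T_a) - (T_c - T_a)_{\mathrm{LL}}}{(T_c - T_a)_{\mathrm{UL}} - (T_c - T_a)_{\mathrm{LL}}},
\]

where T_c is the canopy temperature, T_a the air temperature, and LL and UL denote the lower (well-watered) and upper (non-transpiring) baselines. The index runs from 0 (no stress) to 1 (maximum stress), which is what allows it to stand in for relative transpiration.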

"This indicator, the CWSI, has the advantage that relative water consumption can be determined via remote sensing, using drones or manned planes and a map of the transpiration in different areas of a plantation can be obtained. In the future, satellites will most likely be used to do this work very precisely on big plantations", explains Elías Fereres, Professor Emeritus of the Agronomy Department at the University of Cordoba and a member of the research team, which is led by Victoria González Dugo from the Institute for Sustainable Agriculture at the Spanish National Research Council.

Therefore, these CWSI maps will make it possible to irrigate different areas of a farm differently, according to the water each needs at any given moment, thus maximizing production with the minimum necessary water resources or those available at the time. This research falls within the framework of precision irrigation, a new approach that uses the most advanced technology to irrigate at an optimal level, supplying the exact amount of water to every part of the grove and avoiding losses. "The aim is to use water effectively and where it is most needed," points out Elías Fereres.

Though the research was performed on almond plantations, the findings could be applied to other tree crops such as olive trees, which are so important to the economy of Andalusia and often suffer periods of water shortage.

Credit: 
University of Córdoba

Cloud-based framework leads to improved efficiency in disaster-area management

For the first time, researchers have implemented a cloud-based, highly efficient control system to aid first responders in disaster-area management.

When disaster strikes, nothing is certain. From hazardous chemical leaks to destroyed communications infrastructure, the conditions encountered by first responders can be cumbersome, time-consuming to navigate, and lethal. These conditions often hinder efforts to adequately assess the damage to infrastructure, identify hazards, and locate and rescue victims of the disaster.

With the advent of autonomous vehicles, namely unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), efforts have been made to utilize these platforms to increase operational capabilities as well as protect first responders. For example, the use of UGVs during the disaster at the Fukushima nuclear power plant enabled the quick detection of hazardous sources as well as victims. In scenarios where many sensors attached to UAVs and UGVs are implemented and must collaborate with each other, however, the protocols that have been in use often degrade in speed and efficiency.

"The existing unmanned vehicle collaborative frameworks commonly lack the means of efficient and realistic communication protocols to enable effective wireless communication among resource-constrained devices in the framework" said Abenezer Girma at North Carolina A&T State University and first author of the study, published in IEEE/CAA Journal of Automatica Sinica.

Conventional protocols, such as HTTP, are often inadequate for larger scenarios and lack key power requirements for UAVs. "Most of the existing frameworks cannot extend network service to the autonomous systems, such as UAVs, working in network-denied disaster environment and they do not also have a nearby UAVs' charging point," Girma said.

The research team from North Carolina A&T State University has, for the first time, designed a cloud-based autonomous system framework built on the standard messaging protocol for the internet-of-things (IoT). The framework remains robust in network-denied environments by using each vehicle, together with a clustering algorithm, to maximize the network coverage area. In addition to this resilience, the speed of communication between the unmanned sensors was increased through the use of the IoT messaging protocol. The team also addressed the UAV recharging requirement by implementing a control mechanism for landing a UAV on a moving UGV, both to recharge it and to transport it over long distances. These developments will greatly assist first responders in disaster-area management and likely save lives.
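
The release refers to "the standard messaging protocol for the internet-of-things" without naming it; in practice this usually means MQTT, a lightweight publish/subscribe protocol. Purely as an illustrative sketch (hypothetical broker address, topic layout and sensor payload; not the team's code), a resource-constrained UGV node could publish telemetry as follows using the paho-mqtt client:

```python
# Illustrative MQTT telemetry publisher for a UGV node (all names are assumptions).
import json
import time

import paho.mqtt.client as mqtt

BROKER = "broker.example.local"            # assumed cloud/edge broker
TOPIC = "disaster/zone3/ugv-01/telemetry"  # assumed topic layout

client = mqtt.Client(client_id="ugv-01")   # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()                        # network I/O handled in a background thread

while True:
    reading = {
        "ts": time.time(),
        "lat": 35.12,                      # made-up position and sensor values
        "lon": -80.84,
        "gas_ppm": 4.2,
    }
    # QoS 1 (at-least-once) is a common compromise for lossy disaster-area networks.
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(5)
```

A monitoring station would subscribe to a wildcard topic such as disaster/+/+/telemetry to aggregate readings from every vehicle in the field.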

In the future, the researchers plan to build on their design by adding even more sophisticated capabilities by implementing both computer vision and machine learning for better monitoring of disaster areas.

Credit: 
Chinese Association of Automation

Endangered trees in Guam contribute to ecosystem diversity and health

image: Root nodules from Guam's Serianthes nelsonii legume tree within which bacteria convert atmospheric nitrogen into a fixed form that the tree can exploit.

Image: 
University of Guam

Research at the University of Guam has shown that the decomposition of leaf litter from three threatened tree species releases nitrogen and carbon into the soil for use by other plants. The results illuminate the importance of biodiversity and the role certain organisms play in extracting nitrogen and carbon from the atmosphere and sequestering these elements in the biosphere. The findings were published in the September issue of the MDPI journal Nitrogen (doi:10.3390/nitrogen1020010).

A critically important nitrogen source

Carbon and nitrogen are abundant in the atmosphere, but the atmospheric forms are not directly used by plants. Green plants possess the ability to fix the atmospheric carbon through the process of photosynthesis, and this occurs without the aid of symbiotic microorganisms. Other plants have nitrogen-fixing microorganisms inside their roots, which allows them to directly benefit from atmospheric nitrogen.

Nitrogen is required in great quantities to sustain plant health, but most plants absorb the essential nitrogen from the soil. The source of this soil nitrogen is largely through the death and decay of leaves and roots of plants that form symbioses with nitrogen-fixing microorganisms.

"This means in a forest community, the trees that possess this specialized symbiosis are critically important as a nitrogen source for the other members of the forest," said Dr. Adrian Ares, associate director of the Western Pacific Tropical Research Center, where the research was conducted.

The model trees studied in the Guam research included the cycad species Cycas micronesica, the legume species Intsia bijuga, and the legume species Serianthes nelsonii.

The symbiosis between legume plants and the bacteria that grow inside root nodules has been heavily studied for decades, as many of the world's food crops are legumes and their contributions to the protein needs of humans are dependent on the nitrogen from their root symbionts. The symbiosis between cycad plants and the nitrogen-fixing cyanobacteria that grow inside specialized root structures, however, has been less studied.

"A greater understanding of the cycad-cyanobacteria symbiosis is of critical importance to understanding the biochemistry of cycad plants," said Benjamin Deloso, a cycad specialist at the University of Guam.

Rate of leaf decomposition

The research approach drew on a global theme in plant research known as the leaf economics spectrum. The Serianthes nelsonii leaflets are small and thin and do not require many resources to construct. In contrast, the Cycas micronesica leaflets are large and thick and require copious resources to construct. The resources needed to build Intsia bijuga leaflets fall in between those of the other two species.

"The principles that govern the leaf economics spectrum predicted that the speed of release of carbon and nitrogen from the dead leaf material would be rapid for Serianthes nelsonii and slow for Cycas micronesica," Deloso said.

The predictions were verified by the study. About 80% of the carbon and nitrogen pool was released from the Serianthes nelsonii litter in less than three months, and complete decomposition occurred in less than one year. In contrast, the release of carbon and nitrogen from Cycas micronesica litter was gradual and 25% to 30% of the initial carbon and nitrogen were still locked in the remaining litter after a full year of decomposition. The Intsia bijuga leaf litter decomposition rates were intermediate.
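
A single-exponential decay model is a common way to summarize such litter data (the release does not say which model the authors fitted); plugging in approximate fractions read off the numbers above gives a feel for how far apart the decay constants are:

```python
# Rough decay constants k (per year) from m(t)/m(0) = exp(-k * t),
# using approximate fractions quoted in the text; illustrative only.
import math

observations = {
    "Serianthes nelsonii": (0.20, 0.25),  # roughly 80% released within about 3 months
    "Cycas micronesica": (0.275, 1.0),    # roughly 25-30% still remaining after 1 year
}

for species, (fraction_remaining, years) in observations.items():
    k = -math.log(fraction_remaining) / years
    print(f"{species}: k is roughly {k:.1f} per year")
# Serianthes nelsonii: k is roughly 6.4 per year
# Cycas micronesica: k is roughly 1.3 per year
```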

Knowledge for conservation decisions

One primary outcome from the research was the verification that these tree species modulate localized soil processes in a highly contrasting way, and these contrasts increase spatial heterogeneity in a manner that improves ecosystem health.

"Two of these tree species are endangered, and this new knowledge about the services that they provide to Guam's ecosystems is a critical part of developing improved conservation decisions," Ares said.

Credit: 
University of Guam

Scientists discover how a common mutation leads to 'night owl' sleep disorder

image: Cryptochrome is one of four main clock proteins that drive daily biological rhythms. This illustration shows a "pocket" in the clock protein complex where binding of the "tail" of the cryptochrome protein helps regulate the timing of the biological clock.

Image: 
G. Carlo Parico

A new study by researchers at UC Santa Cruz shows how a genetic mutation throws off the timing of the biological clock, causing a common sleep syndrome called delayed sleep phase disorder.

People with this condition are unable to fall asleep until late at night (often after 2 a.m.) and have difficulty getting up in the morning. In 2017, scientists discovered a surprisingly common mutation that causes this sleep disorder by altering a key component of the biological clock that maintains the body's daily rhythms. The new findings, published October 26 in Proceedings of the National Academy of Sciences, reveal the molecular mechanisms involved and point the way toward potential treatments.

"This mutation has dramatic effects on people's sleep patterns, so it's exciting to identify a concrete mechanism in the biological clock that links the biochemistry of this protein to the control of human sleep behavior," said corresponding author Carrie Partch, professor of chemistry and biochemistry at UC Santa Cruz.

Daily cycles in virtually every aspect of our physiology are driven by cyclical interactions of clock proteins in our cells. Genetic variations that change the clock proteins can alter the timing of the clock and cause sleep phase disorders. A shortened clock cycle causes people to go to sleep and wake up earlier than normal (the "morning lark" effect), while a longer clock cycle makes people stay up late and sleep in (the "night owl" effect).

Most of the mutations known to alter the clock are very rare, Partch said. They are important to scientists as clues to understanding the mechanisms of the clock, but a given mutation may only affect one in a million people. The genetic variant identified in the 2017 study, however, was found in around one in 75 people of European descent.

How often this particular mutation is involved in delayed sleep phase disorder remains unclear, Partch said. Sleep behavior is complex--people stay up late for many different reasons--and disorders can be hard to diagnose. So the discovery of a relatively common genetic variation associated with a sleep phase disorder was a striking development.

"This genetic marker is really widespread," Partch said. "We still have a lot to understand about the role of lengthened clock timing in delayed sleep onset, but this one mutation is clearly an important cause of late night behavior in humans."

The mutation affects a protein called cryptochrome, one of four main clock proteins. Two of the clock proteins (CLOCK and BMAL1) form a complex that turns on the genes for the other two (period and cryptochrome), which then combine to repress the activity of the first pair, thus turning themselves off and starting the cycle again. This feedback loop is the central mechanism of the biological clock, driving daily fluctuations in gene activity and protein levels throughout the body.
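
As a toy illustration of how a delayed negative-feedback loop of this kind sets a period, and of how letting the repressor linger stretches that period, here is a minimal Goodwin-type oscillator. The three variables loosely stand for activator mRNA, activator protein and nuclear repressor; the model and all parameter values are my own illustrative choices, not the clock model used by the authors.

```python
# Toy Goodwin-type negative-feedback oscillator (illustrative only).
import numpy as np

def period(repressor_clearance=0.10, hill_n=12.0, dt=0.01, t_max=4000.0):
    a, b, c, d, e = 1.0, 0.1, 0.1, 0.1, 0.1   # production and clearance rates (arbitrary units)
    x, y, z = 0.1, 0.1, 0.1                   # mRNA, protein, nuclear repressor
    steps = int(t_max / dt)
    z_trace = np.empty(steps)
    for i in range(steps):
        dx = a / (1.0 + z ** hill_n) - b * x  # transcription, repressed by z
        dy = c * x - d * y                    # translation
        dz = e * y - repressor_clearance * z  # repressor build-up and removal
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        z_trace[i] = z
    half = steps // 2                         # discard the transient half
    peaks = [i for i in range(half + 1, steps - 1)
             if z_trace[i - 1] < z_trace[i] > z_trace[i + 1]]
    return float(np.mean(np.diff(peaks)) * dt)  # mean peak-to-peak interval

print("period, baseline repressor clearance:", round(period(0.10), 1))
print("period, repressor cleared more slowly:", round(period(0.08), 1))
# In this toy model the second period comes out longer, echoing the idea that a
# repressor which hangs on longer (here, tighter cryptochrome binding) slows the clock.
```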

The cryptochrome mutation causes a small segment on the "tail" of the protein to get left out, and Partch's lab found that this changes how tightly cryptochrome binds to the CLOCK:BMAL1 complex.

"The region that gets snipped out actually controls the activity of cryptochrome in a way that leads to a 24-hour clock," Partch explained. "Without it, cryptochrome binds more tightly and stretches out the length of the clock each day."

The binding of these protein complexes involves a pocket where the tail segment that is missing in the mutant protein normally competes with, and interferes with, the binding of the rest of the complex.

"How tightly the complex partners bind to this pocket determines how quickly the clock runs," Partch explained. "This tells us we should be looking for drugs that bind to that pocket and can serve the same purpose as the cryptochrome tail."

Partch's lab is currently doing just that, conducting screening assays to identify molecules that bind to the pocket in the clock's molecular complex. "We know now that we need to target that pocket to develop therapeutics that could shorten the clock for people with delayed sleep phase disorder," she said.

Partch has been studying the molecular structures and interactions of the clock proteins for years. In a study published earlier this year, her lab showed how certain mutations can shorten clock timing by affecting a molecular switch mechanism, making some people extreme morning larks.

She said the new study was inspired by the 2017 paper on the cryptochrome mutation from the lab of Nobel Laureate Michael Young at Rockefeller University. The paper had just come out when first author Gian Carlo Parico joined Partch's lab as a graduate student, and he was determined to discover the molecular mechanisms responsible for the mutation's effects.

Credit: 
University of California - Santa Cruz

Scientists explain the paradox of quantum forces in nanodevices

image: Scientists proposed a new approach to describe the interaction of metals with electromagnetic fluctuations.

Image: 
Peter the Great St.Petersburg Polytechnic University

Researchers from Peter the Great St. Petersburg Polytechnic University (SPbPU) have proposed a new approach to describe the interaction of metals with electromagnetic fluctuations (i.e., with random bursts of electric and magnetic fields). The results have great potential for application both in fundamental physics and in creating nanodevices for various purposes. The article was published in the European Physical Journal C.

The operation of microdevices used in modern technology is influenced by the Casimir force caused by electromagnetic fluctuations. This is the force of attraction that acts between two surfaces in a vacuum. Such an interaction between electrically neutral bodies located less than one micrometer apart was theoretically described in the middle of the 20th century by Academician Evgeny Lifshitz. In some cases, however, Lifshitz's theory contradicted experimental results. A mysterious paradox emerged from precise measurements of Casimir forces in nanodevices.
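
For orientation, and as standard textbook material rather than part of the new work: in the idealized limit of two perfectly conducting plates at zero temperature, Lifshitz's theory reduces to Casimir's original result for the attractive pressure between plates a distance a apart,

\[
\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,a^{4}}.
\]

The steep 1/a^4 dependence is why the force only matters at separations below about a micrometer, and hence in nanodevices; real metals deviate from this ideal value, which is where the Lifshitz theory, and the puzzle described below, come in.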

"The predictions of the Lifshitz's theory were in agreement with the measurement results only if the energy losses of conduction electrons in metals were not taken into account in calculations. These losses, however, do exist! It is common knowledge that electric current slightly heats the wire. In the literature, this situation is called the Casimir puzzle," explains Galina Klimchitskaya, Professor of the Institute of Physics, Nanotechnology and Telecommunications, SPbPU.

The scientists at the Polytechnic University were able to take the energy losses of electrons in metals into account and, at the same time, bring the predictions of the Lifshitz theory into agreement with high-precision measurements of the Casimir force. Their new approach to describing the interaction of metals with electromagnetic fluctuations recognizes that there are two types of fluctuations: real fluctuations (similar to observed electromagnetic fields) and so-called virtual fluctuations, which cannot be directly observed (similar to the virtual particles that constitute the quantum vacuum).

"The proposed approach leads to approximately the same contribution of real fluctuations to the Casimir force, as the commonly used one, but significantly changes the contribution of virtual fluctuations. As a result, Lifshitz's theory comes into agreement with experiment, while taking into account the energy losses of electrons in metals," comments Vladimir Mostepanenko, Professor of the Institute of Physics, Nanotechnology and Telecommunications, SPbPU.

The published results refer to nonmagnetic metals. In the future, the researchers plan to extend them to materials with ferromagnetic properties. This will open the way to the reliable design and fabrication of ever more miniaturized nanodevices that operate under the influence of the Casimir force.

Credit: 
Peter the Great Saint-Petersburg Polytechnic University

Back to the future of climate

image: ETH researchers trying to find siderites near Los Angeles (CA).

Image: 
Joep van Dijk / ETH Zurich

Between 57 and 55 million years ago, the geological epoch known as the Paleocene ended and gave way to the Eocene. At that time, the atmosphere was essentially flooded by the greenhouse gas carbon dioxide, with concentration levels reaching 1,400 ppm to 4,000 ppm. So it's not hard to imagine that temperatures on Earth must have resembled those of a sauna. It was hot and humid, and the ice on the polar caps had completely disappeared.

The climate in that era provides researchers with an indication of how today's climate might develop. While pre-industrial levels of atmospheric CO2 stood at 280 ppm, today's levels measure 412 ppm. Climate scientists believe that CO2 emissions generated by human activity could drive this figure up to 1,000 ppm by the end of the century.

Using tiny siderite minerals in soil samples taken from former swamps, a group of researchers from ETH Zurich, Pennsylvania State University and CASP in Cambridge (UK) reconstructed the climate that prevailed at the end of the Paleocene and in the early Eocene. Their study has just been published in the journal Nature Geoscience.

The siderite minerals formed in an oxygen-free soil environment that developed under dense vegetation in swamps, which were abundant along the hot and humid coastlines in the Paleocene and Eocene.

To reconstruct the climatic conditions from the equator to the polar regions, the researchers studied siderites from 13 different sites. These were all located in the northern hemisphere, covering all geographical latitudes from the tropics to the Arctic.

Prevailing humidity

"Our reconstruction of the climate based on the siderite samples shows that a hot atmosphere also comes with high levels of moisture," says lead author Joep van Dijk, who completed his doctorate in ETH Professor Stefano Bernasconi's group at the Geological Institute from 2015 to 2018.

Accordingly, between 57 and 55 million years ago, the mean annual air temperature at the equator where Colombia lies today was around 41 °C. In Arctic Siberia, the average summer temperature was 23 °C.

Using their siderite "hygrometer", the researchers also demonstrated that the global moisture content in the atmosphere, or the specific humidity, was much higher in the Paleocene and Eocene eras than it is today. In addition, water vapour remained in the air for longer because specific humidity increased at a greater rate than evaporation and precipitation. However, the increase in specific humidity was not the same everywhere.
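
For context, the coupling between temperature and the atmosphere's capacity to hold moisture is usually expressed with the Clausius-Clapeyron relation, quoted here only as the standard reference scaling and not as part of the siderite method:

\[
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L_v}{R_v T^{2}} \approx 7\%\ \text{per kelvin near present-day surface temperatures},
\]

so each degree of warming raises the saturation vapour pressure, and with it the ceiling on specific humidity, by roughly seven per cent.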

Since they had access to siderite from all latitudes, the researchers were also able to study the spatial pattern of the specific humidity. They found that the tropics and higher latitudes would have had very high humidity levels.

The researchers attribute this phenomenon to water vapour that was transported to these zones from the subtropics. Specific humidity rose the least in the subtropics. While evaporation increased, precipitation decreased. This resulted in a higher level of atmospheric water vapour, which ultimately reached the poles and the equator. And the atmospheric vapour carried heat along with it.

Climate scientists still observe the flow of water vapour and heat from the subtropics to the tropics today. "Latent heat transport was likely to have been even greater during the Eocene," van Dijk says. "And the increase in the transport of heat to high latitudes may well have been conducive to the intensification of warming in the polar regions," he adds.

Not enough time to adapt

These new findings suggest that today's global warming goes hand in hand with increased transport of moisture, and by extension heat, in the atmosphere. "Atmospheric moisture transport is a key process that reinforces warming of the polar regions," van Dijk explains.

 "Although the CO2 content in the atmosphere was much higher back then than it is today, the increase in these values took place over millions of years," he points out. "Things are different today. Since industrialisation began, humans have more than doubled the level of atmospheric CO2 over a period of just 200 years," he explains. In the past, animals and plants had much more time to adapt to the changing climatic conditions. "They simply can't keep up with today's rapid development," van Dijk says.

Strenuous search for siderite crystals

Finding the siderites was not easy. For one thing, the minerals are tiny, plus they occur solely in fossil swamps, which today are often found only several kilometres below the Earth's surface. This made it difficult or even impossible for the researchers to dig up siderites themselves. "We made several expeditions to sites where we believed siderites might occur but we found them at only one of those locations," van Dijk says.

Fortunately, one of the study's co-authors - Tim White, an American from Pennsylvania State University - owns the world's largest collection of siderite.

Credit: 
ETH Zurich

Butterfly color diversity due to female preferences

image: Dorsal wing color by sex of European butterflies.

Image: 
Kalle Tunström

Butterflies have long captured our attention due to their amazing color diversity. But why are they so colorful? A new publication led by researchers from Sweden and Germany suggests that females influence butterfly color diversity by mating with colorful males.

In many species, especially birds and butterflies, males are typically more colorful than females, a phenomenon known as dichromatism. In many dichromatic species, the more conspicuous sex is more vulnerable to predation. Certainly, the male peacock is a much easier target than the more camouflaged hen. Explaining why one member of a species would place itself in more danger was a challenge to Charles Darwin's early views on evolution by natural selection, as Darwin envisioned natural selection acting to reduce such risks.

Examples of dichromatism were in fact among the issues that led him to develop his theory of sexual selection, in which elaborate male traits can evolve through female preference for conspicuous males, even in the face of the increased dangers such males encounter.

Today, naturalists and biologists alike generally attribute the exaggerated coloration of males to sexual selection. However, when we see a species in which males are more colorful than females, sexual selection is not necessarily the only answer. An alternative route to dichromatism might begin with males and females both being very colorful, followed by natural selection acting upon females to make them less conspicuous, perhaps due to the cost of being easier prey. Stated another way, perhaps females become less colorful so they are better camouflaged and therefore preyed upon less. The argument that natural selection could give rise to dichromatism was posited by Darwin's contemporary, Alfred Russel Wallace. Darwin and Wallace in fact argued for decades about the origins of dichromatism in birds and butterflies.

The debate between Darwin and Wallace ran so long because, without knowing how males and females looked in the evolutionary past, either sexual selection or natural selection could explain dichromatism. Since they had no way of formally assessing what species used to look like, their argument had few routes to resolution.

This is where researchers from Sweden (Stockholm University and Lund University) and Germany (University of Marburg) have recently made progress, by developing statistical means for inferring the ancestral color states of males and females over evolutionary time.

To do this, they first reconstructed the evolutionary relationships among European butterflies and placed them in a time-calibrated framework. Then they scanned scientific drawings of the males and females of all these butterfly species and used that color information, in its evolutionary context, to estimate the direction of color evolution for each sex and to relate it to the amount of dichromatism per species. "Tracking evolutionary colour vectors through time made it possible to quantify both the male and female contribution to dichromatism," says Dr. Dirk Zeuss from the University of Marburg, who is a coauthor of the new study.
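
To make the "colour vectors" idea concrete, here is a minimal sketch of how per-branch colour change for each sex can be tallied once ancestral colours have been estimated. The tree, colours and branch lengths are entirely hypothetical, and the code is not the authors' pipeline.

```python
# Hypothetical example: each branch stores estimated ancestral and descendant RGB
# colours for males and females, plus the branch length in millions of years.
import math

def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

branches = [
    # (male_ancestor, male_descendant, female_ancestor, female_descendant, length_myr)
    ((90, 60, 40), (200, 120, 30), (95, 70, 45), (110, 80, 50), 5.0),
    ((200, 120, 30), (230, 160, 20), (110, 80, 50), (115, 85, 55), 3.0),
    ((90, 60, 40), (100, 65, 42), (95, 70, 45), (90, 68, 44), 4.0),
]

male_rate = sum(colour_distance(ma, md) / t for ma, md, fa, fd, t in branches) / len(branches)
female_rate = sum(colour_distance(fa, fd) / t for ma, md, fa, fd, t in branches) / len(branches)

# Dichromatism (male-female colour distance) at the start and end of each branch.
for ma, md, fa, fd, t in branches:
    print(f"dichromatism {colour_distance(ma, fa):.0f} -> {colour_distance(md, fd):.0f} over {t} Myr")

print(f"mean male colour rate:   {male_rate:.1f} units per Myr")
print(f"mean female colour rate: {female_rate:.1f} units per Myr")
```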

"We find that the rates of color evolution in males are faster than in females", says Dr. Wouter van der Bijl, the lead author of the study. While this finding itself suggested that males might be the target of sexual selection, further analysis was needed to rule out alternative explanations. For example, male color could be evolving rapidly when species are already dichromatic, but not when males and females start to first diverge from each other in color. By modelling both the changes in dichromatism and the changes in male and female color over evolutionary time, the researchers could calculate that changes in male color are twice as important to the evolution of dichromatism than changes in female color.

This finding suggests that Darwin was right, as it is consistent with female preference, and thus sexual selection for colorful males, being the driving force in color evolution. The researchers have thereby provided some resolution to the 150-year-old argument between Darwin and Wallace about the origins of dichromatism in butterflies, finding that Darwin's model of dichromatism evolution, rather than Wallace's, better explains the observed patterns.

Credit: 
Stockholm University

Energy at risk: the impact of climate change on supply and costs

The energy sector is the biggest source of greenhouse gas emissions and is therefore mainly responsible for the observed human-caused changes in the climate system, but it is also vulnerable to the changing climate.

To understand future climate impacts on energy systems, a team of scientists, including researchers from the CMCC Foundation, reviewed the literature on the subject and identified key knowledge gaps in the existing research. The paper "Impacts of climate change on energy systems in global and regional scenarios," published in Nature Energy, summarizes 220 papers from the worldwide literature on the projected impacts of climate change on energy supply and energy demand, at both global and regional scales.

The study reveals that, at a global level, climate change is expected to influence energy demand by affecting the duration and magnitude of diurnal and seasonal heating and cooling requirements. Indeed, due to the rising temperatures, an increase in cooling demand and a decrease in heating demand is expected in the future.
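
Heating and cooling needs of this kind are conventionally tracked with heating and cooling degree days. A minimal sketch (made-up daily temperatures and the commonly used 18 °C base, both assumptions on my part) shows how a warmer temperature series shifts demand from heating towards cooling:

```python
# Heating/cooling degree days from daily mean temperatures (illustrative values only).
BASE_C = 18.0  # commonly used comfort base temperature

def degree_days(daily_means_c):
    hdd = sum(max(BASE_C - t, 0.0) for t in daily_means_c)  # proxy for heating demand
    cdd = sum(max(t - BASE_C, 0.0) for t in daily_means_c)  # proxy for cooling demand
    return hdd, cdd

week_today = [12, 14, 19, 23, 25, 17, 15]    # hypothetical daily mean temperatures, deg C
week_warmer = [t + 2.0 for t in week_today]  # the same week under 2 degrees of warming

print("today:  HDD=%.1f  CDD=%.1f" % degree_days(week_today))
print("warmer: HDD=%.1f  CDD=%.1f" % degree_days(week_warmer))
# Warming lowers HDD and raises CDD: less heating demand, more cooling demand.
```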

"There is a sort of double impact" explain Enrica De Cian and Shouro Dasgupta, researchers at the CMCC Foundation, Ca' Foscari University of Venice, and RFF-CMCC European Institute on Economics and the Environment, among the authors of the study. "On the one hand, as cooling demand is increasing, especially in the hot season, the energy systems are working at capacity. But at the same time, this peak energy demand in summer is coinciding with reduced transmission and distribution capacity, because high temperatures and extreme heat events will affect energy infrastructures - especially power grids and transmission lines - reducing their efficiency and thus the energy reliability".

Moreover, while thermal electricity generation bears most of the risk from heatwaves and droughts, transmission and renewable technologies are highly sensitive to many other extreme climate-related events, such as cold waves, wildfires, flooding, heavy snow, ice storms and windstorms. The expected change in the frequency and strength of such events may result in more power grid and transmission line interruptions, affecting energy costs and supply.

"Understanding the impacts of climate change on the energy systems at a global level is an important input for the Sixth Assessment Report of the IPCC (Intergovernmental Panel on Climate Change) and for the implementation of the Paris Agreement. Moreover, results from this work can be used for studies related to the implementation of the Sustainable Development Goals (SDGs), and in particular to clarify synergies and trade-offs between SDG7 (Affordable and Clean Energy) and SDG13 (Climate Action)", explains Dasgupta. "But deep studies at a regional and national level are also critical, because they allow us to face also behavioural issues: people's behaviour is extremely important when it comes to our energy demand in the future."

At the regional level, results from the literature are more mixed and uncertain. Large regional differences have been observed by the authors, not only due to geographic peculiarities, but also to methodological differences between studies. "Despite the uncertainties, which highlight the need for more research - especially in the context of renewable energy - we have regional results that it is worth considering", specifies De Cian. "For example, the strongest climate change impacts on the energy sector are expected in South Asia and Latin America, two emerging economies that have in common a high population density. This information is critical when it comes to plan climate change adaptation strategies."

The wide variety of methodologies and datasets that are currently being used in the literature limits the scope of assessing climate change impacts on the energy sector, leading to significant differences in results across various studies. For this reason, the authors recommend a consistent multi-model assessment framework to support regional-to-global-scale energy planning.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change