Tech

Odors as navigational cues for pigeons

image: Scientists were able to identify volatile organic compounds that can be used by carrier pigeons for olfactory navigation.

Image: 
MPI of Animal Behavior

Odors are known to be essential for navigation during homeward orientation and migration in some bird species, yet little was known about their chemical composition. An international team has now identified volatile organic compounds that can be used for olfactory navigation by homing pigeons and demonstrated the existence of suitable regional chemical gradients in the air on a landscape scale in Tuscany.

Many bird species can find their way home even after being brought to remote or unfamiliar locations. Over 40 years of research on homing pigeons have shown that environmental odors play a crucial role in this process. Yet the chemical identity of these odors has remained a mystery. An international team of scientists from the Max Planck Institutes for Chemistry (Mainz) and of Animal Behavior (Radolfzell), and the Universities of Konstanz, Pisa and Mainz has now identified potential chemical navigational cues that could be used by homing pigeons. Based on the collected data, the researchers were also able to create regional olfactory maps for marine emissions, biogenic compounds, and anthropogenic mixed air and to establish the existence of regional navigable chemical gradients in the air.

During the scientific mission, which took place in 2017 and 2018 in the Italian region of Tuscany, the researchers measured a suite of airborne volatile organic compounds (VOCs) over a period of months at the pigeons' home aviary. Some of these compounds are emitted by trees, such as the pine fragrance one smells during a walk in the forest. Other pungent natural emissions come from the sea, while still other VOCs are emitted by industry. The measurements enabled regional maps to be constructed connecting chemicals with wind direction and speed. Additional measurements were taken in selected regional forest environments and from the air using an ultralight plane flying at 180 meters - the average altitude of a flying pigeon. The scientists merged the information gathered during the field campaigns with GPS tracks obtained from released birds. In this way, they generated multiple regional, horizontal and vertical spatial chemical gradients that can form the basis of an olfactory map.

Olfactory maps based on environmental odors

"Ornithologists from Germany and Italy have shown in more than 40 years of experiments, that pigeons use airborne odors to navigate home," explains Nora Zannoni, post-doctoral researcher at Max Planck Institute for Chemistry and the study's first author. Those results have shown that pigeons construct an olfactory map based on the distribution of environmental odors they have perceived over several months at the home aviary. This knowledge is then used as a compass at the point of release to return back home from unfamiliar sites. "By proving the existence of regional chemical gradients in the air around the experimental site we provide support for the olfactory navigation hypothesis and with atmospheric measurements we have found which chemicals can be used for navigation," adds Zannoni. Some compounds come from forested areas (monoterpenes) or the sea (DMS) while others are emitted from cities and industrial complexes (trimethylbenzene), spots that act like chemical lighthouses.

One of the biggest challenges during this research campaign was its multidisciplinary character. "We had to combine the different approaches of several scientific disciplines - atmospheric and analytical chemistry, ornithology and animal behavior, computer science and statistics," says Martin Wikelski, managing director at the Max Planck Institute of Animal Behavior.

"It's amazing really," adds Jonathan Williams the project leader at the MPIC's Atmospheric Chemistry Department in Mainz. "We uncovered these chemical gradients using several tones of ultrasensitive scientific equipment, but the same complex odor information can be analyzed and converted to a regional map by a 400-gram pigeon."

Credit: 
Max-Planck-Gesellschaft

68% of deaths from firearms are from self-harm, majority in older men in rural regions

A new study of gun injuries and deaths in Ontario found that 68% of firearm-related deaths were from self-harm, and they most often occurred in older men living in rural regions, pointing to the need for targeted prevention efforts. The study is published in CMAJ (Canadian Medical Association Journal).

There were 2009 injuries secondary to self-harm over the study period, and "this is equivalent to a firearm-related injury ... every 3 days; 92% of these injuries were fatal," writes Dr. David Gomez, a trauma surgeon at St. Michael's Hospital, Unity Health Toronto, adjunct staff scientist at ICES, and assistant professor at the University of Toronto, with coauthors.

In Canada, nonfatal firearm-related injuries are largely unmeasured.

To better understand injuries and deaths from firearms, a major cause of morbidity and mortality, researchers looked at data on all residents of Ontario with a valid OHIP number who were injured or killed by firearms between 2002 and 2016. They used hospital discharge and provincial death records to categorize injuries as assault, unintentional, self-harm or undetermined intent.

Some findings:

Injuries related to assault accounted for 40% of nonfatal injuries and 25% of deaths. Young men living in low-income neighborhoods were overrepresented in this group.

Injuries and death rates were higher in rural areas versus urban areas, largely due to higher rates of self-harm in these regions.

Injury patterns varied by age, with assault most common in people aged 15-34, and self-harm being most common among those aged 45 or older.

Five of 10 Census divisions with the highest injury rates from assault were in the Greater Toronto Area and Hamilton.

Firearm injury rates varied over time, with a high of 4.71 per 100 000 in 2005/06, after which rates declined, followed by an increase in the last 2 years of the study (3.51 per 100 000 in 2015/16). Both peaks were driven by injuries from assault, as self-harm rates showed less variability.

Targeted initiatives are required to address the different causes of injuries in rural and urban regions.

"This urban-rural divide highlights the need for tailored interventions to address these 2 contrasting injury patterns," write the authors. "Our findings highlight the need for suicide-prevention strategies in rural Ontario targeted at men aged 45 or older. Restricting access to lethal methods by such means as safe-storage campaigns and reduction in firearm ownership must go hand in hand with depression screening and treatment."

"Firearm-related injuries and deaths in Ontario, Canada, 2002-2016: a population-based study" is published October 19, 2020.

Credit: 
Canadian Medical Association Journal

Creating perfect edges in 2D-materials

image: Researchers at Chalmers University of Technology present a method to finely control the edges of two-dimensional materials, by using a 'magic' chemical - hydrogen peroxide.

Image: 
Alexander Ericson/Yen Strandqvist/Chalmers University of Technology

Ultrathin materials such as graphene promise a revolution in nanoscience and technology. Researchers at Chalmers University of Technology, Sweden, have now made an important advance within the field. In a recent paper in Nature Communications they present a method for controlling the edges of two-dimensional materials using a 'magic' chemical.

"Our method makes it possible to control the edges - atom by atom - in a way that is both easy and scalable, using only mild heating together with abundant, environmentally friendly chemicals, such as hydrogen peroxide," says Battulga Munkhbat, a postdoctoral researcher at the Department of Physics at Chalmers University of Technology, and first author of the paper.

Materials as thin as just a single atomic layer are known as two-dimensional, or 2D, materials. The best-known example is graphene; molybdenum disulphide is its semiconductor analogue. Future developments within the field could benefit from studying one particular characteristic inherent to such materials - their edges. Controlling the edges is a challenging scientific problem, because they are very different from the main body of a 2D material. For example, a specific type of edge found in transition metal dichalcogenides (known as TMDs, such as the aforementioned molybdenum disulphide) can have magnetic and catalytic properties.

Typical TMD materials have edges which can exist in two distinct variants, known as zigzag or armchair. These alternatives are so different that their physical and chemical properties are not at all alike. For instance, calculations predict that zigzag edges are metallic and ferromagnetic, whereas armchair edges are semiconducting and non-magnetic. Similar to these remarkable variations in physical properties, one could expect that chemical properties of zigzag and armchair edges are also very different. If so, it could be possible that certain chemicals might 'dissolve' armchair edges, while leaving zigzag ones unaffected.

Now, such a 'magic' chemical is exactly what the Chalmers researchers have found - in the form of ordinary hydrogen peroxide. At first, the researchers were completely surprised by the new results.

"It was not only that one type of edge was dominant over the others, but also that the resulting edges were extremely sharp - nearly atomically sharp. This indicates that the 'magic' chemical operates in a so-called self-limiting manner, removing unwanted material atom-by-atom, eventually resulting in edges at the atomically sharp limit. The resulting patterns followed the crystallographic orientation of the original TMD material, producing beautiful, atomically sharp hexagonal nanostructures," says Battulga Munkhbat.

"An extremely fascinating development"

The new method, which includes a combination of standard top-down lithographic methods with a new anisotropic wet etching process, therefore makes it possible to create perfect edges in two-dimensional materials.

"This method opens up new and unprecedented possibilities for van der Waals materials (layered 2D materials). We can now combine edge physics with 2D physics in one single material. It is an extremely fascinating development," says Timur Shegai, Associate Professor at the Department of Physics at Chalmers and leader of the research project.

These and other related materials often attract significant research attention, as they enable crucial advances in nanoscience and technology, with potential applications ranging from quantum electronics to new types of nano-devices. These hopes are manifested in the Graphene Flagship, Europe's biggest ever research initiative, which is coordinated by Chalmers University of Technology.

To make the new technology available to research laboratories and high-tech companies, the researchers have founded a start-up company that offers high-quality, atomically sharp TMD materials. The researchers also plan to further develop applications for these atomically sharp metamaterials.

Credit: 
Chalmers University of Technology

Molecular design strategy reveals near infrared-absorbing hydrocarbon

image: as-indacenoterrylene is a bowl-shaped compound made only of hydrogen and carbon atoms that can absorb near infrared light.

Image: 
Norihito Fukui

Nagoya University researchers have synthesized a unique molecule with a surprising property: it can absorb near infrared light. The molecule is made only of hydrogen and carbon atoms and offers insights for making organic conductors and batteries. The details were published in the journal Nature Communications.

Organic chemist Hiroshi Shinokubo and physical organic chemist Norihito Fukui of Nagoya University work on designing new, interesting molecules using organic, or carbon-containing, compounds. In the lab, they synthesized an aromatic hydrocarbon called methoxy-substituted as-indacenoterrylene. This molecule has a unique structure, as its methoxy groups are located internally rather than at its periphery.

"Initially, we wanted to see if this hydrocarbon demonstrated novel phenomena due to its unique structure," says Fukui.

But during their investigations, the researchers discovered they could convert it into a new bowl-shaped hydrocarbon called as-indacenoterrylene.

"We were surprised to find that this new molecule exhibits near infrared absorption up to 1300 nanometers," Shinokubo explains.

What's unique about as-indacenoterrylene is not that it absorbs near infrared light. Other hydrocarbons can do this as well. as-indacenoterrylene is interesting because it does this despite being made of only 34 carbon and 14 hydrogen atoms, without containing other kinds of stabilizing atoms at its periphery.

When the scientists conducted electrochemical measurements, theoretical calculations, and other tests, they found that as-indacenoterrylene was intriguingly stable and also had a remarkably narrow gap between its highest occupied molecular orbital (HOMO) and its lowest unoccupied molecular orbital (LUMO). This is because the molecule has two electronically different subunits, one that donates and another that withdraws electrons. The narrow HOMO-LUMO gap makes it easier for electrons to become excited within the molecule.
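
As a rough back-of-envelope check (the conversion below is an illustrative estimate, not a figure from the paper), the reported absorption edge of 1300 nanometers corresponds to a photon energy, and hence an approximate optical HOMO-LUMO gap, given by the standard relation:

```latex
% Photon energy at the reported 1300 nm absorption edge (illustrative estimate)
E = \frac{hc}{\lambda} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{1300\ \text{nm}} \approx 0.95\ \text{eV}
```

In other words, the effective gap lies below about 1 eV, which is remarkably small for a compound made of nothing but carbon and hydrogen.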

"The study offers an effective guideline for the design of hydrocarbons with a narrow HOMO-LUMO gap, which is to fabricate molecules with coexisting electron-donating and electron-withdrawing subunits," says Fukui. "These molecules will be useful for the development of next-generation solid-state materials, such as organic conductors and organic batteries."

The team next plans to synthesize other near infrared-absorbing aromatic hydrocarbons based on the design concepts garnered in this current study.

Credit: 
Nagoya University

When honey flows faster than water

video: A quick explainer of the work, including sample data from the paper

Image: 
Aalto University

It's widely known that thick, viscous liquids - like honey - flow more slowly than low-viscosity liquids, like water. Researchers were surprised to find this behaviour flipped on its head when the liquids flowed through chemically coated capillaries. In fact, through these specially coated tubes, liquids a thousand times more viscous flow ten times faster.

The speed at which different fluids flow through pipes is important for a large range of applications: from industrial processes such as oil refineries to biological systems like the human heart. Traditionally, if you need to make a fluid flow faster through a pipe, you increase the pressure on it. This technique, however, has its limits; there is only so much pressure you can put into a pipe before you run the risk of bursting it. This is especially true for thin and narrow pipes, like the ones used in microfluidics for producing medicine and other complex chemicals, so researchers are investigating if they can increase the speed at which liquids flow through narrow tubes without having to increase the pressure.

In the paper published on 16 October in the journal Science Advances, researchers found that by coating the inside of the pipes with compounds that repel liquids, they could make viscous liquids flow faster than those with low viscosity.

'A superhydrophobic surface consists of tiny bumps that trap air within the coating, so that a liquid droplet resting on the surface sits as if on a cushion of air,' explains Professor Robin Ras, whose research team at Aalto University's Department of Applied Physics has made a range of interesting discoveries in the area of extremely water-repellent coatings, including recent papers in Science and Nature.

Superhydrophobic coatings themselves don't speed up the flow of the more viscous liquids. If you place a drop of honey and a drop of water on a superhydrophobic coated surface and then tilt the surface so gravity makes the droplets move, the low-viscosity water will flow down faster.

But when a droplet is confined to one of the very narrow tubes used in microfluidics, things change drastically. In this system, the superhydrophobic coating on the walls of the tube creates a small air gap between the inside wall of the tube and the outside of the droplet. 'What we found was that when a droplet is confined to a sealed superhydrophobic capillary, the air gap around the droplet is larger for more viscous liquids. This larger air gap is what allowed for the viscous fluids to move through the tube faster than the less viscous ones when flowing due to gravity,' says Dr Maja Vuckovac, the first author of the paper.

The size of the effect is quite substantial. Droplets of glycerol, a thousand times more viscous than water, flow through the tube more than ten times faster than water droplets. The researchers filmed the droplets as they moved through the tube, tracking not only how fast each droplet moved but also how the liquid flowed inside it. For viscous liquids, the liquid inside the droplet hardly moved around at all, whereas a fast mixing motion was detected in the lower-viscosity droplets.

'The crucial discovery is that the less-viscous liquids also managed to penetrate a bit into the air cushion surrounding the droplets, rendering a thinner air gap around these. This means that the air beneath a low-viscosity droplet in the tube couldn't move out of the way as fast as for a more viscous droplet with a thicker air gap. With less air managing to squeeze past the low-viscosity droplets, these were forced to move down the tube with a slower speed than their more viscous counterparts,' explains Dr Matilda Backholm, one of the researchers on the project.
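
A toy scaling argument, not the authors' fluid dynamics model, illustrates why the thickness of that air gap matters so much. In a sealed capillary the droplet can only descend as fast as air can squeeze back past it, and if the gap is treated as a thin lubrication film, the air flow it can carry grows roughly with the cube of its thickness. The gap values in the sketch below are invented purely for illustration:

```python
# Toy scaling sketch (not the authors' model): in a sealed superhydrophobic
# capillary, a falling droplet advances only as fast as air can flow back past
# it through the thin air gap. Treating that gap as a thin lubrication film,
# the air flow it can carry - and hence the droplet speed - scales roughly
# with the cube of the gap thickness. Gap values below are hypothetical.

def relative_droplet_speed(gap_um: float, reference_gap_um: float) -> float:
    """Droplet speed relative to a reference droplet, assuming v ~ gap**3."""
    return (gap_um / reference_gap_um) ** 3

water_gap = 1.0      # hypothetical thin air gap around a low-viscosity droplet (micrometers)
glycerol_gap = 2.5   # hypothetical thicker air gap around a viscous droplet (micrometers)

print(relative_droplet_speed(glycerol_gap, water_gap))  # ~15x faster in this toy picture
```

In this crude picture, a modestly thicker air gap already translates into an order-of-magnitude faster droplet, consistent with the trend the researchers observed.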

The team developed a fluid dynamics model that can be used to predict how droplets would move in tubes coated with different superhydrophobic coatings. They hope that further work on these systems could have significant applications for microfluidics, a type of chemical engineering technique that is used to precisely control liquids in small quantities and in manufacturing complex chemicals like medicines. By being able to predict how the coatings can be used to modify fluid flow, the coatings may be helpful for engineers developing new microfluidics systems.

Credit: 
Aalto University

A controllable membrane to pull carbon dioxide out of exhaust streams

A new system developed by chemical engineers at MIT could provide a way of continuously removing carbon dioxide from a stream of waste gases, or even from the air. The key component is an electrochemically assisted membrane whose permeability to gas can be switched on and off at will, using no moving parts and relatively little energy.

The membranes themselves, made of anodized aluminum oxide, have a honeycomb-like structure made up of hexagonal openings that allow gas molecules to flow in and out when in the open state. However, gas passage can be blocked when a thin layer of metal is electrically deposited to cover the pores of the membrane. The work is described in the journal Science Advances, in a paper by Professor T. Alan Hatton, postdoc Yayuan Liu, and four others.

This new "gas gating" mechanism could be applied to the continuous removal of carbon dioxide from a range of industrial exhaust streams and from ambient air, the team says. They have built a proof-of-concept device to show this process in action.

The device uses a redox-active carbon-absorbing material, sandwiched between two switchable gas gating membranes. The sorbent and the gating membranes are in close contact with each other and are immersed in an organic electrolyte to provide a medium for zinc ions to shuttle back and forth. These two gating membranes can be opened or closed electrically by switching the polarity of a voltage between them, causing ions of zinc to shuttle from one side to the other. The ions simultaneously block one side, by forming a metallic film over it, while opening the other, by dissolving its film away.

When the sorbent layer is open to the side where the waste gases are flowing by, the material readily soaks up carbon dioxide until it reaches its capacity. The voltage can then be switched to block off the feed side and open up the other side, where a concentrated stream of nearly pure carbon dioxide is released.

A system built with alternating sections of membrane that operate in opposite phases would allow continuous operation in a setting such as an industrial scrubber. At any one time, half of the sections would be absorbing the gas while the other half would be releasing it.

"That means that you have a feed stream coming into the system at one end and the product stream leaving from the other in an ostensibly continuous operation," Hatton says. "This approach avoids many process issues" that would be involved in a traditional multicolumn system, in which adsorption beds alternately need to be shut down, purged, and then regenerated, before being exposed again to the feed gas to begin the next adsorption cycle. In the new system, the purging steps are not required, and the steps all occur cleanly within the unit itself.

The researchers' key innovation was using electroplating as a way to open and close the pores in a material. Along the way the team had tried a variety of other approaches to reversibly close pores in a membrane material, such as using tiny magnetic spheres that could be positioned to block funnel-shaped openings, but these other methods didn't prove to be efficient enough. Metal thin films can be particularly effective as gas barriers, and the ultrathin layer used in the new system requires a minimal amount of the zinc material, which is abundant and inexpensive.

"It makes a very uniform coating layer with a minimum amount of materials," Liu says. One significant advantage of the electroplating method is that once the condition is changed, whether in the open or closed position, it requires no energy input to maintain that state. Energy is only required to switch back again.

Potentially, such a system could make an important contribution toward limiting emissions of greenhouse gases into the atmosphere, and even toward direct-air capture of carbon dioxide that has already been emitted.

While the team's initial focus was on the challenge of separating carbon dioxide from a stream of gases, the system could actually be adapted to a wide variety of chemical separation and purification processes, Hatton says.

"We're pretty excited about the gating mechanism. I think we can use it in a variety of applications, in different configurations," he says. "Maybe in microfluidic devices, or maybe we could use it to control the gas composition for a chemical reaction. There are many different possibilities."

Credit: 
Massachusetts Institute of Technology

Study explains the process that exacerbates MS

image: Research group, from the left: André Ortlieb Guerreiro-Cacais (researcher), Maja Jagodic (docent and group leader for research on MS epigenetics), Rasmus Berglund (doctoral student), and Tomas Olsson (professor).

Image: 
Ulf Sirborn

People with multiple sclerosis (MS) gradually develop increasing functional impairment. Researchers at Karolinska Institutet have now found a possible explanation for the progressive course of the disease in mice and how it can be reversed. The study, which is published in Science Immunology, can prove valuable to future treatments.

MS is a chronic inflammatory disease of the central nervous system (CNS) and one of the main causes of neurological functional impairment.

The disease is generally diagnosed between 20 and 30 years of age. It can cause severe neurological symptoms, such as loss of sensation and trembling, difficulties walking and maintaining balance, memory failure and visual impairment.

MS is a life-long disease with symptoms that most often gradually worsen over time.

In the majority of cases the disease comes in bouts with a certain amount of subsequent recovery. A gradual loss of function with time is, however, inevitable. Research has made great progress in treatments that reduce the frequency and damaging effects of these bouts.

"Despite these important breakthroughs, the disease generally worsens when the patient has had it for 10 to 20 years," says Maja Jagodic, docent of experimental medicine at the Department of Clinical Neuroscience and the Centre for Molecular Medicine, Karolinska Institutet. "There is currently only one, recently approved, treatment for what is called the secondary progressive phase. The mechanisms behind this progressive phase require more research."

Researchers at Karolinska Institutet have now shown that recovery from MS-like symptoms in mice depends on the ability of the CNS's own immune cells - microglia - to break down the remains of damaged cells, such as myelin.

The process was interrupted when the researchers removed a so-called autophagy gene, Atg7. Autophagy is a process by which cells normally break down and recycle their own proteins and other structural components.

Without Atg7 the ability of the microglia to clean away tissue residues created by the inflammation was reduced. These residues accumulated over time, which is a possible explanation for the progressiveness of the disease.

The study also shows how microglia from aged mice resemble the cells from young mice that lacked Atg7 in terms of deficiencies in this process, which had a negative effect on the course of the disease.

This is a significant result since increasing age is an important risk factor in the progressive phase of MS. The researchers also show how this process can be reversed.

"The plant and fungi-derived sugar Trehalose restores the functional breakdown of myelin residues, stops the progression and leads to recovery from MS-like disease." says doctoral student Rasmus Berglund. "By enhancing this process we hope one day to be able to treat and prevent age-related aspects of neuroinflammatory conditions."

Credit: 
Karolinska Institutet

Unprecedented energy use since 1950 has transformed humanity's geologic footprint

A new study coordinated by CU Boulder makes clear the extraordinary speed and scale of increases in energy use, economic productivity and global population that have pushed the Earth towards a new geological epoch, known as the Anthropocene. Distinct physical, chemical and biological changes to Earth's rock layers began around the year 1950, the research found.

Led by Jaia Syvitski, CU Boulder professor emerita and former director of the Institute of Alpine Arctic Research (INSTAAR), the paper, published today in Nature Communications Earth and Environment, documents the natural drivers of environmental change throughout the past 11,700 years--known as the Holocene Epoch--and the dramatic human-caused shifts since 1950. Such planetary-wide changes have altered oceans, rivers, lakes, coastlines, vegetation, soils, chemistry and climate.

"This is the first time that scientists have documented humanity's geological footprint on such a comprehensive scale in a single publication," said Syvitski, former executive director of the Community Surface Dynamics Modeling System, a diverse community of international experts from who study the interactions between the Earth's surface, water and atmosphere.

In the past 70 years, humans have exceeded the energy consumption of the entire preceding 11,700 years--largely through combustion of fossil fuels. This huge increase in energy consumption has then allowed for a dramatic increase in human population, industrial activity, pollution, environmental degradation and climate change.

The study is the result of work by the Anthropocene Working Group (AWG), an interdisciplinary group of scientists analyzing the case for making the Anthropocene a new epoch within the official Geological Time Scale, characterized by the overwhelming human impact on the Earth.

The word Anthropocene follows the naming convention for assigning geologically defined lengths of time and has come to embody the present time during which humans are dominating planetary-scale Earth systems.

In geological time, an epoch is longer than an Age but shorter than a Period, measured in tens of millions of years. Within the Holocene epoch, there are several Ages--but the Anthropocene is proposed as a separate Epoch within Earth's planetary history.

"It takes a lot to change the Earth's system," said Syvitski. "Even if we were to get into a greener world where we were not burning fossil fuels, the main culprit of greenhouse gases, we would still have a record of an enormous change on our planet."

Unambiguous markers of the Anthropocene

The 18 authors of the study compiled existing research to highlight 16 major planetary impacts caused by increased energy consumption and other human activities, spiking in significance around or since 1950.

Between 1952 and 1980, humans set off more than 500 thermonuclear explosions above ground as part of global nuclear weapons testing, which have forever left a clear signature of human-caused radionuclides--atoms with excess nuclear energy--on or near the surface of the entire planet.

Since about 1950, humans have also doubled the amount of fixed nitrogen on the planet through industrial production for agriculture, created a hole in the ozone layer through the industrial scale release of chlorofluorocarbons (CFCs), released enough greenhouse gasses from fossil fuels to cause planetary level climate change, created tens of thousands more synthetic mineral-like compounds than naturally occur on Earth and caused almost one-fifth of river sediment worldwide to no longer reach the ocean due to dams, reservoirs and diversions.

Humans have produced so many millions of tons of plastic each year since the middle of the 20th century that microplastics are "forming a near-ubiquitous and unambiguous marker of Anthropocene," according to the study.

Not all of these planetary level changes may define the Anthropocene geologically, according to Syvitski and her co-authors, but if present trends continue, they can lead to markers in the rock record that will.

Syvitski credits her time as director of INSTAAR from 1995 to 2007 for enabling her to bring together scientists from the different environmental disciplines needed for the study, including geology, biology, geography, anthropology and history.

In a similar way, she sees a need for people of different backgrounds and experiences around the world to come together to work toward solutions.

"We humans collectively got ourselves into this mess, we need to work together to reverse these environmental trends and dig ourselves out of it," said Syvitski. "Society shouldn't feel complacent. Few people who read the manuscript should come away without emotions bubbling up, like rage, grief and even fear."

Credit: 
University of Colorado at Boulder

Membranes for capturing carbon dioxide from the air

image: Technological solutions for CO2 emissions into the atmosphere should include a variety of approaches, as there is no single "silver bullet" solution. In this work, researchers from I2CNER, Kyushu University and NanoMembrane Technologies Inc. in Japan suggest using gas separation membranes as a tool for direct air capture. When combined with advanced technologies for CO2 conversion, the envisaged systems could be widely employed in a carbon-recycling, sustainable society.

Image: 
Kyushu University

Climate change caused by emissions of greenhouse gases into the atmosphere is one of the most important issues facing our society. Acceleration of global warming results in catastrophic heatwaves, wildfires, storms and flooding. The anthropogenic nature of climate change necessitates the development of novel technological solutions in order to reverse the current CO2 trajectory.

Direct capture of carbon dioxide (CO2) from the air (direct air capture, DAC) is one of a variety of negative emission technologies expected to help keep global warming below 1.5 °C, as recommended by the Intergovernmental Panel on Climate Change (IPCC). Extensive deployment of DAC technologies is needed to mitigate and remove so-called legacy carbon, or historical emissions. Effective reduction of the CO2 content in the atmosphere can be achieved only by extracting huge amounts of CO2, comparable to current global emissions. Current DAC technologies are mainly based on sorbent systems, in which CO2 is trapped in a solution or on the surface of porous solids coated with compounds with a high CO2 affinity. These processes are currently rather expensive, although the cost is expected to fall as the technologies are developed and deployed at scale.

The ability of membranes to separate carbon dioxide is well documented, and their usefulness is established for industrial processes. Unfortunately, their efficiency has so far been less than satisfactory for practical DAC operation.

In a recent paper, researchers from the International Institute for Carbon-Neutral Energy Research (I2CNER) at Kyushu University and NanoMembrane Technologies Inc. in Japan discussed the potential of membrane-based DAC (m-DAC), taking advantage of the state-of-the-art performance of organic polymer membranes. Based on process simulations, they showed that the targeted performance for m-DAC is achievable at competitive energy expense. They also showed that a multi-stage membrane separation process can preconcentrate atmospheric CO2 (0.04%) to 40%. This possibility, combined with advanced CO2 conversion technologies, may provide a realistic route to a circular CO2 economy.
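
To make the preconcentration figure more concrete, here is a minimal sketch, not the authors' process simulation, of how a cascade of membrane stages could raise the CO2 fraction from 0.04% toward 40%. It assumes an idealized binary CO2/N2 separation with a hypothetical selectivity of 30, a near-vacuum permeate and negligible stage cut, in which limit the permeate composition of each stage obeys y/(1-y) = S·x/(1-x):

```python
# Toy multi-stage membrane enrichment (illustrative assumptions, not the
# authors' simulation): ideal binary CO2/N2 separation, near-vacuum permeate,
# negligible stage cut, hypothetical selectivity S.

def permeate_fraction(x_feed: float, selectivity: float) -> float:
    """Ideal-limit CO2 mole fraction on the permeate side of one stage."""
    ratio = selectivity * x_feed / (1.0 - x_feed)
    return ratio / (1.0 + ratio)

x = 0.0004          # 0.04% CO2 in ambient air
S = 30.0            # assumed illustrative CO2/N2 selectivity
stage = 0
while x < 0.40:     # target: 40% CO2, as quoted above
    x = permeate_fraction(x, S)
    stage += 1
    print(f"after stage {stage}: CO2 = {100 * x:.2f}%")
```

Under these made-up assumptions the target is reached within a few stages; the actual stage count and energy cost depend on the membrane permeance, selectivity and operating pressures considered in the paper.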
Based on this finding, the Kyushu University team has initiated a government-supported Moonshot Research and Development Program (Program Manager: Dr. Shigenori Fujikawa). In this program, direct CO2 capture from the atmosphere by membranes and the subsequent conversion of the captured CO2 into valuable materials is the major development target.

Credit: 
Kyushu University, I2CNER

The mental health impact of pandemics for front line health care staff

Mental health problems such as Post-Traumatic Stress Disorder, anxiety and depression are common among healthcare staff during and immediately after pandemics - according to new research from the University of East Anglia.

Researchers investigated how treating patients in past pandemics such as SARS and MERS affected the mental health of front-line staff.

They found that almost a quarter of health-care workers (23.4 per cent) experienced PTSD symptoms during the most intense 'acute' phase of previous pandemic outbreaks - with 11.9 per cent of carers still experiencing symptoms a year on.

They also looked at data about elevated levels of mental distress and found that more than a third of health workers (34.1 per cent) experienced symptoms such as anxiety or depression during the acute phase, dropping to 17.9 per cent after six months. This figure however increased again to 29.3 per cent after 12 months or longer.

The team hope that their work will help highlight the impact that the Covid-19 pandemic could be having on the mental health of doctors and nurses around the world.

Prof Richard Meiser-Stedman, from UEA's Norwich Medical School, said: "We know that Covid-19 poses unprecedented challenges to the NHS and to healthcare staff worldwide.

"Nurses, doctors, allied health professionals and all support staff based in hospitals where patients with Covid-19 are treated are facing considerable pressure, over a sustained period.

"In addition to the challenge of treating a large volume of severely unwell patients, front line staff also have to contend with threats to their own physical health through infection, particularly as they have had to face shortages of essential personal protective equipment.

"The media has reported that healthcare workers treating coronavirus patients will face a 'tsunami' of mental health problems as a result of their work.

"We wanted to examine this by looking closely at the existing data from previous pandemics to better understand the potential impact of Covid-19.

"We estimated the prevalence of common mental health disorders in health care workers based in pandemic-affected hospitals. And we hope our work will help inform hospital managers of the level of resources required to support staff through these difficult times."

A team of trainee clinical psychologists - Sophie Allan, Rebecca Bealey, Jennifer Birch, Toby Cushing, Sheryl Parke and Georgina Sergi - all from UEA's Norwich Medical School, investigated how previous pandemics affected healthcare workers' mental health, with support from Prof Meiser-Stedman and Dr Michael Bloomfield, University College London.

They looked at 19 studies, which included data predominantly from the SARS outbreak in Asia and Canada and which tended to focus on the acute stage of the pandemic - during and up to around six weeks after the outbreak.

Sophie Allan said: "We found that post-traumatic stress symptoms were elevated during the acute phase of a pandemic and at 12 months post-pandemic.

"There is some evidence that some mental health symptoms such as Post Traumatic Stress symptoms get better naturally over time but we cannot be sure about this. The studies we looked at had very different methods - for example they used different questionnaires about mental health - so we need to be cautious about the results.

"We didn't find any differences between doctors and nurses experiencing PTSD or other psychiatric conditions, but the available data was limited and more research is needed to explore this.

"Overall there are not enough studies examining the impact of pandemics on the mental health of healthcare staff. More research is needed that focusses on Covid-19 specifically and looks at the mental health of healthcare workers longer-term," she added.

'The prevalence of common and stress-related mental health disorders in healthcare workers based in pandemic-affected hospitals: a rapid systematic review and meta-analysis' is published in the European Journal of Psychotraumatology on October 16, 2020.

Credit: 
University of East Anglia

Malice leaves a nasty smell

image: Part of the human brain contributing the most to the prediction of pain and olfactory disgust.

Image: 
UNIGE/Corradi-Dell'Acqua

Unhealthy behaviours trigger moral judgments that are similar to the basic emotions that contribute to our ability to survive. Two different hypotheses are to be found in the current scientific literature as to the identity of these emotions. Some researchers single out disgust, while others opt for pain. After developing a new approach to brain imaging, a research team from the University of Geneva (UNIGE) has come down on the side of disgust. The study, which can be found in Science Advances, shows that unhealthy behaviours trigger brain responses that are similar to those prompted by bad smells. The research also identifies for the first time a biomarker in the brain for disgust.

Disgust is a basic emotion linked to our survival. Smell provides information about the freshness of foodstuffs, while disgust means we can take action to avoid a potential source of poisoning. Following the same principle, pain helps us cope with any injuries we might suffer by activating our withdrawal reflexes. Psychologists believe that these types of survival reflexes might come into play in response to other people's bad behaviour.

Disgust or pain

"These connections have been demonstrated via associations between situations and sensations," begins Professor Corrado Corradi-Dell'Acqua, a researcher in UNIGE's Department of Psychology and the study's lead investigator. "For instance, if I drink something while reading an article about corruption that affects my moral judgment, I may find that my drink smells bad and tastes vile. Equally, the reverse is true: smells can generate inappropriate moral judgment. In concrete terms, if someone smells bad, other people tend to make the judgment that they're unhealthy."

While some studies suggest that disgust is involved in the process, others opt for pain, since they consider that moral judgments are made based on actual facts - hence the parallel with the mechanisms involved in pain. "If a driver is distracted, and does not see a pedestrian crossing a road, I will judge this person more negatively if the pedestrian was actually harmed, rather than avoided by chance", explains the psychologist. His team set up an experimental paradigm and customised magnetic resonance imaging (MRI) techniques in an attempt to decide between the contradictory hypotheses.

The train dilemma as a paradigm

The first step was for Corradi-Dell'Acqua's laboratory to subject volunteers to unpleasant odours or heat-induced pain. "The whole idea was to elicit a similar degree of discomfort with the two techniques so that they could work on the same levels." Once the calibration had been performed, participants in the study were given readings that evoked value judgments. "We used the train dilemma, in which five people are stuck on a railway track as a train approaches. The only possible way to save them is to push someone off the top of a bridge so that the switch is hit as they fall. In other words, it's necessary to kill one person to save five in a highly immoral situation," explains the researcher. Reading this unpleasant dilemma influenced how the participants perceived the odours and caused disgust, but did not influence pain, an outcome that was backed up by the participants' electrodermal activity. This is a physiological measurement of the electrical conductance of the skin; it reflects the rate of sweating and the activity of the nervous system responsible for involuntary behaviour.

Neural pathways identified

Professor Corradi-Dell'Acqua then concentrated on the brain response. "It is difficult to infer pain and disgust from neural activity, as these two experiences often recruit the same brain areas. To dissociate them, we had to measure the global neuronal activity via MRI rather than focusing on specific regions," summarises the researcher. The Geneva team adopted a technique that makes it possible to predict disgust and pain from overall brain activity, in the form of specific biomarkers.

Using this tool, the researchers were able to prove that the overall brain response to disgust was influenced by previous moral judgment. Once again, moral judgments are indeed associated with disgust. "In addition to this important discovery for psychology, this study was the occasion for the development of a biomarker prototype for olfactory disgust. It's a double step forward!" concludes Corradi-Dell'Acqua.

Credit: 
Université de Genève

Slowing light in an optical cavity with mechanical resonators and mirrors

We are all taught at high school that the speed of light through a vacuum is about 300,000 km/s, which means that a beam from Earth takes about 1.3 seconds to reach the Moon. It naturally moves more slowly through transparent objects, however, and scientists have found ways to slow it dramatically. Optomechanics, or the interaction of electromagnetic radiation with mechanical systems, is a relatively new and effective way of approaching this. Theoretical physicists Kamran Ullah from Quaid-i-Azam University, Islamabad, Pakistan and Hameed Ullah from the Institute of Physics, Porto Alegre, Brazil have now demonstrated how light is slowed in a position-dependent mass optomechanical system. This work has been published in EPJ D.

Ullah and Ullah describe cavity optomechanics, which involves optical modes set up in a cavity between mirrors. The cavity mode, which is driven by a strong field and probed by a weak field, provides a 'playground' for investigating phenomena including slow light and optomechanically induced transparency (OMIT). The latter is a quantum effect in which the optical response of atoms and molecules is controlled by an electromagnetic field. In this work, the physicists studied a cavity system comprising a fixed mirror and a movable one. The moving mirror oscillates along the axis of the cavity with a single harmonic frequency. By considering the total mass of the resonator as dependent on its position, and calculating the effective Hamiltonian of the whole system (which describes its total energy), Ullah and Ullah showed how the system can enhance OMIT and slow light. As the mass is position-dependent, the system is non-linear and the nature and magnitude of the quantum effects observed depend strongly on the value of a non-linear parameter, alpha.
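
As background for readers new to the field, the standard single-mode cavity optomechanics Hamiltonian for a constant-mass resonator driven by a strong field and probed by a weak one is written below; this is the textbook starting point rather than the authors' exact model, whose position-dependent mass introduces a nonlinear correction governed by the parameter alpha:

```latex
% Standard driven cavity-optomechanics Hamiltonian (constant resonator mass);
% the position-dependent-mass correction of the paper is not reproduced here.
H = \hbar\omega_c\, a^{\dagger}a + \hbar\omega_m\, b^{\dagger}b
    - \hbar g_0\, a^{\dagger}a\,(b + b^{\dagger})
    + i\hbar\varepsilon_L\bigl(a^{\dagger}e^{-i\omega_L t} - a\,e^{i\omega_L t}\bigr)
    + i\hbar\varepsilon_p\bigl(a^{\dagger}e^{-i\omega_p t} - a\,e^{i\omega_p t}\bigr)
```

Here a and b are the annihilation operators of the cavity field (frequency omega_c) and the mechanical mode (frequency omega_m), g_0 is the single-photon optomechanical coupling, and epsilon_L and epsilon_p are the strengths of the strong driving and weak probe fields at frequencies omega_L and omega_p.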

And this work is not entirely abstruse. OMIT and slow light already have important applications in quantum information processing, optical switches and optical sensing, and these technologies can only become more useful as quantum computing moves out of the lab into the workaday world.

Credit: 
Springer

Energy System 2050: solutions for the energy transition

image: Energy System 2050" is an initiative of the research field Energy of the Helmholtz Association aimed at developing tangible and usable findings and technical solutions.

Image: 
Pascal Armbruster, KIT

To contribute to global climate protection, Germany has to rapidly and comprehensively minimize the use of fossil energy sources and to transform the energy system accordingly. The Helmholtz Association's research initiative "Energy System 2050" has studied how and by which means this can be achieved. One of the partners is Karlsruhe Institute of Technology (KIT). At the final conference in Berlin, scientists of the participating research centers presented their results.

Having decided to achieve climate neutrality by 2050, Germany as an industrialized country is facing a tremendous challenge: Organizing a comprehensive and sustainable transformation of the energy system while ensuring stable energy supply for our everyday life, for industry, and for the operation of central communication and transport infrastructures. Within the framework of the research initiative "Energy System 2050" (ES2050), scientists of the Helmholtz Association have developed concrete strategies and technical approaches to both improving climate protection and enhancing supply security. These have already been picked up by politics and industry.

"Climate-friendly transformation of the energy system requires adequate technologies and clear systemic solutions. Within 'Energy System 2050,' we have not only succeeded in developing them. We have also tested them in real operation and elaborated flexible strategies for their use," says Professor Holger Hanselka, coordinator of the research initiative, Research Field Coordinator Energy of the Helmholtz Association, and President of KIT. "Our research initiative pools the competencies of eight research centers to make the energy transition a success."

Professor Otmar D. Wiestler, President of the Helmholtz Association, says: "Local, national, and international energy systems have to be switched to renewable energy sources as quickly as possible. This not only is an important step to cope with climate change and increasing degradation of the environment. With the help of regenerative energy systems, we can also produce energy at low costs without being dependent on imports. The 'Energy System 2050' initiative clearly shows which fundamental contributions can be made by the Helmholtz Association in line with its mission to conduct cutting-edge research for solving grand challenges facing society."

Strategies, Technologies, and Open-source Tools for the Energy Transition

The research initiative was launched in 2015 to make a relevant and forward-looking contribution to the transformation of the energy system. 170 scientists conducted research in teams, each focusing on one piece of the energy transition puzzle. Based on a systemic analysis of the German energy supply system, they developed economically efficient and climate-friendly transformation paths up to 2050. This work was complemented by research into the architecture and security of the future power grid and the integration of hydrogen and biogenic energy sources into the energy system. Moreover, power grid components, such as redox flow storage systems, biogas facilities, and gas turbines for the reconversion of synthesis gas and biogas, were subjects of study. Researchers tested the technologies in detail and systemically analyzed their interaction. As a result, the best "team players" for sector coupling were identified, including technologies to combine heat and power supply. In addition, lifecycle-oriented sustainability analyses were made. Apart from costs and CO2 emissions, such analyses consider other ecological and social factors when producing fuel from biogenic residues, for instance.

To carry out dynamic experiments on the system level, the researchers of ES2050 established a large-scale network of research infrastructures, including the Energy Lab 2.0 on the campus of KIT and the Living Lab Energy Campus of Forschungszentrum Jülich (FZJ). These detailed models of the energy system have meanwhile been equipped with their own grid infrastructures and power-to-x facilities, residential buildings, and transport system components. The physical models are closely interlinked with virtual structures for the smart extension of the energy system. With the help of "digital twins," it is possible to integrate system components in experiments even though they do not yet exist - for instance, the future hydrogen infrastructure. The research initiative understands its modeling tools, datasets, and benchmarks as parts of an open ecosystem and makes them available as open source. This "toolkit for the energy transition" is already used by large transmission grid operators.

Sustainable Contribution to the Energy Transition

There is still a long way to go to climate neutrality in the energy sector, but change has started: In 2019, for instance, the share of renewable energy sources in gross power consumption was 42.1 percent, up from 37.8 percent the year before, according to the Federal Environment Agency. The results of the research initiative "Energy System 2050" can enhance this dynamic trend and extend it to cover the housing, transport, and industry sectors. The research initiative "Energy System 2050" was launched by the research field Energy of the Helmholtz Association. The partners are KIT, the German Aerospace Center (DLR), Forschungszentrum Jülich (FZJ), the Helmholtz Centre Potsdam (GFZ), the Helmholtz Centre Berlin (HZB), the Helmholtz Centre Dresden-Rossendorf (HZDR), the Max Planck Institute for Plasma Physics (IPP, associated), and the Helmholtz Centre for Environmental Research (UFZ).

Credit: 
Karlsruher Institut für Technologie (KIT)

Moffitt researchers develop tool to better predict treatment course for lung cancer

TAMPA, Fla. -- Personalized treatment options for patients with lung cancer have come a long way in the past two decades. For patients with non-small cell lung cancer, the most common subtype of lung cancer and the leading cause of cancer-related death worldwide, two major treatment strategies have emerged: tyrosine kinase inhibitors and immune checkpoint inhibitors. However, choosing the right therapy for a non-small cell lung cancer patient isn't always an easy decision, as biomarkers can change during therapy, rendering that treatment ineffective. Moffitt Cancer Center researchers are developing a noninvasive, accurate method to analyze a patient's tumor mutations and biomarkers to determine the best course of treatment.

In a new article published in Nature Communications, the research team demonstrates how a deep learning model using positron emission tomography/computed tomography (PET/CT) radiomics can identify which non-small cell lung cancer patients may be sensitive to tyrosine kinase inhibitor treatment and which would benefit from immune checkpoint inhibitor therapy. The model uses PET/CT imaging with the radiotracer 18F-fluorodeoxyglucose (18F-FDG), a type of sugar molecule. Imaging with 18F-FDG PET/CT can pinpoint sites of abnormal glucose metabolism and help accurately characterize tumors.

"This type of imaging, 18F-FDG PET/CT, is widely used in determining the staging of patients with non-small cell lung cancer. The glucose radiotracer used is also known to be affected by EGFR activation and inflammation," said Matthew Schabath, Ph.D., associate member of the Cancer Epidemiology Department. "EGFR, or epidermal growth factor receptor, is a common mutation found in non-small cell lung cancer patients. EGFR mutation status can be a predictor for treatment, as patients with an active EGFR mutation have better response to tyrosine kinase inhibitor treatment."

For the study, the Moffitt team developed an 18F-FDG PET/CT-based deep learning model using retrospective data from non-small cell lung cancer patients at two institutions in China: Shanghai Pulmonary Hospital and Fourth Hospital of Hebei Medical University. The model classifies EGFR mutation status by generating an EGFR deep learning score for each patient. Once created, the researchers further validated the model using patient data from two additional institutions: Fourth Hospital of Harbin Medical University and Moffitt Cancer Center.
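
As an illustration of the kind of model involved, the following is a minimal sketch of a PET/CT-based binary classifier that outputs an "EGFR deep learning score" between 0 and 1; the input dimensions, two-channel design and layer sizes are assumptions made for the example, not the authors' published architecture:

```python
# Minimal sketch (not the authors' architecture): a small 3D CNN that maps a
# co-registered PET/CT tumor volume to an "EGFR deep learning score" in [0, 1].
import torch
import torch.nn as nn

class EGFRScoreNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),  # 2 channels: PET and CT
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global average pooling
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))          # probability-like score

# Example: one hypothetical 64x64x64 tumor volume with PET and CT channels.
model = EGFRScoreNet()
volume = torch.randn(1, 2, 64, 64, 64)
print(f"EGFR deep learning score: {model(volume).item():.3f}")
```

In the published work, such a score was trained on retrospective images with known EGFR mutation status and then validated on the external cohorts described above.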

"Prior studies have utilized radiomics as a noninvasive approach to predict EGFR mutation," said Wei Mu, Ph.D., study first author and postdoctoral fellow in the Cancer Physiology Department. "However, compared to other studies, our analysis yielded among the highest accuracy to predict EGFR and had many advantages, including training, validating and testing the deep learning score with multiple cohorts from four institutions, which increased its generalizability."

"We found that the EGFR deep learning score was positively associated with longer progression free survival in patients treated with tyrosine kinase inhibitors, and negatively associated with durable clinical benefit and longer progression free survival in patients being treated with immune checkpoint inhibitor immunotherapy," said Robert Gillies, Ph.D., chair of the Cancer Physiology Department. "We would like to perform further studies but believe this model could serve as a clinical decision support tool for different treatments."

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

Study: More than 200 million Americans could have toxic PFAS in their drinking water

WASHINGTON - A peer-reviewed study by scientists at the Environmental Working Group estimates that more than 200 million Americans could have the toxic fluorinated chemicals known as PFAS in their drinking water at a concentration of 1 part per trillion (ppt) or higher. Independent scientific studies have recommended a safe level for PFAS in drinking water of 1 ppt, a standard that is endorsed by EWG.

The study, published today in the journal Environmental Science & Technology Letters, analyzed publicly accessible drinking water testing results from the Environmental Protection Agency and U.S. Geological Survey, as well as state testing by Colorado, Kentucky, Michigan, New Hampshire, New Jersey, North Carolina and Rhode Island.

"We know drinking water is a major source of exposure of these toxic chemicals," said Olga Naidenko, Ph.D., vice president for science investigations at EWG and a co-author of the new study. "This new paper shows that PFAS pollution is affecting even more Americans than we previously estimated. PFAS are likely detectable in all major water supplies in the U.S., almost certainly in all that use surface water."

The analysis also included laboratory tests commissioned by EWG that found PFAS chemicals in the drinking water of dozens of U.S. cities. Some of the highest PFAS levels detected were in samples from major metropolitan areas, including Miami, Philadelphia, New Orleans and the northern New Jersey suburbs of New York City.

There is no national requirement for ongoing testing and no national drinking water standard for any PFAS in drinking water. The EPA has issued an inadequate lifetime health advisory level of 70 ppt for the two most notorious fluorinated chemicals, PFOA and PFOS, and efforts to set an enforceable standard could take many years.

In the absence of a federal standard, states have started to pass their own legal limits for some PFAS. New Jersey was the first to issue a maximum contaminant limit for the compound PFNA, at 13 ppt, and has set standards of 13 ppt for PFOS and 14 ppt for PFOA. Many states have either set or proposed limits for PFOA and PFOS, including California, Massachusetts, Michigan, New Hampshire, New Jersey, New York and Vermont.

"The first step in fighting any contamination crisis is to turn off the tap," said Scott Faber, EWG senior vice president for government affairs. "The second step is to set a drinking water standard, and the third is to clean up legacy pollution. The PFAS Action Act passed by the House would address all three steps by setting deadlines for limiting industrial PFAS releases, setting a two-year deadline for a drinking water standard, and designating PFAS as 'hazardous substances' under the Superfund law. But Mitch McConnell's Senate has refused to act to protect our communities from 'forever chemicals.'"

PFAS are called forever chemicals because they are among the most persistent toxic compounds in existence, contaminating everything from drinking water to food, food packaging and personal care products. They are found in the blood of virtually everyone on Earth, including newborn babies. They never break down in the environment.

Very low doses of PFAS chemicals in drinking water have been linked to suppression of the immune system and are associated with an elevated risk of cancer and reproductive and developmental harms, among other serious health concerns.

"When we look for PFAS contamination, we almost always find it," said David Andrews, Ph.D., a senior scientist at EWG and one of the co-authors. "Americans should trust that their water is safe, but far too many communities have water supplies polluted by toxic PFAS chemicals. These are some of the most insidious chemicals ever produced, and they continue to be used. Our analysis was largely limited to PFOA and PFOS, but many more PFAS are found to contaminate drinking water and the entire class of PFAS chemicals is a concern."

The EPA has identified over 600 PFAS in active use in the U.S. According to the most recent analysis of state and federal data by EWG, 2,230 locations in 49 states are known to have PFAS contamination, including more than 300 military installations.

PFAS contamination has raised alarms among a bipartisan group of lawmakers in Congress. The PFAS Action Act also includes a provision that would set a two-year deadline for the EPA to establish a national drinking water standard for the two most notorious PFAS chemicals - PFOA, formerly used to make DuPont's Teflon, and PFOS, formerly an ingredient in 3M's Scotchgard.

The House versions of the National Defense Authorization Act and EPA spending bill also include important PFAS reforms.

"It's not too late for this Congress to protect us from the growing PFAS contamination crisis," Faber said.

Credit: 
Environmental Working Group