Tech

Plane travel destroys polar bear habitat

image: Sofia Kjellman from UiT -- The Arctic University of Norway collecting sediment cores in Svalbard as part of her Ph.D. research. She sometimes works in remote locations that can only be reached by helicopter. Now she's part of a growing group of researchers trying to find ways to cut the number of flights she takes.

Image: 
Photo: Lis Allaart

We all know we should fly less as a way to reduce our individual and collective effect on the global climate. But transforming that vague understanding into concrete reasons for action can be difficult -- until now.

An international coalition of researchers can now tell you how much damage you're doing to polar bear habitat when you get on a plane. Next time you take a round-trip flight from Oslo to Copenhagen, for example, you've just been responsible for emitting enough CO2 to melt nearly 1 m2 of Arctic summer sea ice.

"There are good numbers showing how CO2 emissions correlate with decreases in sea ice," said Bjørn Munro Jenssen, a biologist at the Norwegian University of Science and Technology (NTNU) who has spent decades studying polar bears. "And we know that decreasing sea ice means less habitat for polar bears."

Jenssen was senior author on a letter detailing the relationship as a way to encourage academics, in particular, to stop flying so much. The letter was published in Environment International.

To make their estimates, the researchers made a number of assumptions based on published information.

They started with a 2016 research report in Science which describes how 30 years of September Arctic sea-ice data were used to estimate that each metric tonne of CO2 emitted causes a loss of 3 m2 of September sea-ice area. September is the month when summer sea-ice amounts are at their annual lowest.

They then took aviation data that showed there were roughly 4.3 billion passengers who flew in 2019, and estimated that each passenger flight averaged 2000 km. Using published conversion data, the researchers calculated that each passenger's carbon footprint would be 0.42 metric tonnes, for a total of 1.83 billion tonnes for all passenger flights.

That's enough to melt 5470 km2 of sea ice, or the home range for four polar bears in the Hudson Bay area of Canada, Jenssen said.
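For readers who want to check the chain of assumptions, the arithmetic can be reproduced in a few lines of Python. This is a back-of-envelope restatement of the figures quoted above, not the letter's own calculation; small differences from the reported 5470 km2 come from rounding in the published inputs.

```python
# Back-of-envelope reproduction of the estimate, using only figures quoted above.
passengers = 4.3e9             # passenger flights taken in 2019
co2_per_passenger_t = 0.42     # tonnes of CO2 for an average 2000 km flight

total_co2_t = passengers * co2_per_passenger_t   # ~1.8 billion tonnes
ice_loss_per_t_m2 = 3          # m2 of September sea ice lost per tonne CO2 (2016 Science study)

ice_loss_km2 = total_co2_t * ice_loss_per_t_m2 / 1e6
print(f"{total_co2_t / 1e9:.2f} billion tonnes CO2")   # ~1.81
print(f"{ice_loss_km2:.0f} km2 of September sea ice")  # ~5400, vs. 5470 reported
```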

While it's possible to quibble with some of the researchers' assumptions, the trend is indisputable, he said -- more CO2 in the atmosphere means less sea ice, which is critical to polar bears.

One of the bigger ironies of climate research is that many of the researchers who study the consequences of global warming fly -- often a lot.

"We're supposed to be the ones contributing to saving the world, but we're flying all over the place," Jenssen said.

Sometimes, of course, it's unavoidable, he said. For example, Jenssen can't study polar bears without travelling to his research area on the Norwegian archipelago of Svalbard.

That's also a problem for Sofia E. Kjellman, a PhD candidate at UiT -- The Arctic University of Norway, who published an article about this dilemma in Nature in mid-2019.

Kjellman is also working on Svalbard on climate-related issues, often in remote areas that are only accessible by helicopter.

In an email, Kjellman wrote that she thinks researchers need to challenge the travel culture that pervades academia.

"I don't think our research or careers have to suffer just because we choose to fly less," she wrote. "I've been talking to my colleagues about the purpose of our trips -- do we really need to go, or do we go mostly because we want to and have the funding to do so? Or maybe because of expectations from supervisors or collaborators? It seems like talking about it helps people evaluate their decisions and look for other solutions."

Kjellman says she hasn't figured out any new solutions to cutting her carbon footprint from flying, aside from simply flying less. Choosing less carbon-intensive travel, such as trains, is an option sometimes, as is attending conferences virtually, she said.

For example, she recently gave a presentation on the carbon footprint issue via a video connection to a workshop on ethical and sustainable research held by the Association of Polar Early Career Scientists in Stockholm.

"It went very smoothly and it was great to talk to other young researchers battling with similar thoughts," she said in her email. "Avoiding flying can in some cases be limiting, of course, but I think I am getting better at prioritizing, which can be rewarding in itself."

Kjellman and Jenssen and his co-authors are among a small but growing group of researchers who are giving their travel habits a hard look.

One of the more visible efforts is a website called No Fly Climate Sci, which was started in 2017 by a climate researcher from the Jet Propulsion Lab in Pasadena, California, Peter Kalmus.

Kalmus wrote on his website that he started the effort to raise the public's sense of climate urgency in order to accelerate large-scale political action. He also wanted to give people who fly less a place to share their stories, so they would realize they weren't alone.

To date, 538 people have registered with the site, describing how they have either cut their number of flights or stopped flying altogether.

Seventeen research institutions are also listed on the site, one of which, the University of Edinburgh, took the opportunity to create a "Roundtable of Sustainable Academic Travel", where research institutions themselves can find ways to cut travel.

And in a May 2019 article in Times Higher Education, New Zealand researcher Joanna Kidman issued a strong call to her fellow academics to do something about this issue:

"I think there is a day of reckoning coming for those of us in academia who, through wilful neglect rather than deliberate planning, are gambling away our futures, one air ticket at a time," she wrote. "The deathly silence about our addiction to air travel needs to be broken as the Anthropocene era of human-driven climate change manifests itself all around us. It is high time."

Credit: 
Norwegian University of Science and Technology

Will the future's super batteries be made of seawater?

We all know the rechargeable and efficient lithium-ion (Li-ion) batteries that sit in our smartphones, laptops and electric cars.

Unfortunately, lithium is a limited resource, so it will be a challenge to satisfy the world's growing demand for relatively cheap batteries. Researchers are therefore looking for alternatives to the Li-ion battery.

A promising alternative is to replace lithium with the metal sodium - to make Na-ion batteries. Sodium is found in large quantities in seawater and can be easily extracted from it.

"The Na-ion battery is still under development, and researchers are working on increasing its service life, lowering its charging time and raising the power it can deliver," says research leader Dorthe Bomholdt Ravnsbæk of the Department of Physics, Chemistry and Pharmacy at the University of Southern Denmark.

She and her team are working to develop new and better rechargeable batteries that can replace today's widely used Li-ion batteries.

For Na-ion batteries to become an alternative, better electrode materials must be developed - something she and colleagues from the University of Technology and the Massachusetts Institute of Technology in the USA have examined in a new study published in the journal ACS Applied Energy Materials.

But before looking at the details of this study, let's take a look at why the Na-ion battery has the potential to become the next big battery success.

"An obvious advantage is that sodium is a very readily available resource, found in very large quantities in seawater. Lithium, on the other hand, is a limited resource that is mined in only a few places in the world," explains Dorthe Bomholdt Ravnsbæk.

Another advantage is that Na-ion batteries do not need cobalt, which is still needed in Li-ion batteries. The majority of the cobalt used today to make Li-ion batteries is mined in the Democratic Republic of the Congo, where rebellion, disorganized mining and child labor create uncertainty and moral qualms regarding the country's cobalt trade.

Another point in their favor is that Na-ion batteries can be produced in the same factories that make Li-ion batteries today.

In their new study, Dorthe Bomholdt Ravnsbæk and her colleagues have investigated a new electrode material based on iron, manganese and phosphorus.

What is new about the material is the addition of the element manganese, which not only gives the battery a higher voltage, but also increases its capacity and likely its power output. This is because the transformations that occur at the atomic level during discharge and charge are significantly changed by the presence of manganese.

"Similar effects have been seen in Li-ion batteries, but it is very surprising that the effect is retained in a Na-ion battery, since the interaction between the electrode and Na-ions is very different from that of Li-ions," says Dorthe Bomholdt Ravnsbæk.

She will not try to predict when we can expect to find seawater-based Na-ion batteries in our phones and electric cars, because there are still some challenges to be solved.

One challenge is that it can be difficult to make small Na-ion batteries. But large batteries also have value - for example, when it comes to storing wind or solar energy.

In 2019, for example, a gigantic 100 kWh Na-ion battery was inaugurated for testing by Chinese scientists at the Yangtze River Delta Physics Research Center. The giant battery consists of more than 600 connected Na-ion battery cells, and it supplies power to the building that houses the center. The current stored in the battery is surplus current from the main grid.

Credit: 
University of Southern Denmark

University of Ottawa researchers find evidence to explain behavior of slow earthquakes

image: A map of Vancouver Island showing the locations of seismic instruments considered by the research group. The grey shaded region delineates where slow earthquakes occur.

Image: 
University of Ottawa

A team of researchers at the University of Ottawa has made an important breakthrough that will help better understand the origin and behavior of slow earthquakes, a new type of earthquake discovered by scientists nearly 20 years ago.

These earthquakes produce movement so slow - a single event can last for days, even months - that they are virtually imperceptible. Less fearsome and devastating than regular earthquakes, they do not trigger seismic waves or tsunamis. They occur in regions where one tectonic plate slides underneath another, called "subduction zone faults", adjacent to, but deeper than, where regular earthquakes occur. They also behave very differently from their regular counterparts. But how? And more importantly: why?

Pascal Audet, Associate Professor in the Department of Earth and Environmental Sciences at uOttawa, along with his seismology research group (Jeremy Gosselin, Clément Estève, Morgan McLellan, Stephen G. Mosher and former uOttawa postdoctoral student Andrew J. Schaeffer), was able to find answers to these questions.

"Our work presents unprecedented evidence that these slow earthquakes are related to dynamic fluid processes at the boundary between tectonic plates," said first author and uOttawa PhD student, Jeremy Gosselin. "These slow earthquakes are quite complex, and many theoretical models of slow earthquakes require the pressure of these fluids to fluctuate during an earthquake cycle."

Using a technique similar to ultrasound imagery and recordings of earthquakes, Audet and his team were able to map the structure of the Earth where these slow earthquakes occur. By analyzing the properties of the rocks where these earthquakes happened, they were able to reach their conclusions.

In fact, in 2009, Professor Audet had himself presented evidence that slow earthquakes occurred in regions with unusually high fluid pressures within the Earth.

"The rocks at those depths are saturated with fluids, although the quantities are minuscule," explained Professor Pascal Audet. "At a depth of 40 km, the pressure exerted on the rocks is very high, which normally tends to drive the fluids out, like a sponge that someone squeezes. However, these fluids are imprisoned in the rocks and are virtually incompressible; the fluid pressure therefore rises to very high values, which essentially weakens the rocks and generates slow earthquakes."

Several studies over the past years had suggested these events are related to dynamic changes in fluid pressure, but until now, no conclusive empirical evidence had been established. "We were keen to repeat Professor Audet's previous work to look for time-varying changes in fluid pressures during slow earthquakes," explained Jeremy Gosselin. "What we discovered confirmed our suspicions and we were able to establish the first direct evidence that fluid pressures do, in fact, fluctuate during slow earthquakes."

Credit: 
University of Ottawa

New understanding of condensation could lead to better power plant condenser, de-icing materials

image: A time-lapse series of photos showing the growth and coalescence of water droplets on a newly developed specialized condensation surface.

Image: 
Images courtesy Nenad Miljkovic

CHAMPAIGN, Ill. -- For decades, it's been understood that water repellency is needed for surfaces to shed condensation buildup - like the droplets of water that form in power plant condensers to reduce pressure. New research shows that water repellency may not be necessary after all, and that the slipperiness between the droplets and the solid surface appears to be more critical to the clearing of condensation. This development has implications for the costs associated with power generation and technologies like de-icing surfaces for power lines and aircraft.

The findings of the study, jointly led by University of Illinois mechanical sciences and engineering professor Nenad Miljkovic and mechanical and aerospace engineering professor Arun Kota of North Carolina State University, are published in the journal Science Advances.

In many natural and industrial systems, heat transfers through condensation. This type of energy transfer happens most efficiently when the vapor condenses onto a surface as droplets rather than as films, the researchers said. The droplets must be mobile to keep the surface clear for continuous, efficient energy transfer, and a slippery surface helps greatly.

Determining just how condensing droplets grow and coalesce during condensation is an area of science that has not been revisited in a while, Miljkovic said. The researchers hypothesized that whether the condensing surface is wetting or nonwetting - that is, whether it attracts or repels water - does not matter when it comes to keeping the surface clear of droplets, contrary to accepted thought.

To test this, Kota's team developed a unique solid surface that is simultaneously wetting and slippery to water droplets. "This is counter-intuitive, because droplets tend to stick to wetting surfaces and do not easily slip or slide on them," Kota said.

Miljkovic's team used time-lapse images of droplets forming on these unique solid surfaces to measure the contact angle between the droplets and the surface and to count how many droplets exist within different size ranges as they coalesce.

"We found that when solid surfaces are slippery, condensed droplets will coalesce with surrounding droplets and shed off of the surface, whether we are dealing with a wetting or nonwetting surface," Kota said.

This observation implies that it is possible to use a greater variety of materials as condensation surfaces, not just water-repellent ones.

"Instead of using specialized polymer surfaces, which are difficult to adhere to metal and only last a few months, power plants may be able to use a more durable wetting materials for water shedding, such as ceramic or metal that has been engineered to be slippery and could last 10 years or longer," Miljkovic said. "This same concept is valuable in other applications, too, like coming up with new de-icing surfaces for aircraft wings and power lines and water-shedding coatings for building energy applications, ventilation and conditioning systems."

The researchers report that there are challenges to overcome when developing new wetting and slippery solid materials for use in condensation surfaces. The materials will need to be chemically and texturally uniform, which can still be expensive, they said.

"This work not only demonstrates a previously unexplored idea of using well-engineered durable and wetting solid surfaces to achieve high droplet mobility, but also provides new insight to the fundamental scientific theory behind condensation," Miljkovic said.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Scientists invent a new method of generating intense short UV vortices

image: OAM pulse wave front

Image: 
Skoltech

An international group of scientists, including Skoltech Professor Sergey Rykovanov, has found a way to generate intense "twisted" pulses. The vortices discovered by the scientists will help investigate new materials. The results of their study were published in the prestigious journal, Nature Communications.

Electromagnetic waves are known to carry energy and momentum and exert the so-called light pressure, as was demonstrated experimentally by the Russian physicist Pyotr Lebedev back in 1900. A little-known fact is that electromagnetic waves can also carry angular momentum - that is, they can twist objects. The angular momentum (twisting ability) can be transferred in two ways. First, an object can be irradiated by an elliptically or circularly polarized electromagnetic wave, producing a rotational moment (the Sadovsky effect). Second, the substance can be twisted by electromagnetic waves with a "vortex" wave structure or, scientifically speaking, waves with an orbital angular momentum (OAM). Visible or IR-range electromagnetic pulses with this capability are already used in telecommunications to increase the data transfer capacity of fiber optic networks. Generating intense OAM pulses in the UV range is a rather challenging task which, if solved, will open new possibilities for exploring and developing new materials at characteristic spatial (tens of nanometers) and temporal (hundreds of attoseconds) scales. Such high-resolution visualizations are used to study and predict materials' properties.

Skoltech scientists in collaboration with researchers from the Shanghai Institute of Optics and Fine Mechanics (China) and the Helmholtz Institute in Jena (Germany) have proposed a simple way to generate intense short UV OAM pulses.

"We can apply the term "UV vortices" to the pulses we obtained through mathematical modeling. Along with twisted wave fronts, our pulses have a duration of a few hundred attoseconds only ? a temporal scale typical for atomic physics. For comparison, an electron makes one "revolution" in a hydrogen atom within a hundred attoseconds or so," explains Skoltech Professor Sergey Rykovanov.

The scientists used some of the most powerful supercomputers in the world and in Russia, including the Zhores supercomputer installed at Skoltech last year, to ensure realistic 3D simulation of the UV vortex effect.

Currently, the team is preparing an experiment to search for the vortices.

The scientists are confident that the generation of intense attosecond UV vortices will break new ground in studying the dynamics of electron motion in various materials and condensed matter.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Researchers expand microchip capability with new 3D inductor technology

image: A scanning electron microscope micrograph of a rolled microinductor architecture, approximately 80 micrometers in diameter and viewed from one end looking inward. Reprinted with permission from X. Li et al., Science Advances (2020).

Image: 
Image courtesy Xiuling Li

CHAMPAIGN, Ill. -- Smaller is better when it comes to microchips, researchers said, and by using 3D components on a standardized 2D microchip manufacturing platform, developers can use up to 100 times less chip space. A team of engineers has boosted the performance of its previously developed 3D inductor technology by adding as much as three orders of magnitude more inductance to meet the performance demands of modern electronic devices.

In a study led by Xiuling Li, an electrical and computer engineering professor at the University of Illinois and interim director of the Holonyak Micro and Nanotechnology Laboratory, engineers introduce a microchip inductor capable of tens of millitesla-level magnetic induction. Using fully integrated, self-rolling magnetic nanoparticle-filled tubes, the technology ensures a condensed magnetic field distribution and energy storage in 3D space - all while keeping the tiny footprint needed to fit on a chip. The findings of the study are published in the journal Science Advances.

Traditional microchip inductors are relatively large 2D spirals of wire, with each turn of the wire producing stronger inductance. In a previous study, Li's research group developed 3D inductors using 2D processing by switching to a rolled membrane paradigm, which allows the wire to spiral out of plane, with each turn separated from the next by an insulating thin film. When unrolled, the previous wire membranes were 1 millimeter long but took up 100 times less space than traditional 2D inductors. The wire membranes reported in this work are 10 times that length, at 1 centimeter, allowing for even more turns - and higher inductance - while taking up about the same amount of chip space.

"A longer membrane means more unruly rolling if not controlled," Li said. "Previously, the self-rolling process was triggered and took place in a liquid solution. However, we found that while working with longer membranes, allowing the process to occur in a vapor phase gave us much better control to form tighter, more even rolls."

Another key development in the new microchip inductors is the addition of a solid iron core. "The most efficient inductors are typically an iron core wrapped with metal wire, which works well in electronic circuits where size is not as important of a consideration," Li said. "But that does not work at the microchip level, nor is it conducive to the self-rolling process, so we needed to find a different way."

To do this, the researchers filled the already-rolled membranes with an iron oxide nanoparticle solution using a tiny dropper.

"We take advantage of capillary pressure, which sucks droplets of the solution into the cores," Li said. "The solution dries, leaving iron deposited inside the tube. This adds properties that are favorable compared to industry-standard solid cores, allowing these devices to operate at higher frequency with less performance loss."

Though a significant advance on earlier technology, the new microchip inductors still have a variety of issues that the team is addressing, Li said.

"As with any miniaturized electronic device, the grand challenge is heat dissipation," she said. "We are addressing this by working with collaborators to find materials that are better at dissipating the heat generated during induction. If properly addressed, the magnetic induction of these devices could be as large as hundreds to thousands of millitesla, making them useful in a wide range of applications including power electronics, magnetic resonance imaging and communications."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Researchers reveal an alteration related to the loss of effectiveness of a treatment in lung cancer

image: Main authors of the study. From left to right: Juanjo Alburquerque, Montse Sánchez Céspedes, Eva Pros and María Saigi

Image: 
Montse Sánchez-Céspedes

The Cancer Genetics Group of the Josep Carreras Leukaemia Research Institute, led by Montse Sánchez-Céspedes, together with Luis Montuenga from CIMA, and Enriqueta Felip from Vall d'Hebron Hospital, has revealed that inactivation of RB1 through intragenic rearrangements is frequent in lung cancer cells from non-smoking patients with EGFR mutations.

The presence of these alterations in RB1 in the tumor could indicate a higher probability of developing resistance to treatment, especially to EGFR inhibitors, through the mechanism of histopathological transformation to small cell lung cancer (SCLC), or SCLC combined with a transformation to the squamous type.

Only 10% of lung cancers affect non-smokers, and the keys to their onset and development remain a mystery. Sánchez-Céspedes' team began their research by establishing primary cultures of cancer cells from non-smoking patients, growing them in the laboratory and analyzing by next-generation sequencing (NGS) which genetic abnormalities they presented. They obtained a panel of genes, some already described before and some new. Among the new genes, the one altered in all those cells with EGFR mutations was the tumor suppressor gene RB1.

Sánchez-Céspedes' team then investigated whether the alteration of this gene also occurred in a separate tumor cohort of patients with adenocarcinomas. They found that it did, especially in patients whose tumors also carried the EGFR alteration and who had been treated with therapy targeting it - treatment that became ineffective over time or to which the tumor became resistant.

The cause of this resistance is that when both abnormalities occur together, the tumor changes from adenocarcinoma to small cell or squamous cell lung cancer. "Their genetic material is the same, but the genes expressed after the change in form are not, and the treatment is no longer effective."

"Given that some of the tumors presented, in addition to RB1, other alterations associated with resistance to EGFR inhibitors, such as the T790M mutation or TACC3-FGFR3 fusion, it is reasonable to think that the alteration in RB1 is not the only one responsible for refractoriness to treatment. The next steps are to determine the pre-existence of these mutations in the adenocarcinomas of these patients and to study the possible existence of minority clones with other alterations associated with acquired resistance. The inactivation of RB1 may favor the growth and tumoural versatility of these clones. If we get to observe the pre-existence of these mutations in a minimal number of cells, it could be possible to predict the mechanism of resistance and design treatments more precisely, improving the prognosis of patients".

"In the study, we also observed that some patients had a genetic syndrome of predisposition to cancer, such as Li Fraumeni. The cause of lung cancer in non-smokers is not known, so we cannot rule out that, in some cases, it is due to hereditary genetic alterations. Knowing this would allow us to anticipate whether a person is more or less likely to develop this type of cancer," explains Sánchez-Céspedes.

Credit: 
Josep Carreras Leukaemia Research Institute

Can I mix those chemicals? There's an app for that!

Improperly mixed chemicals cause a shocking number of fires, explosions, and injuries in laboratories, businesses, and homes each year.

A new open source computer program called ChemStor developed by engineers at the University of California, Riverside, can prevent these dangerous situations by telling users if it is unsafe to mix certain chemicals.

The Centers for Disease Control estimates 4,500 injuries a year are caused by the mixture of incompatible pool cleaning chemicals, half of which occur in homes. Even in laboratories and factories where workers are trained in safe storage protocols, mix-ups and accidents happen, often after chemicals are inadvertently combined in a waste container.

The UC Riverside engineers' work is published in the Journal of Chemical Information and Modeling. Their program adapts a computer science strategy for allocating resources for efficient processor use, known as graph coloring register allocation. In this system, resources are colored and organized according to a rule that states that adjacent data points, or nodes - those sharing an edge - cannot also share a color.

"We color a graph such that no two nodes that share an edge have the same color," said first author Jason Ott, a doctoral student in computer science who led the research.

"The idea comes from maps," explained co-author William Grover, an assistant professor of bioengineering in the Marlan and Rosemary Bourns College of Engineering with a background in chemistry. "In a map of the U.S., for example, no two adjacent states share a color, which makes them easy to tell apart."

ChemStor draws from an Environmental Protection Agency library of 9,800 chemicals, organized into reactivity groups. It then builds a chemical interaction graph based on the reactivity groups and computes the smallest number of colors that will color the graph such that no two chemicals that can interact also share the same color.

ChemStor next assigns all the chemicals of each color to a storage or waste container after confirming there is enough space. Chemicals with the same color can be stored together without a dangerous reaction, while chemicals with different colors cannot.

If two or more chemicals can be combined in the same cabinet or added to a waste container without forming possibly dangerous combinations of chemicals, ChemStor determines the configuration is safe. ChemStor also indicates if no safe storage or disposal configuration can be found.
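The coloring idea can be sketched in a few lines of Python. This is an illustration of the concept, not ChemStor's actual code: ChemStor computes the smallest possible number of colors, whereas the sketch below uses a simple greedy heuristic that may use more colors than necessary, and the incompatibility pairs are invented for the example - real storage decisions should rely on the EPA reactivity data the program draws on.

```python
# Toy illustration of storage-by-graph-coloring (not ChemStor's actual code).
# Chemicals are nodes; an edge means "these two may react dangerously";
# chemicals that end up with the same color may share a container.

# Hypothetical incompatibility pairs, for illustration only:
incompatible = {
    ("bleach", "ammonia"),
    ("bleach", "vinegar"),
    ("nitric acid", "acetone"),
}

chemicals = sorted({c for pair in incompatible for c in pair})

def clashes(a, b):
    return (a, b) in incompatible or (b, a) in incompatible

def greedy_coloring(nodes):
    """Give each node the smallest color not used by an incompatible neighbor.
    Greedy is safe (no clashing pair shares a color) but, unlike ChemStor's
    exact approach, not guaranteed to use the minimum number of colors."""
    colors = {}
    for node in nodes:
        taken = {colors[other] for other in colors if clashes(node, other)}
        color = 0
        while color in taken:
            color += 1
        colors[node] = color
    return colors

print(greedy_coloring(chemicals))
# {'acetone': 0, 'ammonia': 0, 'bleach': 1, 'nitric acid': 1, 'vinegar': 0}
```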

Grover, who experienced a destructive lab fire caused by incompatible chemicals during his days as an undergraduate, said he takes the threat very seriously.

"I'm responsible for the safety of the people in my lab, and ChemStor would be like a safety net under our already strict storage protocols," Grover said.

ChemStor's functionality is currently limited to a command-line interface, where the user manually enters the type of chemicals and amount of storage space into a computer.

Updates to make ChemStor more user-friendly are forthcoming, including a smartphone app that uses the camera to gather information about chemicals and storage options, as well as integration with digital voice assistants - some of which are already being developed specifically for chemists, making ChemStor a natural addition.

"Any system can communicate with ChemStor as long as the input is fashioned in a way that ChemStor expects," Ott said. The code is available here.

Credit: 
University of California - Riverside

How old are they? Some non-photosynthetic orchids consist of dead wood

image: Figure 1: Graph showing atmospheric Δ14C values. The concentration was elevated by the nuclear bomb tests of the 1950s and 1960s.

Image: 
Kobe University

Botanists have long held a fascination for heterotrophic plants, not only because they contradict the notion that autotrophy (photosynthesis) is synonymous with plants, but also because such plants are typically rare and ephemeral. However, it is still a matter of debate as to how these plants obtain nutrition.

A research team consisting of Kobe University's Associate Professor SUETSUGU Kenji (of the Graduate School of Science's Department of Biology), Research Fellow MATSUBAYASHI Jun (of the Japan Society for the Promotion of Science) and Professor TAYASU Ichiro (of the Research Institute for Humanity and Nature) has investigated the carbon age (the time since the carbon was fixed from atmospheric CO2 by photosynthesis) in some non-photosynthetic mycoheterotrophic plants. Many orchids have lost photosynthetic ability and evolved an enigmatic mycoheterotrophic lifecycle. Mycoheterotrophic plants usually obtain carbon from other photosynthetic plants through a shared mycorrhizal fungal network, while some mycoheterotrophs are believed to obtain carbon from decaying litter or dead wood by parasitizing saprotrophic fungi. However, traditional approaches have provided only indirect evidence of such nutrient transportation from dead organic matter to plants.

The current study examined the utility of radiocarbon measurements to distinguish the fungal exploitation pattern of mycoheterotrophs. Mycoheterotrophic species exploiting ectomycorrhizal fungi should take up recently synthesized photosynthates, while mycoheterotrophic plants dependent on saprotrophic fungi must obtain carbon from older sources, i.e., dead wood. Therefore, the research team calculated the carbon age of mycoheterotrophic plants, using the radiocarbon emitted from atmospheric nuclear bomb tests carried out in the 1950s and 1960s as a tracer.

Through this methodology, they revealed that the carbon in some mycoheterotrophic orchids dated from over ten years prior to the sampling period. This indicates that these orchids rely on 14C-enriched bomb carbon from dead wood, obtained via saprotrophic fungi. They therefore concluded that mycoheterotrophic plants can exploit both mycorrhizal and saprotrophic fungi, which are essential components of terrestrial ecosystems. In addition, even though the term "mycoheterotroph" has replaced the formerly misapplied term "saprophyte," some mycoheterotrophic plants are indirectly saprotrophic! The finding overturns the traditional view and opens a new perspective for understanding how these intriguing plants have become ecologically and evolutionarily successful.

The results of this research will be published online in New Phytologist on January 24, 2020.

Research Background

Mutualism, or mutually beneficial interactions between species, is a ubiquitous phenomenon in all ecological systems, and almost all organisms on Earth are involved in at least one mutualistic partnership. Most terrestrial plants, from bryophytes to angiosperms, form mutualisms (interspecific cooperative interactions) with fungi, whereby the plant provides a carbon source in exchange for essential mineral nutrients. These mutually beneficial plant-fungi associations are called mycorrhizal mutualisms.

However, organisms that were originally engaged in mutualisms can sometimes turn into parasites, obtaining the benefits while delivering none in return. In mycorrhizal mutualisms, non-photosynthetic mycorrhizal plants (i.e. mycoheterotrophic plants) are considered as such cheaters since they cannot provide photosynthates to their fungal partners. Indeed, many mycoheterotrophic plants are known to obtain carbon from other photosynthetic plants through a shared mycorrhizal fungal network.

Therefore, despite their achlorophyllous nature, mycoheterotrophic plants are not directly parasitic on other plants, nor do they directly obtain carbon from rotting plant and animal matter, as was once believed. Nonetheless, it is known that fungi play essential roles in terrestrial ecosystems, notably as saprotrophic fungi, which decompose dead wood and decaying litter. Do some mycoheterotrophic plants dependent on these saprotrophic fungi exist? The current research team set out to illuminate this question, utilizing radiocarbon analysis.

Research Details

Radiocarbon (14C) levels could be useful for precisely estimating the trophic strategies of the symbionts of mycoheterotrophic plants, by providing a direct estimation of the mean carbon age in the biomass. Atmospheric nuclear bomb testing during the mid-20th century increased the 14C concentration in the atmosphere worldwide, the peak of which was around 1963. Subsequently, the atmospheric 14C concentration gradually decreased after the ban on atmospheric nuclear testing in 1963.

Therefore, given that the 14C content of organic matter synthesized by primary producers is the same as the corresponding 14C content of atmospheric CO2, the carbon age (the time since carbon was fixed from atmospheric CO2 by photosynthesis) can be estimated by measuring the concentration of 14C arising from the bomb tests of the 1950s and 1960s. As previously explained, mycorrhizal fungi receive photosynthesized carbon from plants. In addition, the mycoheterotrophic plants exploiting mycorrhizal fungi also obtain carbon from other nearby plants through mycorrhizal network. Therefore, the research group hypothesized that the 14C values of mycoheterotrophs exploiting mycorrhizal fungi would resemble the 14C values of atmospheric CO2 for the surrounding autotrophic (photosynthesizing) plants, since this carbon should be very recent. On the other hand, the carbon of mycoheterotrophs exploiting saprotrophic fungi (particularly wood-decaying ones) should be older and therefore contain a higher concentration of the 14C that was generated by nuclear tests.
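The dating step itself is conceptually simple, as the toy sketch below shows: on the declining side of the bomb curve, a higher Δ14C value maps to an earlier fixation year, and the carbon age is the gap to the sampling year. The curve values here are illustrative placeholders, not the calibrated atmospheric records the study used.

```python
# Toy bomb-pulse dating (illustrative Delta-14C values only; real analyses use
# calibrated atmospheric records, not this hypothetical table).
atmospheric_d14c = {   # per-mil Delta-14C of atmospheric CO2 by year
    1990: 150, 1995: 115, 2000: 90, 2005: 65, 2010: 45, 2015: 25,
}

def fixation_year(measured_d14c):
    """Closest year on the post-1963, monotonically declining side of the curve."""
    return min(atmospheric_d14c, key=lambda yr: abs(atmospheric_d14c[yr] - measured_d14c))

sampling_year = 2015
measured = 90          # hypothetical orchid tissue, well above the ~25 of 2015 air
carbon_age = sampling_year - fixation_year(measured)
print(carbon_age)      # 15 -> carbon fixed long before sampling, i.e. from dead wood
```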

To investigate these hypotheses, the 14C concentrations of 10 species of mycoheterotrophic plants collected across 10 sites in Japan were measured. Of these ten species, six (the two ericaceous species Monotropastrum humile and Pyrola subaphylla and the four orchidaceous species Cephalanthera subaphylla, Chamaegastrodia shikokiana, Neottia nidus-avis and Lecanorchis nigricans) showed low 14C concentrations similar to the results for autotrophic plants - confirming that they utilize very recent carbon (Figure 1).

On the other hand, it was revealed that the other four species, which were all types of orchid (Gastrodia elata, Cyrtosia septentrionalis, Yoania japonica and Eulophia zollingeri), contained very high concentrations of 14C, dating from over ten years prior to the sampling period (red circled data in Figure 2). These research results indicate that some mycoheterotrophs have acquired 14C-enriched bomb carbon from dead wood via saprotrophic fungi. This indicates that some mycoheterotrophic plants do not obtain their carbon by tapping into existing mycorrhizal networks, but recruit saprotrophic fungi into novel mycorrhizal symbioses (Figure 3).

Conclusion

This research demonstrated that mycoheterotrophic plants can exploit both mycorrhizal and saprotrophic fungi, which are essential components of terrestrial ecosystems. Many botanists rejected the use of the term 'saprophyte' as incorrect and called these plants mycoheterotrophic to reflect their unique nutritional dependence on fungal carbon. In fact, there are no saprophytes that directly feed on dead organic matter. However, the radiocarbon approach provides conclusive evidence that some mycoheterotrophic orchids are indirectly saprophytic, depending on wood debris in the forest carbon cycle.

Credit: 
Kobe University

Study suggests US households waste nearly a third of the food they acquire

video: American households waste, on average, almost a third of the food they acquire, according to economists, who say this wasted food has an estimated aggregate value of $240 billion annually. Divided among the nearly 128.6 million U.S. households, this waste could be costing the average household about $1,866 per year, Penn State researchers find.

Image: 
Penn State

UNIVERSITY PARK, Pa. -- American households waste, on average, almost a third of the food they acquire, according to economists, who say this wasted food has an estimated aggregate value of $240 billion annually. Divided among the nearly 128.6 million U.S. households, this waste could be costing the average household about $1,866 per year.

This inefficiency in the food economy has implications for health, food security, food marketing and climate change, noted Edward Jaenicke, professor of agricultural economics, College of Agricultural Sciences, Penn State.

"Our findings are consistent with previous studies, which have shown that 30% to 40% of the total food supply in the United States goes uneaten -- and that means that resources used to produce the uneaten food, including land, energy, water and labor, are wasted as well," Jaenicke said. "But this study is the first to identify and analyze the level of food waste for individual households, which has been nearly impossible to estimate because comprehensive, current data on uneaten food at the household level do not exist."

The researchers overcame this limitation by borrowing methodology from the fields of production economics -- which models the production function of transforming inputs into outputs -- and nutritional science, by which a person's height, weight, gender and age can be used to calculate metabolic energy requirements to maintain body weight.

In this novel approach, Jaenicke and Yang Yu, doctoral candidate in agricultural, environmental and regional economics, analyzed data primarily from 4,000 households that participated in the U.S. Department of Agriculture's National Household Food Acquisition and Purchase Survey, known as FoodAPS. Food-acquisition data from this survey were treated as the "input."

FoodAPS also collected biological measures of participants, enabling the researchers to apply formulas from nutritional science to determine basal metabolic rates and calculate the energy required for household members to maintain body weight, which is the "output." The difference between the amount of food acquired and the amount needed to maintain body weight represents the production inefficiency in the model, which translates to uneaten, and therefore wasted, food.
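The core of that calculation can be sketched in a few lines. This is a simplified restatement of the input-output logic, not the paper's model, which fits a full production-economics specification to the FoodAPS data; the basal-metabolic-rate formula, activity factor and household numbers below are illustrative assumptions.

```python
# Simplified input-output waste estimate (illustrative only; the study fits a
# full production-economics model to FoodAPS data).

def bmr_kcal_day(weight_kg, height_cm, age_yr, male):
    """Mifflin-St Jeor basal metabolic rate - one common formula; the study's
    nutritional-science formulas may differ."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + (5 if male else -161)

ACTIVITY = 1.4  # assumed light-activity multiplier

# Hypothetical two-adult household:
needs = (bmr_kcal_day(80, 180, 40, True) + bmr_kcal_day(65, 165, 38, False)) * ACTIVITY
acquired = 6500  # hypothetical food energy acquired, kcal/day

waste_share = 1 - needs / acquired   # "production inefficiency" = wasted food
print(f"{waste_share:.1%}")          # ~34% with these made-up numbers
```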

"Based on our estimation, the average American household wastes 31.9% of the food it acquires," Jaenicke said. "More than two-thirds of households in our study have food-waste estimates of between 20% and 50%. However, even the least wasteful household wastes 8.7% of the food it acquires."

In addition, demographic data collected as part of the survey were used to analyze the differences in food waste among households with a variety of characteristics.

For example, households with higher income generate more waste, and those with healthier diets that include more perishable fruits and vegetables also waste more food, according to the researchers, who reported their findings in the American Journal of Agricultural Economics.

"It's possible that programs encouraging healthy diets may unintentionally lead to more waste," Jaenicke said. "That may be something to think about from a policy perspective -- how can we fine-tune these programs to reduce potential waste."

Household types associated with less food waste include those with greater food insecurity -- especially those that participate in the federal SNAP food assistance program, previously known as "food stamps" -- as well as those households with a larger number of members.

"People in larger households have more meal-management options," Jaenicke explained. "More people means leftover food is more likely to be eaten."

In addition, some grocery items are sold in sizes that may influence waste, he said.

"A household of two may not eat an entire head of cauliflower, so some could be wasted, whereas a larger household is more likely to eat all of it, perhaps at a single meal."

Among other households with lower levels of waste are those who use a shopping list when visiting the supermarket and those who must travel farther to reach their primary grocery store.

"This suggests that planning and food management are factors that influence the amount of wasted food," Jaenicke said.

Beyond the economic and nutritional implications, reducing food waste could be a factor in minimizing the effects of climate change. Previous studies have shown that throughout its life cycle, discarded food is a major source of greenhouse gas emissions, the researchers pointed out.

"According to the U.N. Food and Agriculture Organization, food waste is responsible for about 3.3 gigatons of greenhouse gas annually, which would be, if regarded as a country, the third-largest emitter of carbon after the U.S. and China," Jaenicke said.

The researchers suggested that this study can help fill the need for comprehensive food-waste estimates at the household level that can be generalized to a wide range of household groups.

"While the precise measurement of food waste is important, it may be equally important to investigate further how household-specific factors influence how much food is wasted," said Jaenicke. "We hope our methodology provides a new lens through which to analyze individual household food waste."

Credit: 
Penn State

Feel the force: new 'smart' polymer glows brighter when stretched

video: The scientists used a CCD camera to directly visualize the changes in brightness as the polymer was stretched and released. The false color red represents high light intensity and the false color blue represents low light intensity.

Image: 
OIST

Scientists from the Okinawa Institute of Science and Technology Graduate University (OIST) have created a stress-detecting "smart" polymer that shines brighter when stretched. Researchers hope to use the new polymer to measure the performance of synthetic polymers and track the wear and tear on materials used in engineering and construction industries.

The scientists developed this polymer by incorporating copper complexes - structures formed by linking copper atoms to organic (carbon-containing) molecules - into a polymer called polybutylacrylate, which is made from a chemical used to synthesize acrylic paints, adhesives and sealants.

The copper complexes, which link the polybutylacrylate chains together, naturally glow when exposed to ultraviolet light - a property known as photoluminescence. But when the polymer is stretched, the copper complexes emit light at a greater intensity, leading to a brighter glow. The copper complexes therefore act as mechanophores - compounds which undergo a change when triggered by a mechanical force.

Most mechanophores are made not from metals such as copper, but from organic compounds, which change color or emit light when mechanical stress breaks a weak chemical bond. But mechanophores that use this bond-breaking mechanism have severe limitations.

"A relatively large force is required to break the chemical bond, so the mechanophore is not sensitive to small amounts of stress," said Dr Ayumu Karimata, first author of the study and a postdoctoral scholar from the OIST Coordination Chemistry and Catalysis (CCC) Unit, led by Professor Julia Khusnutdinova. "Also, the process of breaking the bond is often irreversible and so these stress sensors can only be used once."

In contrast, the new copper mechanophores developed by the CCC unit are sensitive to much smaller stresses and can respond quickly and reversibly. In the study, published in Chemical Communications, the scientists reported that the polymer film immediately brightened and dimmed in response to being stretched and released.

Shining a light on the mechanism

Photoluminescent compounds, such as these copper complexes, have long been a topic of interest for the CCC unit. Prior to making the polymer, the researchers synthesized isolated copper complexes of varying size.

The team found that the copper complexes were very dynamic, continuously distorting in shape. But as they increased in size, the copper complexes became less flexible and glowed brighter. The CCC unit believes that the larger, less flexible complexes release light more efficiently because their motion is restricted, and they therefore lose less energy as heat.

The researchers realized they could exploit the relationship between the flexibility of the copper complexes and brightness to create a stress-detecting polymer.

"When the copper complexes are incorporated into the polymer as cross-links, the act of stretching the polymer also reduces the flexibility of the molecules," explained Karimata. "This causes the copper complexes to luminesce more efficiently with greater intensity."

Although such applications are still a long way off, Dr Karimata hopes that the acrylic polymer could eventually be adapted to create a stress-sensing acrylic paint. This could have valuable applications as a coating for different structures, such as bridges or the frames of cars and aircraft.

"As we can see even from the direct visualization of the polymer, stress is applied across a material in a non-uniform way," said Karimata. "A stress-sensing paint would allow hotspots of stress on a material to be detected and could help prevent a structure from failing."

Credit: 
Okinawa Institute of Science and Technology (OIST) Graduate University

Predicting the degradation behavior of advanced medical devices

image: The materials that allow multiple functions, such as drug release or shape-changing capabilities, to be implemented in degradable polymer devices have sophisticated molecular architectures. Studying their degradation behavior in monolayers at the air-water interface allows rapid and straightforward assessment of the evolution of the material properties. The insights gained by this predictive tool point to design principles for the next generation of multifunctional devices.

Image: 
Copyright: HZG/Institute of Biomaterial Science

The results have been reported today in the first issue of the journal Cell Reports Physical Science. With the so-called Langmuir technique, the authors transfer the material into a 2D system, and thereby circumvent the complex transport processes that influence the degradation of three-dimensional objects. They created analytical models describing different polymer architectures that are of particular interest for the design of multifunctional implants and determined the kinetic parameters that describe the degradation of these materials.

In the next step, the scientists want to use these data to carry out computer simulations of the decomposition of therapeutic polymer devices. Regulatory authorities already prescribe computer simulations of the performance of such devices, for example for some stents. The insights gained by the 2D degradation studies are certain to improve these simulations. By introducing a method to quickly understand and predict the degradation of polymer materials, the HZG researchers are contributing substantially to establishing innovative, multifunctional polymers for regenerative medicine.

Background - Multifunctional Biomaterials

Degradability can be especially helpful for implants such as sutures or staples, which are only needed temporarily as mechanical support. Future medical implants are expected to perform much more complex tasks. Such degradable devices will, for example, be programmable into a compressed shape so that they can be implanted by minimally invasive techniques, release a drug that supports the healing process, recruit the right cells to their surface and report back on the progress of the recovery. Here degradation is only one of several functions integrated into the material. Yet degradation is highly critical, because it changes the material on a molecular level. In order to implement multiple functions into a material, its molecular structure is designed in a distinct, often complex way. Understanding how degradation affects this molecular architecture is key to ensuring that all the functions are executed as intended. The thin layer method presented in the study can have a transformative role in designing such degradable polymers.

Credit: 
Helmholtz-Zentrum Hereon

Wannier90 program becomes community code in major new release

image: Comparison of Wannier functions resulting from different minimization schemes in gallium arsenide (larger pink spheres are Ga cation atoms and yellow spheres are As anions).

Image: 
Valerio Vitale @Imperial College London

Wannier functions were first introduced by Gregory Wannier in 1937 as an alternative way of describing the electronic ground state of periodic systems. They are linked to Bloch orbitals, the standard method of describing these ground states, by families of transformations in a continuous space of unitary matrices. Unfortunately, this freedom introduces a large degree of arbitrariness.

In 1996, NCCR MARVEL director Nicola Marzari, then a postdoc at Rutgers University, and Prof. David Vanderbilt, also at Rutgers, developed a novel method allowing researchers to iteratively transform the extended Bloch orbitals of a first-principles calculation into a unique set of "maximally localized" Wannier functions. These localized orthogonal functions can very accurately represent the Bloch eigenstates of a periodic system at a very low computational cost, thanks to the minimal size of the Wannier basis set. In addition, Wannier functions can be used to analyze the nature of chemical bonding, or as a local probe of phenomena related to electric polarization and orbital magnetization. They can also be constructed and used outside the context of electronic-structure theory, for example in cases that include phonon excitations, photonic crystals, and cold-atom optical lattices.
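The gauge freedom and the payoff of localization can be seen in a toy numerical experiment. The numpy sketch below - an illustration, unrelated to the Wannier90 code itself - builds a one-dimensional Wannier function as a Fourier sum of Bloch phase factors and shows that a smooth choice of the arbitrary k-dependent phases gives a sharply localized function, while a random choice smears it across the lattice.

```python
import numpy as np

# Toy 1D demo: a Wannier function is a Fourier sum of Bloch states over k, and
# the arbitrary phase ("gauge") of each Bloch state controls its localization.
N = 64
n = np.arange(N)                         # lattice sites
ks = 2 * np.pi * np.arange(N) / N        # k-points in the Brillouin zone

def wannier_at_origin(phases):
    """w(n) = (1/N) * sum_k exp(i k n + i phi(k)) for the R = 0 Wannier function."""
    return sum(np.exp(1j * (k * n + phi)) for k, phi in zip(ks, phases)) / N

smooth = wannier_at_origin(np.zeros(N))                               # smooth gauge
messy = wannier_at_origin(np.random.default_rng(0).uniform(0, 2 * np.pi, N))

dist = np.minimum(n, N - n)              # distance from the origin on the ring
spread = lambda w: float(np.sum(np.abs(w) ** 2 * dist ** 2))
print(spread(smooth), spread(messy))     # ~0 vs. large: the smooth gauge localizes
```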

It is notable that already 20 years ago a collaboration with Prof. Alfonso Baldereschi and Dr. Michel Posternak, here at EPFL, was key to making the code truly agnostic to any first-principles software, and thus widely usable and interoperable. Its usage and popularity can be gauged from the statistics on the papers describing v1.0 and v2.0, which were cited by around 500 papers published in 2019 alone.

In its Fortran90 incarnation, Wannier90 has now transitioned from being developed by a small group of researchers to a model where developments are community-driven. This has been achieved primarily by hosting the source code and associated development efforts on a public GitHub repository, by building a community of Wannier90 developers committed to and rewarded by new releases and associated papers, and by facilitating personal interactions between individuals through community workshops - the most recent one, in San Sebastian in 2016, laid the groundwork for the present paper.

Thanks to this transition, the 3.0 release of the program includes several new functionalities and improvements that make it very robust, efficient and feature-rich. These include new methods for the calculation of Wannier functions and for the generation of the initial projections; parallelization and optimizations; interfaces with new codes, methods and infrastructures; new user functionality; improved documentation and various bug fixes. Enlarging the community of developers has also had a visible effect in terms of the modern software engineering practices that have been put into place. These help improve the code's robustness and reliability and facilitate its maintenance by the core Wannier90 developers group, as well as its long-term sustainability.

Credit: 
National Centre of Competence in Research (NCCR) MARVEL

How moon jellyfish get about

image: The picture was taken in the joint aquarium of the Institutes of Genetics and Zoology of the University of Bonn.

Image: 
(c) Photo: Volker Lannert/Uni Bonn

With their translucent bells, moon jellyfish (Aurelia aurita) move around the oceans in a very efficient way. Scientists at the University of Bonn have now used a mathematical model to investigate how these cnidarians manage to use their neural networks to control their locomotion even when they are injured. The results may also contribute to the optimization of underwater robots. The study has already been published online in the journal eLife; the final version will appear soon.

Moon jellyfish (Aurelia aurita) are common in almost all oceans. The cnidarians move about in the oceans with their translucent bells, which measure from three to 30 centimeters. "These jellyfish have ring-shaped muscles that contract, thereby pushing the water out of the bell," explains lead author Fabian Pallasdies from the Neural Network Dynamics and Computation research group at the Institute of Genetics at the University of Bonn.

Moon jellyfish are particularly efficient when it comes to getting around: They create vortices at the edge of their bell, which increase propulsion. Pallasdies: "Furthermore, only the contraction of the bell requires muscle power; the expansion happens automatically because the tissue is elastic and returns to its original shape."

Jellyfish for research into the origins of the nervous system

The scientists of the research group have now developed a mathematical model of the neural networks of moon jellyfish and used it to investigate how these networks regulate the movement of the animals. "Jellyfish are among the oldest and simplest organisms that move around in water," says the head of the research group, Prof. Dr. Raoul-Martin Memmesheimer. Jellyfish and other early organisms will now serve as a basis for investigating the origins of the nervous system.

Extensive experimental neurophysiological data on jellyfish were obtained especially in the 1950s through 1980s, providing the researchers at the University of Bonn with a basis for their mathematical model. In several steps, they considered individual nerve cells, nerve cell networks, the entire animal and the surrounding water. "The model can be used to answer the question of how the excitation of individual nerve cells results in the movement of the moon jellyfish," says Pallasdies.

The jellyfish can perceive their position with light stimuli and with a balance organ. If a moon jellyfish is turned by the ocean current, the animal compensates for this and moves further to the water surface, for example. With their model, the researchers were able to confirm the assumption that the jellyfish uses one neural network for swimming straight ahead and two for rotational movements.

Wave-shaped propagation of the excitation

The activity of the nerve cells spreads in the jellyfish's bell in a wave-like pattern. As experiments from the 19th century already show, the locomotion even works when large parts of the bell are injured. Scientists at the University of Bonn are now able to explain this phenomenon with their simulations: "Jellyfish can pick up and transmit signals on their bell at any point," says Pallasdies. When one nerve cell fires, the others fire as well, even if sections of the bell are impaired.

However, the wave-like propagation of the excitation in the jellyfish's bell would be disrupted if the nerve cells fired randomly. As the researchers discovered using their model, this risk is averted because the nerve cells cannot become active again so soon after firing.
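A minimal excitable-medium simulation makes this mechanism concrete. The sketch below is an illustration, not the Bonn group's model: cells on a ring fire, excite their resting neighbors, and then sit out a refractory period, so a single stimulation travels as two clean wavefronts that annihilate when they meet instead of re-igniting the tissue behind them.

```python
import numpy as np

# Excitable ring with a refractory period (illustration only, not the study's model).
N, REFRACTORY, STEPS = 30, 3, 14
state = np.zeros(N, dtype=int)   # 0 = resting, 1 = firing, 2..R+1 = refractory
state[0] = 1                     # stimulate one cell

for _ in range(STEPS):
    nxt = np.where(state > 0, state + 1, 0)   # firing cells age into refractoriness
    nxt[nxt > REFRACTORY + 1] = 0             # ...and recover afterwards
    firing = state == 1
    excited = np.roll(firing, 1) | np.roll(firing, -1)
    nxt[(state == 0) & excited] = 1           # resting neighbors of firing cells fire
    state = nxt
    print("".join(".*o"[min(s, 2)] for s in state))  # '.' rest, '*' fire, 'o' refractory
```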

The scientists hope that further research will shed light on the early evolution of the neural networks. At present, underwater robots are also being developed that move on the basis of the swimming principle of jellyfish. Pallasdies: "Perhaps our study can also help to improve the autonomous control of these robots."

Credit: 
University of Bonn

How the brain processes rewards

Researchers from HSE University, Skoltech and the University of Toronto analyzed data from 190 fMRI studies and found that food, sex and money implicate similar brain regions, whereas different types of reward favor the left and right hemispheres differently. The paper is to be published in Brain Imaging and Behavior.

Food is a reward when we are hungry - a primary reward. Erotic images are also considered a primary reward, because mating is essential to the survival of our species. Money is a resource that affords our survival in society, but it is a secondary reward, because it is a human creation. In any decision-making process, the brain assesses the potential profit - i.e., the size of the reward that can be received for an action. Reactions to various types of rewards are processed by various brain structures. A key brain region associated with all reward types is the basal ganglia - a cluster of gray matter nuclei located at the base of the forebrain, originally known for its implication in motor and regulatory function.

However, it has been unclear as to how the activity of basal ganglia varies depending on the type of reward offered. To find the answer to this question, the researchers conducted a series of meta-analyses of 190 fMRI (functional magnetic resonance imaging) studies, which observed activities in different brain areas in response to information on a reward--food, sexual, or monetary. A total of 5,551 participants took part in these studies.

The analyses indicate that the different reward types engage the basal ganglia nuclei differently. "Food rewards favor the left hemisphere of the brain; erotic rewards favor the right lateral globus pallidus and the left caudate body. Money rewards engage the basal ganglia bilaterally, including its most anterior part - the nucleus accumbens. The connections between these nuclei and other areas of the brain also depend on the reward type," says Marie Arsalidou, Assistant Professor at the HSE School of Psychology. Based on the data generated, the researchers put forward a model of common reward processing via the basal ganglia and separate models for money, sexual and food rewards.

Understanding the involvement of brain structures in processing different reward types can help us understand human decision-making mechanisms, from one's choice of a chocolate bar instead of a healthy breakfast, to attraction to potential mates and certain investment plans.

Credit: 
National Research University Higher School of Economics