Tech

Nicotinamide can 'immunize' plants to protect from fungal disease

image: Pretreatment with nicotinamide (NIM) effectively suppressed the development of Fusarium head blight in wheat spikes, compared with nicotinamide mononucleotide (NMN).

Image: 
Kanazawa University

Kanazawa, Japan - Fungal diseases in cereal crops cause major economic losses and also threaten human and livestock health, because some fungi produce powerful toxins that might enter the food chain. Farmers use fungicides to control crop diseases, such as wheat head blight. Although agrochemicals are rigorously tested for safety, there can be concerns over chemical residues in food.

Now, researchers at Kanazawa University, in collaboration with colleagues at Ehime University and Nagoya University, have shown that the natural substance nicotinamide (NIM - a vitamin found in food and used as a dietary supplement) can help stimulate plant immune systems. Pre-treatment with NIM can prevent or reduce development of fungal disease in wheat plants. This knowledge could lead to new approaches to tackle crop diseases. The team recently published their work in the International Journal of Molecular Sciences.

When the team pre-treated the spikes of wheat plants (which carry the young grains later harvested to make flour) with NIM and then inoculated the plants with conidia of Fusarium graminearum (which causes head blight), the pre-treatment strongly suppressed the disease. Pre-treated plants contained much less fungal biomass, and less of the mycotoxin the fungus produces, than water-treated plants.

The team also performed metabolomics to analyze the contents of hundreds of compounds in the plants and found that NIM pre-treatment increased the amounts of 375 substances. Among those markedly increased were several antimicrobial and antioxidant compounds.

"We found that pre-treating wheat plants with NIM led to the activation of plant immune response and much higher content of the plant's own defense-related compounds, including antimicrobial substances", says lead author Yasir Sidiq. "This work builds on previous research using other natural chemicals related to NIM and has the added advantages of being relatively cheap, readily available, and stable at room temperature."

This work represents a significant step forward in developing environmentally friendly ways to tackle important diseases in crops. "We expect our study will lead to novel approaches in agriculture," corresponding author Takumi Nishiuchi explains, "potentially replacing toxic fungicide sprays with new ways of stimulating the plant innate immune responses - similar to how vaccinating humans or animals primes their immune systems against later infection."

Credit: 
Kanazawa University

Study of Harvey flooding aids in quantifying climate change

image: Top left (a): Simulation of the actual flood depth (meters) in a South Houston neighborhood. Bottom left (b): Simulation of the counterfactual flood if climate change increased precipitation by 38%. Bottom right (c): Attributable increase in flood depth (meters) if climate change increased precipitation by 38%.

Image: 
Michael Wehner, Berkeley Lab

How much do the effects of climate change contribute to extreme weather events? It's hard to say--the variables involved are plentiful, each event is unique, and we can only do so much to investigate what didn't happen. But a new paper from Lawrence Berkeley National Laboratory (Berkeley Lab) climate scientist Michael Wehner investigates the question for one particular element of one significant storm and makes the results available to those who lived through it.

In the paper, "Attributable human-induced changes in the magnitude of flooding in the Houston, Texas region during Hurricane Harvey," published May 19 in Climatic Change, Wehner and Christopher Sampson from Fathom Bristol used a hydraulic model--a mathematical model that can analyze the flow of fluid through a particular system of natural or human-made channels--to consider the degree to which human-caused climate change may have affected flooding in and around Houston during the massive 2017 storm, and the ways in which that flooding was distributed. Wehner and Sampson used resources at the National Energy Research Scientific Computing Center (NERSC) to quantify the increase in Houston flood area and depth from the hydraulic model output and to host a portal where other scientists and the public can access the data for their own use.

From August 26 through August 31, 2017, Hurricane Harvey stalled over the Houston area, flooding 154,000 structures and 600,000 cars; 37,000 people were displaced, and more than 70 died in the floodwaters. Adjusted for inflation, it was the second-costliest tropical storm in United States history, costing between $85 billion and $125 billion.

Using previously published estimates (Risser and Wehner 2017; van Oldenborgh et al. 2017; Wang et al. 2018) stating a range of a 7% to 38% increase in precipitation during Hurricane Harvey due to climate change, Wehner and Sampson applied a hydraulic model to produce a range of simulations showing the distribution of flooding around the Houston area, illustrating a variety of outcomes for different levels of attribution to climate change.
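
The attribution logic is simple in principle: run the flood model once with the rainfall that actually fell, once with the climate-change contribution removed, and difference the two depth maps. The sketch below illustrates only that bookkeeping on a tiny grid of made-up numbers; it is not the authors' hydraulic model, and all array values are hypothetical.

    import numpy as np

    # Hypothetical simulated flood depths (meters) on a small grid of cells,
    # standing in for the ~30 m resolution output of a hydraulic model.
    depth_actual = np.array([[0.0, 0.4, 1.2],
                             [0.3, 0.9, 1.8],
                             [0.0, 0.2, 0.7]])

    # Counterfactual depths from re-running the model with rainfall scaled
    # down by the attributed increase (7% to 38% in the studies cited above).
    depth_counterfactual = np.array([[0.0, 0.3, 0.9],
                                     [0.2, 0.6, 1.4],
                                     [0.0, 0.1, 0.5]])

    # Attributable flooding is the per-cell difference between the two runs.
    attributable_depth = depth_actual - depth_counterfactual
    print(attributable_depth)        # increase in flood depth per cell (m)
    print(attributable_depth.max())  # deepest attributable flooding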

According to Wehner, the computational simplicity of hydraulic models allows for extremely fine-resolution simulations--in this case, about 30 meters (100 feet), or approximately the size of a single house and yard. Because of the granularity of the data, residents themselves can use the model to check the flood status of their homes or blocks in different modeled scenarios and see how climate change may have affected them directly.

"[The amount of flooding you experienced] depends a lot on where you are, whether you were victimized by the flood first of all, and then by whether climate change caused that flooding or not," said Wehner. "That's why this is an interesting data set. It's so high-resolution that people can search for their own houses, or at least their own blocks, and see whether their house was flooded because of climate change--at least according to these simulations."

That's part of the impetus of this study, he emphasized: not just publishing the results, but making them easily available to other professional scientists, community scientists, and any member of the public who wants to look at them. For example, Wehner has already begun sharing his data with a team of social scientists who plan to use the data to study the disproportionate distribution of impacts across ethnic groups in Houston. On a broader scale, a public-facing portal hosted at NERSC offers Wehner and Sampson's data in easily downloadable form, in addition to links to free software.

"It's a scientific paper, but it's really motivated as a public outreach," said Wehner. "I'm trying to empower the public to go out and do their own finding, for people to say, 'I want to know if climate change impacted my neighborhood.'"

In addition to community science and passing data on to other researchers, this study may also contribute to research on the economic impacts of climate change.

"At the end of the day, our best estimate is that 14% to 15% of the cost of flooding during Hurricane Harvey is because of climate change, which doesn't sound like a whole lot...but $13 billion does. And that's going to grow as climate change continues," said Wehner.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Latest tests on 6G return surprising results

Imagine you're a fisherman living by a lake with a rowboat. Every day, you row out on the calm waters and life is good. But then your family grows, and you need more fish, so you go to the nearby river. Then, you realize you go farther and faster on the river. You can't take your little rowboat out there - it's not built for those currents. So, you learn everything you can about how rivers work and build a better boat. Life is good again...until you realize you need to go farther still, out on the ocean. But ocean rules are nothing like river rules. Now you have to learn how ocean currents work, and then design something even more advanced that can handle that new space.

Communication frequencies are just like those water currents. And the boats are just like the tools we build to communicate. The challenge is twofold: learning enough about the nature of each frequency and then engineering novel devices that will work within them. In a recent paper published in Proceedings of the IEEE, the flagship publication of the largest engineering society in the world, one USC Viterbi School of Engineering researcher has done just that for the next generation of cellular networks - 6G.

Andy Molisch, professor of electrical and computer engineering at USC Viterbi and holder of the Solomon Golomb - Andrew and Erna Viterbi Chair, together with colleagues from Lund University in Sweden, New Zealand Telecom, and King's College London, explained that we have more options for communications at 6G frequencies than previously thought. Think of it as something like early explorers suddenly discovering the Gulf Stream.

Molisch and his team, which includes postdoctoral researcher Naveed Abbasi, several Ph.D. students, and undergraduate and master's students, gained that understanding by performing a series of highly detailed measurements on possible 6G frequencies in the so-called terahertz band. Their work yielded some surprising results that will help in the design of 6G. "Researchers have long believed that as we move up into 6G frequency, the ways in which a signal can reach a receiver will be greatly limited," said Molisch. "Our work shows that in a number of important situations that is not actually the case."

Moving up to higher frequencies like terahertz presents several challenges. At higher frequencies these waves become harder to manage, making it easy to lose a connection. New algorithms must also be developed to allow processing at the new bandwidths. Finally, completely new hardware that can function in this zone has to be engineered. Molisch's measurements at 6G frequencies will help address these challenges.

Making 6G technology a reality is an important step towards realizing a whole host of new applications. Molisch and his colleagues have identified three that they believe will be front and center: haptic internet, mobile edge computing, and holographic communications. All three of these areas have the potential to change the face of communications, health, transportation, education, and more.

In a short time, Molisch has already shed an enormous amount of light onto the nature of 6G frequencies. But he is quick to point out that there is still much we need to understand before we can begin building practical tools that work in this space. "Our first round of measurements has so far been extremely successful. But many more measurements must be taken before we understand communicating at these frequencies enough to make 6G an everyday reality," he said.

Credit: 
University of Southern California

Observing quantum coherence from photons scattered in free-space

image: Each optical pulse from the laser is sent through a phase converter, which creates two coherent pulses, while the multi-mode analyzer measures the signals scattered off the target surface, here implemented with ordinary bright paper. A single-photon detector array with 8 x 8 individual pixels, each time-tagged separately, is used as the detection device.

Image: 
by Shihan Sajeed, Thomas Jennewein

Quantum coherence is a key ingredient in many fundamental tests and applications of quantum technology including quantum communication, imaging, computing, sensing and metrology. However, the transfer of quantum coherence in free-space has so far been limited to direct line-of-sight channels as atmospheric turbulence and scattering degrade the quality of coherence severely.

In a new paper published in Light: Science & Applications, researchers from the University of Waterloo have successfully demonstrated the transfer and recovery of quantum coherence using photons scattered in free-space for the first time, enabling new research opportunities and applications in fields ranging from quantum communication to imaging and beyond.

"The ability to transfer quantum coherence via scattered photons means that now you can do many things that previously required direct line-of-sight free-space channels," said Shihan Sajeed, lead author on the paper and a postdoctoral fellow at the Institute for Quantum Computing (IQC) and in the Department of Physics and Astronomy at the University of Waterloo in Ontario, Canada.

Normally, if you try to send and receive photons through the air (free-space) for quantum communication or any other quantum-encoded protocol, you need a direct line-of-sight between transmitter and receiver. Any objects--from as big as a wall to as small as a molecule--in the optical path will reflect some photons and scatter others, depending on how reflective the surface is. Any quantum information encoded in the photons is typically lost in the scattered photons, interrupting the quantum channel.

Together with Thomas Jennewein, principal investigator of the Quantum Photonics lab at IQC, Sajeed found a way to encode quantum coherence in pairs of photon pulses sent one after the other so that they would maintain their coherence even after scattering from a diffuse surface.

The researchers emitted a train of pulse pairs with a specific phase coherence that could be measured from the scattered photons using quantum interference. They also used a single-photon detector array that, in addition to resolving wavefront distortions caused by atmospheric turbulence, acted as an imager, allowing them to observe single-photon interference and imaging simultaneously. They placed the detector where it would only collect photons scattered from the laser pulses, and observed a visibility of over 90%, meaning that the scattered photons maintained their quantum coherence even after smashing against an object.
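
For reference, interference visibility is conventionally defined from the fringe maxima and minima of the detected counts,

    $V = \dfrac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},$

so a measured value above 0.9 means the phase relationship between the two pulses of each pair survived the scattering almost intact.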

Their novel technique required custom hardware to make use of the coherent light they were generating. The single photon detector array could detect one billion photons every second with a precision of 100 picoseconds. Only cutting-edge time-tagging electronics could handle the demands of this flow of light, and the team had to design their own electronics adapter board to communicate between the detectors and the computer that would process the data.

"Our technique can help image an object with quantum signals or transmit a quantum message in a noisy environment," said Sajeed. "Scattered photons returning to our sensor will have a certain coherence, whereas noise in the environment will not, and so we can reject everything except the photons we originally sent."

Sajeed expects their findings will stimulate new research and new applications in quantum sensing, communication, and imaging in free space environments. The duo demonstrated quantum communication and imaging in their paper, but Sajeed said further research is required to find out how their techniques could be used in various practical applications.

"We believe this could be used in quantum enhanced Lidar (Light Detection and Ranging), quantum sensing, non-line-of-sight imaging, and many other areas--the possibilities are endless," said Sajeed.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Novel SERS sensor helps to detect aldehyde gases

image: Schematic illustration of the synthetic route of AgNCs@Co-Ni LDH and procedure for the SERS detection of trace benzaldehyde.

Image: 
XU Di

Prof. HUANG Qing's group from the Hefei Institutes of Physical Science (HFIPS) developed a surface-enhanced Raman spectroscopy (SERS) gas sensor to detect aldehydes with high sensitivity and selectivity, providing a new method for studying the adsorption of gas molecules on porous materials. The relevant research results have been published in Analytical Chemistry.

Adsorption is one of the main technologies for treating volatile organic compounds (VOCs). In recent years, metal-organic frameworks (MOFs) have attracted great interest for their outstanding adsorption properties. Closely related to MOFs, layered double hydroxides (LDHs), also known as hydrotalcite-like systems or anionic clays, have received special attention for their improved adsorption properties due to the enhanced porosity and chemical affinity at multiple active sites.

In this study, silver nanocube (AgNC) and Co-Ni LDH composite nanomaterials were prepared by a sacrificial-template method and modified with 4-aminothiophenol (4-ATP), which serves both trapping and probe functions. Based on the as-prepared composite material, the researchers constructed a high-efficiency gas sensor for the selective detection of aldehyde gases.

"This SERS sensor has ultrahigh sensitivity for aldehyde gas," said XU Di, the first author of this paper, "we verified its accuracy, repeatability and selectivity in the experiment."

Combined with principal component analysis, the sensor successfully distinguished the similar SERS spectra of different aldehyde gases, indicating practical application value.

They further investigated the adsorption kinetics and thermodynamics of benzaldehyde molecules on Co-Ni LDH with the sensor. The kinetic adsorption process was fitted better by pseudo-first-order kinetics, with a higher correlation coefficient than a pseudo-second-order model. The adsorption isotherm fits the Langmuir model, with an adsorption constant of 6.25 × 10⁶ L/mol, indicating that the adsorption sites of the composites were homogeneous and dominated by monolayer chemisorption.
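
For context, the models named above have standard forms (written here in their usual textbook notation, which the paper is assumed to follow). Pseudo-first-order kinetics describe the adsorbed amount $q_t$ at time $t$ approaching its equilibrium value $q_e$ as

    $q_t = q_e\left(1 - e^{-k_1 t}\right),$

while the Langmuir isotherm gives the fractional surface coverage $\theta$ at benzaldehyde concentration $C$ as

    $\theta = \dfrac{K_L C}{1 + K_L C},$

with the Langmuir constant $K_L \approx 6.25 \times 10^{6}\ \text{L/mol}$ reported above.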

This study not only established a new measurement method for probing adsorption processes with extremely low consumption of both adsorbates and adsorbents, but may also lay the groundwork for the construction of rapid and ultra-sensitive SERS sensors for probing VOCs in the future.

Credit: 
Hefei Institutes of Physical Science, Chinese Academy of Sciences

Tuning the energy gap: A novel approach for organic semiconductors

image: Varying the ratio of 3T molecules (foreground) and 6T molecules (indicated in the background) in the blend allows tuning the gap continuously.

Image: 
Sebastian Hutsch, Frank Ortmann

Organic semiconductors have earned a reputation as energy-efficient materials in organic light emitting diodes (OLEDs), which are employed in large-area displays. In these and in other applications, such as solar cells, a key parameter is the energy gap between electronic states: it determines the wavelength of the light that is emitted or absorbed, so continuous adjustability of this energy gap is desirable. For inorganic materials an appropriate method already exists - so-called blending, which engineers the band gap by substituting atoms in the material and thus allows continuous tunability, as in aluminum gallium arsenide semiconductors, for example. Unfortunately, this approach is not transferable to organic semiconductors, because of their different physical characteristics and their molecule-based construction, making continuous band-gap tuning much more difficult.
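
For the inorganic case cited here, the tunability can be illustrated with the common quadratic interpolation for an alloy's band gap (a general textbook relation, not a result from this study): for aluminum gallium arsenide with aluminum fraction $x$,

    $E_g(x) = (1-x)\,E_g^{\mathrm{GaAs}} + x\,E_g^{\mathrm{AlAs}} - b\,x(1-x),$

where varying the composition $x$ tunes the gap continuously and $b$ is a material-specific bowing parameter.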

However, with their latest publication, scientists at the Center for Advancing Electronics Dresden (cfaed, TU Dresden) and the Cluster of Excellence "e-conversion" at TU Munich, together with partners from the University of Würzburg, HU Berlin, and Ulm University, have for the first time realized energy-gap engineering for organic semiconductors by blending.

For inorganic semiconductors, the energy levels can be shifted towards one another by atomic substitutions, thus reducing the band gap ("band-gap engineering"). In contrast, band structure modifications by blending organic materials can only shift the energy levels concertedly either up or down. This is due to the strong Coulomb effects that can be exploited in organic materials, but this has no effect on the gap. "It would be very interesting to also change the gap of organic materials by blending, to avoid the lengthy synthesis of new molecules", says Prof. Karl Leo from TU Dresden.

The researchers now found an unconventional way by blending the material with mixtures of similar molecules that are different in size. "The key finding is that all molecules arrange in specific patterns that are allowed by their molecular shape and size", explains Frank Ortmann, a professor at TU Munich and group leader at the Center for Advancing Electronics Dresden (cfaed, TU Dresden). "This induces the desired change in the material's dielectric constant and gap energy."

The group of Frank Ortmann was able to clarify the mechanism by simulating the structures of the blended films and their electronic and dielectric properties. A corresponding change in the molecular packing depending on the shape of the blended molecules was confirmed by X-ray scattering measurements, performed by the Organic Devices Group of Prof. Stefan Mannsfeld at cfaed. The core experimental and device work was done by Katrin Ortstein and her colleagues at the group of Prof. Karl Leo, TU Dresden.

The results of this study have just been published in the renowned journal Nature Materials. While this proves the feasibility of this type of energy-level engineering strategy, its employment will be explored for optoelectronic devices in the future.

Credit: 
Technische Universität Dresden

Discovery of ray sperms' unique swimming motion and demonstration with bio-inspired robot

video: The bio-inspired robot moves skilfully in a liquid environment.

Image: 
Panbing Wang et al./ DOI number: 10.1073/pnas.2024329118

It is generally agreed that sperms "swim" by beating or rotating their soft tails. However, a research team led by scientists from City University of Hong Kong (CityU) has discovered that ray sperms move by rotating both the tail and the head. The team further investigated the motion pattern and demonstrated it with a robot. Their study has expanded knowledge of microorganisms' motion and provided inspiration for robot engineering design.

The research is co-led by Dr Shen Yajing, Associate Professor from CityU's Department of Biomedical Engineering (BME), and Dr Shi Jiahai, Assistant Professor of the Department of Biomedical Sciences (BMS). Their findings have been published in the science journal Proceedings of the National Academy of Sciences of the United States of America (PNAS), titled "Self-adaptive and efficient propulsion of Ray sperms at different viscosities enabled by heterogeneous dual helixes".

Surprising discovery

Their research disclosed a new and peculiar motion mode of ray sperms, which they call the "Heterogeneous Dual Helixes (HDH) model". "This was actually an accidental discovery," said Dr Shi who has been focusing on developing different bio-therapies.

It all started with another of the team's research projects: developing artificial insemination techniques for farming cartilaginous fishes, including sharks and rays, whose skeletons are wholly or largely composed of cartilage. "Cartilaginous fishes can be used as a 'factory' to produce antibodies against diseases, including COVID-19. So we wanted to develop artificial insemination techniques to farm them for high-value aquaculture," he said.

During that process, the team was greatly surprised when they first observed the unique structure and swimming motion of ray sperms under the microscope. They discovered that the ray sperm's head is in a long helical structure rather than being round, and it rotates along with the tail when swimming.

The team further investigated its propulsion mechanism, especially the exact role of the head in motion. They found that ray sperms consist of heterogeneous helical sections: a rigid spiral head and a soft tail, which are connected by a "midpiece" that provides energy for rotational motion. The ray sperm's head is not only a "container" of the genetic materials but also facilitates the propulsion together with the soft tail.

High Energy Efficiency of the HDH propulsion

To better understand this motion mode, the team analysed a large quantity of swimming data and observed the sperms' inner structure at the nanoscale. Since both the head and tail of the ray sperm rotate in the same direction, with varying rotational speeds and amplitudes, when swimming, the team named this the heterogeneous dual helixes (HDH) propulsion.

According to their statistical analysis, the head contributed about 31% of the total propulsive force, which is the first recorded instance of head propulsion among all known sperms. Because of the head's contribution, the motion efficiency of the ray sperm is higher than that of other species, such as the sterlet and the bull, whose sperms are driven only by the tail.

"Such an untraditional way of propulsion not only provides ray sperms with high adaptability to a wide range of viscous environments, but also leads to superior motion ability, and efficiency," explained Dr Shen, whose research focus is robotics as well as micro/nano manipulation and control.

High Environmental Adaptability

Environmental adaptability is crucial in natural selection. The head and tail of ray sperms can adjust their motion and their contribution to propulsion according to the environmental viscosity, allowing forward motion at different speeds. Hence, ray sperms can move in various environments with a wide range of viscosities, demonstrating high environmental adaptability.

The team also found that ray sperms have a unique bi-directional swimming ability, meaning that they can swim not only in a forward direction but also in a backward direction. Such an ability provides advantages to sperms in nature, especially when they encounter obstacles. Sperms with spherical or rod-shaped heads, by contrast, cannot achieve bidirectional motion.

Thanks to the HDH model, the spiral head of ray sperms has an active turning ability. As both the head and tail contribute to the propulsion, the angle between them will produce a lateral force on the body, enabling the ray sperm to turn, showing high flexibility in its motion.

Bio-inspired robot demonstrates the HDH model

The peculiar HDH model showed extensive features in motility and efficiency and inspired the team in designing microrobots. The bio-inspired robot, also with a rigid spiral head and a soft tail, demonstrated similar advantages over conventional designs in terms of adaptability and efficiency under the same power input. It could move skilfully in a liquid environment, even when the viscosity changed.

Such abilities can provide insights for designing swimming robots for challenging engineering tasks and for biomedical applications in the complex fluidic environments inside the human body, such as blood vessels.

"We believed that understanding this unique propulsion would revolutionise the knowledge in microorganisms' motion, which would facilitate the understanding of natural fertilisation and provide inspiration for the design of bio-inspired robots under viscous conditions," concluded Dr Shen.

Credit: 
City University of Hong Kong

Molecular coating enhances organic solar cells

An electrode coating just one molecule thick can significantly enhance the performance of an organic photovoltaic cell, KAUST researchers have found. The coating outperforms the leading material currently used for this task and may pave the way for improvements in other devices that rely on organic molecules, such as light-emitting diodes and photodetectors.

Unlike the most common photovoltaic cells that use crystalline silicon to harvest light, organic photovoltaic cells (OPVs) rely on a light-absorbing layer of carbon-based molecules. Although OPVs cannot yet rival the performance of silicon cells, they could be easier and cheaper to manufacture at a very large scale using printing techniques.

When light enters a photovoltaic cell, its energy frees a negative electron and leaves behind a positive gap, known as a hole. Different materials then gather the electrons and holes and guide them to different electrodes to generate an electrical current. In OPVs, a material called PEDOT:PSS is widely used to ease the transfer of generated holes into an electrode; however, PEDOT:PSS is expensive, acidic and can degrade the cell's performance over time.

The KAUST team has now developed a better alternative to PEDOT:PSS. They use a much thinner coating of a hole-transporting molecule called Br-2PACz, which binds to an indium tin oxide (ITO) electrode to form a single-molecule layer. The organic cell using Br-2PACz achieved a power conversion efficiency of 18.4 percent, whereas an equivalent cell using PEDOT:PSS reached only 17.5 percent.
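
For reference, power conversion efficiency is the ratio of electrical power delivered to incident light power, commonly written as

    $\mathrm{PCE} = \dfrac{J_{sc}\, V_{oc}\, FF}{P_{in}},$

with $J_{sc}$ the short-circuit current density, $V_{oc}$ the open-circuit voltage, $FF$ the fill factor and $P_{in}$ the incident power density; the reported improvement from 17.5 to 18.4 percent therefore reflects gains in one or more of these quantities.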

"We were very surprised indeed by the performance enhancement," says Yuanbao Lin, Ph.D. student and member of the team. "We believe Br-2PACz has the potential to replace PEDOT:PSS due to its low cost and high performance."

Br-2PACz increased the cell's efficiency in several ways. Compared with its rival, it caused less electrical resistance, improved hole transport and allowed more light to shine through to the absorbing layer. Br-2PACz also improved the structure of the light-absorbing layer itself, an effect that may be related to the coating process.

The coating could even improve the recyclability of the solar cell. The researchers found that the ITO electrode could be removed from the cell, stripped of its coating and then reused as if it was new. In contrast, PEDOT:PSS roughens the surface of the ITO so that it performs poorly if reused in another cell. "We anticipate this will have a dramatic impact on both the economics of OPVs and the environment," says Thomas Anthopoulos, who led the research.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Dinosaurs lived in greenhouse climate with hot summers

image: Niels de Winter doing research on fossil shells

Image: 
Niels de Winter

Palaeoclimatologists study the climate of the geological past. Using an innovative technique, an international research team led by Niels de Winter (VUB-AMGC & Utrecht University) has shown for the first time that dinosaurs had to deal with greater seasonal differences than previously thought.

De Winter: "We used to think that when the climate warmed like it did in the Cretaceous period, the time of the dinosaurs, the difference between the seasons would decrease, much like the present-day tropics experience less temperature difference between summer and winter. However, our reconstructions now show that the average temperature did indeed rise, but that the temperature difference between summer and winter remained rather constant. This leads to hotter summers and warmer winters."

To better characterize the climate during this period of high CO2 concentration, the researchers used very well-preserved fossils of mollusks that lived in southern Sweden during the Cretaceous period, about 78 million years ago. Those shells grew in the warm, shallow seas that covered much of Europe at the time. They recorded monthly variations in their environment and climate, like the rings in a tree. For their research, de Winter and the team used the "clumped isotope" method for the first time, in combination with a method developed by Niels de Winter.

Clumped isotopes in combination with the VUB-UU method - a revolution in geology

Isotopes are atoms of the same element with different masses. Since the 1950s, the ratio of oxygen isotopes in carbonate has been used to measure water temperature in the geological past. However, this required researchers to estimate the chemistry of the seawater, as the isotope ratio of the seawater affects the isotope ratio of the shell, which results in higher uncertainty. About ten years ago, the "clumped isotope" method was developed, which does not depend on the chemistry of the seawater and allows accurate reconstructions. But the clumped isotope method has a disadvantage: it requires so much carbonate that temperature reconstructions at a more detailed level, such as seasonal fluctuations based on shells, were not possible.
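
In outline (the general form of the method, not this study's specific calibration), clumped-isotope thermometry measures the small excess abundance, denoted $\Delta_{47}$, of doubly substituted mass-47 CO2 released from the carbonate, relative to a purely random distribution of the isotopes, and converts it to growth temperature through an empirical relation of the form

    $\Delta_{47} = \dfrac{A}{T^{2}} + B,$

with $T$ the temperature in kelvin and $A$, $B$ constants determined from carbonates grown at known temperatures; because the relation depends only on the carbonate itself, no assumption about seawater chemistry is needed.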

De Winter has now developed an innovative method in which measurements of much smaller quantities of carbonate are cleverly combined for temperature reconstructions. The clumped isotope method thus requires much less material and can therefore be used for research on fossil shells, which, like tree rings, hold a great deal of information about their living conditions. The method also allows carbonate from successive summers (and winters) to be aggregated for better reconstruction of seasonal temperatures. For example, de Winter found that water temperatures in Sweden during the Cretaceous "greenhouse period" fluctuated between 15°C and 27°C, over 10°C warmer than today.

The team also worked with scientists from the University of Bristol (UK) who develop climate models to compare the results with climate simulations of the Cretaceous period. Whereas previous climate reconstructions of the Cretaceous often came out colder than these models, the new results agree very well with the Bristol models. This shows that variations in seasons and water chemistry are very important in climate reconstructions:

"It is very difficult to determine climate changes from so long ago on the seasonal scale, but the seasonal scale is essential to get climate reconstructions right. If there is hardly any difference between the seasons, reconstructions of average annual temperature come out differently from situations when difference between the seasons is large. It was thought that during the age of the dinosaurs difference between the seasons was small. We have now established that there were greater seasonal differences. With the same temperature average over a year, you end up with a much higher temperature in the summer.

De Winter: "Our results therefore suggest that in the mid latitudes, seasonal temperatures will likely rise along with climate warming, while seasonal difference is maintained. This results in very high summer temperatures. The results bring new insight into the dynamics of a warm climate on a very fine scale, which can be used to improve both climate reconstructions and climate predictions. Moreover, they show that a warmer climate can also have extreme seasons."

The development has far-reaching implications for the way climate reconstructions are done. It allows researchers to determine both the effect of seawater chemistry and that of differences between summer and winter, thus verifying the accuracy of decades of temperature reconstructions. For his groundbreaking research, De Winter has been nominated for both the annual EOS Pipette Prize and New Scientist Science Talent 2021.

Credit: 
Vrije Universiteit Brussel

Active platinum species

Highly dispersed platinum catalysts provide new possibilities for industrial processes, such as the flameless combustion of methane, propane, or carbon monoxide, which produces fewer emissions and is more resource-efficient and consistent than conventional combustion. In the journal Angewandte Chemie, a team of researchers reports on which platinum species are active in high-temperature oxidations and what changes they can undergo in the course of the process--important prerequisites for the optimization of catalysts.

Individual metal atoms and clusters consisting of only a few metal atoms have interesting catalytic properties determined by the exact nature of the active metal species. Usually, these are highly dispersed and deposited on a support such as zeolite, which is a porous silicate framework structure that also plays a role in the characteristics of a catalyst. Even the smallest change in the active centers can drastically reduce the efficiency of a catalyst. For example, noble metals like platinum tend to become permanently deactivated through sintering under harsh conditions.

Which specific platinum species play a role in high-temperature oxidations is hard to determine, however, because a significant number of such species cannot readily be obtained without the involvement of their support in the catalysis. A team led by Pedro Serna (ExxonMobil Research and Engineering Co., New Jersey, USA), as well as Manuel Moliner and Avelino Corma (Universitat Politècnica de València, Spain) investigated the behavior of individual platinum atoms and small platinum clusters on special CHA zeolites, which are non-reducible supports that can stabilize these species very well.

Their first experiment was an investigation of splitting O₂ using two different types of isotopically pure oxygen molecules, ¹⁶O₂ and ¹⁸O₂. The more active the catalyst, the more mixed ¹⁶O¹⁸O molecules are formed upon recombination of the dissociated atoms. It was shown that platinum clusters of under one nanometer are significantly more active than individual atoms or larger clusters. However, at moderate temperatures (200 °C) the tiny clusters fall apart over time into individual platinum atoms and the catalytic activity for splitting oxygen ends.
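
The logic of that measurement, in brief: dissociation of the two labelled gases on the catalyst and recombination of the atoms scrambles the labels,

    $^{16}\mathrm{O}_2 + {}^{18}\mathrm{O}_2 \;\rightleftharpoons\; 2\,^{16}\mathrm{O}^{18}\mathrm{O},$

so the fraction of mixed ¹⁶O¹⁸O in the product gas directly gauges how readily a given platinum species splits O₂.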

In contrast, the team found that for the oxidation of alkanes, such as methane, at higher temperatures, the catalytic combustion was carried out by individual platinum atoms. These are formed in situ in the oxygen stream from the initial clusters, as was shown by X-ray absorption spectroscopy and by electron microscopy. The critical step in these oxidations is not the splitting of O₂ but the breaking of C-H bonds, which is less sensitive to changes in the active-site structure.

For the oxidation of CO, the catalysis is dominated by platinum clusters. Individual platinum atoms cannot be stabilized in the CO stream, and thus, play no role. In comparison with supports made of aluminum oxide, the CHA zeolite provided higher activity and greater stability of the platinum clusters in the presence of CO.

The high stability of individual platinum atoms for methane combustion and of small platinum clusters for CO oxidation, which is retained after regeneration or treatment with hot steam, opens new possibilities for systems made of platinum and silicate zeolites as efficient and robust heterogeneous catalysts for a variety of high-temperature oxidation scenarios.

Credit: 
Wiley

New study gives clues to the cause, and possible treatment, of Parkinson's Disease

image: Immunostaining: Top panel SH-SY5Y cells transfected with GBA, ATP13A2, and PINK1 siRNAs are stained for dsDNA (magenta), histone H2B (green) and Hsp60 (turquoise). White arrows indicate cytosolic dsDNA of mitochondrial origin. Triple siRNA: Knockdown of GBA, ATP13A2, and PINK1 expression with siRNAs. Coimmunostaining: Bottom panel In situ hybridization of mitochondrial DNA and coimmunostaining for histone H2B (green) and Hsp60 (turquoise) in SH-SY5Y cells transfected with GBA, ATP13A2, and PINK1 siRNAs. White arrows indicate cytosolic dsDNA of mitochondrial origin. Triple siRNA: Knockdown of GBA, ATP13A2, and PINK1 expression with siRNAs.

Image: 
Matsui et al., Nat Commun. 2021

Niigata, Japan - Researchers from the Brain Research Institute, Niigata University, Japan may have uncovered a new approach that could revolutionize the treatment, prevention, and possibly reversal of the damage that leads to Parkinson's Disease (PD). This novel finding, based on cellular and zebrafish models, demonstrates how the leakage of mitochondrial dsDNA into the cytosolic environment of the cell can contribute to the impairment of brain tissue in patients with PD.

Parkinson's disease is the second most common neurodegenerative disease, and its prevalence has been projected to double over the next 30 years.

These sobering statistics and the quest for PD prognostic marker discovery inspired a team of scientists led by Prof. Hideaki Matsui to build upon previous knowledge linking mitochondrial dysfunction and lysosomal dysfunction to PD. In an interview, Prof. Matsui said, "Our results showed for the first time that cytosolic dsDNA of mitochondrial origin, leaking and escaping from lysosomal degradation, can induce cytotoxicity both in cultured cells and in zebrafish models of Parkinson's disease."

Prof. Matsui went on to explain that "This study showed that the leakage of this mitochondrial nucleic material may occur as a result of mitochondrial dysfunction, which may involve genetic mutations in genes encoding mitochondrial proteins or incomplete degradation of mitochondrial dsDNA in the lysosome - which is a "degradation factory" of the cell. Upon the leakage into the cytoplasm, this undegraded dsDNA is detected by a "foreign" DNA sensor of the cytoplasm (IFI16) which then triggers the upregulation of mRNAs encoding for inflammatory proteins (type I interferon stimulated cytokines such as IL1β). Although further investigation is required, we hypothesize that the subsequent accumulation of inflammatory protein within the cytoplasm, may cause cell functional imbalance and ultimately cell death."

"However, this dsDNA leakage effect can be counteracted by DNAse II, a dsDNA degrading agent.", Prof. Akiyoshi Kakita, who was an associate investigator in the study also added.

The first part of the study was conducted in vitro, using cells of nerve cancer origin (SH-SY5Y cells) in which mitochondrial and lysosomal dysfunction was induced through knockdown of the GBA, ATP13A2 and PINK1 genes. The mutant cells showed leakage of dsDNA, accumulation of inflammatory cytokines and cell death. In an additional comparison experiment using mutant cells (with defective mitochondrial proteins) and wild-type SH-SY5Y cells, they further demonstrated that DNase II rescued cells through the degradation of dsDNA.

In a confirmatory study using a PD zebrafish model (gba mutant), the researchers demonstrated a combination of PD-like phenotypes, including accumulation of cytosolic dsDNA deposits and a reduced number of dopaminergic neurons after 3 months. They also generated a DNase II mutant zebrafish model, which exhibited decreased numbers of dopaminergic neurons and accumulated cytosolic DNA. Interestingly, when the gba mutant zebrafish was complemented with the human DNase II gene, overexpression of human DNase II decreased cytosolic dsDNA deposits and rescued the neurodegeneration, restoring the numbers of dopaminergic and noradrenergic neurons after 3 months.

This demonstrated that the neurodegenerative phenotype of gba mutant zebrafish, induced by dsDNA deposits in the cytosol, can be reversed by DNase II.

Going a step further, to determine the effect of cytosolic dsDNA of mitochondrial origin in the human brain with PD, they inspected postmortem brain tissues from patients diagnosed with idiopathic PD. They observed an abundance of cytosolic dsDNA of mitochondrial origin in the medulla oblongata of these tissues; levels of IFI16 were also markedly increased. Taken together, the results of this study demonstrate that cytosolic dsDNA of mitochondrial origin accumulates in PD brains and that these dsDNA deposits and IFI16 play contributory roles in human PD pathogenesis.

Credit: 
Niigata University

Bacteria serves tasty solution to global plastic crisis

The common bacterium E. coli can be deployed as a sustainable way to convert post-consumer plastic into vanillin, a new study reveals.

Vanillin is the primary component of extracted vanilla beans and is responsible for the characteristic taste and smell of vanilla.

The transformation could boost the circular economy, which aims to eliminate waste, keep products and materials in use and have positive impacts for synthetic biology, experts say.

The world's plastic crisis has seen an urgent need to develop new methods to recycle polyethylene terephthalate (PET) - the strong, lightweight plastic derived from non-renewable materials such as oil and gas and widely used for packaging foods and convenience-sized juices and water.

Approximately 50 million tonnes of PET waste is produced annually, causing serious economic and environmental impacts. PET recycling is possible, but existing processes create products that continue to contribute to plastic pollution worldwide.

To tackle this problem, scientists from the University of Edinburgh used lab engineered E. coli to transform terephthalic acid - a molecule derived from PET - into the high value compound vanillin, via a series of chemical reactions.

The team also demonstrated how the technique works by converting a used plastic bottle into vanillin by adding the E. coli to the degraded plastic waste.

Researchers say that the vanillin produced would be fit for human consumption but further experimental tests are required.

Vanillin is widely used in the food and cosmetics industries, as well as the formulation of herbicides, antifoaming agents and cleaning products. Global demand for vanillin was in excess of 37,000 tonnes in 2018.

Joanna Sadler, first author and BBSRC Discovery Fellow from the School of Biological Sciences, University of Edinburgh, said: "This is the first example of using a biological system to upcycle plastic waste into a valuable industrial chemical and this has very exciting implications for the circular economy.

"The results from our research have major implications for the field of plastic sustainability and demonstrate the power of synthetic biology to address real-world challenges."

Dr Stephen Wallace, Principal Investigator of the study and a UKRI Future Leaders Fellow from the University of Edinburgh, said: "Our work challenges the perception of plastic being a problematic waste and instead demonstrates its use as a new carbon resource from which high value products can be obtained."

Dr Ellis Crawford, Publishing Editor at the Royal Society of Chemistry, said: "This is a really interesting use of microbial science at the molecular level to improve sustainability and work towards a circular economy. Using microbes to turn waste plastics, which are harmful to the environment, into an important commodity and platform molecule with broad applications in cosmetics and food is a beautiful demonstration of green chemistry."

Credit: 
University of Edinburgh

First AI-based tool for predicting genomic subtypes of pancreatic cancer from histology slides

Paris, France and New York, NY, June 10, 2021 - AP-HP Greater Paris University Hospitals, the leading European clinical trial center with the largest amount of healthcare data in France dedicated to research, and Owkin, a startup pioneering Federated Learning and AI technologies for medical research and clinical development, announced the recent results of their ongoing strategic collaboration at ASCO 2021. The abstract and poster entitled "Identification of pancreatic adenocarcinoma molecular subtypes on histology slides using deep learning models" demonstrates the first AI-based tool for predicting genomic subtypes of pancreatic cancer (PDAC), developed from machine learning applied to histology slides. The tool, a trained and validated AI model, is usable in clinical practice worldwide and opens the possibility of patient molecular stratification in routine care and for clinical trials.

Gilles Wainrib, Chief Scientific Officer and Co-Founder of Owkin said:

"Our research shows AI can help connect information at the genomic, cellular and tissue levels, and how doing so can bring immediate value to make precision medicine a reality for patients. This study further underscores the value of using machine learning for identifying histo-genomic signals for cancer research and clinical development."

Pancreatic adenocarcinoma is a complex and heterogeneous disease. Improvement of prognosis has stalled, while pancreatic cancer is predicted to become the second most lethal cancer by the year 2030. Heterogeneity and tumor plasticity are likely major factors in the failure of many clinical trials. Multiomics studies have revealed two main tumor transcriptomic subtypes, Basal-like and Classical, that have been proposed to be predictive of patient response to first-line chemotherapy. The determination of these subtypes has so far only been possible by RNA sequencing, a costly and complex technique that is not yet feasible in a routine clinical setting. Taken together, these factors make it compelling to use advanced AI methods with common histological slides, trained alongside crucial context from expert researchers, to address the unmet needs of patients.

Pr Jérôme Cros, Pathologist at Beaujon Hospital - Université de Paris said:

"This tool was developed using the unique histological and molecular resources from four APHP
hospitals (Amboise Paré-Beaujon-Pitié Salpétrière-Saint Antoine) though a unique collaboration
between pathologists from APHP, bioinformaticians from the group Carte d'Identité des Tumeurs
de la Ligue Contre le Cancer and data scientists from Owkin. It can remotely subtype tumor in minutes paving the way for many applications from basic science (study of intra-tumor heterogeneity) to clinical practice (tumor subtyping in clinical trials)."

This research is born out of a successful and ongoing collaboration between Owkin's multidisciplinary teams and those of the AP-HP Greater Paris University Hospitals. Since 2019, the two have collaborated in the service of shared objectives: 1) to improve patient care and facilitate the development of new drugs in three main areas (oncology, immunology, cardiology), 2) to democratize access to AI for researchers in order to promote innovation and medical advances.

ASCO 2021 Science Yielded from an Ongoing Fruitful Research Collaboration

This most recent scientific achievement comes on the heels of several other publications. Recently, in January 2021, AP-HP Greater Paris University Hospitals and Owkin published an AI-Severity score for Covid-19 patients using CT scans alongside other data modalities in Nature Communications. This project, the collaborative output of a consortium also including INRIA/CentraleSupélec and Gustave Roussy, was achieved in record time due to the close coordination and established framework agreement between AP-HP Greater Paris University Hospitals and Owkin. The result: the AI-Severity score has been shown to outperform other scores currently in use and demonstrates that effective collaborations such as these can quickly derive research findings with direct clinical utility.

In August 2020, Owkin published its novel predictive AI tool for RNA-seq expression from whole-slide images (HE2RNA) in Nature Communications--one of the journal's top 50 most widely read papers of 2020. These findings were born out of close collaboration with Prof. Julien Calderaro (anatomo-cyto-pathologist at Henri-Mondor hospital, AP-HP). This tool can be applied to all types of cancer and laid the groundwork for the histo-transcriptomic findings of the recent pancreatic adenocarcinoma tool.

Other notable research from this collaboration includes a paper on AI prediction of survival for patients with hepatocellular carcinoma, published with Prof. Julien Calderaro in Hepatology in February 2020, and a comparison of classification methods for Crohn's disease using machine learning models, published in July 2019, led by Prof. Jean-Pierre Hugot (pediatrician at Robert Debré hospital, APHP).

Credit: 
Owkin, Inc.

New method to measure milk components has potential to improve dairy sustainability

Champaign, IL, June 10, 2021 - Present in blood, urine, and milk, the chemical compound urea is the primary form of nitrogen excretion in mammals. Testing for urea levels in dairy cows helps scientists and farmers understand how effectively nitrogen from feed is used in cows' bodies, with important economic implications for farmers in terms of feed costs, physiological effects for cows such as reproductive performance, and environmental impacts from excretion of nitrogen in dairy cow waste. Thus, accuracy in testing dairy cow urea levels is essential.

Since the 1990s, mid-infrared testing of milk urea nitrogen (MUN) has been the most efficient and least invasive way to measure nitrogen use by dairy cows in large numbers. In a recent article in the Journal of Dairy Science, researchers from Cornell University report the development of a robust new set of MUN calibration reference samples to improve accuracy of MUN measurement.

"When a set of these samples has been run on a milk analyzer, the data can be used to detect specific deficiencies in the quality of the MUN prediction that might be corrected by the instrument user or the milk analyzer manufacturer," explained senior author David M. Barbano, PhD, Northeast Dairy Foods Research Center, Department of Food Science, Cornell University, Ithaca, NY, USA. Accurate and timely MUN concentration information "is of great importance for dairy herd feeding and reproduction management," Barbano added.

Given increasing worldwide scrutiny of the environmental effects of large-scale agriculture and the economic challenges faced by farmers, the need for accurate understanding of nitrogen use in the dairy industry has perhaps never been more pressing. This improvement in milk component testing marks further progress toward healthier and more sustainable agricultural and food production practices that will benefit producers and consumers alike.

Credit: 
Elsevier

More sustainable mortars and concrete with optimal thermal and mechanical efficiency

image: A foundation built to scale for studying geothermal energy.

Image: 
UPV/EHU

The consumption of raw materials has increased notably in industry in general, and in the construction industry in particular, amidst growing concerns over sustainability issues. Concrete and mortar are the most commonly used materials in construction, and many studies are currently under way to try and reduce the harmful effects of their manufacture. Concrete and mortar are made by mixing water, sand, cement and aggregates.

"The main problem is the amount of cement used to produce this type of material; cement manufacturing uses a huge amount of energy and natural resources, which implies a high level of CO2 emissions. Diverse studies are under way aimed at reducing the quantity of cement required. We are working to replace cement and aggregates (sand or gravel) with non-natural materials, in order to reduce the use of natural resources and optimise the mechanical and thermal properties of the materials produced,"explains Roque Borinaga Treviño, a researcher at the UPV/EHU's Department of Mechanical Engineering.

To this end, the research team is analysing by-products from different industrial processes, which enable the mortars and concretes produced to be used for different functions, depending on the mechanical and thermal properties they acquire: 'the aim is to reduce as much as possible the volume of industrial by-products that end up in landfill sites, and to reuse these products in accordance with the dictates of the circular economy,' claims Dr Borinaga. Recently, the research team has explored three different by-products in three different areas.

Specific cases

Firstly, they have studied the possibility of using industrial metal waste as a reinforcement in concrete or mortar, analysing mortars reinforced with brass fibres from electrical discharge machining. Secondly, and linked to this avenue of research aimed at reducing the amount of cement required, they have explored the use of lime mud waste from the paper industry, obtaining good results in terms of thermal conductivity and finding that the resulting material is adequate for use in radiant floor heating systems. And finally, they have used furnace slag as an aggregate: 'the thermal conductivity of sand extracted from electric arc furnaces is low, making it a good option for insulation purposes,' explains Dr Borinaga.

Although they are studying many different types of materials, what they are doing is basic research: 'ours is the first step in researching these materials. Industrial by-products and waste are not particularly homogeneous, meaning that they vary greatly in accordance with their origin. Therefore, the first step is to analyse the properties bestowed by each specific type of waste. It is important to conduct these analyses with a large amount of waste with different origins, and to compare the results in order to determine whether or not the materials are suitable for use in manufacturing,' he concludes.

Credit: 
University of the Basque Country