Tech

Small volcanic lakes tapping giant underground reservoirs

Boulder, Colo., USA: In its large caldera, Newberry volcano (Oregon, USA) hosts two small volcanic lakes, one fed by volcanic geothermal fluids (Paulina Lake) and one by gases (East Lake). These popular fishing grounds are small windows into a large underlying reservoir of hydrothermal fluids, which releases carbon dioxide (CO2) and hydrogen sulfide (H2S), along with minor mercury (Hg) and methane, into East Lake.

What happens to all that CO2 after it enters the bottom waters of the lake, and how do these volcanic gases influence the lake ecosystem? Some lakes fed by volcanic CO2 have seen catastrophic CO2 degassing during lake overturn ("limnic eruptions"; e.g., Lake Nyos, Cameroon). Could East Lake be a simmering "American Lake Nyos"? East Lake went through a short "gas alert" in summer 2020, with strong H2S smells spreading over the caldera region.

Six Wesleyan University undergraduate/graduate students and their advisor set out to measure CO2 fluxes at East Lake each summer between 2015 and 2019.

They found that East Lake accumulates CO2 below its winter ice cover and releases it again in abundance during ice melt and through the summer months. They also proposed that the East Lake ecosystem is largely driven by its volcanic inputs: CO2 and nutrients such as phosphorus and trace metals, with fixed nitrogen largely provided by local cyanobacteria.

The outside world only adds sunshine to make this organic matter factory go! Their study illustrates how the lake's CO2 reservoir renews itself over the seasons and suggests that East Lake is unlikely to produce catastrophic gas releases. Variations in CO2 flux can be used for volcano monitoring once the seasonal flux trends related to lake processes are understood.

Credit: 
Geological Society of America

Study shows cactus pear as drought-tolerant crop for sustainable fuel and food

image: Among three cactus varieties researched by the University of Nevada, Reno as drought-tolerant crops for biofuel, Opuntia ficus-indica produced the most fruit while using up to 80% less water than some traditional crops.

Image: 
Photo by John Cushman, University of Nevada, Reno.

Could cactus pear become a major crop like soybeans and corn in the near future, and help provide a biofuel source, as well as a sustainable food and forage crop? According to a recently published study, researchers from the University of Nevada, Reno believe the plant, with its high heat tolerance and low water use, may be able to provide fuel and food in places that previously haven't been able to grow much in the way of sustainable crops.

Global climate change models predict that long-term drought events will increase in duration and intensity, resulting in both higher temperatures and lower levels of available water. Many crops, such as rice, corn and soybeans, have an upper temperature limit, and other traditional crops, such as alfalfa, require more water than what might be available in the future.

"Dry areas are going to get dryer because of climate change," Biochemistry & Molecular Biology Professor John Cushman, with the University's College of Agriculture, Biotechnology & Natural Resources, said. "Ultimately, we're going to see more and more of these drought issues affecting crops such as corn and soybeans in the future."

Fueling renewable energy

As part of the College's Experiment Station unit, Cushman and his team recently published the results of a five-year study on the use of spineless cactus pear as a high-temperature, low-water commercial crop. The study, funded by the Experiment Station and the U.S. Department of Agriculture's National Institute of Food and Agriculture, was the first long-term field trial of Opuntia species in the U.S. as a scalable bioenergy feedstock to replace fossil fuel.

Results of the study, which took place at the Experiment Station's Southern Nevada Field Lab in Logandale, Nevada, showed that Opuntia ficus-indica had the highest fruit production while using up to 80% less water than some traditional crops. Co-authors included Carol Bishop, with the College's Extension unit, postdoctoral research scholar Dhurba Neupane, and graduate students Nicholas Alexander Niechayev and Jesse Mayer.

"Maize and sugar cane are the major bioenergy crops right now, but use three to six times more water than cactus pear," Cushman said. "This study showed that cactus pear productivity is on par with these important bioenergy crops, but use a fraction of the water and have a higher heat tolerance, which makes them a much more climate-resilient crop."

Cactus pear works well as a bioenergy feedstock because it is a versatile perennial. When it is not being harvested for biofuel, it serves as a land-based carbon sink, removing carbon dioxide from the atmosphere and storing it in a sustainable manner.

"Approximately 42% of land area around the world is classified as semi-arid or arid," Cushman said. "There is enormous potential for planting cactus trees for carbon sequestration. We can start growing cactus pear crops in abandoned areas that are marginal and may not be suitable for other crops, thereby expanding the area being used for bioenergy production."

Fueling people and animals

The crop can also be used for human consumption and livestock feed. Cactus pear is already used in many semi-arid areas around the world for food and forage due to its low water needs compared with more traditional crops. The fruit can be used for jams and jellies due to its high sugar content, and the pads are eaten both fresh and as a canned vegetable. Because the plant's pads are about 90% water, the crop works well for livestock feed too.

"That's the benefit of this perennial crop," Cushman explained. "You've harvested the fruit and the pads for food, then you have this large amount of biomass sitting on the land that is sequestering carbon and can be used for biofuel production."

Cushman also hopes to use cactus pear genes to improve the water-use efficiency of other crops. One of the ways cactus pear retains water is by closing its pores during the heat of the day to prevent evaporation and opening them at night to breathe. Cushman wants to take the cactus pear genes that enable this and add them to the genetic makeup of other plants to increase their drought tolerance.

Bishop, Extension educator for Northeast Clark County, and her team, which includes Moapa Valley High School students, continue to help maintain and harvest the more than 250 cactus pear plants still grown at the field lab in Logandale. In addition, during the study, the students gained valuable experience helping to spread awareness about the project, its goals, and the plant's potential benefits and uses. They produced videos, papers, brochures and recipes; gave tours of the field lab; and held classes, including harvesting and cooking classes.

Fueling further research

In 2019, Cushman began a new research project with cactus pear at the U.S. Department of Agriculture - Agricultural Research Service's National Arid Land Plant Genetic Resources Unit in Parlier, California. In addition to continuing to measure how much the cactus crop will produce, Cushman's team, in collaboration with Claire Heinitz, curator at the unit, is identifying which accessions, or unique samples of plant tissue or seeds with different genetic traits, provide the greatest production, and is working to optimize the crop's growing conditions.

"We want a spineless cactus pear that will grow fast and produce a lot of biomass," Cushman said.

One of the other goals of the project is to learn more about Opuntia stunting disease, which causes cactuses to grow smaller pads and fruit. The team is taking samples from the infected plants to look at the DNA and RNA to find what causes the disease and how it is transferred to other cactuses in the field. The hope is to use the information to create a diagnostic tool and treatment to detect and prevent the disease's spread and to salvage usable parts from diseased plants.

Credit: 
University of Nevada, Reno

Building networks not enough to expand rural broadband

ITHACA, N.Y. - Public grants to build rural broadband networks may not be sufficient to close the digital divide, new Cornell University research finds.

High operations and maintenance costs and low population density in some rural areas result in prohibitively high service fees - even for a subscriber-owned cooperative structured to prioritize member needs over profits, the analysis found.

Decades ago, cooperatives were key to the expansion of electric and telephone service to underserved rural areas, spurred by New Deal legislation providing low-interest government grants and loans. Public funding for rural broadband access should similarly consider its critical role supporting economic development, health care and education, said Todd Schmit, associate professor in the Charles H. Dyson School of Applied Economics and Management.

"The New Deal of broadband has to incorporate more than building the systems," Schmit said. "We have to think more comprehensively about the importance of getting equal access to these technologies."

Schmit is the co-author with Roberta Severson, an extension associate in Dyson, of "Exploring the Feasibility of Rural Broadband Cooperatives in the United States: The New New Deal?" The research was published Feb. 13 in Telecommunications Policy.

More than 90% of Americans had broadband access in 2015, according to the study, but the share in rural areas was below 70%. Federal programs have sought to help close that gap, including a $20.4 billion Federal Communications Commission initiative announced last year to subsidize network construction in underserved areas.

Schmit and Severson studied the feasibility of establishing a rural broadband cooperative to improve access in Franklin County in northern New York state, which received funding for a feasibility study from the U.S. Department of Agriculture's Rural Business Development Program.

The researchers partnered with Slic Network Solutions, a local internet service provider, to develop estimates of market prices, the cost to build a fiber-to-the-home network, operations and maintenance costs, and the potential subscriber base - about 1,600 residents - and model a cooperative that would break even over a 10-year cycle.

Federal and state grants and member investment would cover almost the entire estimated $8 million construction cost, so that wasn't a significant factor in the analysis, the researchers said.

But even with those subsidies, the study determined the co-op would need to charge $231 per month for its high-speed service option - 131% above market rates. At that price, it is unlikely that 40% of year-round residents would opt for high-speed broadband, as the model had assumed, casting further doubt on its feasibility.

The $231 fee included a surcharge to subsidize a lower-speed service option costing no more than $60 - a restriction the construction grants imposed to ensure affordability. Without that restriction, the high-speed price would drop to $175 and the low-speed price would climb to $105.
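The cross-subsidy arithmetic behind those fees can be sketched in a few lines of Python. The subscriber split below is hypothetical, chosen only so the output reproduces the reported $231; the study's actual take rates are not restated here:

    # Back-of-envelope sketch of the co-op's cross-subsidy (hypothetical split).
    n_high, n_low = 400, 500        # assumed high- and low-speed subscribers
    cost_high, cost_low = 175, 105  # break-even monthly fees without the cap ($)
    cap_low = 60                    # affordability cap imposed by the grants ($)

    shortfall = (cost_low - cap_low) * n_low  # revenue lost on the capped tier
    surcharge = shortfall / n_high            # shifted onto high-speed subscribers
    print(f"high-speed fee with cap: ${cost_high + surcharge:.0f}/month")  # $231

The point of the sketch is that the cap does not remove the cost; it simply moves the low-speed tier's revenue shortfall onto the high-speed subscribers.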

"In short," the authors wrote, "grants covering investment and capital construction alone do not solve the rural broadband problem, at least in our study area."

As an alternative - though not one available in Franklin County - Schmit and Severson examined the possibility of an existing rural electric or telecommunications co-op expanding into broadband. Such a co-op would gain efficiencies from infrastructure it already operates, such as the poles that would carry fiber lines. In that scenario, the high-speed price improved to $144 a month - still 44% above market rates.

"These systems are very costly to operate and maintain," Schmit said, "particularly in areas like we looked at that are very low density."

The feasibility improves with growth in a coverage area's density and "take rate," or percentage of potential subscribers signing up at different speeds, according to the analysis. But in Franklin County, the researchers determined a startup co-op would need 14 potential subscribers per mile to break even over 10 years - more than twice the study area's actual density.

To better serve such areas, Schmit and Severson said, policymakers should explore eliminating property taxes on broadband infrastructure and payments to rent space on poles owned by regulated utilities, which respectively accounted for 16% and 18% of the proposed co-op's annual expenses. Those measures reduced an expanding rural utility co-op's high-speed fee to 25% above market rates, a level members might be willing to pay, the authors said.

"Consideration of the public benefits of broadband access arguably needs to be added to the equation," they wrote. "The case was made for electricity and telephone services in the 1930s and similar arguments would seem to hold for this technology today."

Credit: 
Cornell University

Online dating: Super effective, or just... superficial?

image: "It's extremely eye-opening that people are willing to make decisions about whether or not they would like to get to another human being, in less than a second and based almost solely on the other person's looks," said Dr. Chopik.

Image: 
Photo by Pratik Gupta on Unsplash

According to the Pew Research Center, 1 in 10 American adults have landed a long-term relationship from an online dating app, such as Tinder, OKCupid and Match.com. But what compels people to "swipe right" on certain profiles and reject others?

New research from William Chopik, an associate professor in the Michigan State University Department of Psychology, and Dr. David Johnson from the University of Maryland, finds that people's decisions to swipe right are based primarily on the attractiveness and race of a potential partner, and that decisions are often made in less than a second.

"Despite online dating becoming an increasingly popular way for people to meet one another, there is little research on how people connect with each other on these platforms," said Chopik. "We wanted to understand what makes someone want to swipe left or swipe right, and the process behind how they make those decisions."

Chopik's research, published in the Journal of Research in Personality, used two studies to measure how dating app users from different walks of life interacted with available profiles. The first study focused on college students, while the second focused on middle-aged adults, averaging 35 years old. Participants were given a choice to either view profiles of men or women, depending on their dating preferences.

Male participants, on average, swiped right more often than female participants, and individuals who perceived themselves to be more attractive swiped left more often overall, proving to be choosier when picking out potential partners.

"It's extremely eye-opening that people are willing to make decisions about whether or not they would like to get to another human being, in less than a second and based almost solely on the other person's looks," said Chopik.

"Also surprising was just how little everything beyond attractiveness and race mattered for swiping behavior - your personality didn't seem to matter, how open you were to hook-ups didn't matter, or even your style for how you approach relationships or if you were looking short- or long-term didn't matter."

While attractiveness played a major role in participants' decisions to swipe left or right, race was another leading factor. Users were significantly more likely to swipe right on users of their own race, and profiles of users of color were rejected more often than those of white users.

"The disparities were rather shocking," Chopik said. "Profiles of Black users were rejected more often than white users, highlighting another way people of color face bias in everyday life."

Currently, Chopik is researching how people using online dating apps respond to profiles that swiped right on them first. Though his findings are still being finalized, so far the data seem to show that people are significantly more likely to swipe right on a profile that liked them first, even if the user is less attractive or the profile in general is less appealing.

"We like people who like us," he said. "It makes sense that we want to connect with others who have shown an interest in us, even if they weren't initially a top choice."

Credit: 
Michigan State University

Terahertz waves from electrons oscillating in liquid water

image: Cartoon of an oscillating polaron in liquid water: (a) Schematic network of hydrogen-bonded water molecules of neat water (red: oxygen atoms, green: hydrogen atoms). (b) Electron solvated in water (yellow-red cloud). The electron attracts the hydrogen atoms of water molecules, thereby polarizing its environment of water molecules and generating a self-consistent potential trap for the electron. The electron solvated this way represents an elementary quantum system. (c) A possible elementary excitation is a combined motion of the electron and the water shell, a so-called polaron. The polaron can be connected with an oscillation of the size of the quantum system (panels (b) and (c)), changing the strength of the overall electric polarization originating from the water molecules. (d) The oscillating electric polarization emits an electric field E_osc(τ), which is plotted as a function of time τ and represents the quantity observed experimentally.

Image: 
MBI

Ionization of water molecules by light generates free electrons in liquid water. After generation, the free electron forms a so-called solvated electron: a localized electron surrounded by a shell of water molecules. During the ultrafast localization process, the electron and its water shell display strong oscillations, giving rise to terahertz emission for tens of picoseconds.

Ionization of atoms and molecules by light is a basic physical process generating a negatively charged free electron and a positively charged parent ion. If one ionizes liquid water, the free electron undergoes a sequence of ultrafast processes by which it loses energy and eventually localizes at a new site in the liquid, surrounded by a water shell [Fig. 1]. The localization process includes a reorientation of water molecules at the new site, a so-called solvation process, in order to minimize the electric interaction energy between the electron and the water dipole moments. The localized electron obeys the laws of quantum mechanics and displays discrete energy levels. Electron localization occurs in the subpicosecond time range (1 ps = 10^-12 s = a millionth of a millionth of a second) and is followed by dissipation of excess energy into the liquid.

Researchers at the Max-Born-Institute have now observed radiation in the terahertz range (1 THz = 10^12 Hz = 10^12 oscillations per second) which is initiated during the electron localization process. As they report in the recent issue of Physical Review Letters, Vol. 126, 097401 (2021), the THz emission can persist for up to 40 ps, i.e., much longer than the localization process itself. It displays a frequency between 0.2 and 1.5 THz, depending on the electron concentration in the liquid.

The emitted THz waves originate from oscillations of the solvated electrons and their water shells. The oscillation frequency is determined by the local electric field the liquid environment exerts on this quantum system. Adding hydrated electrons to the liquid changes the local field and, thus, induces a change of oscillation frequency with electron concentration. Most surprising is the comparably weak damping of the oscillations which points to a weak interaction with the fluctuating larger environment in the liquid and a longitudinal character of the underlying electron and water motions.
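A minimal phenomenological reading of these numbers (a sketch for orientation, not the authors' polaron model) treats the emitted field as a weakly damped oscillation,

    E_osc(τ) ∝ exp(-τ/T_d) · cos(2π ν τ),

with frequency ν between 0.2 and 1.5 THz and a damping time T_d of tens of picoseconds. An emission persisting for 40 ps at, say, ν = 0.5 THz corresponds to roughly 20 oscillation periods, which is what "comparably weak damping" means in practice.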

The new experimental results are accounted for by a theoretical model based on a polaron picture as explained in Fig. 1. The polaron is an excitation which includes coupled motions of the electron and the water shell at low frequency. Due to such internal oscillations of charge, the hydrated electron radiates a THz wave. The weak damping of this wave allows for a manipulation of the emission, e.g., by interaction of the hydrated electron with a sequence of ultrashort light pulses.

Credit: 
Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI)

Engineered safety switch curbs severe side effects of CAR-T immunotherapy

image: Matthew Foster, MD, and his UNC Lineberger colleagues have used an experimental safety switch incorporated as part of a chimeric antigen receptor-modified T cell therapy (CAR-T) for an aggressive form of leukemia to quickly and effectively reduce the severity of a neurotoxic side effect caused by CAR-T therapy.

Image: 
UNC Lineberger Comprehensive Cancer Center

CHAPEL HILL, North Carolina--UNC Lineberger Comprehensive Cancer Center researchers have successfully used an experimental safety switch, incorporated as part of a chimeric antigen receptor T-cell (CAR-T) therapy, a type of immunotherapy, to reduce the severity of treatment side effects that sometimes occur. This advance was seen in a patient enrolled in a clinical trial using CAR-T to treat refractory acute B-cell leukemia. It demonstrates a proof-of-principle for possible expanded use of CAR-T immunotherapy paired with the safety switch.

The researchers published their findings in the journal Blood as an ahead-of-print publication.

With CAR-T therapy, T-cells from a patient's immune system are modified in a manufacturing facility to express part of an antibody that can bind to a surface protein on cancer cells. The modified T-cells, after being infused back into the patient, seek out and attack cancer cells throughout the body. Patients with leukemia or lymphoma have experienced complete remission when treated with CAR-T therapy but sometimes experience toxicities, which can be life-threatening, due to inflammatory responses or nervous system toxicities caused by the modified T-cells.

When using standard forms of cancer therapy, including pills and infused drugs, doctors can interrupt or lower drug dosing to respond to treatment toxicities. With cell-based immunotherapies, this is not possible after the cells are infused. So UNC Lineberger researchers engineered T-cells to include a safety switch, called inducible caspase-9, or iC9, that can be activated if toxic side effects develop. Administration of the drug rimiducid "triggers" the switch by activating caspase-9, potentially reducing severe side effects from the CAR-T therapy.

"Because of our active Cellular Immunotherapy Program at UNC Lineberger, we can engineer and generate various CAR-T cells for clinical trials. In this case, we have produced specialized CAR-T cells that could benefit patients by enhancing safety," said Matthew Foster, MD, lead author of the study and an associate professor in the UNC School of Medicine and a UNC Lineberger member. "With the assistance of our partner Bellicum Pharmaceuticals, we collaborated to use the safety switch-triggering drug rimiducid with cells manufactured at UNC Lineberger."

UNC Lineberger has enrolled patients in an ongoing early-phase clinical trial to determine whether a novel CAR-T therapy with the iC9 safety switch is safe and effective against relapsed or refractory B-cell acute lymphoblastic leukemia, a difficult-to-treat, fast-moving cancer that occurs frequently in children, adolescents and young adults.

One of the participants in the study, a 26-year-old woman, experienced a severe side effect -- immune effector cell-associated neurotoxicity syndrome (ICANS) -- after being infused with CAR-T. Her clinicians quickly reduced the severity of the side effects by administering the drug rimiducid to activate the iC9 safety switch. As intended, Foster said the safety switch reduced the number of circulating modified T-cells by nearly 60 percent within four hours and by more than 90 percent within 24 hours. The drug nearly eliminated the toxicities within one day.

"Even though this case study only documents an outcome in one patient, the fact that the drug was so successful so quickly gives us hope that it could have wider applications in a larger group of leukemia patients," said Gianpietro Dotti, MD, director of the UNC Lineberger Cellular Immunotherapy Program and professor of medicine at the UNC School of Medicine. "It should be noted that while rimiducid mitigated her toxicities, it also lowered the number of iC9 T cells fighting her cancer by 90 percent. But there seemed to be sufficient T-cells still circulating to maintain an anticancer response."

This trial is ongoing, but the investigators will next explore the effects of lower doses of rimiducid in patients with less severe toxicity, since lower dosing could be a way to intervene early and prevent severe toxicity.

"Given these results and the well-established high response rates in B-cell acute lymphoblastic leukemia patients receiving CAR-T cells, it is reasonable to have a high bar in 2021 and expect that we can achieve both safety and efficacy from such therapies," concluded Foster.

The investigators also see the potential to use CAR-T designed with the built-in safety switch to treat other cancers. "The ability to use a safety switch may also allow us to treat patients with solid tumors where there may be concern about the CAR-T cells affecting non-cancer tissue," said Jonathan Serody, MD, director of the UNC Lineberger Cellular Therapy Program. "In those instances, side effects can be eliminated by activating the safety switch."

Credit: 
UNC Lineberger Comprehensive Cancer Center

Advance in 'optical tweezers' to boost biomedical research

image: The resonance of ions in nanocrystals creates a strong optical trapping force.

Image: 
Dr Fan Wang

Much like the Jedi in Star Wars use 'the force' to control objects from a distance, scientists can use light, or 'optical force', to move very small particles.

The inventors of this ground-breaking laser technology, known as 'optical tweezers', were awarded the 2018 Nobel Prize in physics.

Optical tweezers are used in biology, medicine and materials science to assemble and manipulate nanoscale objects such as gold nanoparticles. However, the technology relies on a difference in the refractive properties of the trapped particle and the surrounding environment.

Now scientists have discovered a new technique that allows them to manipulate particles that have the same refractive properties as the background environment, overcoming a fundamental technical challenge.

The study 'Optical tweezers beyond refractive index mismatch using highly doped upconversion nanoparticles' has just been published in Nature Nanotechnology.

"This breakthrough has huge potential, particularly in fields such as medicine," says leading co-author Dr Fan Wang from the University of Technology Sydney (UTS).

"The ability to push, pull and measure the forces of microscopic objects inside cells, such as strands of DNA or intracellular enzymes, could lead to advances in understanding and treating many different diseases such as diabetes or cancer.

"Traditional mechanical micro-probes used to manipulate cells are invasive, and the positioning resolution is low. They can only measure things like the stiffness of a cell membrane, not the force of molecular motor proteins inside a cell," he says.

The research team developed a unique method to control the refractive properties and luminescence of nanoparticles by doping nanocrystals with rare-earth metal ions.

Having overcome this first fundamental challenge, the team then optimised the doping concentration of ions to trap nanoparticles at a much lower laser power and with 30 times greater efficiency.

"Traditionally, you need hundreds of milliwatts of laser power to trap a 20 nanometre gold particle. With our new technology, we can trap a 20 nanometre particle using tens of milliwatts of power," says Xuchen Shan, first co-author and UTS PhD candidate in the UTS School of Electrical and Data Engineering.

"Our optical tweezers also achieved a record high degree of sensitivity or 'stiffness' for nanoparticles in a water solution. Remarkably, the heat generated by this method was negligible compared with older methods, so our optical tweezers offer a number of advantages," he says.

Fellow leading co-author Dr Peter Reece, from the University of New South Wales, says this proof-of-concept research is a significant advancement in a field that is becoming increasingly sophisticated for biological researchers.

"The prospect of developing a highly-efficient nanoscale force probe is very exciting. The hope is that the force probe can be labelled to target intracellular structures and organelles, enabling the optical manipulation of these intracellular structures," he says.

Distinguished Professor Dayong Jin, Director of the UTS Institute for Biomedical Materials and Devices (IBMD) and a leading co-author, says this work opens up new opportunities for super resolution functional imaging of intracellular biomechanics.

"IBMD research is focused on the translation of advances in photonics and material technology into biomedical  applications, and this type of technology development is well aligned to this vision," says Professor Jin.

"Once we have answered the fundamental science questions and discovered new mechanisms of photonics and material science, we then move to apply them. This new advance will allow us to use lower-power and less-invasive ways to trap nanoscopic objects, such as live cells and intracellular compartments, for high precision manipulation and nanoscale biomechanics measurement."

Credit: 
University of Technology Sydney

Mutant proteins from SARS-CoV-2 block T cells' ability to recognize and kill infected cells

A deep sequencing study of 747 SARS-CoV-2 virus isolates has revealed mutant peptides derived from the virus that cannot effectively bind to critical proteins on the surface of infected cells and, in turn, hamper activation of CD8+ killer T cells that recognize and destroy these infected cells. These peptides, the authors say, represent one way the coronavirus subverts killer T cell responses and stymies immunity in the host.

Their results may be of particular importance for SARS-CoV-2 subunit vaccines, such as the RNA vaccines currently in use, which induce responses against a limited number of viral peptides presented to T cells; such vaccines may be at risk of stunted efficacy if any of these target peptides are mutated in emerging virus variants. However, because T cells can broadly recognize an array of epitopes, it remains to be determined just how mutations in single epitopes truly affect viral control.

Killer T cells kill infected cells upon recognition of viral epitopes, which are displayed on the surface of infected cells by class I major histocompatibility complex (MHC-I) proteins, known in humans as human leukocyte antigen (HLA) proteins. Certain positions in these epitopes are critical for HLA-I presentation, and mutations in these regions might interfere with the epitope binding to the HLA.

Benedikt Agerer and colleagues identified mutations in killer T cell epitopes after deep sequencing the 747 SARS-CoV-2 virus isolates. They confirmed that these mutant peptides could not effectively bind to HLA proteins in a cell-free, in vitro assay. When exposed to killer T cells isolated from HLA-matched COVID-19 patients, the reduced binding of mutant peptides to HLA-I decreased proliferation of T cells, stunted production of inflammatory factors such as IFN-γ, and interrupted the overall cell-killing activity of the killer T cells. In future work, the authors aim to address how these "escape" mutations are maintained during transmission between individuals with differing HLA subtypes and how viruses carrying epitope mutations affect disease severity.

Credit: 
American Association for the Advancement of Science (AAAS)

Porous crystal guides reaction to transform CO2

image: KAUST researchers have improved a chemical reaction that converts carbon dioxide into carbon monoxide using MOFs.

Image: 
© 2021 KAUST

By embedding a silver catalyst inside a porous crystal, KAUST researchers have improved a chemical reaction that converts carbon dioxide (CO2) into carbon monoxide (CO), which is a useful feedstock for the chemical industry.

Carbon monoxide is a building block for producing hydrocarbon fuels, and many researchers are searching for ways to produce it from CO2, a greenhouse gas emitted by burning fossil fuels. One strategy involves using electricity and a catalyst to drive a so-called CO2 reduction reaction. But this reaction typically produces a variety of other products, including methane, methanol and ethylene. Separating these products significantly raises the cost of the process, so researchers hope to guide the reaction to generate a single product.

Osama Shekhah and Mohamed Eddaoudi, chemists at KAUST, in collaboration with Ted Sargent's group at the University of Toronto, have now fine-tuned the CO2 reduction reaction using metal organic frameworks (MOFs). These porous crystals contain a lattice of metal-based nodes connected by carbon-based linker molecules. By altering these components, researchers can tailor the size of an MOF's pores and its chemical properties.

The researchers created four different MOFs with the same overall lattice arrangement and grew 5-nanometer-wide nanoparticles of silver inside the pores of each MOF. Then they tested each MOF to find how its structure affected the CO2 reduction reaction. They monitored which products emerged from the process and studied how an activated form of CO -- a crucial intermediate in the reaction -- bound to the silver catalyst.

The most effective MOF contained zirconium-based nodes connected by molecules of 1,4-naphthalenedicarboxylic acid. Because it has smaller pores, it trapped CO2 better than its rivals.

The silver nanoparticle in this MOF also bound activated CO in a different way than the others, connecting in a "bridging mode" involving two bonds rather than one. This ensured that CO was less likely to transform into unwanted byproducts. "Controlling the type of the CO intermediate during the reaction has a big influence on the CO selectivity," says Shekhah. Together, these effects boosted the efficiency of CO production to 94 percent, a dramatic improvement in selectivity.
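For context, selectivity in electroreduction experiments is conventionally quantified as the Faradaic efficiency, the fraction of the total charge passed that ends up in the desired product (a standard definition, stated here as an assumption about the metric behind the 94 percent figure):

    FE_CO = (z · n_CO · F) / Q

where z = 2 is the number of electrons needed per CO molecule (CO2 + 2 H+ + 2 e- → CO + H2O), n_CO is the moles of CO produced, F is the Faraday constant (96,485 C/mol) and Q is the total charge passed through the cell.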

The researchers hope to build on their strategy, making further tweaks to the MOF's structure to enhance the CO2 reduction reaction. "We believe that this work paves the way for using MOFs as new supports for improving the activity and product selectivity of the CO2 reduction reaction by directly interacting with the gaseous intermediates and controlling their binding mode," says Eddaoudi.

Credit: 
King Abdullah University of Science & Technology (KAUST)

SUTD study uncovers how big droughts in the Greater Mekong trigger CO2 emission bursts

image: The figure illustrates the response of the Thai-Laotian power grid, as built and operated in 2016, under the hydro-climatological conditions experienced in four selected years. The intensity of droughts across the Mekong and Chao Phraya basins is measured with the Streamflow Drought Index, or SDI (negative values correspond to dry conditions). The impact of hydro-climatic variability on the power system is quantified with the following variables: annual anomalies of available hydropower (in GWh) and derated capacity of freshwater-dependent thermoelectric plants (in MW).

Image: 
SUTD

A study on big droughts in the Greater Mekong region revealed findings that can help reduce the carbon footprint of power systems while providing insights into better designed and more sustainable power plants.

The study, titled 'The Greater Mekong's climate-water-energy nexus: how ENSO-triggered regional droughts affect power supply and CO2 emissions', was published by researchers from the Singapore University of Technology and Design (SUTD) and the University of California, Santa Barbara, in the journal Earth's Future.

Known as an important means of supporting economic growth in Southeast Asia, the hydropower resources of the Mekong River Basin have been largely exploited by the riparian countries. The researchers found that during prolonged droughts hydropower production drops drastically, forcing power systems to compensate with fossil fuels -- gas and coal -- thus increasing power production costs and carbon footprint. As such, the vulnerability of hydropower dams to inter-annual changes in water availability hinders their ability to deliver on the promise of clean energy.

Based on the 2016 energy demand, the researchers estimated that prolonged droughts reduce hydropower production in the Thai-Laotian grid (refer to image) by about 4,000 GWh/year, increasing carbon dioxide emissions by 2.5 million metric tonnes, and increasing costs by US$120 million in one year.
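These three figures are mutually consistent, as a quick back-of-envelope check of the implied per-unit averages shows (an illustrative Python calculation, not one taken from the paper):

    # Implied per-MWh averages from the reported drought-year figures.
    shortfall_gwh = 4000      # lost hydropower per drought year (GWh)
    extra_co2_mt = 2.5        # added CO2 emissions (million metric tonnes)
    extra_cost_musd = 120     # added production cost (million US$)

    co2_per_mwh = extra_co2_mt * 1e9 / (shortfall_gwh * 1e3)      # kg CO2/MWh
    cost_per_mwh = extra_cost_musd * 1e6 / (shortfall_gwh * 1e3)  # $/MWh
    print(f"{co2_per_mwh:.0f} kg CO2/MWh, ${cost_per_mwh:.0f}/MWh")  # 625, 30

The roughly 625 kg of CO2 per megawatt-hour sits between typical gas-fired and coal-fired emission factors, consistent with the fossil mix that replaces the lost hydropower.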

At the same time, power supply was, surprisingly, found not to be at risk during droughts. This finding suggests that some big coal plants may have a capacity larger than necessary, adding an avoidable burden on the environment.

The researchers also found that these phenomena -- droughts and shifts in the energy generation mix -- are largely caused by El Niño events. These happen when trade winds weaken, the equatorial Pacific Ocean's surface is warmer than usual and less moisture is delivered to Southeast Asia from the Pacific. The bad news is that anthropogenic climate change may exacerbate El Niño events: if that happens, we will face a drier summer monsoon, with less water available for power systems.

So, what can we do to make power supply more sustainable?

"The answer may lie in mathematical models," explained principal investigator Associate Professor Stefano Galelli from SUTD.

"Our study builds on a new generation of high-resolution water-energy models that explain how each individual power plant reacts to external conditions, such as droughts or increased electricity demand. We can use these models to coordinate water-energy operations across countries, or to prepare contingency plans at the onset of a big drought," he added.

Credit: 
Singapore University of Technology and Design

Zinc oxide: key component for the methanol synthesis reaction over copper catalysts

image: Bimetallic copper-zinc nanoparticles convert CO, CO2 and H2 into methanol.

Image: 
© FHI/Kordus

The current commercial production of methanol through the hydrogenation of the greenhouse gas CO2 relies on a catalyst consisting of copper, zinc oxide and aluminum oxide. Even though this catalyst has been used for many decades in the chemical industry, unknowns still remain. A team of researchers from the Interface Science Department of the Fritz-Haber-Institute of the Max Planck Society, the Ruhr-University Bochum, the Stanford Linear Accelerator Center (SLAC), FZ Juelich and Brookhaven National Laboratory has now elucidated the origin of the intriguing catalytic activity and selectivity trends of complex nanocatalysts at work. In particular, the team shed light on the role of the oxide support and unveiled how methanol production can be influenced by minute amounts of zinc oxide in intimate contact with copper.

Methanol can serve as an energy source or as a raw material for the production of other chemicals, with over 60 million metric tons produced yearly. The traditional copper, zinc oxide and aluminum oxide catalyst converts synthesis gas, which is composed of H2, CO and CO2, into methanol. Though reliable, the catalyst loses efficiency over time, affecting its longevity, as is the case with many catalysts. "We therefore studied copper and mixed copper-zinc nanoparticles on various oxide supports to understand how they interact and evolve and to unravel the role of each catalyst constituent. This knowledge will serve to improve future catalysts," says Núria Jiménez Divins, one of the lead authors of the study.

The team investigated the catalytic process under realistic reaction conditions, reproducing those applied in the industrial process: high pressures (20-60 bar) and mild temperatures. This required synchrotron-generated X-ray radiation. Simon R. Bare from the Stanford Synchrotron Radiation Lightsource, who contributed to the experiments, explains: "Reactions at such temperatures and high pressures need to take place in a closed container which should also be transparent to the X-rays, which makes the measurements challenging. The special reactor design, in combination with synchrotron radiation, allowed us to undertake so-called operando measurements, where we watched live what happens to the catalytic components at the industrially relevant reaction conditions." This allowed the researchers to follow not just the birth and death of the catalyst, but also its development and the transformations leading to changes in its activity and selectivity.

By combining results from microscopy, spectroscopy and catalytic measurements, the team found that some supports had a more positive influence on the performance of the catalyst than others because of how they interacted with zinc oxide, which was present in highly dilute form as part of the Cu-Zn nanoparticles. On silicon oxide supports, zinc oxide was partially reduced to metallic zinc or gave rise to a brass alloy during the catalytic process, which over time proved detrimental for methanol production. When aluminum oxide was used as a support, zinc interacted strongly with the support and became incorporated into its lattice, giving rise to a change in reaction selectivity towards dimethyl ether. "This is an interesting finding," says David Kordus, the other lead author of the study and a PhD student at the Interface Science Department at FHI. "We know now that the choice of support material has an influence on how the active components of the catalyst behave and dynamically adapt to the reaction conditions. Especially the oxidation state of zinc is critically influenced by this, which should be considered for future catalyst design."

This work, published in Nature Communications, demonstrates that zinc oxide does not need to be part of the support; it has a beneficial function even when available in highly dilute form within the nanoparticle catalyst itself. These insights will help researchers better understand methanol synthesis catalysts and could lead to an improved catalyst for this important industrial process.

Credit: 
Fritz Haber Institute of the Max Planck Society

University of Limerick, Ireland, research identifies secrets of Fantasy Premier League success

As millions of Fantasy Premier League players mull over a decision whether to start Bruno Fernandes or Mohamed Salah in their teams this weekend, new research by the University of Limerick in Ireland has unlocked the secrets of the popular online game.

A new study by a team of researchers at UL has identified the underlying tactics used by the top-ranked competitors among the seven million players of Fantasy Premier League (FPL), the official - and world's largest - fantasy football game of the English Premier League.

Joseph O'Brien, Professor James Gleeson, and Dr David O'Sullivan, based within the Mathematics Applications Consortium for Science and Industry (MACSI) in the Department of Mathematics and Statistics at UL, have just published research in PLOS ONE, a high-quality, peer-reviewed scientific journal. Via a combination of large-scale data analysis, statistical techniques and network science, the study provides a deeper understanding of the behaviours and resulting actions taken by the very best competitors within the game.

Lead author of the study Joseph O'Brien, a PhD student based at MACSI in UL, said: "FPL on the surface appears to be an extremely simple game in that one should just choose the most talented footballers for their teams and see what happens. However, in this study we analyse the results of competitors over multiple years and find that there are in fact groups of 'managers' that consistently perform extremely strongly, suggesting an element of skill."

Determined to understand why this phenomenon occurs, the researchers took advantage of the publicly available data to extract information from around 40 million webpages, describing the actions taken by the top one million ranked managers.

Analysis of this data revealed a number of clearly defined strategies that differentiated successful managers from their less fortunate peers.

"We could immediately observe many different strategies used by managers and in particular there were multiple points in the season in which successful managers acted in an extremely different manner to those lower ranked, almost as if the thousands had come together with a ready-made game-plan," explained Joseph.

Arguably, the most interesting finding was that, despite managers being able to choose combinations from over 600 unique footballers, there were multiple stages in the season when the teams from these skilled managers converged to appear highly similar.
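Convergence of this kind can be quantified with a simple set-overlap measure. The Python sketch below uses a hypothetical squad_similarity helper and made-up squads to illustrate the idea; the study's actual network-science methodology is more involved:

    # Jaccard similarity between two 15-player FPL squads:
    # 1.0 for identical squads, 0.0 for squads sharing no players.
    def squad_similarity(squad_a: set, squad_b: set) -> float:
        return len(squad_a & squad_b) / len(squad_a | squad_b)

    # Hypothetical example: two top-ranked managers sharing 12 of 15 players.
    manager_1 = {f"player_{i}" for i in range(15)}
    manager_2 = {f"player_{i}" for i in range(3, 18)}
    print(f"squad similarity: {squad_similarity(manager_1, manager_2):.2f}")  # 0.67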

Using machine learning tools, the researchers were then able to identify the players that were crucial to the make-up of successful teams.

"We were amazed to find that for most of the season the key player in successful teams wasn't Mo Salah or Kevin De Bruyne but rather Aaron Wan-Bissaka - a player in his debut season for Crystal Palace, due to his extremely low price and surprisingly efficient scoring (he completed a £45m transfer to Manchester United the following year). This combination allowed him to be a consistent 'enabler' for managers to have more expensive players elsewhere," said Joseph.

Taken together this research demonstrates clear characteristics present amongst the highest ranked managers suggesting "a pathway to success for competitors in the game with particular emphasis on long-term planning and identification of optimal enabling players", the UL researcher explained.

Noting possible opportunities for future applications of the research, Joseph outlined that "interesting questions remain as to whether the techniques we propose in this study may be used in similarly identifying signatures of successful competitors within other domains including e-sports, entrepreneurship, and scientific output".

Credit: 
University of Limerick

Researchers discover how to control zinc in plants: Could help the world's malnourished

Over 2 billion people worldwide are malnourished due to zinc deficiency. Led by the University of Copenhagen, an international team of researchers has discovered how plants sense zinc and has used this knowledge to enhance plant zinc uptake, increasing seed zinc content by 50 percent. The new knowledge might one day be applied to the cultivation of more nutritious crops.

A deficiency of zinc and other essential dietary nutrients is one of the greatest causes of malnutrition worldwide. More than two billion people are estimated to suffer from zinc deficiency, a problem that can lead to impaired immune systems, mental disorders and stunting. Among other things, malnutrition can be caused by infertile agricultural land, which affects the nutritional content of staple crops such as rice, wheat and maize.

But imagine that it was possible to flip a switch in crops, at the seed stage, that prompted them to turbocharge their intake of zinc, iron or other nutrients, and cause them to absorb more nutrients than they would otherwise. Researchers at the University of Copenhagen's Department of Plant and Environmental Sciences have done just that using the thale cress plant (Arabidopsis thaliana).

"For the first time ever, we have demonstrated that, by using a molecular 'switch' in the plant, we can cause the plant to absorb more zinc than it would otherwise, without apparent negative impact on the plant," states the study's lead author, Associate Professor Ana Assunção of the University of Copenhagen's Department of Plant and Environmental Sciences.

Plants absorbed 50 percent more zinc

Zinc benefits humans by helping to keep a wide array of chemical processes and proteins running within our bodies. Should these processes cease to function properly, we become prone to illness. For plants, zinc deficiency primarily impairs growth.

Researchers have long attempted to understand how plants increase and decrease their zinc uptake. Ana Assunção and her colleagues have become the first to identify two specific proteins from thale cress that act as zinc sensors and determine the plant's ability to absorb and transport zinc throughout plant tissue.

By changing the properties of these sensors, the molecular "switch" that controls a tightly connected network of zinc transporters, the researchers succeeded in getting plants to absorb more zinc.

"Simply put, by making a small change in the sensor, we've led the plant to believe that it was in a permanent state of zinc deficiency. This kept the plant's zinc uptake machinery swiched-on and resulted in an increase of zinc content in the seeds by as much as 50 percent compared to a normal plant," explains Grmay Lilay, the study's first author, Postdoc at Assunção's Lab .

Up next: rice and beans

The researchers have demonstrated that it is possible to increase zinc-absorption in their experimental plant, but the next step is to reproduce the results in real crops. And the researchers are already well on the way to doing so.

"We're currently working to recreate our results in bean, rice and also tomato plants. Should we succeed, we'll realize some interesting opportunities to develop more nutritious and biofortified crops. Biofortification is a sustainable solution to improve micronutrient content in human diet," says Associate Professor Assuncao.

In the long term, the researchers' results could be applied by using CRISPR gene editing or by selecting naturally occurring crop varieties with a particularly good ability to absorb nutrients like zinc.

"The availability of enormous genomic resources will assist our efforts in finding crop varieties that are likely to display higher zinc accumulation," concludes Grmay Lilay.

Credit: 
University of Copenhagen - Faculty of Science

Sea butterflies already struggle in acidifying Southern Ocean

image: A compilation of sea butterflies (Limacina retroversa) captured during the AMT27 ocean expedition.

Image: 
Credits: Lisette Mekkes & Katja Peijnenburg, Naturalis.

The oceans are becoming more acidic because of the rapid release of carbon dioxide (CO2) caused by anthropogenic (human) activities, such as burning of fossil fuels. So far, the oceans have taken up around 30% of all anthropogenic CO2 released to the atmosphere. The continuous increase of CO2 has a substantial effect on ocean chemistry because CO2 reacts with water and carbonate molecules. This process, called 'ocean acidification', lowers pH, and calcium carbonate becomes less available. This is a problem for calcifying organisms, such as corals and molluscs, that use calcium carbonate as the main building blocks of their exoskeleton.
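The underlying carbonate chemistry (standard equilibria, summarized here for context) explains both effects at once. Dissolved CO2 forms carbonic acid, which releases protons, and those protons in turn consume carbonate ions:

    CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3-
    H+ + CO3^2- ⇌ HCO3-

The net reaction, CO2 + H2O + CO3^2- ⇌ 2 HCO3-, shows why rising CO2 simultaneously lowers pH and depletes the carbonate that calcifying organisms need for shell building.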

In particular, organisms that build their shells from a type of calcium carbonate known as 'aragonite' are in trouble, because aragonite is highly soluble in seawater. Sea butterflies, tiny swimming sea snails, build their shells of aragonite. They are therefore also known as 'the canaries in the coal mine', because they are expected to be amongst the first organisms affected by ocean acidification.

The Southern Ocean, around the Antarctic continent, is a region of high concern regarding ocean acidification. Globally, this region will experience acidified conditions first, because colder waters take up CO2 more readily. This is comparable to soda: you will find more bubbles dissolved in your soda when it is cold. Already within several decades, aragonite will be scarce in the Southern Ocean, posing a large problem for local sea life, such as sea butterflies.

Large numbers of sea butterflies inhabit the Southern Ocean and function as an important component of the food web where they are eaten by fish, seabirds and even whales. "It is important to gain a deeper understanding of the impacts of an increasingly acidifying ocean on the shell growth of sea butterflies", says Lisette Mekkes, PhD student at Naturalis Biodiversity Center and the University of Amsterdam. "Imagine that a highly abundant sea butterfly, Limacina retroversa, disappears from this region; that would have major implications for the rest of local sea life that depend on them for food, and for the calcium carbonate export from surface waters to the deep sea."

As part of Lisette's PhD research, sea butterflies from the Southern Ocean were exposed to different ocean conditions, comparable to the past, the present and the near future. Shell growth of the sea butterflies was visualized with a green fluorescent substance. Based on this fluorescence, the scientists discovered that sea butterflies already experience difficulties building their shells in today's Southern Ocean. This will become even more difficult in the upcoming decades.

In past conditions, equivalent to the year 1880 (before the oceans had absorbed large amounts of anthropogenic CO2), sea butterflies were able to calcify over their entire shells: they built thicker and larger shells. Under present-day and future ocean conditions, sea butterflies mostly built new shell material only along the edge of the shell. Moreover, shells exposed to the future conditions did not increase in weight and had a lower density compared to those from past and present conditions, suggesting that calcification will be further compromised in the future.

"It appears that sea butterflies change their calcification strategy as a consequence of ocean acidification" explains Katja Peijnenburg, group leader at Naturalis Biodiversity Center and the University of Amsterdam. "As long as the oceans are not acidifying, sea butterflies can invest in growing thicker and larger shells. However, when the oceans acidify and aragonite is less available, they invest mostly in becoming larger" concludes Peijnenburg.

Although it is generally known that sea butterflies are vulnerable to changes in ocean chemistry, it is alarming to find out that they are already experiencing difficulties building their shells in today's ocean. The scientists wonder how much longer sea butterflies will be able to build their shells as the global ocean continues to acidify.

Credit: 
Naturalis Biodiversity Center

First wearable device can monitor jaundice-causing bilirubin and vitals in newborns

image: Schematic of neonatal wearable device for detecting jaundice and vitals

Image: 
Yokohama National University

Researchers in Japan have developed the first wearable devices to precisely monitor jaundice, a yellowing of the skin caused by elevated bilirubin levels in the blood that can cause severe medical conditions in newborns. Jaundice can be treated easily by irradiating the infant with blue light that breaks bilirubin down to be excreted through urine. The treatment itself, however, can disrupt bonding time, cause dehydration and increase the risks of allergic diseases. Neonatal jaundice is one of the leading causes of death and brain damage in infants in low- and middle-income countries.

To address the tricky balance of administering the precise amount of blue light needed to counteract the exact levels of bilirubin, researchers have developed the first wearable sensor for newborns that is capable of continuously measuring bilirubin. In addition to bilirubin detection, the device can simultaneously detect pulse rate and blood oxygen saturation in real time.

Led by Hiroki Ota, associate professor of mechanical engineering in Yokohama National University's Graduate School of System Integration, and Shuichi Ito, professor in the Department of Pediatrics at Yokohama City University's Graduate School of Medicine, the team published their results on March 3 in Science Advances.

"We have developed the world's first wearable multi-vital device for newborns that can simultaneously measure neonatal jaundice, blood oxygen saturation and pulse rate," Ota said, noting that jaundice occurs in 60 to 80% of all newborns. "The real-time monitoring of jaundice is critical for neonatal care. Continuous measurements of bilirubin levels may contribute to the improvement of quality of phototherapy and patient outcome."

Currently, medical professionals use handheld bilirubinometers to measure bilirubin levels, but no existing device can simultaneously measure jaundice and vitals in real time.

"In this study, we succeeded in miniaturizing the device to a size that can be worn on the forehead of a newborn baby," Ota said. "By adding the function of a pulse oximeter to the device, multiple vitals can easily be detected."

Held to the baby's forehead by a silicone interface, the device has a lens capable of efficiently transmitting light to neonatal skin via battery-powered light-emitting diodes, commonly known as LEDs.
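The vitals side of the device rests on the standard principle of pulse oximetry, summarized here generically rather than as the team's specific implementation. Skin is illuminated at two wavelengths, typically red (around 660 nm) and infrared (around 940 nm), and the pulsatile (AC) and steady (DC) components of the detected signal are compared:

    R = (AC_red / DC_red) / (AC_ir / DC_ir)

Because oxygenated and deoxygenated hemoglobin absorb the two wavelengths differently, R maps onto blood oxygen saturation through an empirically calibrated curve, while the pulsatile component itself yields the pulse rate.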

"At the present stage, coin cell batteries are used, and the overall shape is very thick," Ota said. "In the future, it will be necessary to further reduce the thickness and weight by using thin-film batteries and organic materials."

The researchers tested the device on 50 babies, and they found that the device is not currently accurate enough to suffice for clinical decision-making. According to Ota, they will reduce the thickness and increase the flexibility of the device, as well as improve the silicone interface to facilitate better skin contact.

In the future, the researchers plan to develop a combined treatment approach that pairs a wearable bilirubinometer with a phototherapy device to optimize the amount and duration of light therapy based on continuous measurements of bilirubin levels.

Credit: 
Yokohama National University