Tech

Paving the way to artificial photosynthesis -- effect of doping on the photocatalyst SrTiO3

image: Charge recombination occurs when mobile charge carriers, generated in a material exposed to light, annihilate each other; it can hamper the energy efficiency of the photocatalyst.

Image: 
Image courtesy: Masashi Kato from Nagoya Institute of Technology

For many years, researchers have focused on developing technologies to help us confront the climate change crisis. They share one goal: finding sustainable energy sources to replace environmentally harmful fossil fuels. "Photocatalysts" that drive an artificial process replicating photosynthesis (in which solar energy is converted into useful materials) are promising in this regard, provided we can develop the technology they require. Crystalline materials such as strontium titanate (SrTiO3), which can serve as photocatalysts in solar devices, could lead the way.

SrTiO3 is attractive for other reasons too, such as its potential applications in resistive switches and fuel cell components. This versatility has motivated physicists to study its material properties in detail. But to dig deeper into those properties, a little background is needed.

Photocatalytic materials such as SrTiO3 are usually "doped" with chemicals like niobium (Nb) that help improve their electrical properties. But a process called "charge recombination" can occur in photocatalysts, which hampers their efficiency. In this process, mobile charge carriers such as "electrons" and "holes," generated when the material is exposed to light, annihilate each other. Some studies have shown that charge recombination is affected by the presence of defects in crystals. So how does Nb doping affect the material properties of SrTiO3? This is exactly what a team of researchers at Nagoya Institute of Technology, Japan, led by Prof. Masashi Kato, wanted to find out.

In their study published in Journal of Physics D: Applied Physics, the researchers looked at the effects of low-concentration Nb doping, as well as no doping, on the surface recombination in SrTiO3 crystals. Prof. Kato explains, "Quantitatively measuring the effects of surfaces and niobium impurities in SrTiO3 on carrier recombination can help us design photocatalysts with an optimal structure for artificial photosynthesis."

The scientists first analyzed the carrier decay patterns, which reflect surface recombination, in undoped SrTiO3 samples as well as samples doped with different concentrations of Nb, using a technique called "microwave photoconductivity decay." To further probe the bulk carrier recombination in the doped samples and the energy levels introduced by Nb doping, they used another technique called "time-resolved photoluminescence."

The researchers found that the recombination of excited carriers did not depend on their concentration, indicating that they recombined via "surface" and "Shockley-Read-Hall" processes (which are insensitive to the excess carrier concentration). Moreover, the doped samples showed faster decay, which could be due to recombination centers introduced by the Nb; doping the material with high concentrations of Nb thus had a negative effect on carrier lifetimes. Finally, the size of the photocatalyst, and not its shape, influenced surface recombination and ultimately its overall efficiency.
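A brief piece of textbook reasoning (not spelled out in the release) shows why that insensitivity points to these two mechanisms: the main recombination channels scale differently with the excess carrier density n.

```latex
% Textbook scaling of recombination rates with excess carrier density n
% (a standard approximation, not a result reported in the study):
R_{\text{SRH}},\; R_{\text{surface}} \;\propto\; n, \qquad
R_{\text{radiative}} \;\propto\; n^{2}, \qquad
R_{\text{Auger}} \;\propto\; n^{3}
```

Because surface and Shockley-Read-Hall recombination are first order in n, the measured decay time does not change with excitation density, which matches the behaviour the team observed.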

The study concluded that moderately Nb-doped SrTiO3 could actually be more beneficial than pure SrTiO3, especially at higher operating temperatures. These findings can help in designing SrTiO3 photocatalysts with lower surface recombination and higher energy-conversion efficiency, leading to the development of efficient, sustainable sources of energy.

Prof. Kato optimistically concludes, "We are confident that our findings can accelerate the development of artificial photosynthesis technologies, ultimately contributing towards a greener, more sustainable society."

Credit: 
Nagoya Institute of Technology

A 50% rise in the level of CO2 could reduce rainfall in the Amazon more than deforestation

image: The direct impact of rising carbon dioxide levels over the Amazon rainforest would be a reduction in rainfall equal to or even greater than that caused by complete substitution of the forest by pasture

Image: 
João Marcos Rosa/AmazonFACE

A 50% rise in the level of carbon dioxide (CO2) in the atmosphere could reduce rainfall in the Amazon as much as or even more than substitution of the entire forest by pasture. The rise in CO2 would reduce the amount of water vapor emitted by the forest, leading to a 12% annual drop in the volume of rainfall, while total deforestation would reduce rainfall by 9%.

These estimates are presented in a study published in Biogeosciences by scientists affiliated with the National Space Research Institute (INPE), the University of São Paulo (USP) and the University of Campinas (UNICAMP) in Brazil, and with Munich Technical University (TUM) in Germany.

“CO2 is a basic input for photosynthesis, so when it increases in the atmosphere, plant physiology is affected and this can have a cascade effect on the transfer of moisture from trees to the atmosphere [transpiration], the formation of rain in the region, forest biomass, and several other processes,” said David Montenegro Lapola, last author of the article.

Lapola is a professor at UNICAMP’s Center for Meteorological and Climate Research Applied to Agriculture (CEPAGRI) and principal investigator for a project funded via the FAPESP Research Program on Global Climate Change (RPGCC). The study was also part of a Thematic Project funded by FAPESP and supported by a postdoctoral fellowship awarded to the penultimate author.

The researchers set out to investigate how the physiological effects of rising atmospheric CO2 on plants influence the rainfall regime. Plants transpire less as the supply of CO2 increases, emitting less moisture into the atmosphere and hence generating less rain.

Normally, however, predictions regarding the increase in atmospheric CO2 do not dissociate its physiological effects from its effects on the balance of radiation in the atmosphere. In the latter case, the gas prevents some of the Sun’s reflected energy escaping from the atmosphere, causing the warming phenomenon known as the greenhouse effect.

Projections presented in the latest report of the Intergovernmental Panel on Climate Change (IPCC), taking into account changes in the atmospheric radiation balance plus the physiological effects on plants, had already forecast a possible reduction of up to 20% in annual rainfall in the Amazon and shown that much of the change in the region’s precipitation regime will be controlled by the way the forest responds physiologically to the increase in CO2.

For the recently published study, the researchers ran simulations on the supercomputer at INPE’s Center for Weather Forecasting and Climate Studies (CPTEC) in Cachoeira Paulista, state of São Paulo. They projected scenarios in which the atmospheric level of CO2 rose 50% and the forest was entirely replaced by pasture to find out how these changes affected the physiology of the forest over a 100-year period.

“To our surprise, just the physiological effect on the leaves of the forest would generate an annual fall of 12% in the amount of rain [252 millimeters less per year], whereas total deforestation would lead to a fall of 9% [183 mm]. These numbers are far higher than the natural variation in precipitation between one year and the next, which is 5%,” Lapola said.
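As a quick back-of-envelope check (not part of the study, and assuming the percentages and millimetre figures refer to the same baseline annual rainfall), the two sets of numbers quoted above are mutually consistent:

```python
# Back-of-envelope consistency check of the figures quoted above
# (illustrative only; not the authors' code or data).
drop_physiological_mm = 252    # 12% reduction (CO2 physiological effect)
drop_deforestation_mm = 183    # 9% reduction (total deforestation)

baseline_from_co2 = drop_physiological_mm / 0.12            # ~2100 mm/year
baseline_from_deforestation = drop_deforestation_mm / 0.09  # ~2033 mm/year

print(f"Implied baseline rainfall (CO2 scenario):           {baseline_from_co2:.0f} mm/yr")
print(f"Implied baseline rainfall (deforestation scenario): {baseline_from_deforestation:.0f} mm/yr")
```

Both figures imply a baseline of roughly 2,000-2,100 mm of rain per year, so the percentages and absolute values line up.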

The findings draw attention to the need for local action to reduce deforestation in the nine countries that share the Amazon basin and global action to reduce CO2 emissions into the atmosphere by factories, vehicles and power plants, for example.

Lapola is one of the coordinators of the AmazonFACE experiment. The acronym stands for Free-Air Carbon dioxide Enrichment. Installed not far north of Manaus, the experiment will raise the level of CO2 over small tracts of rainforest and analyze the resulting changes to plant physiology and the atmosphere. The experiment could anticipate the climate change scenario predicted for this century (more at: agencia.fapesp.br/32470).

Transpiration in forest and pasture

The scenarios projected by the computer simulations showed the decrease in rainfall being caused by a reduction of about 20% in leaf transpiration. The reasons for the reduction are different in each situation, however.

Stomata are microscopic pores in plant leaves that control gas exchange for photosynthesis. They open to capture CO2 and at the same time emit water vapor. In the scenario with more CO2 in the air, stomata remain open for less time and emit less water vapor, reducing cloud formation and rainfall.

Total leaf area shrinkage is another reason. If the entire forest were replaced by pasture, leaf area would shrink by 66%. This is because the forest contains several superimposed layers of leaves in its trees, so that the total leaf area can be up to six times the ground area beneath it. Lastly, both rising levels of CO2 and deforestation influence the wind and the movement of air masses, which play a key role in the precipitation regime.

“The forest canopy has a complex surface made up of the tops of tall trees, low trees, leaves and branches. This is called canopy surface roughness. The wind produces turbulence, with eddies and vortices that in turn produce the instability that gives rise to the convection responsible for heavy equatorial rainfall,” Lapola said. “Pasture has a smooth surface over which the wind always flows forward and, without the forest, does not produce vortices. The wind intensifies as a result, bearing away most of the precipitation westward, while much of eastern and central Amazonia, the Brazilian part, has less rain.”

The decrease in transpiration caused by rising levels of CO2 leads to a temperature increase of up to two degrees because there are fewer water droplets to mitigate the heat. This factor triggers a cascade of phenomena that result in less rain owing to inhibition of so-called deep convection (very tall rain clouds heavy with water vapor).

“A next step would be to test other computational models and compare the results with our findings,” Lapola said. “Another important initiative would consist of more experiments like FACE, as only these can supply data to verify and refine modeling simulations like the ones we performed.”

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

To understand the future of hurricanes, look to the past

The historic 2020 hurricane season, with its record-breaking 30 tropical storms and hurricanes, left in its wake hundreds of deaths in the United States, tens of billions of dollars in damages, and one important question: Is this what the future will look like?

While most climate scientists agree that hurricane severity, at least in terms of rainfall, will likely increase as the planet warms, there remains uncertainty about the future frequency of hurricanes. Today's climate models offer a range of possible futures, some predicting an increase in North Atlantic hurricane frequency, others a decrease. These conflicting results raise the question: are these models even capable of predicting hurricane frequency, or are they missing some vital process?

"In order to understand whether these models are credible we need to see if they can reproduce the past," said Peter Huybers, Professor of Earth and Planetary Sciences and of Environmental Science and Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). "Today's models do a good job simulating the past 40 years of hurricanes but as we move back in time, the models and data increasingly diverge. This raises an important question: if models do not reproduce the long-term history of hurricanes, should we trust their long-term predictions?"

In a new paper published in Science Advances, Huybers and a team of researchers found that these models are, in fact, capable of reproducing the long-term history of hurricanes -- but only if historic sea surface temperatures are corrected. The research highlights the importance of understanding sea surface temperature patterns and suggests that a better grasp of these patterns could reconcile conflicting model predictions and improve our understanding of how climate change will affect future hurricane frequency.

This paper builds on previous research in which Huybers and his team identified biases in historic sea surface temperature measurements and developed a comprehensive approach to correct the data. That paper, published in 2019, led to a better understanding of how oceans warmed over time. Here, the researchers apply the same correction to help model historic hurricane frequency.

Sea surface temperature plays a critical role in the formation of hurricanes.

"The frequency of Atlantic hurricanes depends on the pattern of sea surface temperatures, particularly the warmth of the subtropical North Atlantic, an area that roughly extends from the tip of Florida to Cape Verde, relative to the tropical oceans as a whole," said Duo Chan, a former graduate student at SEAS and first author of the paper.

When the subtropical North Atlantic is relatively warm, it leads to more atmospheric convection and more Atlantic hurricanes. When the subtropical North Atlantic is relatively cool, hurricane formation rates decrease, in part because of winds that shear apart proto-storm systems.
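A purely illustrative sketch of the quantity being described (the variable names and numbers are hypothetical, not taken from the paper): the "relative warmth" is simply the subtropical North Atlantic average minus the tropical-mean sea surface temperature.

```python
import numpy as np

# Hypothetical illustration of a relative-SST index: the subtropical North
# Atlantic average minus the tropical-mean sea surface temperature (in degC).
sst_subtropical_na = np.array([27.1, 27.4, 27.0])   # placeholder gridded values
sst_tropics = np.array([26.5, 26.8, 26.6, 26.9])    # placeholder gridded values

relative_sst = sst_subtropical_na.mean() - sst_tropics.mean()
print(f"Relative SST index: {relative_sst:+.2f} degC")
# A positive index (relatively warm subtropical North Atlantic) favors
# more hurricanes; a negative index favors fewer.
```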

When today's climate models try to reproduce past hurricane seasons, they generally predict too few hurricanes between 1885 and 1900 and too many between 1930 and 1960. These models, however, all rely upon historic sea surface temperatures that indicate a relatively cool subtropical Atlantic at the turn of the 20th century and a warm Atlantic in the mid-20th century.

But, as Huybers and Chan demonstrated in previous research, those historic sea surface temperatures contain systematic errors. Their corrected sea surface temperatures show a warmer subtropical North Atlantic from 1885 to 1920 and a relatively cooler one between 1930 and 1960. These adjustments bring modeled hurricane frequency into line with observations.

"Our corrections to sea surface temperature patterns were independently developed, and significantly improve the skill with which models reproduce historical hurricane variations," said Huybers. "These results increase our confidence both in historical sea surface temperatures and our models and provide a more-firm basis from which to explore how climate change will influence hurricane frequency going forward."

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Dedicated journal edition on largest ever study on First Nations Food Security & Environment

(OTTAWA, ON) The University of Ottawa, the University of Montreal and the Assembly of First Nations are pleased to announce the newly published First Nations Food, Nutrition and Environment Study (FNFNES) in the Canadian Journal of Public Health. Mandated by First Nations leadership across Canada through Assembly of First Nations Resolution 30 / 2007 and realized through a unique collaboration with researchers and communities, the First Nations Food, Nutrition and Environment Study is the first national study of its kind. It was led by principal investigators Dr. Laurie Chan, a professor and Canada Research Chair in Toxicology and Environmental Health at the University of Ottawa, Dr. Tonio Sadik, Senior Director of Environment, Lands & Water at the Assembly of First Nations (AFN), and Dr. Malek Batal, a professor of Nutrition and Canada Research Chair in Nutrition and Health Inequalities from the University of Montreal.

Among other things, the study highlights the successful partnership between First Nations peoples across Canada and academia. The FNFNES team worked closely with nearly 100 participating First Nations, demonstrating how good partnerships can produce information that is both scientifically robust and meaningful for communities.

A set of articles published yesterday in the Canadian Journal of Public Health presents key outcomes, drawing a remarkable picture of the diets of First Nations along with a suite of environmental factors that affect food and water in and around communities.

Key findings

The study highlights that traditional food systems remain foundational to the health and well-being of First Nations and that traditional food is of superior quality to store-bought food. The majority of traditional foods were found to be very safe and extremely healthy to consume, but access to these foods does not meet current needs owing to continuing environmental degradation, as well as socioeconomic, systemic, and regulatory barriers.

In fact, many First Nations face the challenge of extremely high rates of food insecurity - 3-5 times higher than the Canadian population overall - and the current diet of many First Nations adults is nutritionally inadequate.

The study also found that long-standing problems with water treatment systems in many First Nations, particularly exceedances for metals that affect colour and taste, limit the acceptability and use of tap water for drinking.

Studies like FNFNES can support First Nations to help make informed decisions about nutrition, the environment and environmental stewardship, and can lead to further research and advocacy with respect to safeguarding First Nations’ rights and jurisdiction. FNFNES results also provide a baseline from which to measure environmental changes expected to take place over time.

Credit: 
University of Ottawa

Novel heat-management material keeps computers running cool

image: An electron microscopy image of a gallium nitride-boron arsenide heterostructure interface at atomic resolution

Image: 
The H-Lab/UCLA

UCLA engineers have demonstrated successful integration of a novel semiconductor material into high-power computer chips to reduce heat on processors and improve their performance. The advance greatly increases energy efficiency in computers and enables heat removal beyond the best thermal-management devices currently available.

The research was led by Yongjie Hu, an associate professor of mechanical and aerospace engineering at the UCLA Samueli School of Engineering. The findings were recently published in Nature Electronics.

Computer processors have shrunk down to nanometer scales over the years, with billions of transistors sitting on a single computer chip. While the increased number of transistors helps make computers faster and more powerful, it also generates more hot spots in a highly condensed space. Without an efficient way to dissipate heat during operation, computer processors slow down, resulting in unreliable and inefficient computing. In addition, the highly concentrated heat and soaring temperatures on computer chips require extra energy to prevent processors from overheating.

To solve the problem, Hu and his team pioneered the development of a new thermal-management material with ultrahigh thermal conductivity in 2018. The researchers developed defect-free boron arsenide in their lab and found it to be much more effective at drawing and dissipating heat than other known metal or semiconductor materials, such as diamond and silicon carbide. Now, for the first time, the team has successfully demonstrated the material's effectiveness by integrating it into high-power devices.

In their experiments, the researchers used computer chips with state-of-the-art, wide-bandgap transistors made of gallium nitride, called high-electron-mobility transistors (HEMTs). When the processors were run at near maximum capacity, chips that used boron arsenide as a heat spreader showed a maximum temperature rise from room temperature to nearly 188 degrees Fahrenheit. This is significantly lower than chips using diamond to spread heat, which rose to approximately 278 degrees Fahrenheit, or those using silicon carbide, which reached about 332 degrees Fahrenheit.
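For readers who think in Celsius, the three quoted temperatures convert as follows (a plain unit conversion, not additional data from the study):

```python
# Convert the quoted chip temperatures from Fahrenheit to Celsius.
def f_to_c(f: float) -> float:
    return (f - 32) * 5 / 9

for material, temp_f in [("boron arsenide", 188), ("diamond", 278), ("silicon carbide", 332)]:
    print(f"{material:15s} {temp_f} F = {f_to_c(temp_f):.0f} C")
# boron arsenide  188 F = 87 C
# diamond         278 F = 137 C
# silicon carbide 332 F = 167 C
```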

"These results clearly show that boron-arsenide devices can sustain much higher operation power than processors using traditional thermal-management materials," Hu said. "And our experiments were done under conditions where most current technologies would fail. This development represents a new benchmark performance and shows great potential for applications in high-power electronics and future electronics packaging."

According to Hu, boron arsenide is ideal for heat management because it not only exhibits excellent thermal conductivity but also displays low heat-transport resistance.

"When heat crosses a boundary from one material to another, there's typically some slowdown to get into the next material," Hu said. "The key feature in our boron arsenide material is its very low thermal- boundary resistance. This is sort of like if the heat just needs to step over a curb, versus jumping a hurdle."

The team has also developed boron phosphide as another excellent heat-spreader candidate. In their experiments, the researchers first demonstrated how to build a semiconductor structure using boron arsenide and then integrated the material into a HEMT chip design. The successful demonstration opens a path toward industry adoption of the technology.

Credit: 
University of California - Los Angeles

Growing 'metallic wood' to new heights

image: This strip of metallic wood, about an inch long and one-third inch wide, is thinner than household aluminum foil but is supporting more than 50 times its own weight without buckling. If the weight were suspended from it, the same strip could support more than six pounds without breaking.

Image: 
Penn Engineering

Natural wood remains a ubiquitous building material because of its high strength-to-density ratio; trees are strong enough to grow hundreds of feet tall but remain light enough to float down a river after being logged.

For the past three years, engineers at the University of Pennsylvania's School of Engineering and Applied Science have been developing a type of material they've dubbed "metallic wood." Their material gets its useful properties and name from a key structural feature of its natural counterpart: porosity. As a lattice of nanoscale nickel struts, metallic wood is full of regularly spaced cell-sized pores that radically decrease its density without sacrificing the material's strength.

The precise spacing of these gaps not only gives metallic wood the strength of titanium at a fraction of the weight, but also unique optical properties. Because the gaps are spaced at distances comparable to the wavelengths of visible light, light reflecting off metallic wood interferes to enhance specific colors. The enhanced colors change with the angle at which light reflects off the surface, giving the material a dazzling appearance and the potential to be used as a sensor.
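One standard, first-order way to see why the reflected colour depends on the viewing angle (a textbook approximation, not a formula quoted in the paper) is the Bragg-type condition for constructive interference from a lattice of period d:

```latex
% Constructive interference from a periodic lattice of spacing d seen at angle \theta
% (textbook first-order picture, not taken from the study):
m\,\lambda = 2\,d\,\sin\theta, \qquad m = 1, 2, \dots
```

With d on the order of visible wavelengths (hundreds of nanometres), the enhanced wavelength shifts as the viewing angle changes, which is the angle-dependent colour described above.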

Penn Engineers have now solved a major problem preventing metallic wood from being manufactured at meaningful sizes: eliminating the inverted cracks that form as the material is grown from millions of nanoscale particles to metal films big enough to build with. Preventing these defects, which have plagued similar materials for decades, allows strips of metallic wood to be assembled in areas 20,000 times greater than they were before.

James Pikul, assistant professor in the Department of Mechanical Engineering and Applied Mechanics, and Zhimin Jiang, a graduate student in his lab, have published a study demonstrating this improvement in the journal Nature Materials.

When a crack forms within an everyday material, bonds between its atoms break, eventually cleaving the material apart. An inverted crack, by contrast, is an excess of atoms; in the case of metallic wood, inverted cracks consist of extra nickel that fills in the nanopores critical to its unique properties.

"Inverted cracks have been a problem since the first synthesis of similar materials in the late 1990s," says Jiang. "Figuring out a simple way of eliminating them has been a long-standing hurdle in the field."

These inverted cracks stem from the way that metallic wood is made. It starts as a template of nanoscale spheres, stacked on top of one another. When nickel is deposited through the template, it forms metallic wood's lattice structure around the spheres, which can then be dissolved away to leave its signature pores.

However, if there are any places where the spheres' regular stacking pattern is disrupted, the nickel will fill those gaps, producing an inverted crack when the template is removed.

"The standard way to build these materials is to start with a nanoparticle solution and evaporate the water until the particles are dry and regularly stacked. The challenge is that the surface forces of water are so strong that they rip the particles apart and form cracks, just like cracks that form in drying sand," Pikul says. "These cracks are very difficult to prevent in the structures we are trying to build, so we developed a new strategy that allows us to self-assemble the particles while keeping the template wet. This prevents the films from cracking, but because the particles are wet, we have to lock them in place using electrostatic forces so that we can fill them with metal."

With larger, more consistent strips of metallic wood now possible, the researchers are particularly interested in using these materials to build better devices.

"Our new manufacturing approach allows us to make porous metals that are three times stronger than previous porous metals at similar relative density and 1,000 times larger than other nanolattices," Pikul says. "We plan to use these materials to make a number of previously impossible devices, which we are already using as membranes to separate biomaterials in cancer diagnostics, protective coatings and flexible sensors."

Credit: 
University of Pennsylvania

Hotter, more frequent droughts threaten California's iconic blue oak woodlands

The devastating 2012 - 2016 drought in California triggered widespread tree cover loss and die-offs of a variety of species in the region. A new study in the open-access journal Frontiers in Climate is the first to show that California's iconic blue oak (Quercus douglasii) woodlands have also decreased by more than 1,200 km². By another metric, which reflects altered or deteriorating tree-cover condition, the blue oak range lost a further 600 km². These findings highlight the need to raise awareness of the vulnerability of these ecosystems and to adapt conservation strategies to increasing climate extremes.

"Our findings indicate that droughts that last several years, and which occur along with warmer than historically normal temperatures, pose serious threats to the blue oak woodlands," says first author Dr Francis Dwomoh, of ASRC Federal Data Solutions, a contractor to the US Geological Survey Earth Resources Observation and Science Center in the United States. "Acting in concert with wildfires, these harsher climatic conditions may lead to major tree cover loss, with negative consequences on the plants and wildlife that depend on them, as well as the goods and services we derive from this ecosystem."

Endemic to California

Blue oak woodlands are only found in California and they are considered one of the largest remaining examples of native old-growth woodlands (pre-European settlement) in the region. This ecosystem is also one of the most biologically diverse in California, and it is home to more than 300 vertebrate animal species.

To better understand how this ecosystem has been responding to climate warming and associated disturbances such as wildfires, the research team used new models of land change, based on the extensive record from the Landsat satellite series, to estimate both complete tree cover loss and conditional change. Conditional change - which shows partial disruption or degradation - indicates the health, productivity and susceptibility of the woodlands. Past studies have not had the resources to distinguish between these two states.

Major woodland loss, even in the absence of fires

The team combined the new annual data from the U.S. Geological Survey Land Change Monitoring, Assessment and Projection (LCMAP) project with climate and wildfire records for the period 1985 - 2016. From this, they found that the 2012 - 2016 drought was associated with tree cover loss and conditional change, both with and without forest fires. Unsurprisingly, the loss due to fire was particularly high during the driest and hottest years.

Given that these results are based on models and satellite measurements, fieldwork would provide a more complete picture of the state of these woodlands. Nevertheless, this approach, along with the LCMAP data that is available for 48 states in the US, may provide a useful tool for monitoring future changes and developing conservation strategies in California and elsewhere.

"We hope that our research findings will be useful for identifying and prioritizing the most vulnerable areas of the woodlands for appropriate management interventions," explains Dwomoh. "Furthermore, our results might be helpful to plan for more resilient blue oak woodlands and similar landscapes as the harsher climatic conditions of 2012 - 2016 are likely to be more common in the future."

Credit: 
Frontiers

How humans brought change to a tropical paradise

image: Homalictus fijiensis, a common bee in the lowlands of Fiji.

Image: 
James Dorey, Flinders University

After centuries of human impact on the world's ecosystems, a new study from Flinders University details an example of how a common native bee species has flourished since the very first land clearances by humans on Fiji.

In a new paper in Molecular Ecology (DOI: 10.1111/mec.16034), research led by Flinders University explores a link between the expansion of Homalictus fijiensis, a common bee in the lowlands of Fiji that has increased its spread on the main island of Viti Levu, and advancing land clearance together with the introduction of new plants and weeds to the environment.

"Earlier research connected the relatively recent population expansion to warming climates, but our study reveals an interesting and positive response from an endemic species to human modifications to the landscape which commenced about 1000BC," says lead author, Flinders University researcher James Dorey.

"This species is a super-generalist pollinator (pollinates many plant species) and likes to nest in open, cleared ground, so one of the most important bee pollinators in Fiji actually appears to have benefited from human arrival and subsequent clearing of land in Fiji."

The study examined changes in native bee populations in Fiji using phylogenetic analyses of mitochondrial and genomic DNA. These analyses show that bee populations in Fiji expanded enormously, starting about 3,000 years ago and accelerating from about 2,000 years ago.

In contrast to the main island, Mr Dorey says, no corresponding change in bee population size was found on another major island, Kadavu, where human populations and agricultural activity have historically been very low.

"That is too recent to be explained by a warming climate since the last glacial maximum which ended about 18,000 thousand years ago," says senior author Associate Professor Michael Schwarz in the new paper.

"Instead, we argue that the expansion of Fijian bee population better coincides with the early occupation of the Pacific islands by the somewhat-mysterious Lapita people, and this expansion accelerated with increasing presence of later Polynesians in Fiji who modified the landscape with their agricultural practices."

The research is an example of how the impacts of early human dispersals can be inferred even when fossil records are not available and when climate change is a complicating factor.

One possible downside of super-generalist pollinators, such as the endemic Fijian halictine bee Homalictus fijiensis, is that they could encourage the expansion of introduced weeds and exotic crop species - exacerbating other ecosystem changes in the long run.

"As well, those research techniques could be applied to many other animal species. For example, changes in population sizes of mammals, such as kangaroos, wombats and koalas, could be explored by looking at their tick and lice parasites which might have better 'genetic signals' of how populations have fared over the last few thousands of years or more, adds Associate Professor Schwarz, who says high-resolution population genetic studies such as this are a good way to discriminate between older and 'natural' events due to climate change and those resulting from early human dispersal and colonisation.

"A persistent question in studies of ecosystems over the last 60,000 years or so concerns the relative roles of climate change and human modifications of the environment. For example, there is a continuing debate about the extinction of megafauna in Australia - was it due to humans, climate change, or both?

"Those kinds of question can be addressed if there are very good fossil records, but what about ecosystems where fossil records are very poor."

The new paper is a result of almost a decade of scientific studies into Fiji's biodiversity by SA Museum and Flinders University biological scientists and students.

SA Museum's research fellow in World Cultures, Dr Stephen Zagala, says the new study gives fascinating insights into how current ecosystems were assembled during the various phases of human migration and settlement.

"Early European explorers and naturalists were unaware that extensive human dispersals had already been transforming the ecologies of Pacific islands for millennia," he says. "This study adds important details to an emerging picture of the Pacific as a highly cultivated landscape."

Credit: 
Flinders University

A step forward for IVF patients with predicted poor response to treatment

This press release is in support of a presentation by Dr Maria Cerrillo Martinez presented online at the 37th Annual Meeting of ESHRE.

29 June 2021: Fertility patients who have a poor response to ovarian stimulation represent a stubborn challenge in IVF. Few eggs are collected, success rates are low, and several treatments are usually needed to achieve pregnancy (if at all). Clinical guidelines indicate that increasing the drug dose for stimulation or applying any of several adjunct therapies are of little benefit. Now, however, a study assessing two cycles of ovarian stimulation and two egg collections in the same menstrual cycle may yet provide a real advance for predicted poor responders in IVF.

A randomised trial performed in Spain has found that this double stimulation approach - known as "DuoStim" - is just as efficient as two conventional stimulations in different cycles and importantly reduces the time to pregnancy in a group of patients who rarely have time on their side. "This is of great clinical utility," said Dr Maria Cerrillo Martinez, presenting the study results at ESHRE's online annual meeting on behalf of her colleagues at IVIRMA in Madrid. Dr Cerrillo added that this was the first randomised trial to compare the efficiency of single-cycle DuoStim with two conventional treatments in separate cycles.

The study took place between 2017 and 2020 in a total of 80 IVF patients scheduled for embryo testing (PGT-A for chromosomal normality). All were aged over 38 and their reproductive history and ovarian reserve tests predicted a low response to ovarian stimulation - with few eggs collected and a likely poor outcome. They were randomised to two conventional cycles of stimulation or to DuoStim, and results were significantly in favour of the latter.

While the initial "laboratory" response to stimulation was comparable between the two groups in terms of eggs collected and healthy embryos developed, there was a significant difference in the average number of days it took to develop a chromosomally normal embryo ready for transfer. The DuoStim group reached this point in an average of 23 days from the start of stimulation, while the conventional two-treatment group took an average of 44 days. There was also a trend towards higher fertilisation and blastocyst-formation rate in the DuoStim group.

Since the earliest days of IVF, treatment - whether in a natural or stimulated cycle - has always involved a single attempt in a single cycle. Stimulation has always relied on fertility hormones known as gonadotrophins, with the aim of producing multiple eggs for collection and fertilisation. The idea of stimulating the ovaries twice in the same cycle has seemed counterintuitive but, says Dr Cerrillo, the concept is quite in tune with the biology of the ovary.

"Classically," she explained, "only one cohort of follicles has been recruited in the early 'follicular' phase of the cycle. However, we know from data in other species that stimulation in the second 'luteal' phase of the cycle was also possible. So some women having fertility preservation before cancer treatment and with little time to spare were stimulated as soon as they came to the IVF unit, and they usually responded normally. It's the same principle behind DuoStim - that we can recruit a cohort of follicles on any day of the cycle."

So far, the benefits of a dual stimulation strategy seem to be concentrated on time to pregnancy and the generation of several eggs in a treatment group usually associated with very few. Studies so far have also found that fewer patients have dropped out of treatment programmes because of the shorter time required.

So far, guidelines for IVF have described DuoStim as only of experimental application, but, says Dr Cerrillo, "we think it's a great alternative for predicted poor responders who might otherwise have difficulties reaching blastocyst transfer from just one egg pick-up". She added that her own clinic in Madrid now has DuoStim experience in more than 500 patients. "Hopefully, the experimental recommendation will change with more data like ours being published."

Dr Cerrillo emphasised that the dual stimulation strategy was so far applicable mainly to a group of patients whose ovarian reserve tests - such as AMH - indicated a poor response to stimulation. "But we also need to consider the most relevant marker of prognosis, patient age. So the patients we're looking at are those with poor ovarian reserve, who are often at an older maternal age," she said.

"So if the patient is a good responder, we may not need more eggs or more embryos - and dual stimulation would not be necessary," explained Dr Cerrillo. "However, it may be a good alternative in poor responders, in fertility preservation patients with time constraints, or even in egg donors, whose aim is to maximise the number of eggs retrieved in a single treatment."

Credit: 
European Society of Human Reproduction and Embryology

Personal networks are associated with clean cooking fuel adoption in rural South India

image: The use of poorly ventilated stoves and open fires for cooking inside dwellings adds to the devastating toll of air pollution in India. A new study led by Boston College researchers found that personal networks can play a role in the adoption of clean cooking fuels as the nation works to convert households from traditional fuels like charcoal and wood. In this image, a woman prepares a meal using liquefied petroleum gas, a cleaner fuel source.

Image: 
Mark Katzman

Chestnut Hill, Mass. (6/29/2021) -- A new, first-of-its-kind study led by researchers from Boston College has found that personal networks in India could play an important role in advancing the adoption of a cleaner cooking fuel, in this case liquefied petroleum gas, according to a report published in the journal Environmental Research Letters.

"This is the first report in clean cooking research to show that just like with tobacco use, obesity, or physical activity -- where our networks play a role in shaping our behaviors and decisions -- we find that personal networks are also associated with what kinds of stoves rural poor use.," said study co-author and Boston College Assistant Professor of Social Work Praveen Kumar.

In rural India, particularly among those living in poverty, households rely widely on solid fuels such as firewood, charcoal, animal dung, and crop residue for household cooking. This use of traditional stoves has been linked to adverse health outcomes for household members and neighbors.

It has been estimated that household air pollution accounted for roughly 600,000 premature deaths in India in 2019, making it a leading cause of preventable deaths in the nation of 1.4 billion residents.

Government and non-governmental organizations have worked for decades to try to shift households to cleaner fuel sources, but slow progress has been attributed to economic, educational, and demographic barriers.

The researchers studied 198 respondents from roughly 30 villages in the southern Indian state of Andhra Pradesh. Respondents were split evenly between those who adopted LPG and those who chose not to.

"We found that if there are a higher number of peers or friends who have cleaner cooking technology, that increases the likelihood of respondents to also have cleaner cooking technology," said Kumar, a scholar on energy access. "On the other hand if a respondent has a substantial number of friends with traditional cooking stoves, there is a higher likelihood that the respondent also has a traditional cooking stove."

In addition to Kumar, co-authors of the report include Boston College School of Social Work Dean Gautam Yadama; Liam McCafferty and Amar Dhand of Harvard Medical School; Smitha Rao of Ohio State University; Antonia Díaz-Valdés of Universidad Mayor, Chile; and Rachel G. Tabak and Ross C. Brownson of Washington University in St. Louis.

"This study led by Kumar underscores the importance of behavioral and social drivers in the adoption and use of clean energy technologies, critical for improving health and environmental benefits for the poor," said Yadama, an authority on dissemination and implementation of clean energy systems to improve health and wellbeing of households in India.

The findings have implications for policy makers seeking ways to continue to convince poor households to shift to cleaner cooking fuels, said Kumar.

"The transition from traditional cooking to clean cooking is critically important -- not just in India. This is a global public health problem," said Kumar. "Governments have been working to support the transition, but more needs to be done. What we've found makes the argument that social policy around clean cooking needs to include personal network analysis and social network strategies. If personal networks are so important, there is a need to find residents who are influencers or opinion leaders and build targeted awareness campaigns involving them."

Kumar said the next steps in the research should explore the threshold share of a personal network using cleaner stoves that could tip a household toward cleaner cooking.

Credit: 
Boston College

UofL researchers lead call to increase genetic diversity in immunogenomics

image: Corey Watson, Ph.D., Melissa Smith, Ph.D., and Oscar Rodriguez, Ph.D., with the Pacific Biosciences Sequel IIe DNA sequencing system, housed in the University of Louisville Sequencing Technology Center.

Image: 
University of Louisville

LOUISVILLE, Ky. - Historically, most large-scale immunogenomic studies - those exploring the association between genes and disease - were conducted with a bias toward individuals of European ancestry. Corey T. Watson, Ph.D., assistant professor in the University of Louisville Department of Biochemistry and Molecular Genetics, is leading a call to actively diversify the genetic resources he and fellow immunogenomics researchers use in their work to advance genomic medicine more equitably.

Watson, along with UofL post-doctoral fellow Oscar Rodriguez, Ph.D., and visiting fellow Yana Safonova, Ph.D., are part of an international group of researchers who say the narrow studies limit their ability to identify variation in human adaptive immune responses across populations.

"We need to better understand how genetics influences immune system function by studying population cohorts that better represent the diversity observed across the globe if we are to fully understand disease susceptibility, as well as design more tailored treatments and preventative measures," Watson said.

In an article published in Nature Methods, "Diversity in immunogenomics: the value and the challenge," the group advocates for resources used in immunogenomics research to actively include and specifically identify additional populations and minority groups. They say such diversity will make their research more relevant and help in understanding population- and ancestry-specific gene-disease associations, leading to improvements in patient care.

"As scientists, we have a say in which populations are investigated. Therefore, it is critical for us to be actively inclusive of individuals representative of the world we live in. This is especially critical for genes that are as diverse and clinically relevant as those that encode antibodies and T cell receptors," Rodriguez said.

Watson's research focuses on immune function and molecular genetics. His team is studying a specific area of the genetic code that controls antibody function to better understand how differences in an individual's genes determine their susceptibility to certain diseases or immune responses to vaccines.

In collaboration with Melissa Smith, Ph.D., assistant professor in the Department of Biochemistry and Molecular Genetics, the team is conducting the largest sequencing efforts of the antibody gene regions in humans and in animal models, Watson said.

"Specifically in humans, we are working to build catalogs of genetic variation in samples from multiple ethnic backgrounds and are engaged in projects that seek to understand how this genetic variation influences the immune response in infection, vaccination and other disease contexts," he said.

Watson is involved in efforts to improve the resources and data standards for antibody and T cell receptor genes for immunogenomics researchers around the world.

The article in Nature Methods was co-authored by researchers from the United States, Canada, Norway, France, Sweden, the United Kingdom, Russia, Saudi Arabia, Israel, South Africa, Nigeria, Chile, Peru, China, Japan, Taiwan and French Polynesia with expertise in biomedical and translational research, population and public health genetics, health disparities and computational biology as well as immunogenomics.

Credit: 
University of Louisville

Transforming the layered ferromagnet F5GT for future spintronics

image: An SP-FET, with an F5GT flake on a solid proton conductor (SPC); scale bar = 10 μm

Image: 
FLEET

An RMIT-led international collaboration, published this week, has achieved record-high electron doping in a layered ferromagnet, causing a magnetic phase transition with significant promise for future electronics.

Control of magnetism (or spin directions) by electric voltage is vital for developing future, low-energy high-speed nano-electronic and spintronic devices, such as spin-orbit torque devices and spin field-effect transistors.

The magnetic phase transition induced by ultra-high charge doping in a layered ferromagnet opens promising applications in antiferromagnetic spintronic devices.

The FLEET collaboration of researchers at RMIT, UNSW, the University of Wollongong and FLEET partner the High Magnetic Field Laboratory (China) demonstrates for the first time that an ultra-high electron doping concentration (above 10²¹ cm⁻³) can be induced in the layered van der Waals (vdW) metallic material Fe5GeTe2 by proton intercalation, and that this can drive a transition of the magnetic ground state from ferromagnetism to antiferromagnetism.

TUNING MAGNETISM IN THE VDW FERROMAGNET Fe5GeTe2 (F5GT)

The emergence of layered, vdW magnetic materials has expedited a growing search for novel vdW spintronic devices.

Compared to itinerant ferromagnets, antiferromagnets (AFMs) have unique advantages as building blocks of such future spintronic devices. Their robustness to stray magnetic fields makes them suitable for memory devices, and AFM-based spin-orbit torque devices require a lower current density than their ferromagnetic counterparts.

However, vdW itinerant antiferromagnets are currently still scarce.

Besides directly synthesizing a vdW antiferromagnet, another possible route is to induce a magnetic phase transition in an existing vdW itinerant ferromagnet.

"We chose to work with newly synthesised vdW itinerant ferromagnet Fe5GeTe2 (F5GT)" says the study's first author, FLEET Research Fellow Dr Cheng Tan (RMIT).

"Our previous experience on Fe3GeTe2 (Nature Communication 2018) enabled us to quickly identify and evaluate the material's magnetic properties, and some studies indicate Fe5GeTe2 is sensitive to local atomic arrangements and interlayer stacking configurations, meaning it would be possible to induce a phase transition in it by doping," Cheng says.

The team first investigated the magnetic properties of Fe5GeTe2 nanosheets of various thicknesses by electron transport measurements.

However, the initial transport results also showed that the electron density in Fe5GeTe2 is, as expected, high, indicating that the magnetism is difficult to modulate with a traditional gate voltage because of the electric-field screening effect in metals:

"Despite the high charge density in Fe5GeTe2, we knew it was worth trying to tune the material via protonic gating, as we have previously achieved in Fe3GeTe2 (Physical Review Letters 2020), because protons can easily penetrate into the interlayer and induce large charge doping, without damaging the lattice structure," says co-author Dr Guolin Zheng (also at RMIT).

FABRICATING THE SOLID PROTONIC FIELD-EFFECT TRANSISTOR (SP-FET)

Like all classical-computing, beyond-CMOS researchers, the team are seeking to build an improved form of the transistor, the switch that provides the binary backbone of modern electronics.

A solid protonic field-effect transistor (SP-FET) is one that switches based on the insertion (intercalation) of protons. Unlike traditional proton FETs (which operate by dipping in a liquid and are considered promising candidates for bridging traditional electronics and biological systems), the SP-FET is solid, and thus suitable for use in real devices.

The SP-FET has been demonstrated to be very powerful for tuning thick metallic materials (i.e., it can induce large charge-doping levels), which are very difficult to modulate via traditional dielectric-based or ionic-liquid gating techniques (because of the electric-field screening effect in metals).

By fabricating a solid protonic field-effect transistor (SP-FET) with Fe5GeTe2, the team were able to dramatically change the carrier density in Fe5GeTe2 and thereby switch its magnetic ground state. Further density functional theory calculations confirmed the experimental results.

"All the samples show that the ferromagnetic state can be gradually supressed by increasing proton intercalation, and finally we see several samples display no hysteresis loops, which indicates the change of the magnetic ground state, the theoretical calculations are consistent with the experimental results," says Cheng.

"The success of realizing an AFM phase in metallic vdW ferromagnet Fe5GeTe2 nanosheets constitutes an important step towards vdW antiferromagnetic devices and heterostructures that operate at high temperatures," says co-author A/Prof Lan Wang (also at RMIT).

"Again, this demonstrates that our protonic gate technique is a powerful weapon in electron transport experiments, and probably in other areas well."

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

Preliminary results of clinical trial for Crigler-Najjar syndrome

Preliminary results from the European gene therapy trial for Crigler-Najjar syndrome, conducted by Genethon in collaboration with European network CureCN, were presented at the EASL (European Association for the Study of the Liver) annual International Liver Congress on June 26. Based on initial observations, the drug candidate is well tolerated and the first therapeutic effects have been demonstrated, to be confirmed as the trial continues.

Crigler-Najjar syndrome is a rare genetic liver disease characterized by abnormally high levels of bilirubin in the blood (hyperbilirubinemia). This accumulation of bilirubin is caused by a deficiency of the UGT1A1 enzyme, responsible for transforming bilirubin into a substance that can be eliminated by the body, and can result in significant neurological damage and death if not treated quickly. At present, patients must undergo phototherapy for up to 12 hours a day to keep their bilirubin levels below the toxicity threshold.

Dr. Lorenzo D'Antiga (Azienda Ospedaliera Papa Giovanni XXIII, Bergamo, Italy), one of the investigators working on the gene therapy trial being conducted into Crigler-Najjar syndrome by Genethon, presented the results from the first treated patients at this year's EASL (European Association for the Study of the Liver) congress. The treatment involves providing the liver cells with a copy of the UGT1A1 gene that encodes an enzyme designed to facilitate bilirubin elimination. Based on initial observations, the results are encouraging.

Specifically, the first two cohorts reveal that:

The product is safe and well tolerated in the four patients undergoing treatment

There appears to be a dose-response relationship (to be confirmed):

In cohort 1, treated at the lowest dose, the clinicians observed a temporary therapeutic effect, but one insufficient to allow phototherapy to be stopped for a prolonged period at the sixteenth week post-injection (the product efficacy endpoint)

In cohort 2, treated at a higher dose, a major reduction in bilirubin levels was demonstrated in the first patient, enabling her to stop phototherapy a few weeks ago. The second patient has also seen a major decrease in her bilirubin levels. Her treatment is too recent to demonstrate a stable decrease, but if this decrease is confirmed she will also be able to discontinue phototherapy in a few weeks' time.

"We are very excited with the results achieved so far in this trial of AAV-mediated gene therapy for Crigler Najjar syndrome. The treatment, at appropriate doses, has shown to be safe and able to correct the disease to an extent that allowed the first patient to stop daily phototherapy, eliminating the risk of neurological injury. The degree of improvement of the second patient suggests that soon she might be able to stop phototherapy too. Our work on the immunomodulation protocol is now focused on maintaining a durable effect in the long term. This innovative strategy may replace liver transplantation in patients with a genetic liver disease". Dr. Lorenzo D'Antiga, who treated the last two patients and presented their results at the EASL congress.

The trial uses a technology developed by Genethon's Immunology and Liver Gene Therapy team, headed up by Dr. Giuseppe Ronzitti: "The team has worked incredibly hard on this project, from designing and developing the approach right through to the trial. We designed the drug candidate, undertook the preclinical efficacy testing, then designed the product for the clinical trial. We are continuing our work to develop new approaches for other liver diseases".

"These initial observations presented at this year's EASL congress indicate that gene therapy could become an alternative treatment for this severe liver disease. We need to remain cautious, as the trial is ongoing and will allow us to evaluate these initial encouraging results in other patients and over a longer period." Frederic Revah, CEO of Genethon.

Credit: 
AFM-Téléthon

Danger caused by subdomains

image: Example of a website structure with many subdomains: tuwien.ac

Image: 
TU Wien

The internet is full of dangers: sensitive data can be leaked, and malicious websites can allow hackers to access private computers. The Security & Privacy Research Unit at TU Wien, in collaboration with Ca' Foscari University, has now uncovered an important security vulnerability that had been overlooked so far. Large websites often have many subdomains - for example, "sub.example.com" could be a subdomain of the website "example.com". With certain tricks, it is possible to take control of such subdomains. And if that happens, new security holes open up that also put at risk people who simply want to use the actual website (in this example, example.com).

The research team studied these vulnerabilities and also analysed how widespread the problem is: 50,000 of the world's most important websites were examined, and 1,520 vulnerable subdomains were discovered. The team was invited to the 30th USENIX Security Symposium, one of the most prestigious scientific conferences in the field of cybersecurity. The results have now been published online.

Dangling Records

"At first glance, the problem doesn't seem that bad," says Marco Squarcina from the Institute of Logic and Computation at TU Vienna. "After all, you might think that you can only gain access to a subdomain if you're explicitly allowed by the administrator of the website, but that's a mistake."

This is because a subdomain often points to another website that is physically stored on completely different servers. Maybe you own the website example.com and want to add a blog. You don't want to build it from scratch, but instead use an existing blogging service hosted elsewhere. Therefore, a subdomain such as blog.example.com is connected to that other site. "If you use the example.com page and click on the blog there, you won't notice anything suspicious," says Marco Squarcina. "The address bar of the browser shows the correct subdomain blog.example.com, but the data now comes from a completely different server."

But what happens if one day this link is no longer valid? Perhaps the blog is not needed anymore or it is relaunched elsewhere. Then the link from blog.example.com points to an external page that is no longer there. In this case, one speaks of "dangling records" - loose ends in the website's network that are ideal points of attack.
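To make the idea concrete, a dangling record can be spotted programmatically: the subdomain still carries a CNAME entry, but the name it points to no longer resolves. Below is a minimal Python sketch of such a check; it assumes the third-party dnspython package (version 2 or later), uses a purely illustrative host name, and is not the scanning pipeline the researchers actually used.

```python
# pip install dnspython  (third-party DNS library, not part of the standard library)
import dns.resolver

def dangling_cname_target(subdomain: str):
    """Return the CNAME target of `subdomain` if that target no longer
    resolves (a candidate 'dangling record'); otherwise return None."""
    try:
        answer = dns.resolver.resolve(subdomain, "CNAME")
        target = answer[0].target.to_text()
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
            dns.resolver.NoNameservers):
        return None  # no CNAME record at all, nothing to check here
    try:
        dns.resolver.resolve(target, "A")  # does the target still exist?
        return None                        # yes: the link is intact
    except dns.resolver.NXDOMAIN:
        return target                      # no: the record is dangling

# Illustrative call (blog.example.com is a placeholder, not a real finding):
# print(dangling_cname_target("blog.example.com"))
```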

"If such dangling records are not promptly removed, attackers can set up their own page there, which will then show up at sub.example.com," says Mauro Tempesta (also TU Wien).

This is a problem because websites apply different security rules to different parts of the internet. Their own subdomains are typically considered "safe", even if they are in fact controlled from outside. For example, cookies placed on users by the main website can be overwritten, and potentially read, from any subdomain: in the worst case, an intruder can then impersonate another user and carry out illicit actions on their behalf.
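To make this concrete: a page on any subdomain may set a cookie whose Domain attribute is the parent domain, and the browser will then send that cookie to the main site as well. The sketch below is a simplified Python illustration of this domain-matching rule (host names are illustrative; real browsers apply additional restrictions, such as the public suffix list):

```python
def domain_matches(request_host: str, cookie_domain: str) -> bool:
    """Simplified RFC 6265 domain matching: a cookie whose Domain
    attribute is `cookie_domain` is sent to `request_host` if the host
    is that domain itself or any of its subdomains."""
    request_host = request_host.lower().rstrip(".")
    cookie_domain = cookie_domain.lower().lstrip(".")
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# A page served from a hijacked blog.example.com could respond with:
#   Set-Cookie: session=attacker-value; Domain=example.com; Path=/
# The browser will then attach that cookie to requests for the main site:
assert domain_matches("example.com", "example.com")          # main site receives it
assert domain_matches("www.example.com", "example.com")      # ...and every sibling subdomain
assert not domain_matches("evil-example.com", "example.com")
```

In other words, once an attacker controls even a single forgotten subdomain, they sit inside the zone that the main website implicitly trusts.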

Alarmingly common problem

The team, composed of Marco Squarcina, Mauro Tempesta, Lorenzo Veronese and Matteo Maffei (TU Wien) and Stefano Calzavara (Ca' Foscari), investigated how common this problem is: "We examined 50,000 of the most visited sites in the world, discovering 26 million subdomains," says Marco Squarcina. "On 887 of these sites we found vulnerabilities, spread across a total of 1,520 vulnerable subdomains." Among the vulnerable sites were some of the most famous websites of all, such as cnn.com or harvard.edu. University sites are more likely to be affected because they usually have a particularly large number of subdomains.

"We contacted all the people responsible for the vulnerable sites. Nevertheless, 6 months later, the problem was still only fixed on 15 % of these subdomains," says Marco Squarcina. "In principle, it would not be difficult to fix these vulnerabilities. We hope that with our work we can create more awareness about this security threat."

Credit: 
Vienna University of Technology

TPU scientists offer scalable technology to obtain polytetrafluoroethylene membranes

Scientists at Tomsk Polytechnic University have obtained polytetrafluoroethylene (PTFE) membranes using electrospinning. PTFE is known as the most stable polymer in existence. According to the scientists, electrospinning is a simple, affordable and easily scalable method that will make it possible to produce chemically stable membranes at industrial scale. The membranes can be used in the petrochemical, aerospace and nuclear industries, in carbon-free energy and in medicine.

The latest results of the research into the physical and chemical properties and biocompatibility of the obtained membranes are published in the Journal of Fluorine Chemistry (IF: 2.332; Q1). The membranes were tested using cells and laboratory animals, and the research confirmed that they are not rejected by the cells and are not destroyed in the biological matrix. An interdisciplinary team of physicists and chemists is currently conducting the research at TPU.

"The material and methods of work with it were noteworthy for us. PTFE is a polymer containing fluorine. Fluorine and similar compounds are called fluoropolymers. They are noteworthy for scientists and experts working at industrial enterprises due to their inert. Fluoropolymers can be used in corrosive media or where material stability is crucial. These either can be hydrogen fuel cell operating in the conditions of corrosive media or a medical implant inside a human body. It means that obtaining membranes is very perspective, however, there is no large capacity technology in the world yet. It is either expensive or labour-intensive, even if the raw material is affordable," Evgeny Bolbasov, Research Fellow of the TPU Butakov Research Center, says.

The TPU scientists used electrospinning, a technique in which charged threads are drawn from polymer solutions under the effect of an electric field. The result is a fabric-like material made of polymer threads.

"The main advantage of the method is that the small laboratory installation is not different from an industrial one by its core and processes. Everything that can be done in the laboratory is easily reproducible at the enterprise. Previously, it was believed that obtaining a PTFE membrane using electrospinning is simply impossible. PTFE is not pulled into threads. To solve this problem, we added polyvinyl alcohol (PVA), a crosslinking agent in the synthesis chain," the scientist says.

The process of obtaining the membrane described in the article consists of two stages. First, very fine PTFE powder is mixed with PVA to obtain a solution, which is loaded into the electrospinning installation. Inside the installation, the thinnest threads are drawn, and a white porous web is spun from them - this is the membrane. At stage two, the membrane is fired in an oven at about 400°C. The added PVA evaporates completely, and the membrane darkens slightly. The entire process takes no longer than three hours.

The researchers note that all raw materials used for the synthesis are commercially available, affordable and produced in Russia.

"These membranes have a wide range of potential applications; only a scalable technology is required. Industrial methods for obtaining membranes from fluoropolymers are being sought in Europe, the USA and China. Meanwhile, the Russian scientists have the opportunity to offer a commercially attractive solution. From our point of view, electrospinning is that solution.

This method is dozens of times cheaper than its alternatives and allows the pore structure of the membranes to be controlled easily. Moreover, this method is reproducible and scalable, which is very interesting for potential industrial partners," says Vyacheslav Buznik, an academician of the Russian Academy of Sciences and one of the article's authors.

"Currently, the main task of the TPU researchers is to show the method opportunities for solving specific applied problems. The task is complicated, complex. It can be solved only by interdisciplinary teams consisting of materials specialists, chemists, physicists. It is crucially important for us that there are all the required experts and competences at TPU. It will help us to actively develop this field," Marina Trusova, Director of the TPU Research School of Chemistry and Applied Biomedical Sciences, notes.

Credit: 
Tomsk Polytechnic University