Tech

Airborne laser scanning of gaps in Amazon rainforest helps explain tree mortality

image: Distribution of laser scanning flights over the Brazilian Amazon. Each flight line is about 12 x 0.5 km.

Image: 
Ricardo Dal'Agnol/INPE

A group of researchers led by Brazilian scientists has used an innovative model to map gaps in the Amazon rainforest and identify factors that contribute to tree mortality. Water stress, soil fertility, and anthropogenic forest degradation have the most influence on gap dynamics in the world’s largest and most biodiverse tropical rainforest, according to an article on the study published in Scientific Reports.

Forest gaps are most frequent in the areas with the highest levels of soil fertility, possibly because the abundance of organic material drives faster tree growth and shorter life cycles.

The main method of data collection used in the study was LiDAR (light detection and ranging), a remote sensing method that uses pulsed laser light. Coverage extended to remote parts of the Brazilian Amazon where fieldwork is very difficult and satellite images can be imprecise, owing mainly to heavy cloud.

An airborne LiDAR system emits thousands or hundreds of thousands of laser light pulses, which bounce off Earth’s surface and return to the system at the speed of light, enabling the height of trees and other objects to be determined on the basis of the lag between emission and reception of the pulses. Resolution can be as high as 1 meter, so LiDAR is used to survey topography and the structure of vegetation, often in the form of a 3D scan.
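
The arithmetic behind that height estimate is simple to illustrate. The minimal Python sketch below assumes idealized single canopy and ground returns per pulse; the function names and example lag values are illustrative, not taken from the study's processing chain.

```python
# Minimal time-of-flight sketch for airborne LiDAR (illustrative values only).

C = 299_792_458.0  # speed of light, m/s

def range_from_lag(lag_seconds: float) -> float:
    """Distance from sensor to a reflecting surface, given the round-trip lag of one pulse."""
    return C * lag_seconds / 2.0  # divide by 2: the pulse travels out and back

def canopy_height(lag_canopy: float, lag_ground: float) -> float:
    """Tree height as the difference between the ground return and the canopy-top return."""
    return range_from_lag(lag_ground) - range_from_lag(lag_canopy)

# Example: a canopy return arriving ~0.2 microseconds before the ground return
# corresponds to roughly 30 m of vegetation height.
print(round(canopy_height(lag_canopy=4.8e-6, lag_ground=5.0e-6), 1))  # ~30.0
```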

“The western and southeastern parts of Amazonia had the most gaps, closest to the ‘arc of deforestation’ on the agricultural frontier. Forest dynamics are up to 35% faster there than in the center-east and north, with more gap creation and tree mortality,” Ricardo Dal’Agnol, first author of the article, told Agência FAPESP. Dal’Agnol is an environmental engineer working as a researcher in the Earth Observation & Geoinformatics Division of Brazil’s National Space Research Institute (INPE).

In the study, which was supported by FAPESP, the scientists used a database resulting from more than 600 flights over the forest as part of INPE’s Amazon Biomass Estimation Project (EBA), led by Jean Ometto, a senior researcher at INPE and a co-author of the article.

The purpose of the EBA was to quantify biomass and carbon in the Amazon and explore the dynamics of vegetation in the region. The maps produced by INPE as part of the project can be used to formulate public policy, facilitate the inventorying of emissions, and estimate carbon balances.

Carbon sequestration

Forests, above all tropical forests, are considered the largest biological reservoir of biomass and carbon on the planet. Trees need large amounts of CO2 to develop and grow. Changes in forest functioning and tree mortality therefore significantly influence the amount of greenhouse gas emissions into the atmosphere. They also directly affect the market for carbon credits now being implemented in several countries under the Paris Agreement, a major global environmental policy milestone.

In 2019, greenhouse gas emissions in Brazil rose 9.6% compared with the previous year, largely owing to deforestation in the Amazon. In that year, Brazil pumped 2.17 billion gross tonnes of carbon dioxide equivalent (tCO2e) into the atmosphere, up from 1.98 billion tCO2e in 2018, reversing the downtrend seen in previous years, according to a report by Brazil’s Greenhouse Gas Emission and Removal Estimating System (SEEG). 

The uncertainties associated with tree mortality drivers and mechanisms, especially at smaller scales, are discussed by the authors in the Scientific Reports article.

Previous research had already pointed to the influence of climate change, especially rising temperatures and drier weather, on tree mortality in tropical forests. One recent study, also led by Brazilian researchers, was published in December 2020 in Proceedings of the National Academy of Sciences (PNAS).

Future

According to Dal’Agnol, mapping trees that die standing to obtain more data on forest dynamics is the next big challenge. “Some trees die but don’t fall, remaining upright as skeleton-like trunks,” he said. “A next step could be to try to map these standing dead trees in order to obtain a more comprehensive picture of tree mortality.”

In the article, the scientists say “the spatial patterns of dynamic gaps” mapped using LiDAR data were “notably consistent with field mortality patterns” but were 60% lower, probably owing to “predominant detection of the broken/uprooted mode of death”. 

Dal’Agnol’s postdoctoral research, on which he is now working with FAPESP’s support, uses a novel approach to the analysis of airborne LiDAR data to quantify tree mortality and estimate biomass loss in tropical forests. The principal investigator for the project is Luiz Eduardo Oliveira e Cruz de Aragão, who is the last author of the article.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

UBCO engineer cautions pregnant women about speed bumps

image: UBCO researcher Hadi Mohammadi cautions that accelerating over speed bumps poses a danger for pregnant women and their fetuses.

Image: 
UBC Okanagan

Slow down. Baby on board.

So says UBC Okanagan researcher and Associate Professor of Mechanical Engineering Hadi Mohammadi. His new research, conducted in collaboration with Sharif University of Technology, determines that accelerating over speed bumps poses a danger for pregnant women and their fetuses.

"There is lots of research about the importance of movement for women during pregnancy," explains Mohammadi, who teaches in the School of Engineering. "Our latest research looked specifically at the impacts of sudden acceleration on a pregnant woman."

Using new modelling based on data from crash tests and the fundamental dynamic behaviour of a pregnant woman, Mohammadi and his co-authors found that accelerating over speed bumps raises concern. If a speed bump is driven over quickly, they caution, it can lead to minor injuries to the fetal brain, an abnormal fetal heart rate, abdominal pain, uterine contractions, increased uterine activity and further complications.

Occupants in a vehicle, especially pregnant women, are subjected to relatively large forces suddenly and over a short period when a vehicle accelerates over a speedbump, he explains.

Mohammadi is particularly interested in vibrations, and in this case their impact on human organs. This recent study looked at the effect of these vibrations on a woman in her third trimester of pregnancy.

Their investigation included many factors, such as the speed of the car as it goes over the speed bump, the size of the bump, which can drag on the uterus as the car rises and falls, and the pressure that all this movement puts on the amniotic fluid protecting the fetus.
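
As a rough illustration of why speed matters so much (this is a simplified kinematic sketch, not the authors' differential model), one can treat the bump as a half-sine profile that the wheel follows exactly; the peak vertical acceleration then grows with the square of the vehicle speed. The bump height and length below are assumed values.

```python
import math

def peak_vertical_acceleration(speed_kmh: float, bump_height_m: float = 0.08,
                               bump_length_m: float = 3.7) -> float:
    """Peak vertical acceleration (m/s^2) for a wheel tracing a half-sine bump
    y(x) = h * sin(pi * x / L) at constant horizontal speed v.
    Differentiating y(v*t) twice in time gives |y''|max = h * (pi * v / L)**2."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return bump_height_m * (math.pi * v / bump_length_m) ** 2

for kmh in (25, 45, 60):
    print(kmh, "km/h ->", round(peak_vertical_acceleration(kmh), 1), "m/s^2")
# The quadratic growth with speed is the point of the sketch: doubling the
# speed roughly quadruples the jolt transmitted through the vehicle.
```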

"We took all these factors into account to ensure a comprehensive differential model that mirrors real-world responses and interactions of the woman and fetus."

As a result, the researchers were very specific in their recommendations. Slow down.

In fact, they advise slowing a vehicle to less than 45 km/h when crossing a speed bump, and preferably to as low as 25 km/h, to reduce risk to the fetus.

"Obviously, there are other variables at play when a driver approaches a speedbump, but we hope our findings provide some evidence-based guidance to keep drivers and their occupants literally and figuratively safe," says Mohammadi.

Furthermore, he hopes the findings can help researchers better understand how a pregnant woman and her fetus are subjected to risk caused by a vehicle passing bumpy terrain such as speed bumps. His end goal is for his research to make vehicular safety improvements for pregnant women.

Credit: 
University of British Columbia Okanagan campus

Unlocking richer intracellular recordings

image: This sketch displays the experimental procedure of ultra-fast laser moving over the 3DFG electrodes.

Image: 
College of Engineering, Carnegie Mellon University

Behind every heartbeat and brain signal is a massive orchestra of electrical activity. While current electrophysiology observation techniques have been mostly limited to extracellular recordings, a forward-thinking group of researchers from Carnegie Mellon University and Istituto Italiano di Tecnologia has identified a flexible, low-cost, and biocompatible platform for enabling richer intracellular recordings.

The group's unique "across the ocean" partnership started two years ago at the Bioelectronics Winter School (BioEl) with libations and a bar napkin sketch. It has evolved into research published today in Science Advances, detailing a novel microelectrode platform that leverages three-dimensional fuzzy graphene (3DFG) to enable richer intracellular recordings of cardiac action potentials with high signal to noise ratio. This advancement could revolutionize ongoing research related to neurodegenerative and cardiac diseases, as well as the development of new therapeutic strategies.

A key leader in this work, Tzahi Cohen-Karni, associate professor of biomedical engineering and materials science and engineering, has studied the properties, effects, and potential applications of graphene throughout his entire career. Now, he is taking a collaborative step in a different direction, using a vertically-grown orientation of the extraordinary carbon-based material (3DFG) to access the intracellular compartment of the cell and record intracellular electrical activity.

Due to its unique electrical properties, graphene stands out as a promising candidate for carbon-based biosensing devices. Recent studies have shown the successful deployment of graphene biosensors for monitoring the electrical activity of cardiomyocytes, or heart cells, outside of the cells, or in other words, extracellular recordings of action potentials. Intracellular recordings, on the other hand, have remained limited due to ineffective tools...until now.

"Our aim is to record the whole orchestra--to see all the ionic currents that cross the cell membrane--not just the subset of the orchestra shown by extracellular recordings," explains Cohen-Karni. "Adding the dynamic dimension of intracellular recordings is fundamentally important for drug screening and toxicity assay, but this is just one important aspect of our work."

"The rest is the technology advancement," Cohen-Karni continues. "3DFG is cheap, flexible and an all-carbon platform; no metals involved. We can generate wafer-sized electrodes of this material to enable multi-site intracellular recordings in a matter of seconds, which is a significant enhancement from an existing tool, like a patch clamp, which requires hours of time and expertise."

So, how does it work? Leveraging a technique developed by Michele Dipalo and Francesco De Angelis, researchers at Istituto Italiano di Tecnologia, an ultra-fast laser is used to access the cell membrane. By shining short laser pulses onto the 3DFG electrode, an area of the cell membrane becomes transiently porous, allowing the electrical activity within the cell to be recorded. The cardiomyocytes are then cultured to further investigate interactions between the cells.

Interestingly, 3DFG is black and absorbs most of the light, resulting in unique optical properties. Combined with its foam-like structure and enormous exposed surface area, 3DFG has many desirable traits that are needed to make small biosensors.

"We have developed a smarter electrode; an electrode that allows us better access," emphasizes Cohen-Karni. "The biggest advantage from my end is that we can have access to this signal richness, to be able to look into processes of intracellular importance. Having a tool like this will revolutionize the way we can investigate effects of therapeutics on terminal organs, such as the heart."

As this work moves forward, the team plans to apply its learnings in large-scale cell/tissue interfaces, to better understand tissue development and toxicity of chemical compounds (e.g. drug toxicity).

Credit: 
College of Engineering, Carnegie Mellon University

Rescuing street art from vandals' graffiti

WASHINGTON, April 13, 2021 -- From Los Angeles and the Lower East Side of New York City to Paris and Penang, street art by famous and not-so-famous artists adorns highways, roads and alleys. In addition to creating social statements, works of beauty and tourist attractions, street art sometimes attracts vandals who add their unwanted graffiti, which is hard to remove without destroying the underlying painting. Now, researchers report novel, environmentally friendly techniques that quickly and safely remove over-paintings on street art.

The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30. The meeting features nearly 9,000 presentations on a wide range of science topics.

"For decades, we have focused on cleaning or restoring classical artworks that used paints designed to last centuries," says Piero Baglioni, Ph.D., the project's principal investigator. "In contrast, modern art and street art, as well as the coatings and graffiti applied on top, use materials that were never intended to stand the test of time."

Research fellow Michele Baglioni, Ph.D., (no relation to Piero Baglioni) and coworkers built on their colleagues' work and designed a nanostructured fluid, based on nontoxic solvents and surfactants, loaded in highly retentive hydrogels that very slowly release cleaning agents to just the top layer -- a few microns in depth. The undesired top layer is removed in seconds to minutes, with no damage or alteration to the original painting.

Street art and overlying graffiti usually contain one or more of three classes of paint binders -- acrylic, vinyl or alkyd polymers. Because these paints are similar in composition, removing the top layer frequently damages the underlying layer. Until now, the only way to remove unwanted graffiti was with chemical cleaners or mechanical action such as scraping or sand blasting. These traditional methods are hard to control and often damage the original art.

"We have to know exactly what is going on at the surface of the paintings if we want to design cleaners," explains Michele Baglioni, who is at the University of Florence (Italy). "In some respects, the chemistry is simple -- we are using known surfactants, solvents and polymers. The challenge is combining them in the right way to get all the properties we need."

Michele Baglioni and coworkers used Fourier transform infrared spectroscopy to characterize the binders, fillers and pigments in the three classes of paints. After screening for suitable low-toxicity, "green" solvents and biodegradable surfactants, he used small angle X-ray scattering analyses to study the behavior of four alkyl carbonate solvents and a biodegradable nonionic surfactant in water.

The final step was formulating the nanostructured cleaning combination. The system that worked well also included 2-butanol and a readily biodegradable alkyl glycoside hydrotrope as co-solvents/co-surfactants. Hydrotropes are water-soluble, surface-active compounds used at low levels that allow more concentrated formulations of surfactants to be developed. The system was then loaded into highly retentive hydrogels and tested for its ability to remove overpaintings on laboratory mockups using selected paints in all possible combinations.

After dozens of tests, which helped determine how long the gel should be applied so that the over-painting could be removed without damaging the underlying painting, he tested the gels on a real piece of street art in Florence, successfully removing graffiti without affecting the original work.

"This is the first systematic study on the selective and controlled removal of modern paints from paints with similar chemical composition," Michele Baglioni says. The hydrogels can also be used for the removal of top coatings on modern art that were originally intended to preserve the paintings but have turned out to be damaging. The hydrogels will become available commercially from CSGI Solutions for Conservation of Cultural Heritage, a company founded by Piero Baglioni and others. CSGI, the Center for Colloid and Surface Science, is a university consortium mainly funded through programs of the European Union.

Credit: 
American Chemical Society

Combining mask wearing, social distancing suppresses COVID-19 virus spread

image: Illustration of a network of contacts to show the spreading of COVID-19 in a population where a fraction of the individuals (cones) wear masks and practice social distancing (cones with white stripes).

Image: 
Anna Sawulska and Maurizio Porfiri

WASHINGTON, April 13, 2021 -- Studies show wearing masks and social distancing can contain the spread of the COVID-19 virus, but their combined effectiveness is not precisely known.

In the journal Chaos, from AIP Publishing, researchers at New York University and Politecnico di Torino in Italy developed a network model to study the effects of these two measures on the spread of airborne diseases like COVID-19. The model shows viral outbreaks can be prevented if at least 60% of a population complies with both measures.

"Neither social distancing nor mask wearing alone are likely sufficient to halt the spread of COVID-19, unless almost the entire population adheres to the single measure," author Maurizio Porfiri said. "But if a significant fraction of the population adheres to both measures, viral spreading can be prevented without mass vaccination."

A network model encompasses nodes, or data points, and edges, or links between nodes. Such models are used in applications ranging from marketing to tracking bird migration. In the researchers' model, based on a susceptible-exposed-infected-removed (recovered or deceased) framework, each node represents a person's health status. The edges represent potential contacts between pairs of individuals.

The model accounts for activity variability, meaning a few highly active nodes are responsible for much of the network's contacts. This mirrors the validated assumption that most people have few interactions and only a few interact with many others. Scenarios involving social distancing without mask wearing and vice versa were also tested by setting up the measures as separate variables.

The model drew on cellphone mobility data and Facebook surveys obtained from the Institute for Health Metrics and Evaluation at the University of Washington. The data showed people who wear masks are also those who tend to reduce their mobility. Based on this premise, nodes were split into individuals who regularly wear masks and socially distance and those whose behavior remains largely unchanged by an epidemic or pandemic.
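
The broad logic of such a model can be sketched in code. The toy activity-driven SEIR simulation below only illustrates the ingredients described above and is not the authors' implementation; the population size, rates, and assumed effects of compliance (fewer contacts, lower per-contact transmission) are invented for the example.

```python
import random

# Toy activity-driven SEIR simulation on a contact network (illustrative only).
N = 2000                  # population size (assumed)
COMPLIANT_FRACTION = 0.6  # fraction who both wear masks and socially distance
BASE_CONTACTS = 5         # daily contacts sought by a non-compliant node (assumed)
DISTANCING_FACTOR = 0.3   # compliant nodes make 30% as many contacts (assumed)
MASK_FACTOR = 0.3         # masks cut per-contact transmission to 30% (assumed)
P_TRANSMIT = 0.05         # baseline per-contact transmission probability (assumed)
INCUBATION_DAYS, INFECTIOUS_DAYS = 3, 7

def step(states, compliant):
    """Advance one day: infectious nodes draw random contacts, then E->I and I->R."""
    new_states = states[:]
    for i, s in enumerate(states):
        if s != "I":
            continue
        contacts = BASE_CONTACTS * (DISTANCING_FACTOR if compliant[i] else 1.0)
        for _ in range(max(1, round(contacts))):
            j = random.randrange(N)
            if states[j] != "S":
                continue
            p = P_TRANSMIT
            if compliant[i]:
                p *= MASK_FACTOR  # infectious person is masked
            if compliant[j]:
                p *= MASK_FACTOR  # susceptible contact is masked
            if random.random() < p:
                new_states[j] = "E"
    for i, s in enumerate(states):
        if s == "E" and random.random() < 1 / INCUBATION_DAYS:
            new_states[i] = "I"
        elif s == "I" and random.random() < 1 / INFECTIOUS_DAYS:
            new_states[i] = "R"
    return new_states

random.seed(1)
compliant = [random.random() < COMPLIANT_FRACTION for _ in range(N)]
states = ["I"] * 10 + ["S"] * (N - 10)  # seed ten infections
for _ in range(150):
    states = step(states, compliant)
print("share ever infected:", round(1 - states.count("S") / N, 3))
```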

Using data collected by The New York Times to gauge the model's effectiveness, the researchers analyzed the cumulative cases per capita in all 50 states and the District of Columbia from July 14, 2020, when the Centers for Disease Control and Prevention officially recommended mask wearing, through Dec. 10.

In addition to showing the effects of combining mask wearing and social distancing, the model shows the critical need for widespread adherence to public health measures.

"U.S. states suffering the most from the largest number of infections last fall were also those where people complied less with public health guidelines, thereby falling well above the epidemic threshold predicted by our model," Porfiri said.

Credit: 
American Institute of Physics

Huntington's Disease: Neural traffic could help explain the disease

image: In Huntington's Disease, axonal transport is defective and this leads to neuronal degeneration. In this work, we found that the protein mutated in Huntington's Disease, called huntingtin (HTT), needs to be methylated by the enzyme PRMT6 in order to guarantee an efficient axonal transport and the survival of neural cells (in orange and blue we depicted the vesicles that are transported along the axon). Indeed, low PRMT6 levels cause a reduction in the number of vesicles travelling along the axon (top panel), whereas high PRMT6 levels rescue this defect and ameliorate the disease in in vitro and in vivo models (bottom panel).

Image: 
by University of Trento

Huntington's disease is a genetic neurodegenerative disorder caused by mutations in the protein huntingtin and characterized by involuntary dance-like movements, severe behavioural changes and cognitive impairment. It has been well known for several years that neuronal traffic is impaired in this disease. What was not known is that this deranged trafficking can be ameliorated by increasing huntingtin methylation. The findings emerged from an international research effort coordinated by the University of Trento and published in Cell Reports.

The research teams identified the fundamental role of the protein arginine methyltransferase PRMT6 in ensuring transport along axons, the routes that connect nerve cells to each other, and hence the health of neurons. According to the study, neural impairment and degeneration are caused by the loss of methylation of huntingtin, the protein that regulates the traffic. The researchers therefore focused on how to restore huntingtin's function and observed benefits associated with increased expression of PRMT6.

"We have learned that huntingtin, the protein that causes Huntington's disease when mutated, is modified by an enzyme (PRMT6) that is capable of adding methyl groups (small flags) to teach other proteins, responsible for axonal transport, to recognize huntingtin. When huntingtin is not recognised, axonal traffic slows down. Imagine axonal transport like a train. Huntingtin works like a driver that checks tickets and opens the doors at the station, that means it loads proteins or various organelles that are transported from one side of the axon to the other. With no functional huntingtin, the traffic is altered. The flags provided by PRMT6 derive from metabolic cycles that depend on vitamins B9 and B12. It will be interesting to measure the level of B9 and B12 in patients with Huntington's disease and perform both in vitro and in vivo experiments to find out if an increase in vitamin intake in neurons can lead to an increase in huntingtin methylation and therefore restore its function. That is what we will work on in the coming months with Alice, our young investigator", concluded Manuela Basso (corresponding author for the University of Trento).

"Neurodegenerative diseases are associated with an impairment of neural function. Unfortunately, the mechanisms underlying these pathological processes are still unknown and there are no treatments to stop or delay the progression of these devastating disorders. This research work sheds light on the importance of the interaction of proteins, like huntingtin, with factors that regulate gene expression in cells, like PRMT6. Our study adds a new piece to the puzzle of neurodegenerative disorders. Having established that huntingtin interacts with PRMT6, and understood the functional role of this interaction in normal conditions and in disease, we will now be able to understand what to do to restore this process and improve neural function to the benefit of patients", explained Maria Pennuto (corresponding author for the University of Padova).

The international research project brought together Italian, French, Spanish and American contributions. The study was inspired by a discovery by Maria Pennuto (University of Padova and Veneto Institute of Molecular Medicine - VIMM) reported in 2015 in the prestigious journal Neuron, and was developed mainly by Alice Migazzi and Chiara Scaramuzzino in the laboratories of the Department of Cellular, Computational and Integrative Biology, University of Trento (Laboratory of Transcriptional Neurobiology, led by Manuela Basso), and in the laboratory of Frédéric Saudou at Grenoble Institut des Neurosciences. The research work on Huntington's disease will continue thanks to the grant awarded by Fondazione Caritro to Alice Migazzi.

About the article

The article "Huntingtin-mediated axonal transport requires arginine methylation by PRMT6" was published on April 13th 2021 in "Cell Reports".

First authors: Alice Migazzi (Department of Cellular, Computational and Integrative Biology - CIBIO, University of Trento), and Chiara Scaramuzzino (Grenoble Institut des Neurosciences, Univ. Grenoble Alpes Inserm).
Corresponding authors: Frédéric Saudou (Grenoble Institut des Neurosciences, Univ. Grenoble Alpes Inserm); Maria Pennuto (Department of Biomedical Sciences, University of Padova and Veneto Institute of Molecular Medicine - VIMM, Padova); Manuela Basso (Department of Cellular, Computational and Integrative Biology, University of Trento).

Journal

Cell Reports

DOI

10.1016/j.celrep.2021.108980

Credit: 
Università di Trento

Ancient ammonoids' shell designs may have aided buoyancy control

image: Fossil of Menuites oralensis with external shell removed to reveal intricate suture patterns.

Image: 
David Peterman

Ammonoids, ancestors of today's octopus, squid and cuttlefish, bobbed and jetted their way through the oceans for around 340 million years beginning long before the age of the dinosaurs. If you look at the fossil shells of ammonoids over the course of that 340 million years, you'll notice something striking--as time goes on, the wavy lines inside the shell become more and more complex, eventually becoming frilled almost like the edges of kale leaves.


These wavy lines are called sutures, and they reflect the complexity of the edges of septa, or the walls that separated the chambers in the ammonoids' shells. Researchers previously focused on the roles of these complex structures in resisting pressure on the shell, but University of Utah researchers provide evidence for a different hypothesis. Complex sutures, they found, retained more liquid through surface tension, possibly helping the ammonoids fine-tune their buoyancy. Their results are published in Scientific Reports.

Due to an unfortunate lack of living ammonoids, the researchers had to turn to another method to understand the function of shell structure: 3-D printed models.

"These hypotheses couldn't be tested without being able to create incredibly accurate models of these intricate features," says David Peterman, lead author of the study and a postdoctoral scholar in the Department of Geology and Geophysics. "The 3-D printed models allow the fabrication of incredibly intricate chamber walls that have details comparable to the living animals."


image: Fossil ammonite along with 3D-printed computer reconstructions showing internal and external morphology.

Increasing complexity

Although ammonoids are long extinct, we can look at their distant living relative, the chambered nautilus, to understand how their shells work.

If you look at a cross-section of a nautilus shell, you'll see that the shell is divided into chambers, each one separated by cup-shaped divider walls--septa. The suture lines are the intersections of these septa with the internal shell wall. "The earliest sutures were essentially straight lines in ammonoid ancestors like the nautiloids," Peterman says. And just as the sutures became more intricate and complex over evolutionary time in ammonoids, the septa developed more complex and fractal-like edges. "Some species," he says, "had sutures so complex that there was hardly any free space where the septa meet the shell."

If ammonoids developed the complex sutures and septa as a result of evolution, they must confer some survival advantage, right? Most research on ammonoids has focused on the hypothesis that the complex septa gave the shell strength. "Mechanical functional interpretations generally concern stress resistance," Peterman says, "with more complex divider walls acting as buttresses."

But several studies, he says, have challenged that hypothesis. An alternative hypothesis is that the intricate surfaces of the septa could change their surface tension, allowing more water to stick and improving the refilling of the shell chambers with water. This matters because that's the mechanism the ammonoids likely used to control their buoyancy during growth, in response to weight changes, and perhaps for vertical movement.

Peterman, assistant professor Kathleen Ritterbush and colleagues set out to test that hypothesis. But first they'd need some septa. The chambers of fossilized ammonoids are typically filled with lithified mud or minerals, Peterman says, necessitating another approach.

3-D printing the past

Using virtual modelling, the researchers custom-designed example septal surfaces in various sizes and with varying levels of complexity. Virtual modeling, Peterman says, allowed for the fabrication of hypothetical surfaces as well. "For example," he says, "one of the most complex sutures out there, from the shell of Menuites oralensis, was iteratively smoothed to investigate differences in complexity while holding the relative chamber volume and shell shape constant."


image: The half-cut shell of the modern Nautilus (right). 3D-printed half-cut ammonite shell (left). Chamber model from the current study, cut to show internal geometry (top).

The team added to the models a coating of micro-dispersed oxidized cellulose to help the water stick to the surface. Nautilus shells have a similar coating. "While nautilids are distant relatives of ammonoids, in some ways they serve as our best analogues for the function of ammonoid shells," Peterman says.

The experimental process was relatively simple: weigh each model dry, dunk it in water, rotate it to drain the water held on by gravity, and then weigh it again to see how much water remained, held on by surface tension.
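
As a minimal sketch of that bookkeeping, the retained water is simply the drained wet mass minus the dry mass; the masses below are invented placeholders, not measurements from the study.

```python
def water_retained(dry_mass_g: float, drained_mass_g: float) -> float:
    """Water held by surface tension = mass after draining minus dry mass."""
    return drained_mass_g - dry_mass_g

# Hypothetical readings for two printed chamber models of the same volume:
simple_suture = water_retained(dry_mass_g=41.2, drained_mass_g=43.0)
complex_suture = water_retained(dry_mass_g=41.5, drained_mass_g=45.9)
print(f"simple suture:  {simple_suture:.1f} g retained")
print(f"complex suture: {complex_suture:.1f} g retained")
```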

But the results showed clearly that the more complex structures held more water. And the more complex folds were especially effective at holding water in larger models. The results suggest, Peterman says, that complex septal surfaces may have helped with more precise and active buoyancy control. Ritterbush adds that they may also have enabled better balance, bigger size and external shapes that favor speed.


image: Virtual model of a single Menuites oralensis septum, used in the current study to create the 3-D-printed models.

Ammonoids hit the peak of suture complexity just before their extinction, along with the dinosaurs, at the end of the Cretaceous. Only the simply-sutured nautilids survived, but there were likely other factors at play aside from suture complexity that enabled their survival.

Their study lays the groundwork for this physiological function to be further explored, along with its relationship to ammonoid ecology. The development of advanced computing workflows and smart materials will eventually allow these enigmatic creatures to be "resurrected" with functioning models.

"While we won't be able to revive these animals like the dinosaurs in Jurassic Park," Peterman says, "computer simulations and experiments such as these are the closest we will get to bringing these ecologically significant cephalopods back to life."

Credit: 
University of Utah

Towards automatic design for freeform optics

image: The first design example is a three-mirror freeform imaging system with a field of view of 8°×6°, a focal length of 50 mm and an F-number of 1.8, working in the LWIR band. The computing task was deployed on the high-performance computing platform at Tsinghua University. Through 41.8 hours of automatic computation, 127 systems were obtained, all of which have an average RMS wavefront error (AVG WFE RMS) smaller than 0.075λ (λ = 10 μm). The imaging quality is considered to be diffraction-limited or near-diffraction-limited.

Image: 
by Benqi Zhang, Guofan Jin and Jun Zhu

In the early days of optical design, people had to be proficient in aberration theory and perform huge amounts of numerical calculation, so mathematical skill and talent were very important. The emergence of electronic computers freed designers from heavy calculation tasks, made fast real ray tracing possible and allowed complex aberration equations to be solved. Since then, the application and development of optimization algorithms and optical design software have greatly improved the speed and quality of optical design. However, optical design still requires solving for or finding an initial solution as the starting point of optimization, and this starting point largely determines the final result. Moreover, optimization is essentially a process of trial and error, and its effectiveness is closely related to the designer's experience. Optical design is therefore both an art and a science.

Although there are more and more automated tools, optical design without human guidance is generally considered impossible. The future of optical design that we look forward to is this: the system's specifications and constraints are input at the beginning of the design, and a large number of high-quality design results with various structures are then output automatically. The main job of the designer would be to weigh factors such as manufacturability and system structure, and to select the final design from the output results.

Towards this ultimate goal of optical design, in a new paper published in Light: Science & Applications, a team of scientists led by Professor Jun Zhu from the State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, China, has developed a result-diversified automatic design method for freeform optics. With the system's specifications (field of view, focal length, entrance pupil diameter) as the only input, a variety of three-mirror freeform imaging systems are obtained automatically, with various structures and diffraction-limited imaging quality. Such a capability has been realized for the first time in the field of optical design.
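
For context on what "diffraction-limited" means here, the Maréchal approximation relates RMS wavefront error to the Strehl ratio. The short check below is an illustration based on that textbook relation, not part of the authors' method; it shows that the reported average RMS wavefront error of 0.075λ corresponds to a Strehl ratio of roughly 0.8, the conventional diffraction-limited threshold, and recovers the entrance pupil diameter implied by the stated focal length and F-number.

```python
import math

def strehl_from_rms_wfe(wfe_in_waves: float) -> float:
    """Maréchal approximation: S ≈ exp(-(2*pi*sigma)^2), sigma = RMS wavefront error in waves."""
    return math.exp(-(2 * math.pi * wfe_in_waves) ** 2)

print(round(strehl_from_rms_wfe(0.075), 2))  # ≈ 0.80, i.e. at the usual diffraction-limited threshold
print(round(50 / 1.8, 1))                    # entrance pupil diameter = focal length / F-number ≈ 27.8 mm
```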

The proposed method performs a coarse search of the solution space of three-mirror freeform systems to obtain a wide variety of high-quality systems, giving the designer an overview of the solutions. The method also allows one to focus on specific designs and conduct fine searches to obtain more similar designs or designs with higher imaging quality. Through different levels of coarse and fine search, more and better freeform designs can be found.

The result-diversified automatic design method proposed in this research offers a brand-new approach to fully automatic optical design. It enables people to obtain a variety of high-quality designs with only basic knowledge of optical design. In scientific research, the large set of good results can be used to explore the solution space of optical systems and the boundaries of system performance, or to study the discipline of optical design itself. In engineering applications, optical design tools based on the proposed method are expected to change the working mode and core content of optical design, letting people focus on system specifications, manufacturability, cost and other practical factors.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Tree hydraulics and water relations: Why trees die as a result of drought

When trees die during a period of drought, they die of thirst. Researchers from the University of Basel have demonstrated in a field study that a rapid collapse of the hydraulic system is responsible for tree death. They also found that trees may die more rapidly than previously thought.

The heatwave of summer 2018 was an exceptional situation - both for nature and for research. Although admittedly hard on our native woods, it also presented an opportunity for researchers at the University of Basel to closely study the reaction of trees to this weather phenomenon.

The research group led by Professor Ansgar Kahmen had already set up a research area in the Basel-Landschaft municipality of Hölstein the previous year. Their aim was to study the tree canopy 30 meters above ground using a crane to determine how native tree species such as the Norway spruce respond to climate change.

Researching in real-life conditions

Shortly afterwards, the heatwave of summer 2018 descended. "This was a unique opportunity for us," says forest scientist Dr. Matthias Arend, a member of Kahmen's research group and lead author of the study. "It was the first time we were able to observe in nature what drought can do to large, old trees."

In their study, the researchers monitored 10 randomly selected Norway spruces, all more than 100 years old and about 30 meters tall, in order to measure the seasonal fluctuations in the water balance in the canopy.

With its flat root system, the Norway spruce is particularly susceptible to collapse, says Arend: "The tree dies because the hydraulic system that transports the water upwards from the soil collapses."

Death is extremely sudden

Arend emphasizes that the observation that trees suffer during drought is not new. What is much more important, he says, is to understand the processes that lead to this tree death, and this is exactly what the researchers have achieved in the study published in the scientific journal PNAS. "This is the only way for us to ensure better modeling processes in future," explains Arend.

The study also made a surprising finding: "The hydraulic system collapses extremely quickly," he says. The researchers assume that this critical point is reached when a large proportion of the roots in the drying soil lose contact with the soil moisture. "Forecasts are very difficult, because it is not a slow, linear process, but one that happens very suddenly, with the system of water uptake and transport failing in the space of just a few days."

The new results diverge from mortality threshold values previously identified in the lab, which means that the hydraulic system of a tree collapses much sooner than previously thought. This happens because dehydration does not progress linearly; the tree cannot recover from the hydraulic collapse and dies as a result.

In search of new tree species

The researchers conclude that the Norway spruce in particular responds more sensitively to drought than previously assumed. "As we can expect extreme periods of drought to become increasingly common in future, we have to think about other tree species that may be able to deal better with the lack of water," says Arend.

He and his team have been able to observe this finding first-hand: in 2018, the Norway spruce suffered most as a result of the drought. It is the most important conifer both in Switzerland and in central Europe. The results of the study are representative of northern Switzerland as a whole, and can also be applied to other conifer species.

Credit: 
University of Basel

Using emotion and humor to combat science misinformation

Misinformation in public debates about scientific issues such as vaccinations and climate change can be found all over the internet, especially on social media. In a new study, Sara K. Yeo, associate professor of communication at the University of Utah, examines why it's so difficult to detect science misinformation and suggests that using humor may help combat the issue.

In the article, published in Proceedings of the National Academy of Sciences, Yeo and her colleague Meaghan McKasy, assistant professor of communication at Utah Valley University, argue that limited science and media literacy, combined with structural constraints such as fewer science journalists and a decreasing number of local newspapers, curtails the ability to discern fact from falsehood. Readers also tend to use mental shortcuts--shaped by political ideology, religious values and unconscious bias--to sift through the deluge of information, which can further complicate the ability to identify false news.

"Misinformation is often packaged or framed in simplistic and emotional ways," said Yeo. "Consider online 'clickbait' as an example: Such content often has captivating titles that promote seemingly scandalous information. This encourages the use of mental shortcuts, which can make detecting and parsing falsehoods a challenge."

According to Yeo and McKasy, the strong emotions that arise from clickbait can impair one's ability to process information rationally, but the effect of emotions on the detection and acceptance of misinformation is not straightforward. However, advances in research on emotion and relatedly, humor, in science communication reveal how they can be used as strategies to address the problem.

Humor is ubiquitous in daily life and human communication. Science is no exception - science jokes abound online under hashtags such as #overlyhonestmethods and #fieldworkfail. In an era of misinformation, humor has the potential to be a defense against fake news, but according to Yeo and McKasy, there needs to be a better understanding of how humor influences attitudes toward science.

"Funny science can draw attention to issues that might not be on the public's agenda and may even help direct attention to valuable and accurate information embedded within a joke. Humor also impacts how we process information about science to form attitudes and behavioral intentions."

Further, humor is linked to people's evaluations of an information source and it can humanize and make a source more likable. Yeo's recent research shows that scientists who use humor are perceived as more likable yet retain their credibility as an expert.

Yeo and McKasy believe there isn't a single or simple solution to the problem of science misinformation; the best and most realistic approach, they argue, is to use multiple strategies together.

"Understanding how emotion and humor shape the public's understanding of science is one more resource that can aid communicators' efforts to combat misinformation. Of course, strategies must be used ethically and how best practices are translated from research depends on the communication goals. It is essential that we engage in dialogue about the ethical considerations that face science communication in the digital media era."

Credit: 
University of Utah

Ultrastable low-cost colloidal quantum dot microlasers with operating temperatures up to 450 K

image: Dispersed CQD self-assembling into close-packed CQD cluster to achieve high packing density, then to the CQD-assembly microsphere to achieve high coupling efficiency, finally to the solidified microsphere to achieve stable and integrated high-T laser. The CQD-assembly microsphere can serve as both gain medium and microcavity. Lights travel inside the WGM microcavity due to the total internal reflection at the resonator boundary to achieve high coupling efficiency. CQDAMs are solidified in silica matrix through sol-gel method to ensure stable working at high temperature.

Image: 
by Hongxing Dong, Wei Xie, Long Zhang

Low-dimensional colloidal quantum dots (CQDs) have attracted significant attention because of their unique structures, extraordinary optical properties, and low-cost preparation processes. Since their first synthesis in the 1990s, the drive to realize high-performance, low-cost CQD micro-/nanolasers has persisted for three decades. However, the low packing density of CQDs, their inefficient coupling with optical cavities, and the poor thermal stability of miniaturized complex systems make it challenging to achieve practical CQD micro-/nanolasers, especially ones that combine continuous operation at high temperatures with the low-cost potential of mass-produced synthesis technologies. Solving these key problems efficiently therefore requires ideas that differ from traditional CQD laser research.

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Hongxing Dong and Professor Long Zhang from the Key Laboratory of Materials for High-Power Laser, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, China, has developed a novel assembly technique combined with the sol-gel method to fabricate CQD-assembled microspheres (CQDAMs) solidified in a silica matrix, which not only guarantees that the CQDAMs work stably at high temperatures but also solves the problems of gain packing density and coupling efficiency. The researchers achieved, for the first time, single-mode lasing from solidified CQDAMs at operating temperatures up to 450 K, the highest operational temperature for CQD microlasers so far. Even when working continuously in such a high-temperature environment, the lasers maintain a stable output of lasing pulses for 40 minutes. By changing the composition and/or size of the CQDs, single-mode lasing can be extended across the entire visible spectral range. Moreover, the solution-processable method is low cost and suited to mass production: it does not require complex optical cavity processing, so no expensive equipment or extremely intricate fabrication steps are needed. These CQDAM lasers can also be densely integrated on a micro-substrate, and the approach is applicable to other kinds of semiconductor nanoparticles, which promises commercial value in high-temperature, low-cost, micro-integrated optoelectronic devices.
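
A rough, textbook-style estimate helps explain why such a small microsphere can host a single lasing mode: the free spectral range of a whispering-gallery-mode resonator scales inversely with its size, so for micron-scale spheres the mode spacing can exceed the gain bandwidth. The values below (emission wavelength, effective index, sphere diameter) are assumptions for illustration, not numbers from the paper.

```python
import math

def wgm_fsr_nm(wavelength_nm: float, n_eff: float, diameter_um: float) -> float:
    """Approximate free spectral range of a whispering-gallery-mode microsphere,
    FSR ≈ λ² / (π · n_eff · D), returned in nanometers."""
    lam = wavelength_nm * 1e-9
    d = diameter_um * 1e-6
    return lam ** 2 / (math.pi * n_eff * d) * 1e9

# Assumed values: 630 nm emission, effective index ~1.8, ~1.2 µm sphere diameter.
print(f"FSR ≈ {wgm_fsr_nm(630, 1.8, 1.2):.0f} nm")
# Tens of nanometers of mode spacing is wider than a typical CQD gain bandwidth,
# so only one resonant mode falls inside the gain window.
```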

In the field of micro-/nanolaser devices, high-performance, low-cost CQD lasers are an important and active topic. Unfortunately, progress has clearly lagged because several levels of challenge coexist: (1) the basic requirement of excellent lasing performance; (2) the ability to meet application conditions such as continuous operation with high stability and applicability in high-temperature environments; and (3) combining a low-cost production route with the merits in points (1) and (2). The scientists summarize the original design ideas behind their microlasers:

"From the point view of gain medium, the self-assembled CQD almost reach the high limit of packing density, ensuring sufficient optical gain. From the point view of light-matter coupling, such CQDAM samples are used both as gain materials and as optical microcavities, fully improving the light-matter coupling efficiency. From the point view of optical cavity performance, the spherical WGM microcavity can effectively improve the confinement ability of cavity photons. For a CQDAM sample of volume of about 1 μm-3, there could be only a single resonant mode effected in the emission wavelength range. However, the Q factor of operative mode could be 104. Most importantly, we combine these three advantages of different aspects together into the CQDAM sample."

"Besides the above laser parameters, the lasing stability at high temperature is also an important aspect related with commercialization potential. The heat dissipation problem is an intrinsic and inevitable difficulty for the next generation of microchip-integrated lasing devices. In this work, the operative temperature of CQDs microlaser is demonstrated to promote to 450 K. Moreover, the CQDs microlaser can be high-density integrated with excellent working ability even at such a high temperature. In addition, our unique but generic CQD microlasers fabrication method is very attractive and promising from a commercial standpoint where they can greatly reduce manufacturing cost and simplify the manufacturing process, thereby benefiting their large-scale industrial production. In other words, this highly efficient solution-preparation processes do not need complex processing techniques and expensive processing equipment, the costs are mainly the low-priced materials. This cost-effective manufacturability and the flexible integration capability pave a new route and promise a great potential in the advancement of CQD microlasers from laboratory to industrialization." they added.

"In addition, ever since the first demonstration of stimulated emission from CQD, the pursuit of electrically pumped CQD lasing has become the subject of intense research. Interestingly, our CQDAMs can serve as both gain medium and optical cavity, which can be readily incorporated into the electroluminescent architecture as emitting layer to enable electrically pumped nanolasers. In fact, the realization of electro induced micro laser is a great challenge, and more complex problems need to be solved, which is also an important content of our future research." the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

When FRETing over cancer biomarkers won't work, focus on blinking instead

image: Schematic representation of fluorescence blinking controlled by triplet formation and triplet-triplet energy transfer.

Image: 
Osaka University

Osaka, Japan - Fluorescence spectroscopy is indispensable in biomedical diagnostics. One can think of turning on fluorescence as turning on a flashlight in a dark room. A diagnostic assay can be designed to label, for example, a specific molecule of DNA with a fluorescent probe. If that specific molecule of DNA is present, you see fluorescence or a change in the fluorescence.

Sometimes an otherwise fluorescent molecule stops emitting light for a brief period of time. This is called fluorescence blinking, which can make it difficult to detect biomolecules at the ultralow concentrations necessary for disease diagnostics. A way to simultaneously decrease blinking for diagnostics and extract useful biochemical information from the blinking for basic research would be the best of both worlds.

In a study recently published in Angewandte Chemie, researchers from Osaka University used a well-known molecule abbreviated as COT—a photostabilizer—to modulate fluorescence blinking in biochemical assays. The researchers used COT to probe the architecture of DNA molecules and to detect a cancer RNA biomarker at ultralow concentrations.

"COT suppresses fluorescence blinking, and so increases fluorescence, by coming into physical contact with the fluorophore," explains Jie Xu, lead author. "In contrast, modulating emission by a widely-used technique known as fluorescence resonance energy transfer, FRET, works over only much longer distances—in the region of 1 to 10 nanometers—and only on a nanosecond timescale."

The researchers first tested their setup on double-stranded DNA containing an internal spacer. When COT was on one end of the spacer and the fluorophore at the other end, there was more fluorescence than when COT was not present. However, fluorescence blinking wasn't eliminated entirely. The researchers exploited this fact by testing how the chemical architecture of the spacer modulates blinking.

"Increasing spacer length and increasing pi-stacking interactions—noncovalent interactions between aromatic rings—in the spacer increased the fluorophore's time in the 'off' state," says Kiyohiko Kawai, senior author. "FRET can't provide information on biomolecular dynamics over these subnanometer distances."

The researchers next detected ultrasmall concentrations of an RNA molecule that is a biomarker for many cancers. They first affixed a fluorescent probe containing COT to a glass slide. The probe was designed such that binding to the RNA biomarker would increase fluorescence from the probe.

"Binding to the target RNA decreased the probe's time in the off state by half," says Xu. "This provides a clear means to detect a cancer biomarker."

Detecting a disease-pertinent biomolecule at ultralow concentrations, as made possible with this technique, can be a way to diagnose a disease in its early stages and facilitate treatment. Furthermore, many fundamental biochemical research studies are feasible now that researchers can probe molecular motions on the subnanometer scale and over broad timescales.

Credit: 
Osaka University

Modeling past and future glacial floods in northern Greenland

image: An outlet stream from Qaanaaq Glacier flooded in August 2016, destroying the road to the airport (Ken Kondo, et al. Journal of Glaciology. February 17, 2021).

Image: 
Ken Kondo, et al. Journal of Glaciology. February 17, 2021

Hokkaido University researchers have clarified different causes of past glacial river floods in the far north of Greenland, and what it means for the region's residents as the climate changes.

The river flowing from the Qaanaaq Glacier in northwest Greenland flooded in 2015 and 2016, washing out the only road connecting the small village of Qaanaaq and its 600 residents to the local airport. What caused the floods was unclear at the time. Now, by combining physical field measurements and meteorological data into a numerical model, researchers at Japan's Hokkaido University have some answers. They published their findings in the Journal of Glaciology.

In 2015, a combination of warm temperatures and strong winds led to a rapid increase in the Qaanaaq Glacier melting. In 2016, the culprit was different: torrential rainfall, which seldom occurs in the region, was the primary driver of the flooding. Both flooding events happened in August, at the end of the summer when most of the snow covering the glacier had melted, leaving the glacier's ice exposed.

"There was nothing to absorb either melting ice water or rainfall, so it all flowed directly into the river," explains Professor Shin Sugiyama, a glacier researcher at Hokkaido University's Institute of Low Temperature Science.

Sugiyama and colleagues, including researchers from Hokkaido University's Arctic Research Center, have been studying the Qaanaaq Glacier for about a decade. This long-term record is unique because the remote, rugged area is difficult to access, so most research on Greenland glacial melting has taken place in the southern part of the island.

For this study, a team of scientists including Ken Kondo, a Ph.D. candidate at Hokkaido University and first author of the paper, visited Qaanaaq every summer from 2016 to 2019 to take measurements on the glacier, including snow and ice accumulation and melting, and measured river flow levels beginning in 2017. The team also collected historical records of air temperature, wind speed, and rainfall. They used the data to build a numerical model and reconstruct the past flooding events, revealing their precise causes.
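
The article does not give the model's equations, but a common minimal ingredient of glacier runoff models is a positive degree-day scheme. The sketch below uses invented coefficients to show, in spirit, how temperature and rainfall records can be turned into daily runoff; it is not the Hokkaido University model.

```python
# Minimal positive-degree-day runoff sketch (illustrative only; coefficients,
# inputs, and structure are assumptions, not the researchers' model).
DDF_ICE = 8.0   # melt factor for bare ice, mm of water per positive degree-day (assumed)
DDF_SNOW = 4.0  # melt factor for snow cover, mm per positive degree-day (assumed)

def daily_runoff_mm(temp_c: float, rain_mm: float, snow_covered: bool) -> float:
    """Melt from positive degree-days plus rain. Bare ice melts faster and retains
    nothing, which is why late-summer warm or rainy spells send water straight
    into the river."""
    ddf = DDF_SNOW if snow_covered else DDF_ICE
    melt = ddf * max(temp_c, 0.0)
    retention = 0.3 * rain_mm if snow_covered else 0.0  # snowpack absorbs some rain
    return melt + rain_mm - retention

print(daily_runoff_mm(temp_c=6.0, rain_mm=0.0, snow_covered=False))   # warm, windy 2015-type day
print(daily_runoff_mm(temp_c=2.0, rain_mm=30.0, snow_covered=False))  # rainy 2016-type day
```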

The team then used the model to predict flooding risks over the next century, assuming temperatures rise by 4°C by 2100 and rainfall increases, as expected with continuing climate change. The model predicts water levels in the river from the Qaanaaq Glacier will be three times higher in 2100 than today.

"These results indicate that the risk of flooding will increase in Qaanaaq," Kondo says. "We can also extrapolate that similar risks will apply across the Arctic region."

About 80% of the area of Greenland, located in the Arctic, is covered with ice, and more than 50,000 people live in a small area between glaciers and the sea. Glacier runoff rivers that flow near settlements provide freshwater needed for daily life, but also cause flooding. The researchers hope that the insights gained from the model can help residents plan for future floods and mitigate potential damage.

"We plan to continue our research in Qaanaaq and to refine our flood risk model," Sugiyama says. "We want to better understand how environmental changes in the Arctic impact ecology and human society, and contribute to the sustainable development of Greenland."

Credit: 
Hokkaido University

Study of US tuna fisheries explores nexus of climate change, sustainable seafood

A new study published in Elementa by researchers at the University of California, Santa Cruz and NOAA examines traditional aspects of seafood sustainability alongside greenhouse gas emissions to better understand the "carbon footprint" of U.S. tuna fisheries.

Fisheries in the United States are among the best managed in the world, thanks to ongoing efforts to fish selectively, end overfishing, and rebuild fish stocks. But climate change could bring dramatic changes in the marine environment that threaten seafood productivity and sustainability. That's one reason why researchers set out to broaden the conversation about sustainability in seafood by comparing the carbon emissions of different tuna fishing practices.

The paper also puts those emissions in context relative to other sources of protein, such as tofu, chicken, pork, and beef. In particular, the study examined how the carbon footprint of tuna was affected by how far from shore fishing fleets operated and what type of fishing gear they used.

"This can be an opportunity to look at fisheries from different angles, all of which may be important," said Brandi McKuin, the study's lead author and a postdoctoral researcher in environmental studies at UC Santa Cruz.

Comparing Carbon Footprints

Generally speaking, less selective tuna fishing gear, such as purse seine nets that scoop up many tuna at once, is more likely to accidentally catch other species during fishing. That's called bycatch, and it's a conservation concern that often factors into seafood sustainability assessments.

But selective gear targeted more specifically at tuna, like trolling lines that reel fish in one at a time, typically has a higher carbon footprint, according to the study's estimates. That's because fishing vessels using these methods had to travel greater distances or spend more time on the water to catch their allotment of fish, which meant they used more fuel.

In one example, skipjack tuna had up to 12 times more estimated climate forcing when produced with trolling gear rather than purse seine gear. Skipjack from purse seine fleets had an estimated carbon footprint almost low enough to compete with plant-based protein sources, like tofu, but this style of fishing can have relatively high bycatch. On the other hand, skipjack produced from trolling has almost no bycatch, but the study estimates its carbon footprint falls on the higher end of the protein spectrum, between pork and beef.
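The roughly 12-fold gap comes down largely to fuel burned per unit of catch. The sketch below walks through that arithmetic; the fuel-use intensities and edible fraction are hypothetical placeholders rather than values from the Elementa study, and the diesel emission factor is only approximate.

    # Illustrative sketch only; fuel-use intensities and the edible fraction are
    # hypothetical placeholders, not results from the study.

    EMISSION_FACTOR = 3.2  # approx. kg CO2-equivalent per litre of marine diesel (life-cycle basis)

    def footprint_per_kg(fuel_litres_per_tonne_landed, edible_fraction=0.6):
        """Estimated kg CO2e per kg of edible tuna from fuel use alone."""
        co2e_per_tonne = fuel_litres_per_tonne_landed * EMISSION_FACTOR
        return co2e_per_tonne / (1000.0 * edible_fraction)

    # Hypothetical comparison of a purse seiner and a troll vessel
    print(round(footprint_per_kg(350), 1))   # purse seine example: ~1.9 kg CO2e per kg
    print(round(footprint_per_kg(4200), 1))  # trolling example: ~22.4 kg CO2e per kg, about 12x more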

There were other fishing methods that seemed to strike a balance. Albacore tuna caught on trolling and pole-and-line fishing gear by the North Pacific surface methods fleet had both negligible bycatch and relatively low estimated climate impacts.

Comparing bycatch, carbon footprints, and other environmental criteria can get complicated for seafood consumers, but overall, for most of the fishing methods studied, tuna had a relatively low estimated carbon footprint: similar to or lower than that of chicken, and lower than that of pork or beef.

"Given recent headlines about how much carbon is unleashed by commercial fishing activities, it's important to have a rigorous, peer-reviewed data analysis which demonstrates the carbon footprint of tuna fishing activities is favorably low compared to many land-based food protein production alternatives," said Stephen Stohs, a coauthor of the study who is a research economist at NOAA Fisheries' Southwest Fisheries Science Center.

Advancing Seafood Sustainability

The study says consumers could choose to eat seafood with negligible bycatch impacts but a higher climate impact less often, just as some people choose to eat beef less often due to its climate impact. But the fishing industry may also be able to innovate in ways that would continue improving seafood sustainability on multiple fronts.

Seafood producers with lower carbon footprints can look for ways to further reduce their bycatch, while those with higher carbon footprints can work to improve their efficiency, whether in catching fish or using fuel. The study provides several policy recommendations to help fisheries reduce their carbon footprints.

One idea discussed in the study is shifting fuel subsidies for fishing away from fossil fuels and toward investments in electrification technology and infrastructure, like hybrid electric and battery electric boat propulsion, as these options become more feasible. While this technology can't yet support longer offshore trips, it already shows potential for coastal fleets. And support for electrification efforts could prioritize fleets using highly selective fishing gear.

Another idea for lowering the carbon footprint of seafood is finding ways to offset emissions. But this strategy would first require a better understanding of emissions across the U.S. fishing sector. There are gaps in data about fuel use intensity for fishing vessels, which was a challenge even for the current study. But increased insight on emissions across the fishing sector could help with designing solutions.

Some within the fishing industry are already taking up this challenge. For example, the Alaska pollock industry is setting an example by conducting a life cycle assessment to take a full inventory of its carbon footprint. Efforts like these have the potential to yield new sustainability benefits, and McKuin hopes more seafood producers will follow suit.

"Companies are asking themselves, 'What is our carbon footprint?' and that awareness can help them lead important change in the industry," McKuin said.

Credit: 
University of California - Santa Cruz

Almond production remains stable in the long term, despite deficit irrigation

image: The study site where the research was carried out.

Image: 
University of Cordoba

Spain boasts the largest cultivated area of almond trees in the world, with more than 700,000 ha (MAPA, 2018), but ranks only third in terms of production. How can this be? Actually, it's easy to explain: most of the country's almond area consists of traditional rainfed orchards located in marginal areas with a low density of trees per hectare.

Over the last decade, however, the nut's surging prices have given rise to intensive almond tree plantations characterised by a high density of trees per hectare and the use of fertilisation and irrigation, yielding endless rows of white when the trees are in bloom. Knowing what the future of these plantations will look like in a country where the availability of water for irrigation is limited is one of the main tasks tackled by agronomic research.

The recent establishment of this type of plantation makes it difficult to conduct long-term studies on them. Creating these new plantations requires significant investments that are not recovered for several years, so studying the productive sustainability of the crop over the life of the plantation is of the utmost importance. One of the questions to be answered in this context is whether the widespread practice of deficit irrigation (i.e. irrigation below the crop's needs) can lead to a drop in productivity in the long term. To answer this question, a team formed by researcher Álvaro López Bernal and Professor Elías Fereres of the María de Maeztu Unit of Excellence - Department of Agronomy at the UCO (DAUCO), together with researchers from IFAPA and the CSIC's Institute of Sustainable Agriculture (IAS-CSIC), has analysed the response of almond production to different deficit irrigation strategies between 2014 and 2019.

To analyse the response to irrigation, crop production functions were constructed representing how almond yield varies in response to the amount of water used seasonally, in the form of applied irrigation, plot evapotranspiration (plot evaporation plus transpiration from the leaves of the plant itself) and tree transpiration. The production functions of this trial were obtained based on four irrigation treatments: a control treatment, in which total irrigation does not limit crop evapotranspiration; two moderate deficit treatments, applying 65% of the control irrigation; and a severe deficit irrigation treatment, at 35%.
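As a rough illustration of what such a production function looks like in practice, the sketch below fits a simple empirical curve of kernel yield against seasonal applied water. The water amounts and the two intermediate yields are invented for the example; only the 1,430 and 2,660 kg figures come from the article, and this is not the analysis performed by the DAUCO, IFAPA and IAS-CSIC team.

    # Illustrative sketch only; water amounts and the intermediate yields are hypothetical.
    import numpy as np

    # Seasonal applied irrigation (mm) and kernel yield (kg per year) for four treatments:
    # severe deficit (35%), two moderate deficits (65%) and the fully irrigated control.
    applied_water = np.array([300.0, 550.0, 560.0, 850.0])      # assumed values
    kernel_yield = np.array([1430.0, 2150.0, 2200.0, 2660.0])   # 1430 and 2660 as reported

    # A second-order polynomial is a common empirical form for a crop production function.
    coeffs = np.polyfit(applied_water, kernel_yield, 2)
    production_function = np.poly1d(coeffs)

    print(production_function(700.0))  # interpolated yield for an intermediate water supply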

Comparing the data obtained for the initial (2014-2016) and final (2017-2019) three-year periods showed that the production functions were practically identical. This result suggests that, under the conditions and duration of the trial, "the deficit irrigation treatments implemented make it possible to maintain sustained production over time, without a gradual drop in production", says López Bernal. In other words, the deficit-irrigated trees in this study did not exhibit signs of reserve depletion or a reduction in the renewal capacity of the mixed bunches and May clusters that yield flowers and fruit.

Although the difference in production between the fully irrigated treatment and the severe deficit irrigation treatment was significant (2,660 vs. 1,430 kg of almond kernels per year), the study suggests that the productive capacity of the trees remained stable over the six years of the experiment, even for those subjected to severe deficit irrigation. It is important to bear in mind that the water supply available to growers in Spain is closer to that of the deficit irrigation treatments than to full irrigation, hence the importance of knowing what happens to production under these water conditions. It should also be noted that this study sheds light on the effects of this practice over a longer period than had been analysed to date.
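A back-of-the-envelope check, under the simplifying assumption that the irrigation fraction can stand in for total water use (rainfall is ignored here, which the actual trial of course does not do), shows why deficit irrigation can still be attractive: the severely restricted trees received roughly a third of the control irrigation yet kept more than half of the yield.

    # Rough arithmetic only; treats the 35% irrigation fraction as a proxy for water use
    # and ignores the contribution of rainfall.
    full_yield, deficit_yield = 2660.0, 1430.0   # kg of kernels per year, as reported
    deficit_irrigation_fraction = 0.35           # severe treatment: 35% of control irrigation

    relative_yield = deficit_yield / full_yield                              # ~0.54
    yield_per_unit_irrigation = relative_yield / deficit_irrigation_fraction # ~1.5x the control

    print(round(relative_yield, 2), round(yield_per_unit_irrigation, 2))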

Credit: 
University of Córdoba