Tech

Victoria's water catchments may not recover from drought: Study

image: Percentage of the 161 study water catchments that displayed low runoff behaviour before, during and after the Millennium Drought from ~1997-2010. The shaded area from 2010 shows the percentage of water catchments that had not recovered from the drought.

Image: 
Monash University

One-third of the water catchments included in a Victorian study had not recovered from a severe drought nearly eight years later, Australian-first research from Monash University shows.

Globally, the common scientific view has been that rivers and underground water supplies eventually replenish following periods of severe drought or flood.

This study, led by Dr Tim Peterson from Monash University's Department of Civil Engineering and published today in the prestigious international journal Science, is the first in the world to challenge this widely held view.

Researchers used statistical models of rainfall and streamflow at 161 water catchments across Victoria, each with over 30 years of data and no upstream dams or water extractions. The area surveyed is about the size of the United Kingdom or half that of the US state of California.

Dr Peterson and research colleagues from The University of Melbourne discovered that when the drought ends, some rivers continue to behave like they're still in a drought for years afterwards and many have not yet recovered.

Specifically, the runoff, as a fraction of precipitation, had not recovered in 37 per cent of water catchments in Victoria after Australia's Millennium Drought, and the number of recovering water catchments remained stagnant.

This means that 100mm of precipitation before the drought, in 1990, created more river flow than the same 100mm in 2017: after the drought, the same rainfall delivered about 30 per cent less streamflow.
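To make the arithmetic concrete, here is a minimal sketch of the runoff-coefficient comparison in Python; the streamflow numbers are illustrative placeholders, not the study's data:

```python
# Runoff expressed as a fraction of precipitation, before and after the
# drought. All numbers are illustrative, not values from the study.
precip_mm = 100.0             # same rainfall event in both years

runoff_pre_mm = 30.0          # assumed streamflow generated in 1990
runoff_post_mm = 21.0         # assumed streamflow generated in 2017, ~30% lower

coeff_pre = runoff_pre_mm / precip_mm     # 0.30
coeff_post = runoff_post_mm / precip_mm   # 0.21

reduction = 1 - coeff_post / coeff_pre    # 0.30
print(f"Runoff coefficient fell from {coeff_pre:.2f} to {coeff_post:.2f}, "
      f"i.e. {reduction:.0%} less streamflow for the same rainfall.")
```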

The number of water catchments with a low or very low runoff state increased rapidly from 1996 to the end of the meteorological drought in summer 2010. By 2011, only 15 per cent of water catchments had recovered.

The Millennium Drought, regarded as one of the worst droughts to hit Australia in its modern history, crippled the Murray-Darling Basin and placed extreme pressure on ecosystems, agricultural production and urban water supply in the south-eastern part of the country. It ended with a La Niña weather event in 2010.

A water catchment, or watershed, is any area of land that captures precipitation, which then flows into common outlets, such as a river, stream, bay or lake. Almost all of Victoria's water supply comes from streamflow.

Dr Peterson said the regeneration of water catchments after severe drought had major implications for global long-term water resource planning and aquatic environments, especially once climate change is considered on top of these findings.

"Our findings suggest hydrological droughts can persist indefinitely after meteorological droughts and that the mechanism for recovery remains an open question," Dr Peterson said.

"This new discovery just appears to be the way catchments naturally behave. It's not explained by factors like land use. They are just more complex than we thought."

Each water catchment analysed for this study had at least 15, seven and five years of streamflow observations before, during and after the Millennium Drought respectively, and had no major upstream reservoirs or river extractions.

Across all 161 water catchments, researchers found that, eight years into the drought, 51 per cent of the catchments had switched into a low or very low runoff state. When the drought ended in 2010, it was primarily the eastern water catchments that returned to a normal runoff state (see figure).

Importantly, by mid-2017, nearly eight years after the drought, more than one-third of water catchments still remained in a low runoff state and had not recovered to their pre-drought behaviour.

Dr Peterson said evidence also suggested vegetation responded to the drought by increasing the fraction of precipitation going to transpiration - the process of water movement through a plant and its evaporation from leaves.

"Practically, this implies that in response to the Millennium Drought, vegetation in selected water catchments responded by maintaining similar rates of transpiration," he said.

Researchers say they've shown that water catchments are more complex than previously thought and that the findings are helping water agencies to better plan for the future.

Dr Peterson and his co-authors at The University of Melbourne have been working with, and communicating the findings to, the Victorian and national water agencies; most recently through the broader findings of the Victorian Water and Climate Initiative.

He says: "it's exciting that the findings have already begun to be used in how water is managed. We are now developing mathematical tools to further help water management use these findings to ensure long-term water supply within a challenging and changing climate."

Credit: 
Monash University

Advertising on popular made-for-kids online channels

What The Study Did: Advertisements on videos on made-for-kids channels on YouTube, as well as the frequency of age-inappropriate ads, were analyzed in this study.

Authors: Jenny S. Radesky, M.D., of the University of Michigan Medical School in Ann Arbor, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2021.9890)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Carbon emissions from dams considerably underestimated so far

image: The Eder dam (Germany) in the year 2019. Areas of water that are drying out release considerably more carbon than areas covered by water.

Image: 
Maik Dobbermann

Among other things, dams serve as reservoirs for drinking water, agricultural irrigation, or the operation of hydropower plants. Until now, it had been assumed that dams act as net carbon stores. Researchers from the Helmholtz Centre for Environmental Research (UFZ), together with Spanish scientists from the Catalan Institute for Water Research (ICRA) in Girona and the University of Barcelona, showed that dams release twice as much carbon as they store. The study has been published in Nature Geoscience.

Whether leaves, branches, or algae - streams transport large amounts of carbon-containing material. If the water is dammed, the material gradually settles and accumulates at the bottom of the body of water. "Because of the lack of oxygen, the degradation processes are much slower down there. As a result, less carbon dioxide is released. The carbon contained in the material is stored in the sediment of the dam for a longer time", explains Dr Matthias Koschorreck, a biologist in the Department of Lake Research at the UFZ. "It had been assumed that dams store about the same amount of carbon as they release as greenhouse gases".

However, for the carbon balance of bodies of water, not only the zones covered by water but also those that temporarily dry out because of a drop in the water level play a role. Koschorreck's working group had demonstrated this in previous studies. If carbon-containing material previously covered by water comes into contact with atmospheric oxygen, degradation processes speed up considerably, and with them the formation of carbon dioxide. "Areas of water that are drying out thus release considerably more carbon than areas covered by water", says Philipp Keller, a former PhD student in the Department of Lake Research at the UFZ. "If large amounts of water are released by a dam, large areas are suddenly exposed. But these areas had not been considered when calculating the carbon balance. This is the knowledge gap that we close with our work".

For their investigations, the researchers used a database based on satellite imagery. This contains monthly data on the size of water surface areas from around 6,800 dams worldwide between 1985 and 2015. For these 30 years, the scientists were thus able to determine exactly when, where, and for how long the dams were not completely filled and how large the dry areas were. On average, 15% of the total reservoir surface was not covered by water. The scientists used this figure to further calculate the carbon release from these areas. "Our calculations show that carbon emissions from dams had been significantly underestimated. On a global average, they release twice as much carbon as they store", says Koschorreck. "Their image as a net carbon store in the global carbon cycle must be reconsidered".
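A hedged sketch of that bookkeeping is shown below; the monthly areas and the areal carbon fluxes are hypothetical placeholders, chosen only to illustrate how a mean dry fraction feeds into a release estimate:

```python
import numpy as np

full_area_km2 = 50.0                           # reservoir area when completely filled
monthly_water_km2 = np.array([50, 48, 45, 40, 38, 36, 35, 37, 41, 45, 48, 50],
                             dtype=float)      # assumed monthly water-covered area

dry_km2 = full_area_km2 - monthly_water_km2    # exposed sediment each month
dry_fraction = dry_km2.mean() / full_area_km2  # study-wide average was ~15%

# Hypothetical areal fluxes (tonnes C per km2 per month); exposed sediment
# releases considerably more carbon than the water surface.
flux_dry, flux_wet = 40.0, 8.0
annual_release_tC = (dry_km2 * flux_dry + monthly_water_km2 * flux_wet).sum()

print(f"Mean dry fraction: {dry_fraction:.1%}")
print(f"Annual carbon release (hypothetical fluxes): {annual_release_tC:,.0f} t C")
```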

The data also show that the magnitude of water level fluctuations of dams depends on both their use and their geographic location. "Fluctuations were more pronounced in dams used for irrigation than in those used for hydropower generation", says Keller. "And in places where the annual precipitation pattern is more uniform - such as near the poles and around the equator - there were fewer large fluctuations in water levels than in the intermediate latitudes, where larger areas of the dams were often dry for much longer periods".

Using the example of dams, the research team demonstrates the influence of areas that are drying out on the global carbon balance of bodies of water. "We hope that our study raises awareness that areas that are drying out must also be considered when balancing the carbon fluxes of natural inland waters", says Koschorreck. The new findings could also be incorporated into a more climate-friendly management of dams. If, for example, the water has to be drained for maintenance, it makes sense to choose the timing with carbon release in mind: if the work is done in the cold season instead of the summer, the degradation of the exposed carbon-containing material is much slower, and the carbon emission is much lower.

In order to better understand the carbon balance of dams, Koschorreck's research team plans to take a closer look at the release of both carbon dioxide and methane, as well as the role of vegetation in the carbon cycle of areas that have become dry.

Credit: 
Helmholtz Centre for Environmental Research - UFZ

The first frost is the deepest

image: Frost on Arabidopsis thaliana - new discovery may help us grow crops in fluctuating climate

Image: 
John Innes Centre

The first frost of autumn may be grim for gardeners but the latest evidence reveals it is a profound event in the life of plants.

The discovery may affect how we grow crops in a fluctuating climate and help us better understand molecular mechanisms in animals and humans.

Much of our understanding of how plants register temperature at a molecular level has been gained from the study of vernalization - the exposure to an extended period of cold as a preparation for flowering in spring.

Experiments using the model plant Arabidopsis have shown how this prolonged period of cold lifts the brake on flowering, a gene called FLC. This biochemical brake also involves another molecule, COOLAIR, which is antisense to FLC: it lies on the opposite strand of DNA to FLC and can bind to FLC and influence its activity.

But less is known about how natural temperature changes affect this process. How does COOLAIR facilitate the shutdown of FLC in nature?

To find out, researchers from the John Innes Centre used naturally occurring types of Arabidopsis grown in different climates.

They measured how much COOLAIR is turned on in three different field sites with varying winter conditions, one in Norwich, UK, one in south Sweden and one in subarctic northern Sweden.

COOLAIR levels varied among different accessions and different locations. However, researchers spotted something that all the plants had in common: the first time the temperature dropped below freezing, there was a peak in COOLAIR.

To confirm this boosting of COOLAIR after freezing, they ran experiments in temperature-controlled chambers that simulated the temperature changes seen in natural conditions.

They found COOLAIR expression levels rose within an hour of freezing and peaked about eight hours afterwards. There was a small reduction in FLC levels immediately after freezing too, reflecting the relationship between the two key molecular components.

Next, they found a mutant Arabidopsis which produces higher levels of COOLAIR all the time even when it is not cold, and low levels of FLC. When they edited the gene to switch off COOLAIR they found that FLC was no longer suppressed, providing further evidence of this elegant molecular mechanism.

Dr Yusheng Zhao, co-first author of the study, said: "Our study shows a new aspect of temperature sensing in plants in natural field conditions. The first seasonal frost serves as an important indicator in autumn of winter's arrival. The initial freezing-dependent induction of COOLAIR appears to be an evolutionarily conserved feature in Arabidopsis and helps to explain how plants sense environmental signals to begin silencing the major floral repressor FLC to align flowering with spring."

The study offers insight into the plasticity of the molecular process by which plants sense temperature, which may help plants adapt to different climates.

Professor Dame Caroline Dean, corresponding author of the study explained: "From the plant's point of view it gives you a tunable way of shutting off FLC. Any modulation of antisense will switch off sense and from an evolutionary perspective, depending on how efficiently or how fast this happens, and how many cells it happens in, you then have a way of dialing the brake up and down among cells."

The findings will be helpful for understanding how plants and other organisms sense fluctuating environmental signals and could be translatable to improving crops at a time of climate change.

The discovery will also likely be widely relevant for environmental regulation of gene expression in many organisms because antisense transcription has been shown to alter transcription in yeast and human cells.

Credit: 
John Innes Centre

Abortion opposition related to beliefs about fetal pain perception

A person's stance on abortion is linked to their often inaccurate beliefs about when a fetus can feel pain, a University of Otago study has found.

Lead author Emma Harcourt, PhD candidate in Otago's Centre for Science Communication, says misinformation about abortion and pregnancy is common and potentially harmful.

"The current medical consensus is that it is unlikely that fetal pain perception is possible before the 29th or 30th weeks of pregnancy. However, we found that most people believe that the capacity to feel pain develops much earlier and that this was particularly evident in participants with anti-abortion views," she says.

The study, published in The Australian and New Zealand Journal of Obstetrics and Gynaecology, recruited 374 people living in the United States and used an online questionnaire to assess their beliefs about abortion and the ability of a fetus to perceive pain.

The researchers found anti-choice participants were more likely than pro-choice participants to believe a fetus in utero can perceive pain before the 23rd week of pregnancy and in the first trimester.

Nearly 80 per cent of female participants believed a fetus can perceive pain prior to the third trimester, compared to just 56 per cent of males. This may be due to women being the targets of anti-choice disinformation campaigns, which systematically overstate the pace at which embryos and fetuses develop, Ms Harcourt says.

Interestingly, most Black and Catholic participants, along with those with advanced degrees, thought fetal pain is not possible before the third trimester.

"It's possible that having an accelerated view of fetal development causes people to oppose abortion; however, it is equally possible that having anti-abortion views alters how people perceive a fetus in utero and affects their willingness to engage with information that doesn't conform with their beliefs. Further research would be needed to determine the directionality of this relationship."

Ms Harcourt says the COVID-19 pandemic has demonstrated the important role that trust plays in the relationship between patient and practitioner.

"Being able to trust that the information given to us by our doctor is true and accurate is the bare minimum that we should expect from the medical profession. However, two-thirds of women of reproductive age in the United States live in a state that has enacted legislation requiring physicians to misinform their patients about one or more aspects of pregnancy and abortion, either verbally or through written materials provided by their state's department of health.

"In New Zealand, obstetricians and abortion providers are aware their patients often have also been exposed to false claims about fetal development and the safety of abortion. I hope this research will serve as a reminder that patients may be coming into the consulting room with potentially harmful misconceptions."

Credit: 
University of Otago

New study reveals where memories of familiar places are stored in the brain

image: The place-memory network of the human brain, compared with the brain areas that process visual scenes (white).

Image: 
Figure by A. Steel et al.

As we move through the world, what we see is seamlessly integrated with our memory of the broader spatial environment. How does the brain accomplish this feat? A new study from Dartmouth College reveals that three regions of the brain in the posterior cerebral cortex, which the researchers call "place-memory areas," form a link between the brain's perceptual and memory systems. The findings are published in Nature Communications.

"As we navigate our surroundings, information enters the visual cortex and somehow ends up as knowledge of where we are - the question is where this transformation into spatial knowledge occurs. We think that the place-memory areas might be where this happens," explains lead author Adam Steel, a Neukom Fellow with the department of psychology and brain sciences in the Robertson Lab at Dartmouth. "When you look at the location of the brain areas that process visual scenes and those that process spatial memories, these place-memory areas literally form a bridge between the two systems. Each of the brain areas involved in visual processing are paired with a place-memory counterpart."

For the study, an innovative methodology was employed. Participants were asked to perceive and recall places that they had been to in the real world during functional magnetic resonance imaging (fMRI), which produced high-resolution, subject-specific maps of brain activity. Past studies on scene perception and memory have often used stimuli that participants knew of but had never visited, like famous landmarks, and have pooled data across many subjects. By mapping the brain activity of individual participants using real-world places that they had been to, researchers were able to untangle the brain's fine-grained organization.

In one experiment, 14 participants provided a list of people that they knew personally and places that they had visited in real life (e.g., their father or their childhood home). Then, while in the fMRI scanner, the participants imagined that they were seeing those people or visiting those places. Comparing the brain activity between people and places revealed the place-memory areas. Importantly, when the researchers compared these newly identified regions to the brain areas that process visual scenes, the new regions were overlapping but distinct.

"We were surprised," says Steel, "because the classic understanding is that the brain areas that perceive should be the same areas that are engaged during memory recall."

In another experiment, the team investigated whether the place-memory areas were involved in recognition of familiar places. During fMRI scanning, participants were presented with panning images of familiar and unfamiliar real-world locations downloaded from Google Street View. When the researchers looked at the neural activity, they found that the place-memory areas were more active when images of familiar places were shown. The scene-perception areas did not show the same enhancement when viewing familiar places. This suggests that the place-memory areas play an important role in recognizing familiar locations.

"Our findings help explain how a generic image of a clock tower becomes one that we recognize, such as Baker-Berry Library's tower here on Dartmouth's campus," says Steel.

"It's thrilling to discover a new set of brain areas," says senior author Caroline Robertson, an assistant professor of psychological and brain sciences at Dartmouth. "Learning how the mind is organized is at the heart of the quest of understanding what makes us human."

"The place-memory network provides a new framework for understanding the neural processes that drive memory-guided visual behaviors, including navigation," explains Robertson.

The research team is currently using virtual reality technology to explore how representations in the place-memory areas evolve as people become more familiar with new environments.

Credit: 
Dartmouth College

Freeform imaging systems: Fermat's principle unlocks 'first time right' design

image: Graphical user interface of the developed open-access trial web application that provides readers the opportunity for hands-on freeform design experience.

Image: 
by Fabian Duerr and Hugo Thienpont

Optical imaging systems have played an essential role in scientific discovery and societal progress for several centuries. For more than 150 years, scientists and engineers have used aberration theory to describe and quantify the deviation of light rays from ideal focusing in an imaging system. Until recently, most of these imaging systems included spherical and aspherical refractive lenses, reflective mirrors, or a combination of both. With the introduction of new ultra-precision manufacturing methods, it has become possible to fabricate lenses and mirrors that lack the common translational or rotational symmetry about a plane or an axis. Such optical components are called freeform optical elements, and they can be used to greatly extend the functionality, improve the performance, and reduce the volume and weight of optical imaging systems. Today, the design of optical systems largely relies on efficient raytracing and optimization algorithms. A successful and widely used optimization-based optical design strategy therefore consists of choosing a well-known optical system as a starting point and steadily achieving incremental improvements. Such a "step-and-repeat" approach to optical design, however, requires considerable experience, intuition, and guesswork, which is why it is sometimes referred to as "art and science". This applies especially to freeform optical systems.

In a newly published paper in Light: Science & Applications, researchers at Brussels Photonics (B-PHOT), Vrije Universiteit Brussel, Belgium, have developed a deterministic direct optical design method for freeform imaging systems based on differential equations derived from Fermat's principle and solved using power series. The method allows the calculation of the optical surface coefficients that ensure minimal image blurring for each individual order of aberrations. They demonstrate the systematic, deterministic, scalable, and holistic character of their method for mirror- and lens-based design examples. The reported approach provides a disruptive methodology for designing optical imaging systems from scratch, while largely reducing the 'trial and error' of present-day optical design.

The scientists summarize the operational principle of their method:

"We only need to specify the layout, the number and types of surfaces to be designed and the location of the stop. The established differential equations and solution scheme requires only two further steps: (1) solve the non-linear first order case using a standard non-linear solver; (2) solve the linear systems of equations in ascending order by setting unwanted aberrations to zero or by minimizing a combination thereof as required by the targeted specifications of the imaging freeform system. Most importantly, these two steps are identical for all (freeform) optical designs"

"The presented method allows a highly systematic generation and evaluation of directly calculated freeform design solutions that can be readily used as an excellent starting point for further and final optimization. As such, it allows the straightforward generation of 'first time right' initial designs that enable a rigorous, extensive and real-time evaluation in solution space when combined with available local or global optimization algorithms."

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Scientists show how to attack the 'fortress' surrounding pancreatic cancer tumors

image: Photo of a human pancreatic tumour section showing SLC7A11 in helper cells (yellow)

Image: 
UNSW Sydney

UNSW medical researchers have found a way to starve pancreatic cancer cells and 'disable' the cells that block treatment from working effectively. Their findings in mice and human lab models - which have been 10 years in the making and are about to be put to the test in a human clinical trial - are published today in Cancer Research, a journal of the American Association for Cancer Research.

"Pancreatic cancer has seen minimal improvement in survival for the last four decades - and without immediate action, it is predicted to be the world's second biggest cancer killer by 2025," says senior author Associate Professor Phoebe Phillips from UNSW Medicine & Health.

"But our latest advance means today I am the most optimistic and hopeful I have been in my career."

Pancreatic cancer is notoriously difficult to treat because of the dense scar tissue surrounding tumours - the tissue acts like a fortress that blocks chemotherapy delivery.

"This scar tissue is produced by critical 'helper cells' - also called cancer-associated fibroblasts - which cancer cells recruit to support their growth and spread. Yet, these helper cells have been ignored in current treatment strategies," A/Prof. Phillips says.

"Our approach hits both the tumour cells and the helper cells, so it's ideal for overcoming the aggressiveness and drug resistance of the disease."

In today's paper, the team demonstrates their novel way to metabolically rewire helper cells by targeting one particular protein called SLC7A11, which in turn shuts off the cells' tumour-promoting activity and reduces the scar tissue they produce.

"We found that switching off SLC7A11 in mice with pancreatic tumours directly killed pancreatic cancer cells, reduced the spread of tumour cells throughout their body and decreased the scar tissue fortress," says Dr George Sharbeen, a postdoc researcher in A/Prof. Phillips' lab who led the experimental work.

Comprehensive models, in-depth study

SLC7A11 has been studied in pancreatic cancer cells before, but this is the first piece of research to show that it plays a critical role in non-tumour helper cells, too.

"In other words, we have identified a novel 'dual cell' therapeutic target - tackling both the tumour cells and their helpers - which overcomes the current limitations of standard chemotherapy." The team used several complementary models to improve the clinical translatability of their findings, including patient-derived pancreatic cancer cell lines and helper cells, 3D at-the-bench models including an explant model that maintains pieces of human pancreatic tumour tissue, and multiple mouse models of pancreatic cancer.

"We also used our cutting-edge nanomedicine we developed in a multi-disciplinary collaboration with engineers - UNSW Professor Cyrille Boyer and University of Queensland Professor Thomas Davis - to deliver a gene therapy to inhibit SLC7A11. This therapy is advantageous because our nano-drug is tiny and able to penetrate the scar tissue in pancreatic cancer," co-first author Associate Professor Joshua McCarroll from the Children's Cancer Institute says.

Clinical trial about to commence

The team's findings have formed the foundation for a clinical trial led by A/Prof. Phillips and UNSW Medicine collaborator Professor David Goldstein, which was funded by a recently awarded Cancer Institute NSW Translational Program Grant.

"In this trial, we will repurpose an anti-arthritis drug called sulfasalazine - which we know potently inhibits SLC7A11 - for the treatment of pancreatic cancer patients with tumours that have high SLC7A11 levels, which we've shown to be the case in more than half of patients. It has the potential to improve treatment response and ultimately survival of these patients," A/Prof. Phillips says.

The researchers say the opportunity to repurpose an existing drug that's already in the clinic will help them make progress more quickly.

"Using an approved drug has allowed us to get this piece into the clinic much faster than what would be the case if we started from scratch with drug development, too," says A/Prof. Phillips.

"We are taking this exciting development all the way from the lab bench through to the clinic with the sole purpose of improving outcomes for patients with pancreatic cancer."

The research team hopes to analyse and publish the first set of results of the trial within three years.

Improving outcomes that haven't changed in decades

In addition to the clinical trial, the team now hopes to assess how their approach interferes with the exchange of nutrients between tumour cells and helper cells. They also want to identify the ideal drugs to combine with their therapeutic approach to enhance anti-tumour effects.

Pancreatic cancer is a highly lethal disease, with only one in 10 patients surviving beyond five years. In 2020, an estimated 4000 Australians were diagnosed with pancreatic cancer - about 90 per cent of them will die, often within a few months of diagnosis.

"We clearly need improved treatments to turn these dismal statistics around, and we hope clinical translation of our findings will ultimately increase the number of pancreatic cancer survivors," says A/Prof. Phillips.

"We will not give up until we improve the quality of life of patients and provide them with an effective treatment."

Credit: 
University of New South Wales

How widespread is lemur and fossa meat consumption?

image: Critically endangered silky sifaka (Propithecus candidus) in Makira Natural Park, Madagascar

Image: 
Charlotte Spira

MAROANTSETRA, Madagascar (May 13, 2021) - A new study by WCS (Wildlife Conservation Society) looks at the prevalence of human consumption of lemur and fossa (Madagascar's largest predator) meat in villages within and around Makira Natural Park, northeastern Madagascar, providing up-to-date estimates of the percentage of households that eat meat from these protected species.

The authors describe their findings in the journal Conservation Science and Practice. In Madagascar, the consumption of endangered and protected species, in particular lemurs, is widespread. Consumer demand for bushmeat can drive species to extinction: the species with higher body mass are generally the most heavily hunted, yet they also tend to have low reproductive rates, making them particularly at risk of extinction from the demand for their meat.

All of Madagascar's lemur species and the fossa (Cryptoprocta ferox) are protected by law, and local taboos - called fady in Malagasy culture - tend to forbid lemur meat consumption. Therefore, it is difficult to quantify the prevalence of this behavior as people who engage in illegal or socially unacceptable practices are generally reluctant to discuss them openly.

The authors estimated the prevalence of lemur and fossa meat consumption using the unmatched count technique (UCT) - an indirect questioning method that estimates the proportion of a community that takes part in a given sensitive behavior without asking sensitive questions directly to survey respondents, and without knowing whether individual respondents took part in the behavior or not.

The UCT revealed that 53 percent of households had eaten lemur meat over the previous year and 24 percent had eaten fossa meat. These estimates were, respectively, more than 3.3 and 12 times higher than those obtained from direct questioning.
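For readers unfamiliar with the estimator, a minimal simulation shows how the UCT recovers a prevalence figure from list counts alone; the list sizes, probabilities, and sample sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_prevalence = 0.5   # assumed fraction of households eating lemur meat

# Control respondents count how many of 4 innocuous items apply to them;
# treatment respondents get the same list plus the sensitive item, and
# everyone reports only a total, never which items applied.
control = rng.binomial(4, 0.4, size=500)
treatment = (rng.binomial(4, 0.4, size=500)
             + rng.binomial(1, true_prevalence, size=500))

# No individual's behavior is revealed, but the difference in mean counts
# estimates the prevalence of the sensitive behavior.
estimate = treatment.mean() - control.mean()
print(f"Estimated prevalence: {estimate:.1%}")   # close to 50%
```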

"Because of their low reproductive rates and the high human population density around Makira Natural Park, these species are known to be hunted unsustainably" said WCS researcher Charlotte Spira, the lead author of the study. "Quantifying the prevalence of lemur and fossa meat consumption through repeated measures over time will enable us to assess the impact of ongoing conservation efforts aimed at reducing their hunting and consumption while increasing the production and consumption of alternative protein sources."

Said Michelle Wieland, Central Africa Livelihoods Coordinator for WCS and a co-author of the study: "We know that many rural Malagasy households don't have enough micronutrients in their diet, and even small amounts of wildmeat are important for child nutrition. That's why households are participating in new poultry production and fish farming programs to replace the rare, but consequential consumption of endangered species."

Poor, protein-deficient communities have been targeted through the Sustainable Wildlife Management program for domestic livestock and fish production programs to increase their access to animal-based proteins that contain vital nutrients. The support they receive ranges from comprehensive training in chicken and fish farming - including in how to build and maintain low-cost pens and ponds - to being provided chickens and fish to start their farms, through to regular monitoring of their production and the difficulties they encounter.

The study's findings are being used to design a behavior change campaign at the appropriate scale, comprising in part social marketing messages to be disseminated to a large proportion of the ~13,300 people who live in the study area. The content of the campaign, i.e. the messages and the approaches used to disseminate them so they reach the target consumers, is being defined based on results from a more in-depth study on meat consumption preferences, drivers, and behavioral habits that was conducted in parallel to this study.

The authors strongly recommend the use of the UCT by researchers who wish to estimate the prevalence of sensitive behaviors in areas where conservation projects are implemented. Particular attention should be paid to training, survey design, and piloting to ensure that all the assumptions underlying the method are met, and that language and representation subtleties associated with the species of interest are taken into account.

Credit: 
Wildlife Conservation Society

Man's best friend in life and death: Pet dog brain banking supports aging research

image: Veterinary anatomist Dr. Kálmán Czeibert measuring a brain

Image: 
Photo: Enikő Kubinyi / Eötvös Loránd University

Two recent papers from Hungarian researchers highlight the so far underrated relevance of pet dog biobanking to molecular research and introduce their initiative, which is taking pioneering steps in this field. The Hungarian Canine Brain and Tissue Bank (CBTB) was established by the research team of the Senior Family Dog Project in 2017, following the example of human tissue banks. In a recent paper, the team reports findings that would not have been possible without the CBTB and that may drive further progress in dog aging and biomarker research.

Even though dogs have a much shorter average lifespan than humans, the aging paths of the two species show remarkable similarities. Hence our best friends have attracted the attention of aging researchers. Most importantly, dogs tend to develop age-related diseases similar to those of humans, including dementia, which is not typical of most other animal species. However, not every human or dog is affected by age-related ailments to the same extent. The ability to foresee individual tendencies and provide interventions that can shift aging onto a healthier track is the holy grail of aging research. The fact that dogs mirror human aging in many respects makes them highly promising preclinical models for testing interventions and understanding the factors that determine aging. These goals, however, require deeper insight into the microscopic level of canine aging as well.

As stated in the recent review paper from the Senior Family Dog Project at the Department of Ethology, Eötvös Loránd University, dog biobanks represent a valuable source of tissues and other biological materials that could not be gathered for scientific purposes in any other way. "Medical research often relies on laboratory dogs, but keeping them to study natural aging would be time-consuming, expensive and ethically debatable," said Dr. Sára Sándor, geneticist and first author of the review, published in the journal GeroScience.

Dog biobanks, which collect and store samples from pet dogs that lived with their owners until being euthanized for medical reasons or old age, may provide valuable support to dog research without the need to involve more laboratory animals. "Pet dogs represent various breeds and live together with their owners. Therefore, we can grasp the effects of lifestyle and environmental stressors on aging and dementia in dogs like in no other model animal," said Dr. Enikő Kubinyi, the principal investigator of the research group. "When we started to study the aging of hundreds of old dogs thanks to a European Research Council Starting Grant, we realized that during the project many of these dogs would, sadly, pass away from natural causes. We thought we could ask the owners whether they would further support research by offering the bodies of their deceased dogs. Linking behavior, pet-keeping conditions, and molecular data would result in a unique biobank. We were relieved and very grateful that many owners said yes. Luckily, we met a committed and highly knowledgeable veterinary anatomist, Dr. Kálmán Czeibert, who helped us establish the Canine Brain and Tissue Bank."

Creating and managing a pet dog biobank holds several challenges that have to be addressed appropriately, as discussed in the review paper. Some of these issues, like protocol optimization and communicating the goals in the public media, are shared with human biobanks, which, in contrast to dog biobanks, have a vast literature of protocol development and experience to draw on.

"The bank currently keeps 130 dog brains and other tissues. Besides continuously upgrading our protocols, we conduct our own studies and are open to sharing the samples with research groups from all over the world." - adds Dr. Czeibert.

The CBTB research team first delved into molecular research by measuring the amount of protein-coding RNA molecules - a readout termed gene expression - in canine tissues collected by the CBTB. RNA molecules are very sensitive and may degrade soon after an animal's death, which is especially relevant for the CBTB, where tissues can be stabilized only a few hours after the euthanasia of the animals. The specific target of this study, published in Frontiers in Veterinary Science, was the cyclin-dependent kinase inhibitor 2A gene, or CDKN2A for short. This gene has been proposed as a powerful aging biomarker in humans because it shows increased expression - a higher number of RNA molecules - in several human tissues with age. This correlation is exceptionally high in the brain tissue of older people affected by dementia. Finding such biomarkers, which can signal the onset of age-related diseases before actual symptoms appear, is crucial for clinical applications in both dogs and humans.

Following the human studies, the researchers examined brain, skeletal muscle, and skin tissue from the donated dogs. The team found that older dogs, on average, had more CDKN2A mRNA in their brains and muscles than younger dogs, indicating a positive correlation between CDKN2A expression and age; however, they did not find a similar correlation in the skin. These results, even the inconsistency between tissues, matched previous human findings. The scientists were also curious about how CDKN2A behaved in blood obtained from live animals, as this question has direct implications for clinical applications. As a first step, they tested 15 border collies and found a moderate, positive correlation with age. "This less pronounced correlation means that further research will be required to validate whether CDKN2A could suffice as a blood-borne biomarker for clinical application in dogs," added Dr. Sándor. "However, the robust correlation in the brain shows clear promise for comparative dementia research because it suggests that similar cellular senescence mechanisms contribute to brain aging in dogs and humans."
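A minimal sketch of that kind of expression-versus-age analysis is given below, with simulated values rather than the study's measurements:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
age_years = rng.uniform(2, 16, size=30)          # simulated donor-dog ages
# Simulated brain CDKN2A expression that rises with age, plus noise.
brain_expression = 1.0 + 0.2 * age_years + rng.normal(0.0, 0.5, size=30)

rho, p = spearmanr(age_years, brain_expression)  # rank correlation with age
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # positive in this simulation
```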

Hence, it seems that our canine companions will become our best friends in aging research as well.

Credit: 
Eötvös Loránd University

Molecular alteration may be cause -- not consequence -- of heart failure

Clinicians and scientists have long observed that cells in overstressed hearts have high levels of the simple sugar O-GlcNAc modifying thousands of proteins within cells. Now, researchers at Johns Hopkins Medicine have found evidence in mouse experiments that these excess sugars could well be a cause, not merely a consequence or marker of heart failure.

Their research found that elevated levels of O-GlcNAc made mice more prone to heart failure, but lowering levels of O-GlcNAc returned the animals' heart function and risk of death to normal. Together, the investigators say, the new findings, described online in the April 27 issue of the journal Circulation, could offer a new molecular target for therapies that prevent or stop human heart failure.

According to the Centers for Disease Control and Prevention, an estimated 6.2 million Americans have heart failure, a progressive condition in which the heart struggles and ultimately fails to pump enough blood and oxygen to support the body's organs. The ailment costs the U.S. an estimated $30.7 billion in hospitalizations, treatment and lost productivity. Other conditions, including high blood pressure, diabetes and obesity, contribute to the development of heart failure.

"Heart failure is a huge problem around the world, and our experiments show we may be able to move the therapeutic needle in the right direction by manipulating levels of O-GlcNAc," says Priya Umapathi, M.D., assistant professor of medicine at the Johns Hopkins University School of Medicine and first author of the new paper.

Proteins within living cells can be modified with the addition of small chemical groups that coax the proteins to change their shape or function. Among those modifications is O-GlcNAcylation, the addition of the sugar molecule O-GlcNAc (O-linked N-acetylglucosamine). The modification is controlled by two other molecules: O-GlcNAc transferase (OGT), an enzyme that adds the sugars to proteins, and O-GlcNAcase (OGA), an enzyme that facilitates their removal.

Researchers have long known that proteins in the cells of people with heart failure have more O-GlcNAc than usual. But whether increased levels of the sugar were a cause or consequence of heart failure -- or an attempt by the body to ward off heart failure -- has been unclear.

"The field has been conflicted about whether O-GlcNAc in the heart is a good thing or a bad thing," says Umapathi.

In the new work, Umapathi and her colleagues genetically engineered mice with higher than usual levels of OGT or OGA in heart muscle cells. The animals with high OGT -- and therefore more O-GlcNAc in these cells -- developed severe heart failure. Their hearts began to weaken and pump less blood at just 6 weeks old. By 25 weeks of age, more than half of all mice with high OGT had died, while no control animals with normal levels of OGT had died.

"These mice developed really stunning heart failure," says Umapathi. "Similar to many patients with cardiomyopathy, the mice developed enlarged hearts, abnormal electrical rhythms and died very early."

Animals with high OGA -- and therefore lower than usual O-GlcNAc in their heart cells -- remained healthy, however, and showed no signs of heart failure, even when challenged with an operation that constricts one of the heart's blood vessels.

To test whether high levels of O-GlcNAc could be reversed to help prevent end-stage heart failure, the researchers next cross-bred the two strains of mice, engineering animals to have both high OGT and OGA levels.

These animals no longer developed heart failure or died early, presumably because while OGT led them to add excessive O-GlcNAc sugars to proteins in the heart cells, the high levels of OGA reversed that excessive modification. That observation, the researchers say, suggests that drugs targeting the O-GlcNAc pathway could help prevent heart failure.

"Most existing heart failure therapies -- including beta-blockers, diuretics and ACE inhibitors -- target the same few molecular pathways," says Mark Anderson, M.D., Ph.D., professor and director of the Department of Medicine at Johns Hopkins University School of Medicine and an author of the new paper. "O-GlcNAc represents a completely new pathway that hasn't been targeted with therapeutics before, so that's really exciting."

In additional experiments, the team studied which proteins in heart cells were being modified with the addition of O-GlcNAc. Further studies along these same lines could reveal exactly why the sugars are so important and could possibly identify other molecules involved in heart failure.

"Now that we have these beautiful models to manipulate O-GlcNAc levels in the heart, we can start to get a much better understanding of how this modification plays a role in different subtypes of heart failure," says Natasha Zachara, Ph.D., associate professor of biological chemistry at the Institute for Basic Biomedical Sciences at Johns Hopkins University School of Medicine and a lead author of the new work.

Credit: 
Johns Hopkins Medicine

New evidence for electron's dual nature found in a quantum spin liquid

image: Researchers at Princeton University conducted experiments on materials known as quantum spin liquids, finding evidence that the electrons in the quantum regime behave as if they are made up of two particles.

Image: 
Catherine Zandonella, Princeton University

A new discovery led by Princeton University could upend our understanding of how electrons behave under extreme conditions in quantum materials. The finding provides experimental evidence that this familiar building block of matter behaves as if it is made of two particles: one particle that gives the electron its negative charge and another that supplies its magnet-like property, known as spin.

"We think this is the first hard evidence of spin-charge separation," said Nai Phuan Ong, Princeton's Eugene Higgins Professor of Physics and senior author on the paper published this week in the journal Nature Physics.

The experimental results fulfill a prediction made decades ago to explain one of the most mind-bending states of matter, the quantum spin liquid. In all materials, the spin of an electron can point either up or down. In the familiar magnet, all of the spins uniformly point in one direction throughout the sample when the temperature drops below a critical temperature.

However, in spin liquid materials, the spins are unable to establish a uniform pattern even when cooled very close to absolute zero. Instead, the spins are constantly changing in a tightly coordinated, entangled choreography. The result is one of the most entangled quantum states ever conceived, a state of great interest to researchers in the growing field of quantum computing.

To describe this behavior mathematically, Nobel prize-winning Princeton physicist Philip Anderson (1923-2020), who first predicted the existence of spin liquids in 1973, proposed an explanation: in the quantum regime an electron may be regarded as composed of two particles, one bearing the electron's negative charge and the other containing its spin. Anderson called the spin-containing particle a spinon.

In this new study, the team searched for signs of the spinon in a spin liquid composed of ruthenium and chlorine atoms. At temperatures a fraction of a Kelvin above absolute zero (or roughly -452 degrees Fahrenheit) and in the presence of a high magnetic field, ruthenium chloride crystals enter the spin liquid state.

Graduate student Peter Czajka and Tong Gao, Ph.D. 2020, connected three highly sensitive thermometers to the crystal sitting in a bath maintained at temperatures close to absolute zero. They then applied the magnetic field and a small amount of heat to one crystal edge to measure its thermal conductivity, a quantity that expresses how well it conducts a heat current. If spinons were present, they should appear as an oscillating pattern in a graph of the thermal conductivity versus magnetic field.

The oscillating signal they were searching for was tiny -- just a few hundredths of a degree change -- so the measurements demanded an extraordinarily precise control of the sample temperature as well as careful calibrations of the thermometers in the strong magnetic field.

The team used the purest crystals available, ones grown at the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) under the leadership of David Mandrus, materials science professor at the University of Tennessee-Knoxville, and Stephen Nagler, corporate research fellow in ORNL's Neutron Scattering Division. The ORNL team has extensively studied the quantum spin liquid properties of ruthenium chloride.

In a series of experiments conducted over nearly three years, Czajka and Gao detected temperature oscillations consistent with spinons with increasingly higher resolution, providing evidence that the electron is composed of two particles consistent with Anderson's prediction.

"People have been searching for this signature for four decades," Ong said, "If this finding and the spinon interpretation are validated, it would significantly advance the field of quantum spin liquids."

Czajka and Gao spent last summer confirming the experiments while under COVID restrictions that required them to wear masks and maintain social distancing.

"From the purely experimental side," Czajka said, "it was exciting to see results that in effect break the rules that you learn in elementary physics classes."

Credit: 
Princeton University

Ingredient in common weed killer impairs insect immune systems, study suggests

The chemical compound glyphosate, the world's most widely used herbicide, can weaken the immune systems of insects, suggests a study from researchers at the Johns Hopkins Bloomberg School of Public Health. Glyphosate is the active ingredient in Roundup™, a popular U.S. brand of weed killer products.

The researchers investigated the effects of glyphosate on two evolutionarily distant insects, Galleria mellonella, the greater wax moth, and Anopheles gambiae, a mosquito that is an important transmitter of malaria to humans in Africa. They found that glyphosate inhibits the production of melanin, which insects often use as part of their immune defenses against bacteria and parasites; it thereby reduces the resistance of these species to infection by common pathogens.

The findings were published online May 12 in PLoS Biology.

"The finding that glyphosate appears to have an adverse effect on insects by interfering with their melanin production suggests the potential for a large-scale ecological impact, including impacts on human health," says study co-first author Daniel Smith, a PhD candidate in the laboratory of Arturo Casadevall MD, PhD, Alfred and Jill Sommer Professor and Chair of the Department of Molecular Microbiology and Immunology at the Bloomberg School.

The study was a collaboration between the laboratories of Casadevall and Nichole Broderick, PhD, assistant professor in the Department of Biology at Johns Hopkins University.

"Our results show unexpected effects from a widely used herbicide, and alert us to the fact that spreading these chemicals in the environment may have unintended consequences," says Casadevall, a Bloomberg Distinguished Professor.

The idea that human products and activities can inadvertently disrupt surrounding animal populations through the use of ordinary household or industrial chemicals is by now widely accepted. About 50 years ago, for example, most countries banned the common pesticide DDT due to its deleterious effects on insects, fish, and birds. In recent years, apparent declines in some insect populations have led to concerns among scientists that other common chemicals, including glyphosate, may also be causing harmful disruptions to ecosystems.

Prior research suggests that glyphosate may have adverse effects on honeybees and other insect species, linking those effects to oxidative stress or to disruption of gut bacteria, but scientists had not investigated additional adverse effects that could occur. In 2001, Casadevall and colleagues found that glyphosate can weaken fungi by inhibiting their production of melanin, a compound that helps pathogenic fungi resist the immune systems of the animals they infect.

Melanin has many other functions in the animal kingdom. While in humans it is best known as a light-absorbing pigment that protects the skin from ultraviolet radiation damage, melanin plays an important role in immunity in insects. Thus, in the new study, the researchers examined glyphosate's effects on melanin production and immunity in two representative insect species, the greater wax moth and an African mosquito that can carry malaria.

Melanin works in insect immunity essentially by trapping and killing an invading bacterium, fungal cell, or parasite. Melanin production rises in response to the infection, and in a process called melanization, melanin molecules surround the invading pathogen--while highly reactive molecules produced as part of the melanin-synthesis process effectively destroy the invader. Smith and colleagues found that in the larvae of Galleria mellonella moths, glyphosate inhibits the complex set of reactions that synthesize melanin, and thus weakens the melanization response and shortens the survival of the insects when they are infected with the yeast Cryptococcus neoformans.

Similarly, the researchers found that in A. gambiae mosquitoes, glyphosate inhibits melanin production and melanization, and thereby makes the mosquitoes more susceptible to infection by Plasmodium falciparum, the most dangerous species of malaria parasite. They found too that glyphosate alters the composition of the bacterial and fungal population in the mosquito midgut--the "gut microbiome" that, as in humans, helps regulate mosquito health.

In a further set of experiments, Smith and colleagues found that other phosphate-containing compounds related to glyphosate have similar effects in reducing melanization.

To the researchers, the results raise concerns that glyphosate and possibly other phosphate-containing compounds may be harming insect populations. Insects have many roles in the global ecosystem, and disrupting their populations could in turn have major adverse effects on people, for example in agriculture, and even in the realm of infectious diseases.

"Mosquitoes exposed to glyphosate were less able to control Plasmodium infections they would have otherwise resisted, which hints that glyphosate exposure may make them better vectors for malaria," Smith says. "These results raise concerns about the increasing use of glyphosate in regions of the world where malaria is endemic."

The researchers are now studying the long-term, multi-generational effects of glyphosate on insect populations.

Credit: 
Johns Hopkins Bloomberg School of Public Health

Species losses on isolated Panamanian island show importance of habitat connectivity

image: red-capped manakin

Image: 
OSU photo by Randall Moore

CORVALLIS, Ore. - Free from human disturbance for a century, an inland island in Central America has nevertheless lost more than 25% of its native bird species since its creation as part of the Panama Canal's construction, and scientists say the losses continue.

The Barro Colorado Island extirpations show how forest fragmentation can reduce biodiversity when patches of remnant habitat lack connectivity, according to a study by researchers at Oregon State University.

Even when large remnants of forest are protected, some species still fail to survive because of subtle environmental changes attributable to fragmentation, and those losses continue over many decades, the scientists say.

The findings, published this week in Scientific Reports, suggest that the island's bird population is "drying out" - i.e., taking on community characteristics associated with less damp parts of central Panama.

Barro Colorado Island, or BCI, is a 6-square-mile former hilltop jutting from manmade Gatun Lake, created when the Chagres River was dammed as part of the canal project more than 100 years ago. The lake is a primary component of the canal, and the island was set aside as a nature reserve in 1923 by the United States, which then controlled the canal zone.

BCI is probably the best-case example for conservation of tropical diversity in fragmented landscapes, according to Doug Robinson, the Mace Professor of Watchable Wildlife in OSU's College of Agricultural Sciences. BCI's surrounding habitats are stable over long periods of time, it is protected from hunting, and there is no resource extraction such as tree harvesting.

BCI is also home to the Smithsonian Tropical Research Institute, which calls the island the world's most intensively studied tropical forest. Yet BCI is still losing species even after 100 years of protection.

"BCI has been a model system to study species losses such as the ones we report on in this paper," Robinson said. "The island is unique in that it has been visited by ornithologists since the 1920s, so we have a rare record of inventories spanning the last century. No other tropical site has been studied so long."

Forests in the tropics display extreme biological diversity, Robinson notes, adding that the highest bird diversity on the globe is found in tropical America. Tropical landscapes, however, have undergone rapid change through large-scale deforestation, which means habitat loss as well as a lack of connectivity among the forest patches that remain.

"Species disappear from these isolated habitat remnants, but the losses are not immediate nor are the causes of the losses obvious," said Robinson, who has been doing research on the island for 27 years. "Some of the species disappearances take more than 100 years to materialize and our understanding of the erosion of diversity is in many locations hampered by a lack of tropical studies that span more than a few decades."

Collaborator Jenna Curtis is a staff member at Cornell University who earned a Ph.D. at Oregon State while working with Robinson. She used the century-long record of bird inventories from BCI and parts of the data Robinson and collaborators have been collecting since 1994 to ask questions about what factors might explain species losses in forest fragments.

In the case of BCI, 27% of the 228 bird species initially found on the island (roughly 62 species) have disappeared, though all of them still occur in the larger forest parcels that ring Gatun Lake. Thirty-seven of the lost species were forest-associated and disappeared despite increasing forest cover on the island.
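
As a quick back-of-the-envelope reading of those numbers (only the 228, 27% and 37 figures come from the study; the derived totals below are approximate arithmetic, not reported values):

```python
# Rough arithmetic on the reported BCI species losses.
initial_species = 228                        # species initially recorded on BCI
total_lost = round(initial_species * 0.27)   # 27% extirpated: about 62 species

forest_associated_lost = 37                  # forest-associated species lost
other_lost = total_lost - forest_associated_lost
# The remaining ~25 losses presumably fall among the aquatic species,
# vagrants, migrants and aerial foragers excluded from the core analysis
# (see the next paragraph).
print(total_lost, other_lost)                # 62 25
```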

To home in on the losses among forest-associated resident birds, the study excluded aquatic species, vagrants and non-breeding migrants, as well as birds that forage in the air, like vultures, swifts, swallows and nighthawks, whose daily ranges extend well beyond BCI.

"We examined some previously recognized factors that could help explain the risk of extinction, such as initial population size in the remnant patch and traits like preference for certain foods and foraging locations and nest height and type," Curtis said. "The novel result we identified was that species living in the wettest forests of central Panama were most likely to have disappeared from the island - now the bird community on BCI looks more like a community that you'd find in the less rainy locations in central Panama. Changes in the bird community therefore reflect the idea that the BCI bird community is drying out, even though rainfall itself hasn't changed."

The group of scientists, which also included Ghislain Rompré and Randall Moore of OSU's Department of Fisheries, Wildlife, and Conservation Sciences and Bruce McCune of the Department of Botany and Plant Pathology, could detect that trend thanks to exhaustive surveys, led by Robinson, of birds across central Panama, a region spanning a wide range of annual rainfall.

"The north, the Caribbean coast, is very wet with more than 3 meters of rain per year," he said. "The Pacific coast on the south is relatively dry. The bird communities in the 24 regions we've surveyed are quite different, showing that we could use the rainfall gradient as a way to index habitat preferences of the birds. The birds that live in the wet forests near the north coast were much more likely to have disappeared from BCI than species that can tolerate dry forests."

When habitat remnants are connected with other patches of undisturbed forest instead of being isolated like BCI, birds can move away when conditions are too dry and return when they are more favorable, Curtis said.

"It's very easy to underappreciate the effects forest fragmentation has on tropical diversity," she said. "Even when we have large remnants that are protected from human disturbance, we still lose species because of subtle environmental changes, and those losses continue to occur over a very long time. Thus connecting forest remnants will be very important for the long-term preservation of tropical bird diversity."

Credit: 
Oregon State University

CT promising for sublobar resection in early-stage non-small cell lung cancer

image: (A) 70-year-old woman with pulmonary adenocarcinoma who underwent sublobar resection without evidence for pLVI. 15-mm solid nodule with irregular margins present in right lower lobe (arrow). No tumor recurrence on 37-month follow-up. (B) 75-year-old man with pulmonary adenocarcinoma who underwent wedge resection that exhibited pLVI. 19-mm solid nodule with irregular margins and peritumoral interstitial thickening (arrowheads) present in right upper lobe. Ipsilateral mediastinal and hilar lymph node metastasis occurred after 5-month follow-up (not shown).

Image: 
American Roentgen Ray Society (ARRS), American Journal of Roentgenology (AJR)

Leesburg, VA, May 13, 2021--According to an open-access Editor's Choice article in ARRS' American Journal of Roentgenology (AJR), CT features may help identify which patients with stage IA non-small cell lung cancer are optimal candidates for sublobar resection, rather than more extensive surgery.

This retrospective study included 904 patients (453 men, 451 women; mean age, 62 years) who underwent lobectomy (n=574) or sublobar resection (n=330) for stage IA non-small cell lung cancer. Two thoracic radiologists independently evaluated findings on preoperative chest CT, later resolving any discrepancies. Recurrences were identified via medical record review.

"In patients with stage IA non-small cell lung cancer, pathologic lymphovascular invasion was observed only in solid-dominant part solid nodules and solid nodules with solid portion diameter over 10 mm," concluded corresponding author Mi Young Kim from the department of radiology at the University of Ulsan College of Medicine, Asan Medical Center.

"Among such nodules," the authors of this AJR article continued, "peritumoral interstitial thickening (odds ratio=13.22) and pleural contact (odds ratio=2.45) were independently associated with pathologic lymphovascular invasion." Moreover, models incorporating these features independently predicted recurrence-free survival after sublobar resection (hazard ratio=5.37-6.05).

Credit: 
American Roentgen Ray Society