Earth

Researchers uncover hidden hunting tactics of wolves in Minnesota's Northwoods

image: In a new paper published in the journal Behavioral Ecology, researchers from the University of Minnesota and the Voyageurs Wolf Project show that wolves have evolved ambush hunting tactics specifically tailored for catching and killing beavers.

Image: 
Voyageurs Wolf Project

Wolves are arguably the most well-studied large predators in the world, yet new research shows there is still a lot to learn about their hunting tactics. Typically, wolves hunt large mammals like moose, deer, and bison in packs by outrunning, outlasting, and exhausting their prey. In the dense boreal forests of North America and Eurasia, however, wolves often hunt beavers by themselves during the summer.

But how does a wolf catch semi-aquatic prey that spends little time on land and never ventures far from the safety of its pond? It turns out the answer is patience, and a lot of waiting.

In a new paper published in the journal Behavioral Ecology, researchers from the University of Minnesota and the Voyageurs Wolf Project--which studies wolves in the Greater Voyageurs Ecosystem in the northwoods of Minnesota--show that wolves have evolved ambush hunting tactics specifically tailored for catching and killing beavers.

"Over a five-year period, we estimate that our field research team collectively put in over 15,000 person-hours to search nearly 12,000 locations where wolves had spent time. Through this effort, we ended up documenting 748 locations where wolves waited to ambush beavers but were unsuccessful, and 214 instances where wolves killed beavers," said Sean Johnson-Bice, a co-author of the study. "This dedicated fieldwork has provided unprecedented insight into the hunting tactics wolves use in boreal environments."

Beavers have extremely poor eyesight, so they rely primarily on their well-developed sense of smell to detect predators on land--and wolves appear to have learned this through time. Researchers discovered wolves choose ambushing sites within a few feet of where beavers are active on land because wolves have learned beavers cannot visually detect them. When doing so, wolves almost always choose ambushing sites that are downwind to avoid being smelled by beavers.

"The results are very clear," said Tom Gable, the study's lead-author: "89-94% of the ambushing sites were downwind, where beavers were likely unable to smell wolves."

When staking out beavers, wolves appear to be surprisingly patient. They spend substantial periods of time waiting next to areas where beavers are active on land, such as near beaver dams and trails.

"Wolves waited an average of four hours during each stakeout. But they often waited eight-12 hours or more, and one wolf even waited-in-ambush for 30 hours," said study co-author Austin Homkes.

The researchers note that these behaviors were not unique to a few wolves. Instead, wolves from multiple packs across several years used the same ambushing tactics, indicating that this behavior is widespread throughout the Greater Voyageurs Ecosystem and likely other ecosystems where wolves hunt beavers. Notably, wolves and beavers co-occur across much of the Northern Hemisphere, so the implications have wide applicability.

"Gathering data to demonstrate how wind direction influences the ambushing behavior of predators has been difficult for animal ecologists," said co-author Joseph Bump. "Scientists have long-thought that ambush predators are able to strategically choose ambush sites in areas where prey are unable to detect them via scent. Until now though, documenting these hunting tactics in exhaustive detail proved extremely challenging."

Ultimately, the study challenges the classic concept that wolves are solely cursorial predators (i.e., predators that kill their prey by outrunning and outlasting them). Instead, wolves' hunting strategies appear highly flexible, and they are able to switch between hunting modes (cursorial and ambush hunting) depending on their prey.

"It is the first systematic analysis of wolf ambushing behavior," said Gable. "It overturns the traditional notion that wolves rely solely on hunting strategies that involve pursuing, testing, and running down prey."

Credit: 
University of Minnesota

Low carbon transport at sea: Ferries voyage optimization in the Adriatic

image: Graphical abstract

Image: 
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Energy efficiency or carbon intensity (defined as CO2 emissions per transport work, ed.) is a possible point of convergence between the International Maritime Organization (IMO) and EU regulations to cut GHG emissions and decarbonize shipping. Short-term measures to increase energy efficiency and achieve carbon intensity savings include voyage optimization.

A new study led by the CMCC Foundation, carried out in the framework of the Interreg Italy-Croatia GUTTA project and recently published in the Journal of Marine Science and Engineering, explores the potential for carbon intensity reduction through voyage optimization in short sea shipping.

To this end, the VISIR (discoVerIng Safe and effIcient Routes) ship routing model was upgraded to a "VISIR-2" version capable of computing least-CO2 routes for a ferry in the presence of waves and currents.

"VISIR can compute optimal routes by suggesting a spatial diversion which leads to avoidance of rough sea and related ship speed loss", explains Gianandrea Mannarini, senior scientist at the CMCC Foundation and Lead author of the study. "Besides least-distance and least-time routes, we added in the latest VISIR version a capacity to compute routes of least-CO2 emissions. Moreover, a more accurate vessel model was introduced in VISIR-2 making use of a coupled bridge-engine room simulator hosted by the GUTTA project partner University of Zadar, from which the performance and emissions of a ferry were estimated at various sea conditions.

The Adriatic Sea, which is routinely crossed by several ferry lanes joining ports in Italy with ports in Croatia, Montenegro, and Albania, was an interesting candidate domain for testing the role of route optimization in short sea shipping. It is relatively small and its seas are not too rough; therefore, if optimization works there, it should scale to larger and stormier basins of the world ocean.

On top of that, ferries are quite relevant for emissions: they account for about 10% of the CO2 emissions in the EEA, despite representing just 3% of the fleet required to report them. At the international level, there is currently intense regulatory activity to curb ship emissions, and operational measures such as voyage optimization are being considered in the short term.

Therefore, CMCC researchers set out to assess whether path optimization can play a role even for ferries in the Adriatic, what CO2 savings are potentially attainable, and how much ferries' carbon intensity can be decreased. To highlight the role of waves and sea currents in the optimization, CMEMS (Copernicus Marine Environment Monitoring Service) ocean analysis products for waves and sea currents were used.

"Our results support the thesis that voyage optimization could be a viable operational measure for short-sea shipping to meet short-term targets for both absolute emission and carbon intensity reduction" comments Mannarini. "For a case study, we found out carbon intensity savings up to 11%, and this is an encouraging outcome towards both IMO and EU curbing targets. We now aim to produce more statistically significant estimations through a web tool we are going to develop in the frame of the Italy-Croatia Interreg project GUTTA."

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

A study presents an algorithm that automates electrocardiogram recordings

An electrocardiogram (ECG) is an examination that records the electrical activity of the heart during the cardiac cycle. It is non-invasive and usually involves placing electrodes on the subject's skin. It is one of the most commonly indicated examinations when heart disease is suspected, and it is also used in routine preventive health check-ups.

The cardiac cycle entails the emptying of blood from the atria to the ventricles ("P" wave, red in the image), the contraction of the ventricles to propel blood to the different tissues and organs of the body ("QRS" wave, green in the image), and the relaxation of the ventricles to get ready for the next heartbeat ("T" wave, magenta in the image).

Delineating an electrocardiogram entails performing separation/quantification of the different phases involved in the cardiac cycles

The electrocardiographic signal is, however, underutilized from a technological point of view. "While many cardiologists and specialized health personnel have experience in interpreting an electrocardiogram and attempting a diagnosis, much of this process is not automated", explains Guillermo Jiménez-Pérez, first author of a paper published in Scientific Reports that presents an algorithm to delineate an electrocardiogram, that is, an analytical method to separate and quantify the different phases involved in the cardiac cycle.
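
The published method is based on deep neural networks and is not reproduced here. Purely to illustrate what "delineating" a signal means in practice, the sketch below uses classical peak detection on a synthetic trace to locate the R peaks of the QRS complexes and derive beat-to-beat intervals; the sampling rate, signal shape and thresholds are all assumptions, not parameters of the authors' algorithm:

```python
# Minimal sketch of ECG "delineation" using classical peak detection, not the
# deep-learning method described in the paper: locate R peaks (QRS) in a
# synthetic signal and report the beat-to-beat (RR) intervals.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of signal

# Build a crude synthetic ECG: one narrow spike (QRS) per beat at ~72 bpm,
# plus baseline wander and noise.
beat_times = np.arange(0.4, 10, 60 / 72)
ecg = sum(np.exp(-((t - bt) / 0.012) ** 2) for bt in beat_times)
ecg += 0.1 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * np.random.randn(t.size)

# Delineate the QRS complexes: peaks must stand out and be >0.3 s apart.
r_idx, _ = find_peaks(ecg, height=0.5, distance=int(0.3 * fs))
rr = np.diff(t[r_idx])                     # RR intervals in seconds

print(f"detected {r_idx.size} beats")
print(f"mean heart rate ~ {60 / rr.mean():.0f} bpm")
```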

The measurements of the different phases of the cardiac cycle must be made robustly, that is, they must be applicable to the broad variability of possible cases. This is due, first, to the many possible morphologies that may arise from patient to patient (the picture depicts a representation of different pathological and normal heartbeats, which may vary greatly between individuals), and, second, to the need to promote their use for subsequent comparative analyses. "That is, being able to obtain the morphology of the different QRS complexes of a particular patient may help identify which aspects of the QRS are abnormal in relation to the general population - or even with regard to the same patient a few years earlier", Jiménez-Pérez explains.

This work was published in Scientific Reports on 13 January, and was conducted by Guillermo Jiménez-Pérez and Oscar Camara, members of the PhySense research group, within the BCN MedTech Research Unit, attached to the UPF Department of Information and Communication Technologies (DTIC), together with Alejandro Alcaide of San Jorge University in Zaragoza.

An algorithm for automating electrocardiograms
For the study authors, the fact that the electrocardiography technique is not yet automated is surprising, considering that at present, many of the major trends related to technology are based on automation through the development of artificial intelligence algorithms.

It is in this context that the work published in Scientific Reports is framed. By using cutting-edge artificial intelligence techniques, an algorithm has been developed for the delineation of the electrocardiogram, improving performance over other algorithms available in the literature.

This kind of technology should be used with caution and, especially, with an eye to the future

These algorithms are obtained by capturing and processing massive amounts of data and, once developed, they are able to process new data very quickly and robustly. This artificial intelligence paradigm has many positive externalities, such as processing capacity and the use of increasingly large amounts of data which, in turn, are useful for the annotation and processing of new data, thus creating positive feedback loops that eventually benefit the user.

"This kind of technologies should be used with caution and, especially, looking to the future", the study authors assert. It is of paramount importance to use artificial intelligence to process data so that the results obtained can be immediately interpreted; i.e., artificial intelligence aimed at obtaining measurements (quantification) of the objective reality that surrounds it.

"These measurements can be used in subsequent processes to perform more complex tasks such as, returning the original issue, simplifying the physician's workload or the accurate diagnosis of heart problems", Jimenez-Perez explains. In short, they are reliable technologies for performing many types of subsequent analyses that can be directly interpreted by health personnel, fulfilling all the requirements of ethics and confidentiality required in the field of biomedicine.

Credit: 
Universitat Pompeu Fabra - Barcelona

RUDN University ecologist suggested a way to reduce greenhouse gas emissions in animal farming

image: An ecologist from RUDN University suggested a method to evaluate and reduce the effect of animal farms on climate change and developed a set of measures for small farms that provides for the complete elimination of greenhouse gas emissions.

Image: 
RUDN University

An ecologist from RUDN University suggested a method to evaluate and reduce the effect of animal farms on climate change and developed a set of measures for small farms that provides for the complete elimination of greenhouse gas emissions. The results of the study were published in the Journal of Cleaner Production.

Crop and animal farming and other agricultural activities account for almost a quarter of all greenhouse gases produced by humanity and therefore contribute substantially to climate change. On the other hand, soils and biomass accumulate large amounts of carbon, preventing it from entering the atmosphere as carbon dioxide and slowing climate change down. An ecologist from RUDN University suggested a comprehensive approach to finding a balance between these two processes and reducing greenhouse gas emissions.

In his study, he focused on livestock farms in the agricultural regions of Italy. The suggested approach consists of two steps. The first combines different methods of carbon footprint evaluation at all stages of animal farming. Many factors are taken into consideration, including the production of methane in the digestive tracts of animals, nitrous oxide emissions from manure processing, the effect of fertilizers and farm machinery, and the extermination of pests. In step two, measures aimed at reducing greenhouse gas emissions are analyzed. For example, a diet rich in fats reduces the production of methane by cattle, and maintaining a natural grass cover around farms helps retain more carbon in the biomass. Another measure is using renewable energy sources instead of fossil fuels.
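
The press release does not reproduce the accounting formulas. As a hedged illustration of what a step-one carbon footprint of this kind involves, the sketch below converts a few hypothetical emission sources into CO2 equivalent using commonly cited GWP100 factors (roughly 28 for methane and 265 for nitrous oxide under IPCC AR5) and subtracts the sequestration credited to mitigation measures; every farm quantity in it is an invented placeholder, not data from the study:

```python
# Illustrative-only farm greenhouse-gas balance in CO2 equivalent.
# GWP100 factors roughly follow IPCC AR5 (CH4 ~28, N2O ~265); every farm
# quantity below is a made-up placeholder, not data from the study.
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

emissions_t = {                 # tonnes of gas per year (hypothetical)
    "enteric_CH4": ("CH4", 4.0),        # methane from digestion
    "manure_N2O":  ("N2O", 0.15),       # nitrous oxide from manure handling
    "energy_CO2":  ("CO2", 30.0),       # fuel and electricity for the stalls
}
sequestration_tCO2e = {         # tonnes CO2e removed per year (hypothetical)
    "minimum_tillage": 60.0,
    "grass_cover":     25.0,
}

gross = sum(GWP[gas] * amount for gas, amount in emissions_t.values())
removed = sum(sequestration_tCO2e.values())
net = gross - removed

print(f"gross emissions : {gross:7.1f} t CO2e/yr")
print(f"sequestration   : {removed:7.1f} t CO2e/yr")
print(f"net balance     : {net:7.1f} t CO2e/yr "
      f"({'carbon negative' if net < 0 else 'net emitter'})")
```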

The team tested their approach on a 13 ha area in the Lazio region of central Italy. The area has two livestock farms, with 25% of their territories covered in crop fields and orchards and over 40% in pastures and feed crops. The team established that both farms produced 3.9 megatons of CO2 equivalent per year. The majority of emissions came from the cattle, as well as from sprinkler systems, crops, and the energy supply of the stalls.

Having analyzed all possible measures, the team concluded that the annual volume of emissions of CO2 equivalent could be reduced by 3.9-5 megatons. Therefore, the farms could not only eliminate any greenhouse gas emissions but also consume additional carbon from the atmosphere. The so-called minimum tillage could lead to the biggest effect--1.3 to 2 megatons of consumed CO2 equivalent a year. This method calls for no cultivation of the soil between seeding and harvest and for leaving the remains of harvested crops in the fields to form humus. This way more carbon is accumulated in the soils.

"We have confirmed that some animal farms can become carbon neutral and eliminate their greenhouse gas emissions. However, to achieve this, they have to implement all the recommended measures at the same time, and such a considerable change could be associated with many difficulties. Therefore, although our method is applicable on a smaller scale and aimed at individual farms, it requires support from governmental and economic institutions," said Riccardo Valentini, a Ph.D. in Biology, the head of the Science and Research Laboratory "Smart Technologies for Sustainable Development of the Urban Environment in the Global Change" at RUDN University, and a Nobel Prize winner as a member of the Intergovernmental Panel on Climate Change (IPCC).

Credit: 
RUDN University

Baby vampire bat adopted by mom's best friend

video: Lilith begs for or receives regurgitated food from BD

Image: 
Gerry Carter lab surveillance camera video

During a study of captive vampire bats at the Smithsonian Tropical Research Institute (STRI) in Panama, a young vampire bat pup was adopted by an unrelated female after its mother died. Although this observation was not the first report of adoption in vampire bats, it is uniquely contextualized by more than 100 days of surveillance-camera footage. This footage, captured by STRI research associate Gerry Carter's lab at Ohio State University, reveals intimate details about the changing social relationships between the mother, the pup and the adoptive mother throughout their time in captivity.

"The adoption took place after a very sad but ultimately serendipitous occurrence," said Imran Razik, then short-term fellow at STRI and doctoral student at the Ohio State University. "We realized after the mother died and the other female stepped in to adopt the baby, that we had recorded the entire social history of these two adult female bats who met for the first time in captivity. The strong relationship they formed based on grooming and sharing food with each other may have motivated this adoption."

To learn more about how vampire bats form social bonds, researchers in Carter's lab captured vampire bats from three sites across Panama. These sites were all very distant from one another, such that bats from different sites were unrelated and had never met before. Their new home, a cage shrouded in black mesh fabric, was fitted with three infrared surveillance cameras that each recorded about six hours of footage every day for four months.

Based on the footage, bats that were initially strangers began to form new social bonds, which are best defined by grooming and food-sharing interactions. Grooming other individuals is somewhat common, whereas food sharing is less common, especially among strangers.

Vampire bats must eat often to survive--typically every night. If a bat is unable to find a blood meal, it may receive regurgitated blood from a close social partner.

"To some extent, we were trying to see if we could influence partner choice between bats by manipulating if and when they could share food," Razik said. "We wanted to see how these grooming and food-sharing relationships were forming, so we kept track of all grooming and food-sharing interactions on the video recordings."

When the mother bat, Lilith, unexpectedly died and her 19-day-old pup was adopted by another female, BD, the research team continued their observations.

"Shortly before Lilith died, I noticed that the pup would occasionally climb onto BD, and I suppose this may have initiated a cascade of neuroendocrine mechanisms that caused BD to start lactating," Razik said. BD was not pregnant and did not have a pup of her own, but Razik found that she was lactating on the day Lilith died. After Lilith's death, in addition to nursing, BD appeared to groom and share food with the pup more than any other female in the colony.

A German researcher in the 1970s observed vampire bat adoptions several times in his captive colony, so this finding was not new. However, before leaving Panama, Razik gave a tour of the vampire bat project to one of STRI's emerita senior scientists, Mary Jane West-Eberhard, and she mentioned that it would be interesting to follow up on the relationships between the mother, the pup and the adopter. Carter and Rachel Page, STRI staff scientist and head of the Batlab in Gamboa, Panama, agreed that it was worth taking a closer look at the relationships between these bats.

When Razik reviewed the videos after the experiment was finished, not only did it turn out that BD and Lilith had been primary grooming partners, but BD was also Lilith's top food donor. However, Lilith did not appear to share food with BD. Moreover, the data confirmed Razik's initial impression--BD helped the pup at rates much higher than any other female.

"Another intriguing discovery was that BD and another bat, called BSCS, both of which had been in captivity before, were the two bats who groomed the pup the most," Razik said. "Now we're wondering if somehow the experience of being in captivity motivates individuals to invest in other bats at higher rates or adopt orphaned pups in critical need."

"Compared to other bats, vampires make extraordinary investments in their offspring," Page said. "And we still don't know if, or how often, adoption may happen in the wild. But this was an amazing chance to better understand what kind of relationships might result in an adoption."

"Studying adoption might give us insight into what immediate factors in the brain or environment affect parental-care decisions," Carter said. "As a new parent myself, I have come to realize the utter power of baby cuteness! I feel that my brain has been completely rewired. Most of us can understand the strong desire to adopt and care for a cute puppy or kitten, or to take on the ultimate responsibility of adopting a child. Regardless of why these traits exist, it is inherently fascinating to consider the neuroendocrine mechanisms that underlie them, the stimuli that trigger them, how they differ across species or individuals and how these traits might even be pre-adaptations for other forms of cooperation."

Credit: 
Smithsonian Tropical Research Institute

The invisible killer lurking in our consumer products

image: Our consumer products can be filled with nanomaterials, but they do not show up in lists of ingredients.

Image: 
MostPhotos/Tatiana Mihailova

This release has been updated upon request by the submitting institution.

CORRECTION NOTICE, 11 February 2021

The phrase “The use of nanomaterials remains unregulated and they do not show up in lists of ingredients” has been corrected to “The use of nanomaterials is challenging to regulate because of their small size.” The correction pertains to the lead paragraph and to the second-to-last comment by Dr Fazel Monikh. In addition, the lead paragraph's analogy between the long-term impacts of nanomaterials and COVID-19 has been removed, and the specification “...but we do not know if they pass the brain barrier” has been added to Dr Fazel Monikh’s comment relating to the accumulation of nanomaterials in the brain of the organisms studied. Also added are the specifications “size and shape must be also measured” and “using standard methods” in sections that deal with how nanomaterials are measured.

The corrected press release reads as follows:

Our consumer products, such as food, cosmetics and clothes, might be filled with nanomaterials – unbeknownst to us. The use of nanomaterials remains challenging to regulate because of their small size. This is a cause of concern since nanomaterials are tricky to measure, they enter our food chain and, most alarmingly, they can penetrate cells and accumulate in our organs.

Nanotechnology is appearing everywhere and changing our daily lives. Thanks to applications of nanotechnology, we can treat many diseases so efficiently that they will soon be a thing of the past. We also have materials that are 100 times stronger than steel, batteries that last 10 times longer than before, solar panels that yield twice as much energy as old ones, skin care products that keep us looking young, not to mention self-cleaning cars, windows and clothes. These used to be the stuff of science fiction and Hollywood movies, but they are now the reality we live in.

Nanotechnology has the potential to become the next industrial revolution. The global market for nanomaterials is growing, estimated at 11 million tonnes at a market value of 20 billion euros. The current direct employment in the nanomaterial sector is estimated between 300,000 and 400,000 in Europe alone.

Yet, nanomaterials and their use in consumer products is far from unproblematic. A new study published in Nature Communications today sheds light on whether they are harmful and what happens to them when they enter an organism. An international team of researchers developed a sensitive method to find and trace nanomaterials in blood and tissues, and traced nanomaterials across an aquatic food chain, from microorganisms to fish, which is a major source of food in many countries. This method can open new horizons for taking safety actions.

“We found that nanomaterials bind strongly to microorganisms, which are a source of food for other organisms, and this is the way they can enter our food chain. Once inside an organism, nanomaterials can change their shape and size and turn into a more dangerous material that can easily penetrate cells and spread to other organs. When looking at different organs of an organism, we found that nanomaterials tend to accumulate especially in the brain, but we don’t know if they pass the brain barrier,” lead author Dr Fazel A. Monikh from the University of Eastern Finland says.

According to the researchers, nanomaterials are also difficult to measure: their amount in an organism cannot be determined from their mass alone, which is the standard way of measuring other chemicals for regulatory purposes; their size and shape must also be measured. The findings emphasise the importance of assessing the risk of nanomaterials before they are introduced to consumer products in large amounts. A better understanding of nanomaterials and their risks can help policy makers to introduce stricter rules on their use, and on the way they are mentioned in products' lists of ingredients.
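
To see why mass alone is a poor yardstick, a back-of-the-envelope calculation (not taken from the paper) shows how the same mass of material corresponds to vastly different particle numbers and surface areas depending on particle size; the spherical shape and the density are assumptions for a generic metal oxide:

```python
# Back-of-the-envelope: the same mass of nanomaterial means very different
# particle numbers and surface areas depending on particle size.
# Assumes spherical particles of a generic metal oxide (density ~4 g/cm^3).
import math

mass_g = 1e-6            # 1 microgram of material (hypothetical)
density_g_cm3 = 4.0

for diameter_nm in (10, 50, 200):
    r_cm = diameter_nm * 1e-7 / 2                       # nm -> cm, radius
    particle_mass = density_g_cm3 * (4 / 3) * math.pi * r_cm ** 3
    n = mass_g / particle_mass
    area_cm2 = n * 4 * math.pi * r_cm ** 2
    print(f"{diameter_nm:4d} nm: {n:10.3e} particles, "
          f"total surface ~ {area_cm2:8.3f} cm^2")
```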

“It could be that you are already using nanomaterials in your food, clothes, cosmetic products, etc., but you still don’t necessarily see any mention of them in the ingredient list. Why? Because they are challenging to regulate due to their small size, and because we simply can’t measure them with standard methods once they’re in your products,” Dr Fazel A. Monikh says.

“People have the right to know what they are using and buying for their families. This is a global problem which needs a global solution. Many questions about nanomaterials still need to be answered. Are they safe for us and the environment? Where will they end up after we’re done using them? How can we assess their possible risk?” Dr Fazel A. Monikh concludes.

Credit: 
University of Eastern Finland

Traumatic stress in childhood can lead to brain changes in adulthood: study

A new study from University of Alberta researchers has shown that traumatic or stressful events in childhood may lead to tiny changes in key brain structures that can now be identified decades later.

The study is the first to show that trauma or maltreatment during a child's early years--a well-known risk factor for developing mental health conditions such as major depressive disorder in adulthood--triggers changes in specific subregions of the amygdala and the hippocampus.

Once these changes occur, researchers believe the affected regions of the brain may not function as well, potentially increasing the risk of developing mental health disorders as adults during times of stress.

"Now that we can actually identify which specific sub-regions of the amygdala or the hippocampus are permanently altered by incidents of childhood abuse, trauma or mistreatment, we can start to focus on how to mitigate or even potentially reverse these changes," said Peter Silverstone, interim chair of the Department of Psychiatry and one of a team of eight U of A researchers who conducted the study.

A total of 35 participants with major depressive disorder were recruited for the study, including 12 males and 23 premenopausal females aged 18 to 49 years. Researchers also recruited 35 healthy control subjects, including 12 males and 23 females who were matched by age, sex and education.

"This may help shed some light on how promising new treatments such as psychedelics work, since there is mounting evidence to suggest they may increase nerve regrowth in these areas. Understanding the specific structural and neurochemical brain changes that underlie mental health disorders is a crucial step toward developing potential new treatments for these conditions, which have only increased since the onset of the COVID-19 pandemic," said Silverstone, who is also a member of the U of A's Neuroscience and Mental Health Institute.

The study noted that previously "most of the work on the effect of stress on the amygdala and hippocampal substructures has been conducted in animals," and direct testing of preclinical stress models in humans has been impossible to date. However, "recent advances in high-resolution MRI (magnetic resonance imaging) of the hippocampal subfields and amygdala subnuclei have allowed researchers to test these models in vivo in humans for the first time."

Once these biological changes occur in stress-related brain structures, researchers say the affected regions of the brain may become "maladaptive" or dysfunctional when people deal with adult stresses, making them "more vulnerable" to developing depression or other psychiatric disorders as adults.

The amygdala and hippocampus are regarded as targets of childhood adversity "because they exhibit protracted postnatal development, a high density of glucocorticoid receptors and postnatal neurogenesis," the study notes. "(This) study confirmed the negative effects of childhood adversity on the right amygdala and suggested that these effects might also affect the basolateral amygdala."

Credit: 
University of Alberta Faculty of Medicine & Dentistry

Higher blood pressure at night than in daytime may increase Alzheimer's disease risk

Higher blood pressure at night than in daytime may be a risk factor for Alzheimer's disease in older men. This is suggested by a new study from researchers at Uppsala University, now published in the journal Hypertension.

'Dementia' is an umbrella term used to describe a category of symptoms marked by behavioural changes and gradually declining cognitive and social abilities. Numerous factors, including hypertension (high blood pressure), affect the risk of developing these symptoms.

Under healthy conditions, blood pressure (BP) varies over 24 hours, reaching its lowest values at night. Doctors call this nocturnal fall in blood pressure 'dipping'. In some people, however, this pattern is reversed: their nocturnal BP is higher than their daytime BP. This blood pressure profile is known as 'reverse dipping'.
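
Dipping status is normally summarised from 24-hour ambulatory readings as the percentage fall of nighttime relative to daytime blood pressure. The sketch below classifies a few hypothetical patients using conventional clinical cutoffs (a fall of at least 10% counts as normal dipping, a nocturnal rise as reverse dipping); the thresholds and readings are illustrative, not taken from the study:

```python
# Classify nocturnal blood-pressure dipping from mean daytime and nighttime
# systolic readings (mmHg). Cutoffs follow common clinical convention
# (>=10% fall = dipper, 0-10% = non-dipper, rise at night = reverse dipper);
# the example values are hypothetical.
def dipping_status(day_sbp: float, night_sbp: float) -> str:
    fall_pct = 100.0 * (day_sbp - night_sbp) / day_sbp
    if fall_pct >= 10:
        return f"dipper ({fall_pct:.1f}% nocturnal fall)"
    if fall_pct >= 0:
        return f"non-dipper ({fall_pct:.1f}% nocturnal fall)"
    return f"reverse dipper (BP {-fall_pct:.1f}% higher at night)"

for day, night in [(135, 118), (130, 125), (128, 136)]:
    print(f"day {day} / night {night} mmHg -> {dipping_status(day, night)}")
```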

"The night is a critical period for brain health. For example, in animals, it has previously been shown that the brain clears out waste products during sleep, and that this clearance is compromised by abnormal blood pressure patterns. Since the night also represents a critical time window for human brain health, we examined whether too high blood pressure at night, as seen in reverse dipping, is associated with a higher dementia risk in older men," says Christian Benedict, Associate Professor at Uppsala University's Department of Neuroscience, and senior author of the study.

To test this hypothesis, the researchers used observational data from one thousand Swedish older men, who were followed for a maximum of 24 years. The included men were in their early seventies at the beginning of the study.

"The risk of getting a dementia diagnosis was 1.64 times higher among men with reverse dipping compared to those with normal dipping. Reverse dipping mainly increased the risk of Alzheimer's disease, the most common form of dementia," says Xiao Tan, postdoctoral fellow from the same department and first author of this research.

"Our cohort consisted only of older men. Thus, our results need to be replicated in older women," concludes Benedict.

According to the researchers, an interesting next step would be to investigate whether the intake of antihypertensive (BP-lowering) drugs at night can reduce older men's risk of developing Alzheimer's disease.

Credit: 
Uppsala University

Study links exposure to nighttime artificial lights with elevated thyroid cancer risk

People living in regions with high levels of outdoor artificial light at night may face a higher risk of developing thyroid cancer. The finding comes from a study published early online in CANCER, a peer-reviewed journal of the American Cancer Society.

Over the past century, nightscapes--especially in cities--have drastically changed due to the rapid growth of electric lighting. Also, epidemiological studies have reported an association between higher satellite-measured levels of nighttime light and elevated breast cancer risk. Because some breast cancers may share a common hormone-dependent basis with thyroid cancer, a team led by Qian Xiao, PhD, of The University of Texas Health Science Center at Houston School of Public Health, looked for an association between light at night and later development of thyroid cancer among participants in the NIH-AARP Diet and Health Study, which recruited American adults aged 50 to 71 years in 1995-1996. The investigators analyzed satellite imagery data to estimate levels of light at night at participants' residential addresses, and they examined state cancer registry databases to identify thyroid cancer diagnoses through 2011.

Among 464,371 participants who were followed for an average of 12.8 years, 856 cases of thyroid cancer were diagnosed (384 in men and 472 in women). When compared with the lowest quintile of light at night, the highest quintile was associated with a 55 percent higher risk of developing thyroid cancer. The association was primarily driven by the most common form of thyroid cancer, called papillary thyroid cancer, and it was stronger in women than in men. In women, the association was stronger for localized cancer with no sign of spread to other parts of the body, while in men the association was stronger for more advanced stages of cancer. The association appeared to be similar for different tumor sizes and across participants with different sociodemographic characteristics and body mass index.
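
The study's own analysis adjusted for many factors and is not reproduced here. As a rough, simulated illustration of the quintile comparison described above, the sketch below splits an invented light-at-night exposure into quintiles and compares crude outcome rates in the top and bottom groups; all values are synthetic placeholders:

```python
# Rough illustration (not the study's analysis): split a simulated light-at-night
# exposure into quintiles and compare crude incidence in the top vs bottom group.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 50_000
df = pd.DataFrame({
    "light_at_night": rng.lognormal(mean=0.0, sigma=1.0, size=n),  # arbitrary units
})
# Simulate an outcome whose probability rises modestly with exposure.
p = 0.01 * (1 + 0.5 * (df["light_at_night"] > df["light_at_night"].median()))
df["thyroid_cancer"] = rng.random(n) < p

df["quintile"] = pd.qcut(df["light_at_night"], 5, labels=[1, 2, 3, 4, 5])
rates = df.groupby("quintile", observed=True)["thyroid_cancer"].mean()
print(rates)
print(f"crude risk ratio, Q5 vs Q1: {rates.loc[5] / rates.loc[1]:.2f}")
```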

The researchers noted that additional epidemiologic studies are needed to confirm their findings. If confirmed, it will be important to understand the mechanisms underlying the relationship between light at night and thyroid cancer. The scientists noted that light at night suppresses melatonin, a modulator of estrogen activity that may have important anti-tumor effects. Also, light at night may lead to disruption of the body's internal clock (or circadian rhythms), which is a risk factor for various types of cancer.

"As an observational study, our study is not designed to establish causality. Therefore, we don't know if higher levels of outdoor light at night lead to an elevated risk for thyroid cancer; however, given the well-established evidence supporting a role of light exposure at night and circadian disruption, we hope our study will motivate researchers to further examine the relationship between light at night and cancer, and other diseases," said Dr. Xiao. "Recently, there have been efforts in some cities to reduce light pollution, and we believe future studies should evaluate if and to what degree such efforts impact human health."

Credit: 
Wiley

Use of goldenseal may compromise glucose control in diabetics on metformin

SPOKANE, Wash. - Diabetic patients taking the natural product goldenseal while taking the prescription drug metformin may be unwittingly sabotaging their efforts to maintain healthy blood glucose levels. This concern arose from a recent study published in the journal Clinical Pharmacology & Therapeutics.

Metformin--the world's most-prescribed oral glucose-lowering medication--was included in a cocktail of selected drugs given to participants in a clinical study led by scientists at Washington State University's College of Pharmacy and Pharmaceutical Sciences. The study sought to determine the impact of goldenseal on specific drug transporters, proteins that facilitate absorption or expulsion of drug molecules in different tissues such as the intestine, liver and kidney.

"After six days of taking goldenseal, participants had about 25 percent less metformin in their bodies, a statistically significant change that could potentially impact glucose control in patients with type 2 diabetes," said the study's first author James Nguyen, a Ph.D. candidate in pharmaceutical sciences and recent Doctor of Pharmacy graduate. He said the finding serves as a caution to health care providers and patients that over-the-counter natural product use can lead to unwanted drug interactions, which may lead to negative health outcomes.

Unstable glucose levels increase patients' risk of serious health complications, such as heart disease, kidney disease and infections. Adding to that concern, Nguyen said there are reports that diabetic patients are increasingly using goldenseal and berberine--a substance found in goldenseal--to self-treat their condition, likely based on claims that berberine helps lower glucose levels.

A perennial herb native to North America, goldenseal is often combined with Echinacea, a top-selling botanical product, in herbal remedies used to self-treat the common cold and other respiratory tract infections. Goldenseal is also commonly used to self-treat digestive issues such as diarrhea and constipation as well as rashes and other skin problems.

Establishing Best Practices

Goldenseal is one of several natural products being studied by the researchers as part of the National Institutes of Health-funded Center of Excellence for Natural Product Drug Interaction Research, a WSU-led, multidisciplinary effort to develop standardized approaches for studying interactions between natural products and pharmaceutical drugs.

Senior author and center principal investigator Mary Paine--a professor in the WSU College of Pharmacy & Pharmaceutical Sciences--noted that while the Food and Drug Administration and other regulatory agencies have well-established guidelines for studying potential interactions between drugs, no such guidelines exist for natural product-drug interactions. This gap exists because, unlike drugs, natural products are not required to be tested for potential drug interaction risks prior to entering the market.

"Our work in this goldenseal study helps lay the foundation for establishing best practices for studying these interactions, with a particular niche in transporter-mediated interactions," Paine said.

Study Tests Model Predictions

One of the overarching goals of this recently published study was to determine whether established FDA basic mathematical models for predicting transporter-mediated drug-drug interactions could be used to successfully predict natural product-drug interactions. To find out, the researchers partnered with a contract research organization to conduct test tube experiments to determine whether a goldenseal extract inhibited any of 15 different transporters. Data from those experiments were then incorporated into the models to predict whether goldenseal interacts with any of the drugs included in a drug cocktail slated to be used in the subsequent clinical study. The cocktail included low doses of three different drugs known to be transported by various transporters: furosemide (a diuretic), rosuvastatin (an anti-cholesterol drug), and metformin. The drug midazolam (a short-acting sedative) was included in the cocktail as a positive control, or a drug known to interact with goldenseal. Goldenseal inhibits the metabolic enzyme that breaks down midazolam, leading to increased midazolam in the body.
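
The release does not reproduce the basic-model equations. As a hedged illustration of how such a screen works, the sketch below compares inhibitor-concentration-to-IC50 ratios against decision cutoffs representative of the kind used in regulatory guidance (for example, roughly 10 for intestinal transporters and 0.1 for systemic ones); the concentrations, IC50 values and flags are invented placeholders, not results from the study:

```python
# Illustrative basic static screen for transporter-mediated interactions:
# flag a transporter when the inhibitor-concentration / IC50 ratio exceeds a
# decision cutoff. Cutoffs are representative of the kind used in regulatory
# guidance; the IC50s and concentrations below are invented placeholders.
I_GUT_uM = 40.0         # hypothetical intestinal concentration of the extract marker
I_SYS_UNBOUND_uM = 0.2  # hypothetical unbound systemic concentration

transporters = {
    # name: (site, hypothetical IC50 in uM)
    "P-gp":    ("intestinal", 15.0),
    "BCRP":    ("intestinal", 120.0),
    "OCT1":    ("systemic",   1.0),
    "OATP1B1": ("systemic",   8.0),
}
CUTOFF = {"intestinal": 10.0, "systemic": 0.1}

for name, (site, ic50) in transporters.items():
    conc = I_GUT_uM if site == "intestinal" else I_SYS_UNBOUND_uM
    ratio = conc / ic50
    flag = "follow up clinically" if ratio >= CUTOFF[site] else "no flag"
    print(f"{name:8s} ({site:10s}) I/IC50 = {ratio:6.2f} -> {flag}")
```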

Finally, they conducted a clinical study with 16 healthy participants to see if their predictions held up. Participants were given just the drug cocktail during the baseline phase. In the goldenseal exposure phase, participants took goldenseal three times daily for five days before being given the drug cocktail and another dose of goldenseal on day six, followed by two more doses later that day. Blood and urine samples were collected at regular intervals after participants took the drug cocktail and analyzed by the researchers to compare how each drug moved through the body with or without exposure to goldenseal.

Based on their model predictions, the researchers expected to find an interaction between goldenseal and rosuvastatin in the clinical study, but it did not materialize. Surprisingly, the clinical data showed that taking goldenseal along with metformin decreased metformin blood concentrations, which the model predictions did not reveal.

These findings will help the researchers refine these models to increase prediction accuracy of future natural product-drug interaction studies. As follow-up to the research, Nguyen plans to conduct studies to determine the mechanism by which goldenseal alters metformin absorption. Based on the data, he said that this appears to happen in the intestine and may be driven by the transporter OCT1. This research could eventually lead to the discovery of other natural product-drug interactions involving goldenseal and drugs transported by OCT1.

Credit: 
Washington State University

UMass Amherst researchers gain insight into the biology of a deadly fungus

image: Lillian Fritz-Laylin is a professor of biology in the College of Natural Sciences at the University of Massachusetts Amherst.

Image: 
UMass Amherst

Researchers at the University of Massachusetts Amherst have gained new insight into the biological processes of a chytrid fungus responsible for a deadly skin infection devastating frog populations worldwide.

Led by cell biologist Lillian Fritz-Laylin, the team describes in a paper published Feb. 8 in Current Biology how the actin networks of Batrachochytrium dendrobatidis (Bd) also serve as an "evolutionary Rosetta Stone," revealing the loss of cytoskeletal complexity in the fungal kingdom.

"Fungi and animals seem so different, but they are actually pretty closely related," says Fritz-Laylin, whose lab studies how cells move, which is a central activity in the progression and prevention of many human diseases. "This project, the work of Sarah Prostak in my lab, shows that during early fungal evolution, fungi probably had cells that looked something like our cells, and which could crawl around like our cells do."

Chytrids including Bd encompass more than 1,000 species of fungi deep on the phylogenetic, or evolutionary, tree. The researchers used chytrids, which share features of animal cells that have been lost in yeast and other fungi, to explore the evolution of actin cytoskeleton, which helps cells keep their shape and organization and carry out movement, division and other crucial functions.

Prostak, a research associate in Fritz-Laylin's lab, is the lead author of the paper, which she initially wrote as her undergraduate honors biology thesis, then expanded and finished after graduation. Other authors are Margaret Titus, professor of genetics, cell biology and development at the University of Minnesota, and Kristyn Robinson, a UMass Amherst Ph.D. candidate in Fritz-Laylin's lab.

"Bd is more closely related to animal cells than more typically studied fungi so it can tell us a lot about the animal lineage and the fungal lineage and can also provide a lot of insight into human actin networks," Prostak says. "We can use it to study animal-like regulation in a similar system rather than actually studying it in animal cells, which is very complicated because animal cells have so many actin regulators."

The research team used a combination of genomics and fluorescence microscopy to show that chytrids' actin cytoskeleton has features of both animal cells and yeast. "How these complex actin regulatory networks evolved and diversified remain key questions in both evolutionary and cell biology," the paper states.

The biologists explored the two developmental stages in Bd's life cycle. In the first stage, Bd zoospores swim with a flagellum and build actin structures similar to those of animal cells, including pseudopods that propel the organisms forward. In the reproductive stage, Bd sporangia assemble actin shells, as well as actin patches, which are similar to those of yeast.

The disease chytridiomycosis, caused by Bd, ravages the skin of frogs, toads and other amphibians, disrupting fluid regulation and eventually leading to heart failure. The disease has been blamed for huge losses of biodiversity, including dozens of presumed population declines and extinctions over the past 50 years, though exactly how many species have been affected has been subject to debate.

The UMass Amherst biologists say the Bd actin structures they observed likely play important roles in causing the disease. "This model suggests that actin networks underlie the motility and rapid growth that are key to the pathology and pathogenicity of Bd," the paper concludes.

Prostak, an animal lover drawn to Fritz-Laylin's lab because of its focus on pathogens, hopes their research advancing the knowledge about Bd will lead to measures that slow the deadly damage of chytridiomycosis.

"Figuring out the basic biology of Bd will hopefully give insight into disease mitigation in the future," Prostak says.

Credit: 
University of Massachusetts Amherst

Potential for misuse of climate data a threat to business and financial markets

The findings, published in the prestigious journal Nature Climate Change, call on businesses, the financial services industry and regulators to work more closely with climate scientists.

Regulators and governments - both domestic and international - are increasingly requiring that businesses assess and disclose their vulnerability to the physical effects of climate change, for example, increased drought, bushfires and sea level rise.

"People are making strategically material decisions on a daily basis, and raising debt or capital to finance these, but the decisions may not have properly considered climate risk," said lead author Dr Tanya Fiedler from the University of Sydney Business School.

"To assess the physical risks of climate change, businesses are referencing climate models, which are publicly available but complex. The problem arises when this information is used for the purpose of assessing financial risk, because the methodologies of those undertaking the risk assessment can be 'black boxed' and in some instances are commercial in confidence. This means the market is unable to form a view."

Co-author on the paper, Professor Andy Pitman from the University of New South Wales, said: "Businesses want to know which of their assets and operations are at risk of flooding, cyclones or wind damage and when, but providing that information using existing global climate models is a struggle. There is, of course, very useful information available from climate models, but using it in assessing business risk requires a bespoke approach and a deep engagement between business and climate modellers."

Professor Pitman, Director of the ARC Centre of Excellence for Climate Extremes, added: "A whole host of issues can trip up the unwary, ranging from the type of model, how it was set up, how changes in greenhouse gases were represented, what time period is being considered and how "independent" of each other the different models truly are."

To address the gap between science and business, a paradigm shift is needed.

Professor Christian Jakob from Monash University, another co-author of the study, said: "Climate modelling needs to be elevated from a largely research-focussed activity to a level akin to that of operational weather forecasting - a level that is of tangible and practical value to business."

Without such an approach, the paper highlights some of the unintended consequences arising from climate information being used inappropriately.

"As with any form of decision-making, businesses could be operating under a false sense of security that arises when non-experts draw conclusions believed to be defensible, when they are not," Dr Fiedler, an expert at the University of Sydney's Discipline of Accounting, said.

"Our study proposes a new approach with deep engagement between governments, business and science to create information that is fit for purpose. Until this happens, your best bet is to go to the source - the climate modellers themselves."

Credit: 
University of Sydney

An interdecadal decrease in extreme heat days in August over Northeast China around the early 1990s

image: The probability distribution function of the daily series of Tmax in August over Northeast China before (black line) and after (red line) the interdecadal change.

Image: 
Ruidan Chen

Against the background of global warming, extreme heat days (EHDs) occur frequently and greatly threaten human health and societal development. Therefore, it is of great importance to understand the variation of EHDs.

Previous studies have indicated that the frequency of EHDs is mainly modulated by the mean state of temperature, and thus the frequency of EHDs mostly presents an increasing trend.

"However, the variability of the daily maximum temperature also plays an important role in the interdecadal change of extreme heat days over Northeast China," says Ms. Liu Wenjun, a Master's student from the group of Dr. Ruidan Chen in the School of Atmospheric Sciences at Sun Yat-sen University and the first author of a paper recently published in Atmospheric and Oceanic Science Letters.

"The variability of the daily maximum temperature in August over Northeast China experienced a significant interdecadal decrease around the early 1990s, which overwhelmed the effect of the mean-state warming and led to an interdecadal decrease in extreme heat days," she explains.

The research team further discovered that the interdecadal change in the variability of daily maximum temperature is modulated by the atmospheric circulation.

"After the early 1990s, the influence of the Silk Road teleconnection and the East Asian-Pacific teleconnection on Northeast China weakened obviously, resulting in a decrease in the variability of the daily maximum temperature over Northeast China," concludes Liu.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Richness of plant species reduces the number of viral infections in meadows

image: The study found out that ribwort plantain meadows located close to agricultural land contained more plant viruses compared to meadows surrounded by natural environments.

Image: 
Suvi Sallinen

A study carried out at the University of Helsinki indicates that agricultural activity confuses the mechanisms that regulate the occurrence of plant diseases in nature. A wider variety of virus species was found in meadows close to agricultural fields compared to those located in natural surroundings, with the richness of plant species having no effect on the number of virus species. However, maintaining biodiversity is worthwhile, as plant richness did reduce the number of viral infections in the meadows.

An increasing share of the global land area is used for agricultural purposes, with more and more of the remaining area located at the boundary between agricultural and natural land, also known as the agro-ecological interface. At the same time, biodiversity is declining and epidemics are threatening humans, animals and plants. In the wild, species interact with one another, so the richness and distribution of host species in a given area affect the occurrence of pathogens as well.

The Research Centre for Ecological Change, headed by Professor Anna-Liisa Laine at the University of Helsinki, surveyed the effect of the proximity of cultivated land on viral distribution in ribwort plantain (Plantago lanceolata) meadows in the Åland Islands.

"The meadow network on the Åland Islands is favourable for research, as we have previously identified five new plant viruses in the area. It was the distribution of these viruses we were now able to investigate, thanks to the techniques we have developed for the purpose. We chose for our study meadows located either on the edges of fields or far away from agricultural land," says researcher Hanna Susi from the University of Helsinki's Faculty of Biological and Environmental Sciences, who headed the study.

At the agro-ecological interface, many factors that affect the spread of plant diseases, including soil nutrients as well as the diversity and density of plant species, change suddenly. The researchers observed that the effect of agriculture extends, however, to the areas surrounding fields.

"Surprisingly enough, we observed no differences in species richness or community composition between plant communities situated on field edges and wild populations further away," Susi notes.

The researchers found that the meadows on the edges of fields had a higher number of virus species, and their numbers were not reduced by plant richness, contrary to what was seen in meadows surrounded by natural environments. At the same time, the richness of plants reduced viral infections in both meadow types.

The study conducted by Laine's group indicates that agricultural activity confuses the mechanisms that regulate the occurrence of diseases in the wild.

"However, maintaining biodiversity is worthwhile, as the richness of plant species did reduce the number of viral infections in the meadows - regardless of location," Susi says.

Credit: 
University of Helsinki

Monitoring precious groundwater resources for arid agricultural regions

video: KAUST researchers have designed a framework in collaboration with the Saudi Ministry of Environment Water and Agriculture to provide detailed information on agricultural groundwater use in arid regions.

Image: 
© 2021 KAUST

A framework designed to provide detailed information on agricultural groundwater use in arid regions has been developed by KAUST researchers in collaboration with the Saudi Ministry of Environment Water and Agriculture (MEWA).

"Groundwater is a precious resource, but we don't pay for it to grow our food, we just pump it out," says Oliver López, who worked on the project with KAUST's Matthew McCabe and co-workers. "When something is free, we are less likely to keep track of it, but it is critical that we measure groundwater extraction because it impacts both food and water security, not just regionally, but globally."

Saudi Arabia's farmland is often irrigated via center pivots that tap underground aquifer sources. The team has built a powerful tool that captures details of water use from the regional scale down to individual fields. This is the first operational system in the world for monitoring and modeling agricultural water use at such fine spatial and time scales, notes López.

The framework combines data from several sources, including the Landsat 8 satellite, weather prediction models and a land-surface hydrology model, to enhance the system's resolution and prediction accuracy.

"The satellite images show distinct patterns of active fields against the bare desert background, allowing us to identify individual center-pivot fields, even if they are irregular in shape and size," says López.

The team derived crop evaporation rates, surface temperature and albedo, and crop growing patterns from the satellite data. They combined this information with a regional surface hydrology model to estimate the amount of water delivered to each field by the center pivots.
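
The operational framework fuses satellite retrievals with a land-surface hydrology model and is far more involved than can be shown here. As a much-simplified sketch of the field-scale bookkeeping it implies, the snippet below turns an assumed evapotranspiration estimate for one circular center-pivot field into a daily groundwater volume via a basic water balance; the pivot radius, rainfall and irrigation efficiency are all assumptions for illustration:

```python
# Much-simplified field water balance (not the KAUST framework itself):
# irrigation need ~ (crop evapotranspiration - effective rainfall) / efficiency,
# converted to a volume over the area of a circular center-pivot field.
import math

pivot_radius_m = 400.0        # typical center-pivot radius (assumed)
et_mm_day = 7.5               # satellite-derived crop ET, mm/day (hypothetical)
rain_mm_day = 0.1             # effective precipitation in an arid region
efficiency = 0.85             # assumed irrigation application efficiency

area_m2 = math.pi * pivot_radius_m ** 2
net_depth_m = max(et_mm_day - rain_mm_day, 0.0) / 1000.0
daily_m3 = net_depth_m * area_m2 / efficiency

print(f"field area       : {area_m2 / 1e4:.1f} ha")
print(f"daily groundwater: {daily_m3:,.0f} m^3/day")
print(f"seasonal (180 d) : {daily_m3 * 180 / 1e6:.2f} million m^3")
```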

They evaluated the framework at a small-scale 40-field experimental facility in Al Kharj, before trialing it at the large scale in Al Jawf province, where it successfully estimated water use in over 5000 individual fields. The approach has since been applied nationally across more than 35,000 fields.

"Our framework has provided extensive field-scale estimates for 2015 that will serve as a benchmark for future comparisons," says López. "We hope our model offers a consistent and reliable tool that demonstrates the impact of water management policies and drives future decisions."

The team plans to integrate multiple satellite data sources to improve data-collection frequency and resolution, thus improving the framework's accuracy.

"A key goal of our research group is to monitor 'every field, everywhere, all the time' -- a true Big Data analytics problem," says McCabe. "To address national and global food and water security concerns, we need local-level knowledge, delivered at the planetary scale."

Credit: 
King Abdullah University of Science & Technology (KAUST)