
Researchers explore genetics of California mountain lions to inform future conservation

image: Mountain lion cubs walk through the Irvine Ranch Conservancy along the West Coast.

Image: 
Irvine Ranch Conservancy

Fragmentation of wildlife populations is increasing on a global scale, and understanding current genetic structure, genetic diversity and genetic connectivity is key to informing future wildlife management and conservation.

This is true of mountain lion populations -- the species is also known as the puma or cougar -- in California, according to a new study conducted by a University of Wyoming research team.

"Large expanses of continuous habitat provide populations the opportunity to maintain large numbers of gene variants, called alleles. This is analogous to a deck of cards. If you have 40 cards, you are capable of harboring more types of cards than if you had 10," says Kyle Gustafson, an assistant professor of genetics in the Department of Biology and Environmental Health at Missouri Southern State University, who began this work in Holly Ernest's Wildlife Genomics and Disease Ecology Lab at UW. "When populations get isolated, like many of the puma populations surrounded by urbanization, the only way for them to maintain a large number of alleles is through migration. Otherwise, natural selection and genetic drift will ultimately lead to genetic uniformity (fixation) and mating among related individuals (inbreeding)."

The new study, titled "Genetic Source-Sink Dynamics Among Naturally Structured and Anthropogenically Fragmented Puma Populations," was published Dec. 10 in Conservation Genetics, a journal that promotes the conservation of biodiversity by providing a forum for data and ideas, aiding the further development of the field. Contributions include work from the disciplines of population genetics, molecular ecology, molecular biology, evolutionary biology, systematics and forensics.

Gustafson is the paper's lead author. Ernest, a professor in the UW Program in Ecology and the Wyoming Excellence Chair in Disease Ecology, was the senior and corresponding author. Roderick Gagne, a former postdoctoral research associate in the UW Department of Veterinary Sciences, was a contributing author.

"This study took nearly two decades for my lab to complete and could only be accomplished through coordinated efforts with the California Department of Fish and Wildlife, multiple research institutions and several nonprofit organizations," Ernest explains.

The analyses revealed that mountain lions in California exhibited strong population genetic structure, and some California populations had extremely low levels of genetic diversity -- some with estimates as low as those of the endangered Florida panther (panther is another common name for the puma). Nine genetic populations of pumas were identified in California and one in Nevada.

During the study, 992 mountain lions across California and Nevada were genotyped at 42 microsatellite loci. Microsatellites are neutral stretches of DNA -- they don't code for a specific trait, so mutations there can accumulate without causing harm. Loci are locations on chromosomes.

"If you could imagine each card in a standard deck of 52 cards as a unique mutation and, if you could imagine each mountain lion holding a card, then you could see how we could group certain mountain lions together," Gustafson explains. "For example, we could lump mountain lions together if they were holding red cards, face cards or a specific number. By keeping track of the many DNA mutations and, by using dozens of hypothetical decks of cards, we can identify shared mutations among the individuals and then identify genetic populations."
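
The card analogy can be made concrete with a toy sketch. The animals and allele labels below are entirely hypothetical, and the real study used 42 microsatellite loci with model-based clustering rather than this naive grouping; the sketch only illustrates the idea of grouping individuals by shared "cards":

```python
from collections import defaultdict

# Hypothetical "decks": each animal carries a handful of allele labels.
genotypes = {
    "puma_A": {"red", "face"},
    "puma_B": {"red", "7"},
    "puma_C": {"black", "face"},
    "puma_D": {"black", "7"},
}

def group_by_allele(genotypes):
    """Map each allele label to the set of individuals carrying it."""
    groups = defaultdict(set)
    for animal, alleles in genotypes.items():
        for allele in alleles:
            groups[allele].add(animal)
    return dict(groups)

groups = group_by_allele(genotypes)
print(sorted(groups["red"]))  # animals lumped together by a shared "red card"
```

Real analyses track dozens of such shared mutations at once and assign individuals to genetic populations probabilistically, but the underlying bookkeeping is the same: shared alleles define the groups.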

Gene flow is critically important to individual fitness and to the evolutionary potential of populations because successful migrant animals can diversify gene combinations. Without gene flow, small populations are especially subject to inbreeding, genetic drift and increased extinction risk.

The considerable variation in genetic diversity and effective population sizes among California and Nevada populations is likely attributable to variation in man-made barriers. Gene flow among adjacent mountain lion populations has been nearly negated by freeways in densely populated Southern California. In Nevada, the analysis showed mountain lions had fewer barriers to gene flow and weak population differentiation, likely because mountain lions have access to more contiguous land areas with fewer humans.

"Mountain lions are capable of long-distance travel, yet we found strong genetic structure, indicating mountain lion habitats are not well-connected in the state of California," Ernest says. "Most statewide studies in North America have found weak genetic structure, further indicating the genetics of mountain lions in California are not the norm and that there is cause for concern."

The paper's results have far-reaching conservation and management implications for mountain lions and indicate large-scale fragmentation in one of North America's most biodiverse and rapidly urbanized areas. Whenever possible, government agencies and other stakeholders should consider population connectivity and prevent further fragmentation by human development, both within and among populations.

"Our study provides wildlife managers with critical population-level data across two states," Gustafson says. "Certain mountain lion populations in the state of California have low genetic diversity that is concerning and are potentially at risk of inbreeding depression. Without our data, wildlife managers would not know which populations are at risk and which populations can help restore genetic diversity, if needed."

Credit: 
University of Wyoming

How sperm stem cells maintain their number

image: In the mouse testis, sperm stem cells (pink) migrate over the basement membrane (brown), capturing and consuming FGFs (green) produced by specialized LE cells (depicted as large green cells). Competition occurs here between stem cells for a limited supply of FGFs, from which homeostasis emerges as a self-organized process.

Image: 
NIBB

The steady production of sperm relies on the number of sperm stem cells in the testis remaining constant. Researchers including Asst. Prof. Yu Kitadate and Prof. Shosei Yoshida (developmental biologists at the National Institute for Basic Biology within the National Institutes of Natural Sciences in Japan) and Prof. Benjamin Simons (a theoretical physicist at the University of Cambridge in the UK) have revealed a novel mechanism for stem cell number control. Their results show that constant sperm stem cell numbers are achieved, in mouse testes, through a self-organized process in which the cells actively migrate and compete for a limited supply of self-renewal-promoting fibroblast growth factors (FGFs). This study was published online in Cell Stem Cell on Nov. 20, 2018.

To ensure a balance between the loss of differentiated cells and their replacement in long-lived multicellular organisms, it is critically important to keep the number of tissue stem cells constant. Failure to maintain stem cell number is thought to underlie the progression of ageing and disease. In tissues like the testis and ovary of the fruitfly Drosophila and the mammalian intestine, stem cells cluster in a specialized home where self-renewal-promoting factors are abundant: the stem cell niche. In these tissues, stem cell numbers are controlled simply by the capacity of the niche. In the mouse testis, however, sperm stem cells are not clustered but are highly motile and widely dispersed across the basement membrane. Yet their density remains surprisingly uniform, raising the question of how their numbers are regulated.

In this study, the researchers found that a subset of lymphatic endothelial (LE) cells produces FGFs (Fgf5, Fgf8 and Fgf4 in particular), which promote stem cell self-renewal.

Asst. Prof. Kitadate said, "The lymphatic endothelial cells in the testis were described using electron microscopy in the 1970s, but scant attention was paid to them for a long time. By a stroke of good luck, our screening met these cells again and threw light on their hidden roles!"

Quantitative analyses of mice with increased or decreased FGF production revealed a simple mechanism: migratory stem cells take up and consume FGFs. Stem cells that consume more FGFs are likely to duplicate, while those that consume less are inclined to differentiate. Under this framework, stem cells effectively compete with each other for a limited supply of FGFs, leading stem cell numbers to adjust automatically to a particular value that depends on the rate of FGF supply. The discovery of a novel, and extremely simple, mechanism of stem cell number control based on "competition for self-renewal-promoting factors (or mitogens)" advances our understanding of the regulation of stem cells in tissues without a canonical, anatomically definable stem cell niche -- a microenvironment sometimes called an "open niche".
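
The competition mechanism lends itself to a minimal simulation. The sketch below is an illustrative toy model, not the authors' quantitative one: each step, a fixed FGF supply is shared equally among the cells, and a cell's chance of duplicating rather than differentiating scales with its intake (the consumption scale `c` and all numbers are made up):

```python
import random

def simulate(supply, n0=10, c=1.0, steps=2000, seed=0):
    """Toy model of mitogen competition: each step, `supply` units of FGF
    are shared equally among the n stem cells; a cell then duplicates with
    probability proportional to its intake (capped at 1) and otherwise
    differentiates, leaving the stem-cell pool."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        intake = supply / n            # equal share of the limited mitogen
        p_dup = min(1.0, intake / c)   # more FGF -> more likely to self-renew
        n_next = sum(2 for _ in range(n) if rng.random() < p_dup)
        n = max(n_next, 1)             # keep the toy population from dying out
    return n

print(simulate(50), simulate(100))
```

In this toy model the count settles near 2 * supply / c regardless of the starting number, so doubling the FGF supply roughly doubles the self-organized stem-cell population -- the homeostasis emerges from competition alone, with no central counter.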

Prof. Simons said, "As a general and robust mechanism of stem cell density control, these findings may have important implications for the regulation of stem cell density in other tissue types."

Prof. Yoshida said, "Sperm stem cells migrate in the testis to take up FGFs, just as cows move around a meadow to eat the grass they live on. Interestingly, the dynamics of stem cells can be described using mathematics similar to that used for ecosystems -- a fruit of my beloved interdisciplinary research team!"

Credit: 
National Institutes of Natural Sciences

New threat to ozone recovery

Earlier this year, the United Nations announced some much-needed, positive news about the environment: The ozone layer, which shields the Earth from the sun's harmful ultraviolet radiation, and which was severely depleted by decades of human-derived, ozone-destroying chemicals, is on the road to recovery.

The dramatic turnaround is a direct result of regulations set by the 1987 Montreal Protocol, a global treaty under which nearly every country in the world, including the United States, successfully acted to ban the production of chlorofluorocarbons (CFCs), the main agents of ozone depletion. As a result of this sustained international effort, the United Nations projects that the ozone layer is likely to completely heal by around the middle of the century.

But a new MIT study, published in Nature Geoscience, identifies another threat to the ozone layer's recovery: chloroform -- a colorless, sweet-smelling compound that is primarily used in the manufacturing of products such as Teflon and various refrigerants. The researchers found that between 2010 and 2015, emissions and concentrations of chloroform in the global atmosphere have increased significantly.

They were able to trace the source of these emissions to East Asia, where it appears that production of products from chloroform is on the rise. If chloroform emissions continue to increase, the researchers predict that the recovery of the ozone layer could be delayed by four to eight years.

"[Ozone recovery] is not as fast as people were hoping, and we show that chloroform is going to slow it down further," says co-author Ronald Prinn, the TEPCO Professor of Atmospheric Science at MIT. "We're getting these little side stories now that say, just a minute, species are rising that shouldn't be rising. And certainly a conclusion here is that this needs to be looked at."

Xuekun Fang, a senior postdoc in Prinn's group, is the lead author of the paper, which includes researchers from South Korea, Japan, England, Australia, and California.

Short stay, big rise

Chloroform is among a class of compounds called "very short-lived substances" (VSLS), for their relatively brief stay in the atmosphere (about five months for chloroform). If the chemical were to linger, it would be more likely to get lofted into the stratosphere, where it would, like CFCs, decompose into ozone-destroying chlorine. But because it is generally assumed that chloroform and other VSLSs are unlikely to do any real damage to ozone, the Montreal Protocol does not regulate them.
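
To see what a roughly five-month lifetime means in practice, here is a back-of-envelope sketch assuming simple exponential loss, a standard first approximation for atmospheric lifetimes:

```python
import math

def fraction_remaining(months, lifetime_months=5.0):
    """Fraction of a chloroform release still airborne after `months`,
    assuming simple exponential decay with the ~5-month atmospheric
    lifetime quoted above (a back-of-envelope approximation)."""
    return math.exp(-months / lifetime_months)

# After a full year, only about 9% of a release is left, which is why so
# little chloroform normally survives long enough to reach the stratosphere.
print(round(fraction_remaining(12), 3))
```

The same arithmetic explains the storm connection discussed later in the piece: anything that shortens the trip to the stratosphere lets a much larger fraction of the gas arrive before it decays.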

"But now that we're at the stage where emissions of the more long-lived compounds are going down, the further recovery of the ozone layer can be slowed down by relatively small sources, such as very short-lived species -- and there are a lot of them," Prinn says.

Prinn, Fang, and their colleagues monitor such compounds, along with other trace gases, with the Advanced Global Atmospheric Gases Experiment (AGAGE) -- a network of coastal and mountain stations around the world that has been continuously measuring the composition of the global atmosphere since 1978.

There are 13 active stations scattered around the world, including in California, Europe, Asia, and Australia. At each station, air inlets atop towers typically 30 feet tall pull in air about 20 times per day, and researchers use automated instruments to analyze the atmospheric concentrations of more than 50 greenhouse and ozone-depleting gases. With stations around the world monitoring gases at such high frequency, AGAGE provides a highly accurate way to identify which emissions might be rising and where these emissions may originate.

When Fang began looking through AGAGE data, he noticed an increasing trend in the concentrations of chloroform around the world between 2010 and 2015. He also observed about three times as much atmospheric chloroform in the Northern Hemisphere as in the Southern Hemisphere, suggesting that the source of these emissions lay somewhere in the Northern Hemisphere.

Using an atmospheric model, Fang's collaborators on the paper estimated that between 2000 and 2010, global chloroform emissions remained at about 270 kilotons per year. However, this number began climbing after 2010, reaching a high of 324 kilotons per year in 2015. Fang observed that most stations in the AGAGE network did not measure substantial increases in the magnitude of spikes in chloroform, indicating negligible emission rises in their respective regions, including Europe, Australia, and the western United States. However, two stations in East Asia -- one in Hateruma, Japan, and the other in Gosan, South Korea -- showed dramatic increases in the frequency and magnitude of spikes in the ozone-depleting gas.

The rise in global chloroform emissions seemed, then, to come from East Asia. To investigate further, the team used two different three-dimensional atmospheric models that simulate the movement of gases and chemicals, given global circulation patterns. Each model can essentially trace the origins of a certain parcel of air. Fang and his colleagues fed AGAGE data from 2010 to 2015 into the two models and found that they both agreed on chloroform's source: East Asia.

"We conclude that eastern China can explain almost all the global increase," Fang says. "We also found that the major chloroform production factories and industrialized areas in China are spatially correlated with the emissions hotspots. And some industrial reports show that chloroform use has increased, though we are not fully clear about the relationship between chloroform production and use, and the increase in chloroform emissions."

"An unfortunate coherence"

Last year, researchers from the United Kingdom reported on the potential threat to the ozone layer from another very short-lived substance, dichloromethane, which, like chloroform, is used as a feedstock to produce other industrial chemicals. Those researchers estimated how both ozone and chlorine levels in the stratosphere would change with increasing levels of dichloromethane in the atmosphere.

Fang and his colleagues used similar methods to gauge the effect of increasing chloroform levels on ozone recovery. They found that if concentrations remained steady at 2015 levels, the increase observed from 2010 to 2015 would delay ozone recovery by about five months. If, however, concentrations keep climbing at the recent rate through 2050, complete healing of the ozone layer would be set back by four to eight years.

The fact that the rise in chloroform stems from East Asia adds further urgency to the situation. This region is especially susceptible to monsoons, typhoons, and other extreme storms that could give chloroform and other short-lived species a boost into the stratosphere, where they would eventually decompose into the chlorine that eats away at ozone.

"There's an unfortunate coherence between where chloroform is being emitted and where there are frequent storms that puncture the top of the troposphere and go into the stratosphere," Prinn says. "So, a bigger fraction of what's released in East Asia gets into the stratosphere than in other parts of the world."

Fang and Prinn say that the study is a "heads-up" to scientists and regulators that the journey toward repairing the ozone layer is not yet over.

"Our paper found that chloroform in the atmosphere is increasing, and we identified the regions of this emission increase and the potential impacts on future ozone recovery," Fang says. "So future regulations may need to be made for these short-lived species."

"Now is the time to do it, when it's sort of the beginning of this trend," Prinn adds. "Otherwise, you will get more and more of these factories built, which is what happened with CFCs, where more and more end uses were found beyond refrigerants. For chloroform, people will surely find additional uses for it."

Credit: 
Massachusetts Institute of Technology

Translating the 'language of behavior' with artificially intelligent motion capture

video: An interdisciplinary team of Princeton researchers created LEAP, a flexible motion-capture tool that can be trained in a matter of minutes to track body parts over millions of frames of video with high accuracy, without any physical markers or labels.

Image: 
Courtesy of the Murthy Lab and Shaevitz Lab, Princeton University

You might have seen Hollywood stars in "motion capture" suits, acting in full-body costumes peppered with sensors that let a computer transform them into a Hulk or a dragon or an enchanted beast.

Now, a collaboration between the labs of Princeton professors Mala Murthy and Joshua Shaevitz has gone a step further, using the latest advances in artificial intelligence (AI) to automatically track animals' individual body parts in existing video.

Their new tool, LEAP Estimates Animal Pose (LEAP), can be trained in a matter of minutes to automatically track an animal's individual body parts over millions of frames of video with high accuracy, without having to add any physical markers or labels.

"The method can be used broadly, across animal model systems, and it will be useful for measuring the behavior of animals with genetic mutations or following drug treatments," said Murthy, an associate professor of molecular biology and the Princeton Neuroscience Institute (PNI).

The paper detailing the new technology will be published in the January 2019 issue of the journal Nature Methods, but its open-access version, released in May, has already led to the software being adopted by a number of other labs.

When the researchers combine LEAP with other quantitative tools developed in their labs, they can study what they call "the language of behavior" by observing patterns in animal body movements, said Shaevitz, a professor of physics and the Lewis-Sigler Institute for Integrative Genomics.

"This is a flexible tool that can in principle be used on any video data," said Talmo Pereira, a PNI graduate student who is the first author on the paper. "The way it works is to label a few points in a few videos and then the neural network does the rest. We provide an easy-to-use interface for anyone to apply LEAP to their own videos, without having any prior programming knowledge."
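
As context for how markerless pose trackers of this kind are typically trained -- a generic sketch under common conventions, not code from the LEAP package itself -- each hand-labeled body-part position is usually converted into a per-part "confidence map" that the network learns to predict for new frames:

```python
import math

def confidence_map(h, w, keypoint, sigma=3.0):
    """Render one hand-labeled body-part position (x, y) as a 2-D Gaussian
    "confidence map" over an h-by-w frame -- the kind of training target
    pose-estimation networks commonly regress. Illustrative only; the
    parameter names here are our own assumptions."""
    x, y = keypoint
    return [[math.exp(-((c - x) ** 2 + (r - y) ** 2) / (2 * sigma ** 2))
             for c in range(w)] for r in range(h)]

cmap = confidence_map(64, 64, keypoint=(20, 30))
# The brightest pixel sits at row 30 (the y label), column 20 (the x label).
peak = max(((r, c) for r in range(64) for c in range(64)),
           key=lambda rc: cmap[rc[0]][rc[1]])
```

At inference time, the predicted map's peak for each body part is read back out as that part's coordinates in the frame -- which is what makes a handful of labeled frames sufficient to bootstrap tracking over millions more.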

When asked if LEAP worked as well on large mammals as it did on the flies and mice that made up most of the initial subjects, Pereira promptly created a motion-tagged video of a giraffe taken from the live feed from Mpala Research Centre in Kenya, a field research station for which Princeton is managing partner.

"We took a video of a walking giraffe from the Mpala research station ... and labeled points in 30 video frames, which took less than an hour," Pereira said. "LEAP was then able to track motion from the entire rest of the video (roughly 500 frames) in seconds."

Previous efforts to develop AI tools that could track human motion have relied on large training sets of manually annotated data. That allowed the software to work robustly on diverse kinds of data, with vastly different backgrounds or lighting conditions.

"In our case, we optimized similar methods to work on data collected in a laboratory setting, in which conditions are consistent across recordings," said Murthy. "We built a system that allows the user to choose a neural network appropriate for the kind of data that the user collected rather than being constrained by what other researchers or companies have worked on."

This project arose from a unique collaboration between a senior thesis student in the Murthy lab, Diego Aldarondo of the Class of 2018, and his graduate student mentor, Pereira, who is jointly advised by Murthy and Shaevitz.

"Diego was exploring the use of deep neural networks for annotating animal behavioral data via one of his computer science classes at Princeton, and over late-night chats in the lab with Talmo, he realized that these methods could be powerfully applied to their own data: videos of fruit flies interacting during their courtship ritual," said Murthy. "The collaboration took off from there, and it was incredible fun to work together -- Diego and Talmo showed how effective these AI methods can be."

The work has great potential outside of neuroscience as well, said Monica Daley, a senior lecturer at the Structure and Motion Laboratory of the Royal Veterinary College in the United Kingdom, who was not involved in this research.

"Much of my research aims to understand how animals move effectively under different terrain and environmental conditions," Daley said. "One of the biggest ongoing challenges in the field is pulling meaningful information about animal movement from video footage. We either process videos manually, requiring many hours of tedious work, or focus on very simplistic and limited analysis that can be automated. The algorithms presented in this paper have potential to automate the labor-intensive part of our work more than has been possible previously, which could allow us to study a greater variety of animal locomotor behaviors."

Once they have a database of motion and behaviors, the neuroscientists on the team can draw connections to the neural processes behind them. This will allow researchers "to not only gain a better understanding of how the brain produces behaviors," said Shaevitz, "but also to explore future diagnostics and therapies that rely on a computer interpreting someone's actions."

A similar tool was shared over the summer by a team of Harvard researchers, who used existing neural network architecture, whereas the Princeton team created their own. "Our method and theirs have different advantages," said Murthy. "This is an incredibly exciting field right now with a lot of activity in developing AI tools for studies of behavior and neural activity."

"We use a different approach, where smaller, leaner networks can achieve high accuracy by specializing on new datasets quickly," said Pereira. "More importantly, we show that there are now easy-to-use options for animal pose tracking via AI, and we hope that this encourages the field to begin to adopt more quantitative and precise approaches to measurement of behavior."

"In the last five years, neuroscience has made enormous strides in technology for observing and manipulating brain activity," said co-author Samuel Wang, a professor of molecular biology and PNI. "Now, automatic classification of behavior adds a critical complement to that technology. Princeton is becoming a central hub in the budding field of computational neuroethology."

Credit: 
Princeton University

Researchers from the CNIO and the Hospital 12 de Octubre make sense out of the chaos of melanoma

image: New drivers of metastasis in melanoma that define poor prognosis in patients: In red, imaging by fluorescence microscopy of p62 in melanoma cells. Red foci correspond to large complexes of p62 with other proteins that potentiate the ability of these tumor cells to survive, proliferate and invade distal organs.

Image: 
CNIO

Unlike other skin cancers -- and no one really knows why -- melanoma is one of the most aggressive tumours, with potential for metastasis from very early stages, when lesions are just millimetres thick. Most puzzling is that these metastases occur in an apparently chaotic way, involving many processes that happen simultaneously yet show no apparent relationship to one another. Now, scientists from the Melanoma Group at the Spanish National Cancer Research Centre (CNIO), together with the University Hospital 12 de Octubre, have found some order in this chaos: their study shows that these metastatic processes are not carried out by independent mercenaries but are coordinated by a general captain, the p62 protein.

Furthermore, the study finds that one of these warriors controlled by p62 is FERMT2, a protein that had not previously been linked to metastasis in melanoma, and shows that both FERMT2 and p62 could serve as markers of poor patient prognosis. These findings are featured on the cover of the January 2019 issue of the journal Cancer Cell.

An unexpected result

"It is complicated to bring order to the mechanisms underlying melanoma progression, and particularly to dissect the barcode that defines the aggressive behaviour of this tumour", explains Marisol Soengas, Head of the Melanoma Group. This is because melanoma cells accumulate an extremely high number of alterations in their RNA -- the genetic material from which proteins are made. Furthermore, Soengas says that "there are more than 1,500 proteins that bind RNA and most of them have not been studied".

Her group had found that one of the characteristics that differentiate melanoma from other tumours is its regulation of a cellular self-cleaning process called autophagy, a system by which all cells, including tumour cells, eliminate components they no longer need and generate energy to continue growing.

The CNIO melanoma researchers then began to explore p62, a protein commonly associated with autophagy in many malignancies. "We were interested in p62 because it had been described as one of the yin and yang proteins of cancer, for its ability to favour or inhibit tumours depending on the context," says Panagiotis Karras, the first author of the study. Analysing melanoma biopsies, the researchers observed that the more advanced the melanoma, the higher the p62 levels. Animal models, however, showed that in this tumour p62 is not decisive for autophagy.

To find the main function of p62 in melanoma, the researchers conducted a full multi-omic study using cutting-edge bioinformatic technologies. They thus performed the first detailed characterisation of p62 and its mode of action in melanoma, studying the expression of the genes involved (transcriptomics), the identity of the proteins involved (proteomics) and the interactions among them (interactomics).

A coordinated, non-chaotic, metastasis of melanoma

Thanks to this comprehensive study, the researchers discovered a new and unexpected function of p62: controlling the half-life of other factors involved in melanoma metastasis. Soengas continues: "By recruiting certain RNA-binding proteins from the 1,500 described, p62 controls numerous proteins that favour metastasis through such apparently independent processes as survival, metabolism, the cell cycle and invasion. Now we can see that these processes are not independent, but have a common regulator".

This work went even further by finding that other processes regulated by p62 may also affect the survival of patients. In this way, a new protein was identified, FERMT2, which correlates with a worse prognosis in metastatic melanomas. "For us pathologists, it was interesting to find that p62 and FERMT2 are increased in samples from patients with melanoma metastasis, because up to now, we did not have any good markers of tumour progression", indicates José Luis Rodríguez-Peralto, Head of the Anatomical Pathology Department at the University Hospital 12 de Octubre in Madrid, co-author of this paper.

The next step for the researchers will be to try to validate these results in a higher number of biopsies, and to further investigate this melanoma barcode that separates it from other aggressive tumours.

Credit: 
Centro Nacional de Investigaciones Oncológicas (CNIO)

Study offers new view of how cartels work

Suppose you were building a cartel -- a group of business interests who coordinate to fix high prices that consumers must pay. How would you design it? Received economic wisdom says transparency among cartel members is crucial: If colluding suppliers share information, they can keep prices high and monitor members of the cartel to make sure no one deviates from the cartel's norms.

A newly published paper co-authored by MIT economist Alexander Wolitzky offers a different idea: Firms do not have to share information extensively in order to collude. Indeed, the paper contends, extensive information-sharing can help firms undercut cartels and gain market share for themselves.

"If I'm thinking about entering your market, which I'm not supposed to do, but if I'm tempted to do it, then I can do it better if I have this information about your market," Wolitzky says. The corollary, he notes, is that there appear to be cases where "by not sharing information about their pricing behavior, the firms make it easier to sustain collusion."

The paper is thus a rethinking of an important policy topic: In the U.S., Europe, and across the world, governments are charged with regulating cartels and collusion, in an attempt to ensure that consumers can benefit from market competition.

Given the prevailing notion that data-sharing helps cartels, firms investigated for price-fixing can argue that they must not be illegally colluding if the evidence shows they have not been extensively sharing information with other businesses.

"Because of this conventional wisdom that firms that collude share a lot of information, a firm's defense is, 'We weren't sharing so much information,'" Wolitzky says. And yet, as the new paper suggests, that level of cooperation may not be necessary for collusion to occur.

The paper, "Maintaining Privacy in Cartels," is by Takuo Sugaya, an associate professor at the Stanford Graduate School of Business, and Wolitzky, an associate professor in MIT's Department of Economics; it appears in the December issue of the Journal of Political Economy.

What's the whole story?

The current paper adds to a body of academic literature whose best-known component is "A Theory of Oligopoly," a 1964 paper by economist George Stigler, which describes how the availability of information should help cartels maintain their grip on prices. Some subsequent empirical work also shows that in some conditions, increased transparency helps cartels sustain themselves.

Sugaya and Wolitzky do not deny that a degree of transparency among cartel members helps collusion occur, but they complicate this picture by introducing alternate circumstances, in which less transparency helps cartels thrive and more transparency undercuts them.

"We're investigating the generality of this [older] result, and whether it tells the whole story," says Wolitzky.

The paper by the scholars builds a new model of firm behavior oriented around the "home-market principle" of collusion, in which cartels reduce the competitive supply of products in each other's markets -- which may often be segmented by geographic reach. North American and European firms in the same industry, in this scenario, would stay away from each other's territory, thereby reducing competition.

In the study, the authors contend that there are three effects that increased transparency has on cartels. Transparency within cartels enables firms to keep each other in check, and it helps them coordinate prices -- but it also "lets individual firms tailor deviations to current market conditions," as they write in the paper.

This last point, Sugaya and Wolitzky assert, has been seriously underexplored by scholars in the past. In the model they propose, the "deviation gain" -- what happens when a firm leaves the cartel -- "is strictly larger when all prices and quantities are observable," that is, when the firm has more information about its erstwhile collaborators.

Real cartels, low transparency

The proposition that a relative lack of information-sharing coexists with collusion is not just an arbitrary function of the authors' model, but something supported by empirical evidence as well, as they note in the paper. The European Commission, for instance, has uncovered several cartels that seemingly made a point of limiting transparency: The firms in question largely shared just industry-wide sales data among all members, not extensive firm-level data.

These low-transparency cartels include industries such as plasterboard production, copper plumbing tube manufacturing, and plastics -- all of which structured their collusion operations around intermediaries. Those intermediaries -- industry associations, in some cases -- handled the sensitive information and only distributed small portions of it to the individual firms.

A more vivid example comes from a graphite manufacturing cartel, as Sugaya and Wolitzky recount. At a meeting of cartel representatives, each member secretly entered their own sales data into a calculator passed around the room, in such a way that the firms could only learn the industry-wide sales volume, not the specific sales data of each firm.

Such examples indicate that "conventional wisdom may not tell the whole story" when it comes to cartels and transparency, Sugaya and Wolitzky write.

To be sure, the new theory developed by the scholars does not propose a uniform relationship between transparency and collusion; it all depends on the circumstances.

"It would be nice to have a very thorough characterization of when more information among cartel members makes colluding easier, and when it makes it harder," Wolitzky says.

In the new model, Sugaya and Wolitzky do suggest that greater transparency corresponds with collusion specifically in volatile business conditions, which may necessitate more robust long-term projections of sales and demand. By contrast, given less volatile, more consistent consumer demand over time, firms need less transparency to deviate from tacit collusion agreements and undercut their erstwhile cartel partners. As the authors acknowledge, firm behavior within cartels, in a variety of these circumstances, could use further study.

Credit: 
Massachusetts Institute of Technology

New memory study first to use intracranial recordings

image: A research team led by Noa Ofen, Ph.D. at Wayne State University is working to address a critical gap in our understanding of how maturation of the prefrontal cortex drives memory development through the use of electrocorticographic (ECoG) data recorded directly from the prefrontal cortex.

Image: 
Artwork by Julian Wong.

DETROIT - Declarative memory -- memories that can be consciously recalled -- is critical to everyday life. Throughout childhood and adolescence, declarative memory improves remarkably. However, until recently, there was a critical gap in our understanding of how maturation of the prefrontal cortex drives memory development.

A team of researchers led by Noa Ofen, Ph.D., associate professor of psychology in Wayne State University's College of Liberal Arts and Sciences and Institute of Gerontology, and Lisa Johnson, Ph.D., postdoctoral research scientist at the Helen Wills Neuroscience Institute, University of California-Berkeley, is addressing this critical gap through the use of electrocorticographic (ECoG) data recorded directly from the prefrontal cortex in a cohort of 17 children and adolescents. These ECoG recordings were obtained from patients with surgically implanted subdural electrodes used for the clinical management of seizures.

"In our study, we used an established task employed in numerous studies aimed at investigating memory development in this age group," said Ofen. "It provided our team with rare insight previously not obtainable."

The team followed a two-tier, unbiased approach to ECoG data analysis by which data were first analyzed per trial on the individual level using non-parametric statistics, and then modeled on the group level.
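The two-tier approach described above can be illustrated with a toy sketch: the first tier runs a non-parametric permutation test on trial-level values within each subject, and the second tier aggregates the per-subject effects at the group level. The trial counts, signal values, and the specific tests below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(a, b, n_perm=2000, rng=rng):
    """Non-parametric two-sample permutation test on the difference of means."""
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)

# Tier 1: per-trial analysis within each of 17 hypothetical subjects,
# comparing synthetic ECoG power on later-remembered vs. later-forgotten trials.
subject_effects, subject_pvals = [], []
for subject in range(17):
    remembered = rng.normal(1.2, 1.0, size=40)   # trials later remembered
    forgotten = rng.normal(1.0, 1.0, size=40)    # trials later forgotten
    subject_pvals.append(permutation_pvalue(remembered, forgotten))
    subject_effects.append(remembered.mean() - forgotten.mean())

# Tier 2: group-level summary -- here, simply tallying positive effects.
positive = sum(e > 0 for e in subject_effects)
print(f"{positive}/17 subjects show a positive memory effect")
```

The point of the two tiers is that no trial-level data are pooled across subjects; only subject-level summaries enter the group model.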

"The high spatiotemporal precision of these recordings allowed us to provide a unique demonstration of how the developing prefrontal cortex drives the formation of memories of events in our own lives," said Johnson. "Our research shows that earlier activity predicts greater memory accuracy, and sub-second deviations in activity flow between frontal subregions predict memory formation."

This study is the first to demonstrate that the spatiotemporal propagation of frontal activity supports memory formation in children as young as 6 years of age, and to show how adjacent frontal subregions follow dissociable developmental trajectories.

Credit: 
Wayne State University - Office of the Vice President for Research

URI researchers: Small changes in oxygen levels have big implications for ocean life

image: This is Lucicutia hulsemannae, a copepod that stays at the Lower Oxycline of the Oxygen Minimum Zone (OMZ). The organism is remarkably tolerant of extremely low oxygen levels, but very sensitive to small changes in those levels.

Image: 
Photo by Dawn Outram

KINGSTON, R.I. - December 19, 2018 - Oceanographers at the University of Rhode Island have found that even slight levels of ocean oxygen loss, or deoxygenation, have big consequences for tiny marine organisms called zooplankton.

Zooplankton are important components of the food web in the expanse of deep, open ocean called the midwater. Within this slice of ocean below the surface and above the seafloor are oxygen minimum zones (OMZs), large regions of very low oxygen. Unlike coastal "dead zones" where oxygen levels can suddenly plummet and kill marine life not acclimated to the conditions, zooplankton in OMZs are specially adapted to live where other organisms - especially predators - cannot. But OMZs are expanding due to climate change, and even slight changes to the low oxygen levels can push zooplankton beyond their extraordinary physiological limits.

"Although the animals in the ocean's oxygen minimum zone have adapted over millions of years to the very low oxygen of this extreme and widespread midwater habitat, they are living at the very limits of their physiological capability," said Karen Wishner, a professor of oceanography at URI's Graduate School of Oceanography and lead author of a new paper on deoxygenation and zooplankton in the Eastern Tropical North Pacific OMZ. "Our research shows that they are sensitive to very small changes in oxygen, and decrease in abundance when oxygen gets just a little bit lower."

The research team, which this week published their findings in Science Advances, found more natural variability in oxygen levels in the OMZ than previously known. This has a direct effect on the distribution of many types of zooplankton because, as the team discovered, the organisms respond to a less than 1 percent reduction in oxygen levels.

While zooplankton have had millions of years to adapt to conditions in the OMZ, these low oxygen zones may expand rapidly due to climate change, leading to major unanticipated changes to midwater ecosystems. For example, an expansion of the OMZ into shallower waters may make zooplankton more susceptible to predators like fish. If this leads to a zooplankton population crash, it will have impacts all the way up the food chain.

"Further loss of oxygen in ocean waters is predicted in the future as a result of global warming, and these animals may be unable to adapt and persist," Wishner said. "They are important components of the food web of oceanic ecosystems, and their loss could potentially impact top predators, including whales and commercially important fisheries."

Credit: 
University of Rhode Island

Process makes stem-cell-derived heart cells light up

image: This is a diagram of the process in which a stem cell line was generated for production of cardiac cells.

Image: 
Penn State Human Stem Cell Engineering Lab

A faster, more cost-efficient, and more accurate method of examining the effectiveness of human pluripotent stem-cell-derived cardiac muscle cells has been discovered, according to researchers from Penn State.

Human pluripotent stem cells (hPSCs) -- human embryonic stem cells and induced pluripotent stem cells -- can be induced to produce other types of human cells through stem cell differentiation. The researchers looked at cardiac muscle cells -- cardiomyocytes (CMs) -- in this study.

The goal is to use these cells to treat cardiac conditions, but first the researchers must determine the cells' functionality through characterization, which involves examining how well the cells were modified, and whether or not they are mature, functioning CMs. One clear sign that the cells are functioning is if they are beating, because CMs beat like a heart does. Current methods for determining functionality include using a force transducer, which studies the mechanics of a single muscle cell, and using calcium imaging. However, there are issues with these methods.

"CMs derived from hPSCs hold tremendous promise for cell-based therapies for heart diseases," said Xiaojun Lance Lian, assistant professor of biomedical engineering and primary investigator on the project. "Nevertheless, current methods for CM characterization cause undesirable impacts on the cells' functionality and are expensive and time-consuming."

To combat these issues, Lian and his colleagues developed a process that is non-invasive and less likely to adversely affect the CMs' functionality. The researchers used CRISPR-Cas9, a genome-editing tool, to generate a calcium-indicating reporter stem cell line, which is a type of stem cell line that is more easily analyzed for CM functionality than other stem cell lines.

To create this stem cell line, prior to the stem cell differentiation into CMs, the researchers used CRISPR-Cas9 to insert a calcium indicator protein called GCaMP6s into the stem cells. The GCaMP6s protein enables the stem cells to be modified into CMs that can be directly characterized by fluorescence intensity. The intensity of the fluorescence correlates with mechanical strain detected by a video microscope analysis. This analysis shows the cells' responses to cardiac drugs.
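The correlation between fluorescence intensity and mechanical strain can be sketched as a simple per-frame comparison of two traces. The signals below are synthetic sine waves standing in for real video-derived recordings; the frame rate, beat frequency, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame traces from a video of beating CMs: GCaMP6s
# fluorescence and mechanical strain (arbitrary units), both driven by
# the same ~1 Hz beating rhythm, plus measurement noise.
t = np.linspace(0, 10, 500)                                      # 10 s at 50 fps
fluorescence = 1.0 + 0.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.05, t.size)
strain = 0.10 + 0.05 * np.sin(2 * np.pi * t) + rng.normal(0, 0.005, t.size)

# If fluorescence tracks contraction, the two traces correlate strongly.
r = np.corrcoef(fluorescence, strain)[0, 1]
print(f"Pearson correlation between fluorescence and strain: {r:.2f}")
```

A high correlation like this is what makes fluorescence usable as a non-invasive stand-in for direct mechanical measurement.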

"Our system is well-established, cost-effective and very sensitive, so it is a more advanced method of CM characterization," Yuqian Jiang, doctoral student in biomedical engineering. "Since it is non-invasive, it is also much better for the CMs and their functionality."

And because of the many benefits of the system, the process can further contribute to improved disease modeling and drug screening for treating cardiac diseases, Lian said.

Looking ahead, the researchers want to construct an "on switch" for the GCaMP6s protein by adding doxycycline, which will activate a switcher protein known as Tet-On.

The research team is also exploring the use of this particular GCaMP6s-enhanced stem cell line for other research.

"We can also use this stem cell line for imaging other lineages, like neurons and astrocytes," Jiang said.

Credit: 
Penn State

Kidney failure on the rise in Australians under 50 with type 2 diabetes

image: This is professor Jonathan Shaw, Head of Clinical Diabetes and Epidemiology at the Baker Institute.

Image: 
Baker Institute

A study of more than 1.3 million Australians with diabetes has found that kidney failure is increasing in people with type 2 diabetes aged under 50 years, leading to reduced quality of life and placing growing demand on the country's kidney dialysis and transplantation services.

The study, led by researchers from the Baker Heart and Diabetes Institute, calls for urgent attention to reduce the progression of kidney disease in Australia and highlights the importance of aggressive risk factor treatment in people with younger-onset type 2 diabetes.

The registry study examined the trends in end-stage kidney disease (defined as kidney transplantation or the commencement of dialysis) within the Australian population with diabetes from 2002 to 2013.

The most concerning finding was the progressive rise in end-stage kidney disease seen in people with type 2 diabetes aged under 50, while rates remained stable for those with type 1 diabetes and for those with type 2 diabetes aged 50-80.

This finding is also supported by other similar studies. The study, published in the American Journal of Kidney Diseases, also found the incidence of end-stage kidney disease was higher in men than in women; in those living in the most disadvantaged areas; in Indigenous people compared with non-Indigenous people; and in those living in remote areas compared with major cities.

End-stage kidney disease occurs when chronic kidney disease -- the gradual loss of kidney function -- reaches an advanced state and the kidneys are no longer able to filter wastes and excess fluids from the blood, which should be excreted in the urine. When kidneys lose these capabilities, dangerous levels of fluid, electrolytes and wastes can build up, requiring dialysis or a kidney transplant to stay alive.

In most developed countries, diabetes is now the leading cause of end-stage kidney disease and is responsible for over 40 per cent of new cases of kidney failure. Patients can experience symptoms from nausea and loss of appetite, to fatigue, sleep problems and muscle cramps.

Senior author, diabetes researcher and endocrinologist at the Baker Institute, Professor Jonathan Shaw said the increasing prevalence of diabetes coupled with the rising risk of end-stage kidney disease in people with type 2 diabetes suggested the future demand for kidney dialysis and transplantation would place an enormous burden on Australia's healthcare system.

"We've known for a long time that the total number of people requiring kidney dialysis or transplantation in Australia was going up but we thought that was mainly due to increasing numbers of people with diabetes," Professor Shaw said.

"The main concern is the increasing rate in the under 50s," he said. "This is a really troubling finding but hopefully by improving medical care, by aggressively managing blood pressure and other cardiovascular risk factors in addition to blood sugar control, we can start to turn this around."

Professor Shaw said it was uncertain whether rising rates of kidney disease were a reflection of less aggressive medical therapy in Australia over the past 12 years in people with type 2 diabetes along with changes in management of kidney disease, or the result of a more aggressive form of the disease now emerging. He said more research in this area was urgently needed.

Credit: 
Baker Heart and Diabetes Institute

Stick insects: Egg-laying techniques reveal new evolutionary map

Known for exceptional mimicry, stick insects have evolved a range of egg-laying techniques to maximize egg survival while maintaining their disguise – including dropping eggs to the ground, skewering them on leaves, and even enlisting ants for egg dispersal. Scientists have now combined knowledge on these varied techniques with DNA analysis to create the best map of stick-insect evolution to date. Contrary to previous evolutionary theories based on anatomical similarities, the new analysis finds the first stick insects flicked or dropped their eggs while hiding in the foliage. It also finds that geographically isolated populations of stick insects are more likely to be related than those with similar features. The research, published in a special issue on stick insects in Frontiers in Ecology and Evolution, takes us one step closer to understanding these enigmatic creatures.

"While the evolutionary history of most insect groups is well documented, stick insects have been hard to classify. Our new analysis has made great strides, showing that the evolution of stick and leaf insects cannot be solely based on anatomical features," says Dr James A. Robertson, based at the Animal and Plant Health Inspection Service and affiliated with the Brigham Young University, USA. "Linking their wide-variety of egg-laying techniques to their evolutionary history, we find that flicking and dropping eggs is the oldest strategy from an evolutionary perspective."

Stick insects are increasingly popular in the pet industry on account of their remarkable size, bizarre appearance and gentle nature. They are the only insects in which each species has a distinctive egg form. In the 1950s, scientists based stick-insect evolutionary theories on the traditional method of examining subtle changes in anatomical features. However, this method could not explain why distantly related species -- for example those separated by faraway continents -- often shared very similar features.

Using DNA analysis and linking these findings to the insects' variety of egg-laying techniques, Robertson and his colleagues created their own map of stick-insect evolution. As well as revealing that species geographically isolated from each other were more likely to be related than species that looked similar, the results challenged previous theories on how stick-insect egg-laying strategies evolved.

"Stick-insects were thought to evolve from a ground-dwelling adult form that deposited its eggs directly in the soil. We show that ancestral stick-insects actually remained in the foliage and dropped or flicked their eggs to the ground, a technique employed by most of these insects as a strategy to remain in disguise," explains Robertson. "The hardening of the egg capsule early in the evolution of stick insects represents a key innovation allowing further diversification."

This hardened capsule allows the egg to survive falls from the canopy, to float on water and to pass through the intestines of birds. A further innovation, exclusive to stick insects that flick or drop their eggs, is a food-filled cap on the egg that attracts ants, which then disperse it much further than a female stick insect could achieve on her own.

Robertson continues, "Stick insects have then adapted to new micro-habitats, which involves changing how their eggs are deployed and dispersed. There are several independent examples where species have evolved to adapt to a ground or bark dwelling habitat by depositing their eggs in the soil or in bark crevices. Other populations have independently evolved gluing strategies, with one of these diversifying further by burying their eggs, skewering them in leaves or producing a sophisticated egg sac."

This new research demonstrates that molecular data can begin to shed light on the evolution of these enigmatic creatures, with more to be revealed.

Robertson explains, "We hope to investigate how and when key innovations in stick insect evolution occurred, how widespread these traits are and where geographically they evolved."

Credit: 
Frontiers

The oldest large-sized predatory dinosaur comes from the Italian Alps

video: At the Natural History Museum of Milan, paleontologist Cristiano Dal Sasso (speaking) and co-authors Simone Maganuco and Andrea Cau (center and right) examine the bones of Saltriovenator, deposited in the Museum collections.

Image: 
Gabriele Bindellini

Early Jurassic predatory dinosaurs are very rare, and mostly small in size. Saltriovenator zanellai, a new genus and species described in the peer-reviewed journal PeerJ - the Journal of Life and Environmental Sciences by Italian paleontologists, is the oldest known ceratosaurian, and the world's largest (one ton) predatory dinosaur from the Lower Jurassic (Sinemurian, ~198 Mya).

This unique specimen, which also represents the first Jurassic dinosaur from Italy, was accidentally discovered in 1996 by a fossil amateur in a quarry near Saltrio, some 80 km northeast of Milan. Many bones of Saltriovenator bear feeding marks from marine invertebrates -- the first such case documented on dinosaurian remains -- indicating that the dinosaur carcass floated in a marine basin and then sank, remaining on the sea bottom for quite a long time before burial.

Although fragmentary, "Saltriovenator shows a mosaic of ancestral and advanced anatomical features, respectively seen in the four-fingered dilophosaurids and ceratosaurians, and the three-fingered tetanuran theropods, such as allosaurids", says first author Cristiano Dal Sasso, of the Natural History Museum of Milan, who reassembled and studied the fossil for several years.

"Paleohistological analysis indicates that Saltriovenator was a still growing subadult individual, therefore its estimated size is all the more remarkable, in the context of the Early Jurassic period", says co-author Simone Maganuco.

"The evolutionary 'arms race' between stockier predatory and giant herbivorous dinosaurs, involving progressively larger species, had already begun 200 million of years ago."

The evolution of the hand of birds from their dinosaurian ancestors is still hotly debated. "The grasping hand of Saltriovenator fills a key gap in the theropod evolutionary tree: predatory dinosaurs progressively lost the pinky and ring fingers, and acquired the three-fingered hand which is the precursor of the avian wing", remarks co-author Andrea Cau.

Credit: 
PeerJ

Changing climate, longer growing seasons complicate outlook for coniferous forests

For decades, ecologists have differed over a longstanding mystery: Will a longer, climate-induced growing season ultimately help coniferous forests to grow or hurt them? A new University of Colorado Boulder study may help researchers find a more definitive answer.

As climate warming has lengthened growing seasons, two scenarios seem plausible: If forest growth increases as a result of milder temperatures during more of the year, the additional tree cover could help remove carbon dioxide emissions from the atmosphere at a faster rate. Conversely, if growth decreases as a result of decreased moisture or increased heat-related stress, carbon absorption would decline and climate warming could accelerate even beyond current levels.

Despite a large number of studies on the topic, no standard for measuring the beginning, middle and end of a growing season has emerged, leading to diverging--and at times, wildly opposite--conclusions.

"Nobody can say for certain what a growing season 'is,' due to all the variation in how forest behave and how the start and end of the growing season is characterized," said David Barnard, lead author of the study and a former postdoctoral researcher with the Boulder Creek Critical Zone Observatory at the Institute of Arctic and Alpine Research (INSTAAR). "Even in winter, forests in warmer areas can still be growing. There's less of a distinct on/off switch."

The new CU Boulder study, published today in the journal Scientific Reports, examined data from eleven western sites in the AmeriFlux and Long-Term Ecological Research networks, a set of monitoring stations supported by the Department of Energy and National Science Foundation. These long-term research sites measure, among other things, the exchange of carbon dioxide between forests and the atmosphere.

"I've been thinking about this question since grad school when I was working on Niwot Ridge and couldn't find standard guidelines for how to calculate growing season length," said John Knowles, co-lead author of the study and a former CU Boulder graduate student now a researcher at the University of Arizona.

By applying different methods for characterizing growing season length to past studies, the researchers found that many previous datasets could be made to yield a positive (forest growth) or negative (forest decline) outlook depending on which single methodology was applied--an ambiguity that complicates efforts to quantify climate change effects at scales ranging from individual forests to continents and the globe.

"This work shows how the result of any given study may be subject to methodological bias, especially in colder, more northerly ecosystems where climate is changing the fastest," added Barnard, now a researcher with the U.S. Geological Survey.

The study provides recommendations and best practices for calculating growing season length by using an ensemble approach, combining multiple study methods and taking an average to come up with a more robust conclusion.
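As a concrete illustration of why an ensemble helps, the sketch below applies three threshold-based definitions of growing-season length to one synthetic year of daily carbon uptake and averages the results. The thresholds, the Gaussian-shaped "site data," and the definitions themselves are all assumptions for illustration, not the study's recommended methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily net carbon uptake for one site-year: a mid-summer
# peak around day 190 plus small measurement noise.
doy = np.arange(1, 366)
uptake = np.exp(-0.5 * ((doy - 190) / 45) ** 2) + rng.normal(0, 0.02, doy.size)

def season_length(series, threshold):
    """Days between the first and last day the series exceeds a threshold."""
    active = np.where(series > threshold)[0]
    return int(active[-1] - active[0] + 1) if active.size else 0

# Three of many plausible definitions -- each gives a different answer.
methods = {"10% of peak": 0.10, "25% of peak": 0.25, "50% of peak": 0.50}
estimates = {name: season_length(uptake, f * uptake.max())
             for name, f in methods.items()}
ensemble = np.mean(list(estimates.values()))

print(estimates)
print(f"ensemble estimate: {ensemble:.0f} days")
```

The spread across the three estimates is exactly the methodological ambiguity the study describes; averaging them is one simple way to blunt the bias of any single choice.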

"It may still be years before it's clear whether a longer growing season is good, bad or somewhere in-between for forests," Barnard said.

Nevertheless, Knowles adds that this work "will immediately help to characterize the uncertainty associated with how longer growing seasons are likely to impact forest carbon emissions in the future."

"Every forest behaves differently," Barnard said. "There is still a good bit of uncertainty about what increasing growing seasons will do for forest growth, but we do know that they are crucial to understanding the global carbon cycle."

Credit: 
University of Colorado at Boulder

Global warming didn't pause

The reality of ongoing climate warming might seem plainly obvious today, after the four warmest years on record and a summer of weather extremes across the whole northern hemisphere. A few years back, however, some media and some experts were entangled in debates about an alleged pause in global warming -- even though there never has been statistical evidence of any "hiatus", as new research now confirms. In two recent studies, a group of international scientists joined forces to thoroughly disentangle any possible "hiatus" confusion, affirming that there was no evidence for a significant pause or even slowdown of global warming in the first place.

"Claims of a presumed slowdown or pause in global warming during the first decade of the 21st century and an alleged divergence between projections from climate models and observations have attracted considerable research attention, even though the Earth's climate has long been known to fluctuate on a range of temporal scales," says James S. Risbey from CSIRO in Australia, lead author of one of the new studies. "However, our findings show there is little or no statistical evidence for a pause in global warming. Neither current nor historical data support it."

"The alleged pause in global warming was at no time statistically conspicuous or significant, but fully in line with the usual fluctuations", explains Stefan Rahmstorf from the Potsdam Institute for Climate Impact Research, a co-author to both studies. "The results of our rigorous investigation in both studies are as simple as unambiguous: There was no pause in global warming. And global warming did not fall short of what climate models predicted. Warming continued as predicted, together with the normal short-term variability. There has been no unusual slowing of warming, as our comprehensive data analysis shows."

There was no pause in global warming

Published in Environmental Research Letters, the first new study analyzes variations in global surface temperature in historical context, while the second compares model projections to observations. The authors scrutinized all available global temperature data sets, in all available earlier and current versions and for all alleged time periods of a "hiatus", looking for statistical significance. In no data set and for no time period could a significant pause or slowing of global warming be detected, nor any discrepancy with climate models.

Statements claiming the contrary were based on premature conclusions, partly made without considering statistics at all, and partly because the statistical analyses were faulty.

A common problem for instance was the so-called selection bias. Simple significance tests generally only apply to randomly drawn samples. But when a particular time interval is chosen out of many possibilities specifically because of its small trend, then this is not a random sample. "Very few articles on the 'pause' account for or even mention this effect, yet it has profound implications for the interpretation of the statistical results," explains Stephan Lewandowsky from University of Bristol in the UK, lead author of the second study.
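The selection-bias effect Lewandowsky describes can be demonstrated with a small Monte Carlo sketch (all parameters here are illustrative): even when the underlying warming trend is constant, picking the time window with the smallest trend after the fact systematically produces an apparent "pause" well below the true trend.

```python
import numpy as np

rng = np.random.default_rng(0)

true_trend = 0.02          # deg/yr: constant underlying warming
n_years, window = 50, 15   # record length and "hiatus" window length
n_sim = 500

cherry_picked = []
for _ in range(n_sim):
    # Synthetic temperature record: linear trend plus year-to-year noise.
    temps = true_trend * np.arange(n_years) + rng.normal(0, 0.1, n_years)
    # Fit a trend to every possible 15-year window, then keep the smallest --
    # mimicking how a "pause" interval is selected precisely because it is flat.
    trends = [np.polyfit(np.arange(window), temps[s:s + window], 1)[0]
              for s in range(n_years - window + 1)]
    cherry_picked.append(min(trends))

print(f"true trend:               {true_trend:.3f} deg/yr")
print(f"mean cherry-picked trend: {np.mean(cherry_picked):.3f} deg/yr")
```

Because the window was chosen for its flatness, an ordinary significance test on it is invalid: the sample is anything but random.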

Reduced momentum for action to prevent climate change

One reason for the attention that the alleged "global warming pause" received in the public may have been that interest groups used this idea to argue against the urgency of ambitious climate policies to reduce CO2-emissions from the burning of fossil fuels. This in turn may have contributed to delays in action to halt global warming, the scientists argue.

"A final point to consider is why scientists put such emphasis on the 'pause' when the evidence for it was so scant. An explanation lies in the constant public and political pressure from climate contrarians", adds Naomi Oreskes from Harvard University in the USA and co-author of the second study. "This may have caused scientists to take positions they would not have done without such opposition."

Credit: 
Potsdam Institute for Climate Impact Research (PIK)

Multicultural creatures of habit

image: This is a common noctule (Nyctalus noctula).

Image: 
Uwe Hoffmeister

Every year trillions of animals migrate thousands of kilometres between their summer and winter areas. Among them are several species of bats whose journeys in the dark of the night unfold largely unnoticed by humans and have only partially been investigated by science. A reconstruction of individual migration patterns of the common noctule (Nyctalus noctula) in Central Europe has now revealed that travelling distances vary largely among individuals, yet overall females cover longer distances than males. Local bat populations, which remain separate while females rear their offspring in summer, mix strongly in their hibernacula, the roosts where they hibernate in winter. Additionally, the study showed that individuals rarely change their migration habits - a behaviour that could prove problematic when bats are forced to adjust to rapidly changing ecosystems. The study was published in the Proceedings of the Royal Society B: Biological Sciences.

An international team of scientists led by the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) in Berlin used stable hydrogen isotope ratios in minute fur samples to estimate the region of origin of more than 1,000 individuals from 7 wintering areas across Central Europe. This data informed researchers about the migration routes of noctule bats within, into and out of Central Europe. The data reveal that animals from a single hibernaculum show a great variety of migratory behaviour: the majority of the bats populate the same region in summer and winter, but each hibernaculum also held a significant share of animals that had travelled long distances to reach it.

"We showed that individuals from a hibernacula can show divergent migratory behaviours with some individual staying local and others being long-distance migrants," says Christian Voigt from the Leibniz-IZW. "This pattern is called partial migration and it causes strong genetic mixing of populations when bats mate during migration and hibernation. During rearing of the offspring in the summer months bats remain separate in distinct regions such as Poland, Russia, the Baltic States, Scandinavia or Germany."

Furthermore, the data analyses indicate that female bats in particular cover long distances during migration. More often than males, female common noctules came from northern regions to their hibernacula. Correlating these findings with morphological measurements, the researchers show that females benefit from long journeys. "We derived a coefficient from body mass and forearm length, a body mass index, and were able to show that males are in best shape when they stay in the same region, while among females those that travel long distances are in better shape," explains Linn Lehnert from the Leibniz-IZW, first author of the study. "We assume that a superior food supply for the insectivorous bats in northern regions is the main reason for this pattern. Females have higher energy demands during pregnancy and nursing, which makes it worthwhile for them to shoulder the long journeys." Males, by contrast, migrate predominantly early in life and tend to adopt a regional lifestyle for much of their adult life.
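The condition comparison Lehnert describes can be sketched as a short calculation. The article does not give the exact formula or any measurements, so the mass-over-forearm index and the sample values below are illustrative assumptions, not the study's data or method.

```python
# Hedged sketch: a simple body-condition index in the spirit of the one
# described in the article (body mass scaled by forearm length).
# All numbers below are hypothetical example measurements.

def condition_index(mass_g: float, forearm_mm: float) -> float:
    """Body mass (g) divided by forearm length (mm)."""
    return mass_g / forearm_mm

# Hypothetical common noctule females of similar skeletal size
resident_female = condition_index(28.0, 53.0)
migrant_female = condition_index(31.0, 53.0)

# Under the article's finding, long-distance migrant females should
# score higher than resident females.
assert migrant_female > resident_female
```

Dividing mass by a skeletal measure such as forearm length is one common way to compare body condition across individuals of different size; the study's actual coefficient may differ.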

Last but not least, the scientists analysed isotopic data from 79 individuals that were caught in the same hibernaculum in different years. They found that the vast majority of common noctules are creatures of habit: 86 percent showed exactly the same migratory behaviour as in a previous year. These firm habits could constrain adjustment to changing environmental conditions, a scenario that becomes increasingly likely given global warming, insect decline and other human-induced ecosystem changes. Individuals with greater flexibility may have better chances of coping successfully with these challenges.
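The fidelity figure can be sanity-checked with a quick binomial calculation. The count of 68 repeat-consistent individuals is an assumption inferred from the reported 86 percent of 79 bats, and the confidence interval is our illustration, not a statistic reported by the study.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Assumed count: 68 of 79 recaptured bats kept their migratory habit,
# matching the reported 86 percent (68 / 79 ≈ 0.861).
lo, hi = wilson_interval(68, 79)
```

With a sample of 79, even a point estimate of 86 percent leaves a fairly wide plausible range (roughly the high 70s to low 90s in percent), which is worth keeping in mind when interpreting the headline figure.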

More research is needed to reliably assess the resilience of different bat species in relation to their migratory behaviour, says Lehnert. "What we do know for sure, however, is how important local hibernacula are for the population of the common noctule bat in Central Europe. A combination of locally embedded and internationally coordinated conservation efforts is urgently needed to protect hibernacula."

Credit: 
Forschungsverbund Berlin