Culture

Explosion or collapse?

image: Light from the stellar explosion that created this energized cosmic cloud was first seen on planet Earth in October 1604, a mere 400 years ago. The supernova produced a bright new star in early 17th century skies within the constellation Ophiuchus. It was studied by astronomer Johannes Kepler and his contemporaries. Recent data has shown relative elemental abundances typical of a Type Ia supernova, and further indicated that the progenitor was a white dwarf star that exploded when it accreted too much material from a companion. The explosions discussed in the publication would produce a remnant that looks like Kepler but with the presence of an oxygen-neon-iron white dwarf at the center.

Image: 
X-ray: NASA/CXC/NCSU/M. Burkey et al.; Optical: DSS

A group of scientists, among them several from GSI Helmholtzzentrum für Schwerionenforschung and from Technical University of Darmstadt, has succeeded in experimentally determining characteristics of nuclear processes in matter ten million times denser and 25 times hotter than the centre of our Sun. One result of the measurement is that intermediate-mass stars are very likely to explode, and not, as assumed until now, to collapse. The findings are now published in the scientific journal Physical Review Letters. They underline the fascinating opportunities offered by future accelerator facilities like FAIR for understanding the processes that define the evolution of the Universe.

Stars have different evolutionary paths depending on their mass. Low-mass stars such as the Sun will eventually become white dwarfs. Massive stars, on the other hand, end their lives in a spectacular explosion known as a supernova, leaving behind either a neutron star or a black hole. The fate of both low- and high-mass stars is well understood, but the situation for intermediate-mass stars, which weigh between seven and eleven times as much as the Sun, has remained unclear. This is surprising, since intermediate-mass stars are prevalent in our Galaxy.

"The final fate of intermediate-mass stars depends on a tiny detail, namely, how readily the isotope neon-20 captures electrons in the stellar core. Depending on this electron capture rate, the star will be either disrupted in a thermonuclear explosion or it will collapse to form a neutron star," explains Professor Gabriel Martínez-Pinedo of GSI's research department Theory and the Institut für Kernphysik, TU Darmstadt. Professor Karlheinz Langanke, Research Director of GSI and FAIR, adds: "This work started when we realized that a strongly suppressed, and hence previously ignored and experimentally unknown, transition between the ground states of neon-20 and fluorine-20 was a key piece of information needed to determine the electron capture rate in intermediate mass stars." By a combination of precise measurements of the beta-decay of fluorine-20 and theoretical calculations, an international collaboration of physicists with participation from GSI and TU Darmstadt, has now succeeded in determining this important rate. The experiment took place under conditions far more peaceful than those found in stars, namely at the Accelerator Laboratory of the University of Jyväskylä. The measurements showed a surprisingly strong transition between the ground states of neon-20 and fluorine-20 that leads to electron capture in neon-20 occurring at lower density than previously believed. For the star, this implies that, in contrast to previous assumptions, it is more likely to be disrupted by a thermonuclear explosion than to collapse into a neutron star. "It is amazing to find out that a single transition can have such a strong impact on the evolution of a big object like a star," says Dag Fahlin Strömberg, who, as a PhD student at TU Darmstadt, was responsible for large parts of project's simulations.

Since thermonuclear explosions eject much more material than those triggered by gravitational collapse, the results have implications for galactic chemical evolution. The ejected material is rich in titanium-50, chromium-54, and iron-60. Therefore, the unusual titanium and chromium isotopic ratios found in some meteorites, and the iron-60 discovered in deep-sea sediments, could have been produced by intermediate-mass stars, indicating that such stars have exploded in our galactic neighbourhood in both the distant (billions of years) and not-so-distant (millions of years) past.

In light of these new findings, the most probable fate of intermediate-mass stars seems to be a thermonuclear explosion, producing a subluminous type Ia supernova and a special type of white dwarf star known as an oxygen-neon-iron white dwarf. The detection -- or non-detection -- of such white dwarfs in the future would provide important insights into the explosion mechanism. Another open question is the role played by convection -- the bulk movement of material in the interior of the star -- in the explosion.

At existing and future accelerator centres like the international FAIR project (Facility for Antiproton and Ion Research), currently under construction at GSI, isotopes that have not yet been studied can be produced and their properties investigated. In this way, scientists continue to bring the universe into the laboratory to answer unsolved questions about our cosmos.

Credit: 
Helmholtz Association

Long-term medication for schizophrenia is safe

image: Jari Tiihonen, professor at the Department of Clinical Neuroscience, Karolinska Institutet, Sweden. Photo: Stefan Zimmerman.

Image: 
Stefan Zimmerman

Researchers at Karolinska Institutet in Sweden and their colleagues in Germany, the USA and Finland have studied the safety of very long-term antipsychotic therapy for schizophrenia. According to the study, which is published in the scientific journal World Psychiatry, mortality was higher during periods when patients were not on medication than when they were.

People with schizophrenia have an average life expectancy ten to twenty years below the norm, and there has long been concern that one of the causes is the long-term use of antipsychotic drugs. Earlier compilations (meta-analyses) of results from randomised studies, however, indicated that the mortality rate for people with schizophrenia on antipsychotic medication was 30 to 50 per cent lower than for those who received a placebo.

However, most of the studies done have been shorter than six months, which does not reflect the reality of treatment often being life-long. Researchers from Karolinska Institutet and their international colleagues have now done a long-term follow-up, substantiating previous results and demonstrating that antipsychotic drugs are not associated with increased risk of co-morbid complications, such as cardiovascular disease. The study is the largest conducted in the field to date.

"It's difficult to make comparisons between people on permanent medication and those who aren't, as these groups differ in many ways," says Heidi Taipale, assistant professor at the Department of Clinical Neuroscience at Karolinska Institutet. "One common method of dealing with this has been to try to take account of such differences when making comparisons. However, we chose another method, in which each person was their own control, making it possible for us to make individual comparisons of hospitalisation during periods of antipsychotic medication and periods of no treatment."

The researchers monitored just over 62,000 Finns who had received a schizophrenia diagnosis at some point between 1972 and 2014. They did this by accessing various Finnish registries up until 2015, giving an average follow-up period of over 14 years. They found that the likelihood of being hospitalised for a somatic disease was just as high during periods when the patients were on antipsychotic drugs as when they were not. The differences in mortality, however, were striking: the cumulative mortality rate over the follow-up period was 26 per cent during periods of medication and 46 per cent during periods of non-medication.
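The within-individual design described above can be sketched with a toy calculation: each person's event rate is compared between their own medicated and unmedicated follow-up periods, so stable personal characteristics cancel out. The function and all numbers below are purely illustrative, not the study's actual records.

```python
# Toy sketch of a within-individual (self-controlled) comparison.
# Each tuple is one follow-up period: (on_medication, years, events).
# All numbers are synthetic illustrations, not the study's data.

def person_time_rates(periods):
    """Return events per person-year, separately for periods on and off
    medication, from a single individual's follow-up history."""
    totals = {True: [0.0, 0], False: [0.0, 0]}  # on/off -> [years, events]
    for on_med, years, events in periods:
        totals[on_med][0] += years
        totals[on_med][1] += events
    return {
        "on": totals[True][1] / totals[True][0],
        "off": totals[False][1] / totals[False][0],
    }

# One hypothetical patient followed for 14 years, alternating between
# periods of antipsychotic treatment and periods without treatment.
patient = [
    (True, 5.0, 1),   # 5 years on medication, 1 hospitalisation
    (False, 2.0, 2),  # 2 years off medication, 2 hospitalisations
    (True, 6.0, 1),
    (False, 1.0, 1),
]

rates = person_time_rates(patient)
print(f"events/year on medication:  {rates['on']:.2f}")   # 0.18
print(f"events/year off medication: {rates['off']:.2f}")  # 1.00
```

Because the same person contributes both rates, differences between individuals (illness severity, lifestyle, and so on) cannot explain the contrast, which is the point of the design the researchers describe.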

The researchers believe that there is overwhelming support for continual antipsychotic treatment for schizophrenia being a safer option than no medication. At the same time, treatment brings the risk of adverse reactions, such as an increase in weight, which can raise the risk of cardiovascular disease. The finding that treatment with antipsychotic drugs does not increase the likelihood of hospitalisation for cardiovascular disease may be attributable, argue the researchers, to the fact that the drugs can also have an antihypertensive effect and can reduce anxiety and the risk of substance abuse. Antipsychotic treatment may also help patients adopt a healthier lifestyle and make them more likely to seek care when needed.

"Antipsychotics get something of a bad press, which can make it difficult to reach out to the patient group with information on how important they are," says Jari Tiihonen, professor of psychiatry at the Department of Clinical Neuroscience, Karolinska Institutet. "We know from previous studies that only half of those who have been discharged from hospital after their first psychotic episode with a schizophrenia diagnosis take antipsychotic drugs. Besides, there are many people with schizophrenia who are on long-term benzodiazepine medication, which is in breach of existing guidelines and is associated with increased mortality risk. Building trust and understanding towards the efficacy and safety of antipsychotic drugs is important, and we hope that this study can contribute to this end."

Credit: 
Karolinska Institutet

New function for potential tumor suppressor in brain development

image: With the MADM technique, researchers can remove a gene from single cells and visualize what happens to these cells.

Image: 
© IST Austria - Hippenmeyer group

The gene Cdkn1c could have been considered an open-and-shut case: Mice in which the gene is removed are larger and have bigger brains, so Cdkn1c should function to inhibit growth. This rationale has led to Cdkn1c being studied as a tumour suppressor gene. New research from the group of Simon Hippenmeyer, professor at the Institute of Science and Technology Austria (IST Austria), has now uncovered a novel, opposite role for Cdkn1c. When Cdkn1c is removed only in certain cells of the brain, these cells die, arguing for a new growth-promoting role of Cdkn1c. The new research is published today in the journal Nature Communications.

Simon Hippenmeyer and his research group, including co-first authors Susanne Laukoter (PhD student), Robert Beattie (postdoc) and Florian Pauler (senior technical assistant), removed Cdkn1c in a brain region called the cerebral cortex in mice and found a surprising result: Contrary to what had previously been thought, the cortex was smaller, not bigger, than in animals with a normal amount of Cdkn1c. To make sense of this seeming paradox, the researchers compared the effect of Cdkn1c loss in the whole animal with a loss of the gene in just a single tissue or even in single cells in the developing mouse.

Studying brain development and gene function at single cell level with MADM

Using a genetic technique called Mosaic Analysis with Double Markers (MADM), the researchers could knock out a gene of interest in single cells and, at the same time, visualize the effect of the gene deletion on these cells under the microscope. When they removed the gene Cdkn1c from cells throughout the cortex, the cortex was smaller. "When we take out the gene, cells die. In fact, we see massive death by apoptosis", Hippenmeyer explains.

In a cortex where Cdkn1c was removed, the researchers further modified single cells with MADM to observe their fate. They found that if a cell has two intact copies of Cdkn1c, the cell is protected against death. If a cell has just one intact copy of Cdkn1c, the cell dies. Intriguingly, it does not matter whether the DNA, the "instruction manual" in our cells that defines how products like proteins are made, is active and thus allows generation of proteins, or not. Just having two copies of the intact DNA, the intact instruction manual, is enough to protect a cell from death.

Implications for studies on brain malformations and tumour development

For Hippenmeyer, this study underlines the importance of studying both systemic effects of gene loss (i.e. gene loss in the whole animal) and the effect of gene loss in individual cells. "Our method reveals a new function of Cdkn1c, as taking the gene out in a single cell has a fundamentally different effect from taking it out in the whole animal. Systemic effects may mask the effect observed in individual cells. It is important to also study this in human conditions that lead to malformations of the brain, such as microcephaly."

As Cdkn1c and its role in the development of tumours has been studied extensively, the new research likely also has important implications for this field, says Florian Pauler. "There has been interest in Cdkn1c as it has been regarded as a tumour suppressor. Like the single cells and individual tissue we studied, tumours can also be seen as non-systemic. So, our findings change the way we should think about Cdkn1c, also in tumours."

In the future, Hippenmeyer and his research group will continue to explore the mechanisms and functions of Cdkn1c. "When this piece of DNA is missing, something fundamental is changed and death is triggered in a cell. Of course, we want to now know why and how this happens", Hippenmeyer asserts.

Credit: 
Institute of Science and Technology Austria

Water governance: Could less sometimes be more?

image: A diagram showing the contribution of each new rule to the overall capacity for governance coordination over time, i.e. the "improvement" in governance provided by each new rule. The different phases are visible, with an increasingly strong improvement until a turning point, after which the improvement weakens. An example reading for the Swiss case (brown curve): from 1850 onwards, each new rule increasingly improves the capacity to coordinate. This capacity stagnates at its peak during the first part of the 20th century, then gradually declines. By 2006, the improvement in coordination provided by each new rule had returned to a level comparable to that reached in the second half of the 19th century.

Image: 
© UNIGE

The use of environmental resources has been regulated for centuries with the aim of improving the management and behaviour of private and public actors on an ongoing basis. But does the never-ending introduction of new regulations really have a positive effect? Or does a surfeit of rules cause malfunctions and lead to problematic overlaps? In an attempt to answer these questions, researchers from the Universities of Geneva (UNIGE) and Lausanne (UNIL), Switzerland, analysed water governance regulations in six European countries from 1750 to 2006. Their results, published in the journal Ecological Economics, show that rules designed to improve resource management eventually come into conflict in the long run, creating an equal number of positive and negative effects until the system falls apart. At this point, the only way out is for the state to overhaul governance.

Societies have been making rules to control behaviours and the uses of natural resources such as water for centuries. At the same time, however, the competing interests of state and private actors continue to produce environmental problems. In overall terms, the scientific literature is in agreement that developments in the way these regulations are structured are, nevertheless, increasingly positive and effective. But to what extent is this really the case in the long run?

"To assess whether a regulation is positive in the long run, you need to factor in the ecosystem of rules that it is part of, and which it may either reinforce or disrupt", begins Thomas Bolognesi, a researcher at the Institute for Environmental Sciences (ISE) at UNIGE. In fact, a rule that induces a positive impact on the use that it regulates may cause turmoil once it begins to interact with existing regulations, causing the entire system to malfunction, conceived here as transversal transaction costs (TTCs). "And over the very long term", adds the Geneva-based scientist, "the negative effect of TTCs can grow and end up being equivalent to the positive effect generated by the new regulation, creating what we called an institutional complexity trap." The quality of governance is based, therefore, on two key components: the scope, i.e. the set of uses governed by the rules (quantity); and the consistency, i.e. the fact that the rules are defined and followed correctly (quality).

Successive improvements to the system lead to breaking point

To test their hypothesis, Bolognesi and Stéphane Nahrath, a professor at UNIL's Swiss Graduate School of Public Administration (IDHEAP), scrutinised the water governance systems in six European countries (Switzerland, Belgium, Spain, France, Italy and the Netherlands) from 1750 to 2006. "The aim of the study was to determine whether the increase in the scope of the governance reduced the system-wide coherence, and even went as far as overriding the positive effects intended by the additional regulations", says professor Nahrath. The researchers identified three distinct phases in the evolution of the governance in the six countries.

The first phase, which lasted from 1750 to 1850 and was followed by around 50 years of stagnation, covered the launch of the governance process, i.e. the production of framework rules that had relatively little impact. From 1900 to 1980, governance developed and the rules, which grew in precision, generated significant positive effects. But since 1980, governance has been in a phase where the negative indirect effect, linked to a drop in the system's coherence, has strengthened and offsets the previous positive effect, even to the point of supplanting it. "This is due to the creation of a profusion of new rules, especially following the introduction of the New Public Management approach in the 1980s", notes Bolognesi. This proliferation of regulations, which were sometimes designed to regulate the same area but along different lines, had an indirect negative impact on governance and resulted in a decrease in efficiency and clarity, leading to a systemic malfunction. "Consequently, achieving a positive effect, however slim, requires producing more and more rules, increasing the risk of malfunction and leading to a vicious circle", continues Nahrath.
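The dynamic described above can be illustrated with a deliberately simple toy model: each new rule brings a fixed direct benefit, but it must also coexist with every rule already in place, and those interactions carry a coordination cost standing in for the TTCs. The functional form and parameters below are illustrative assumptions, not the authors' actual model.

```python
# A deliberately minimal model of the "institutional complexity trap":
# each new rule adds a fixed direct benefit but interacts with all
# earlier rules, and each interaction carries a small coordination cost
# (a stand-in for transversal transaction costs). The linear cost and
# the parameter values are illustrative assumptions only.

def marginal_improvement(n_rules, benefit=1.0, friction=0.02):
    """Net contribution of rule number `n_rules`: its direct benefit
    minus interaction costs with the n_rules - 1 existing rules."""
    return round(benefit - friction * (n_rules - 1), 2)

# Marginal gains shrink as rules accumulate...
print([marginal_improvement(n) for n in (1, 25, 50, 75)])
# [1.0, 0.52, 0.02, -0.48]

# ...and eventually turn negative: past this point, each additional
# rule degrades the system instead of improving it.
first_negative = next(n for n in range(1, 200) if marginal_improvement(n) < 0)
print(first_negative)  # 52
```

The shape matches the three phases the researchers describe: strong early gains, a plateau, then a regime where adding rules makes coordination worse.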

System reformed by the state

Contrary to the widespread idea that water governance is constantly improving, the study by the researchers from UNIGE and UNIL demonstrates the conflicts instigated by repeatedly introducing new rules designed to increase the system's efficiency. "If we carry on in the same way, we're going to hit breaking point", warns Bolognesi. "That's why we think it's important that the state and government policy should take charge of environmental governance issues. That way, we can avoid introducing separate rules that generate frictions and uncertainties, and that could create insurmountable obstacles for coordinating the system." As professor Nahrath concludes: "The contractual rules must in no instance take precedence over state rules."

Credit: 
Université de Genève

Plant physiology: One size may not suit all

A new study published by biologists at Ludwig-Maximilians-Universitaet (LMU) in Munich demonstrates that there are no simple or universal solutions to the problem of engineering plants to enable them to cope with the challenges posed by climate change.

For plants, climate change promises one thing for sure - increased levels of stress. After all, plants put down roots. They don't have the option of moving to where the weather suits them. Wider fluctuations in temperatures and increasing levels of aridity in many regions around the world are already making their lives more difficult. Plants are highly complex and sensitive systems. Even in zones with stable climates today, variations in light levels can reduce growth rates and crop yields. For example, plants have developed sophisticated cellular mechanisms that protect them against the deleterious effects of high light intensities on photosynthesis. In one such photoprotective process, the excess light energy is dissipated as heat before it can damage the photosynthetic apparatus. This depresses yields but it is very much in the plant's interest.

Three enzymes, referred to as V, P and Z for short, play a key role in this adaptation process. In a paper published in 2016, which drew a great deal of attention, an American research group overexpressed the genes for these three proteins in tobacco plants, thus increasing the amounts of the enzymes produced in the leaves. They subsequently observed, under field conditions, that these 'VPZ' lines grew at faster rates than control plants with normal levels of the enzymes. LMU biologists Antoni Garcia-Molina and Dario Leister have now performed essentially the same experiment in the model plant Arabidopsis thaliana (thale cress). Their findings appear in the journal Nature Plants.

Their results confirm that, as in the case of tobacco, higher levels of V, P and Z reduce rates of photosynthesis while enabling the plants to adapt more rapidly (in fact, even faster than tobacco) to fluctuating light levels. Crucially however, the Arabidopsis VPZ lines did not grow faster than control plants. On the contrary, overexpression of the three enzymes resulted in retarded growth. "This clearly shows that it's not quite as easy to produce plants that are better adapted as some research groups have confidently suggested," Leister remarks. "In fact, higher levels of photoprotection may actually interfere with the operation of other mechanisms that are important for plant growth."

For Leister, these data essentially demonstrate that targeted adaptation of plants to facilitate successful adjustment to changing climatic conditions is likely to be a very complicated task. They certainly show that one cannot always expect to confer increased resistance to desiccation or optimize yields under fluctuating light levels simply by adjusting the levels of a few proteins. "The physiological processes in plants are tightly interconnected. This makes it impossible to predict the effects of flipping this switch or tightening that screw," he says. This explains why he and his colleagues approach the problem of targeted adaptation from the perspective of systems biology, which takes a 'holistic' view, as he calls it. For example, efforts to increase the yield or biomass by increasing the efficiency of photosynthesis must also ensure that the extra energy available is in fact channeled into increased growth. In principle, enhanced photosynthetic performance should result in the capture of more energy and in higher levels of metabolites. But this extra energy and abundance of chemical compounds must be put to some beneficial use. In the absence of any 'added value', increased rates of photosynthesis can prove to be detrimental to plants.

The analysis of complex relationships like this is the raison d'être of the Transregional Collaborative Research Center TR175, of which Leister is the principal coordinator. The scientists involved in the project seek to understand how plants react to biotic and abiotic environmental factors, such as drought, light levels and temperature, by analyzing their impact on the concentrations of all measurable metabolites, transcripts and proteins in plant cells. With the help of these data, they hope to identify the key components that allow plants to cope with varying conditions. In the case of crop plants that are indispensable for human nutrition, the mechanisms that underlie trade-offs between growth rates, increases in biomass and yields must also be taken into consideration. "In the context of climate change, the idea is to help plants to adapt to the changing conditions by introducing targeted genetic changes that allow them to handle the altered environmental parameters," Leister explains. Researchers refer to this strategy as 'assisted evolution'. "In order to have a realistic chance of finding sustainable solutions, we must adopt a systematic approach to the active adaptation of plants to the changing environmental conditions," he says. In this respect, some progress has already been made in certain species of algae that have very short generation times, which permits instances of successful adaptation to be rapidly detected. Such systems can then serve as sources of potentially useful genetic mutations that can be introduced into green plants.

Credit: 
Ludwig-Maximilians-Universität München

It's not about East and West, it's about top and bottom

Overall, 93 per cent of the German populace feels valued in their everyday lives, whereas far fewer, but still one out of two (52 per cent), feel disrespected. Most Germans experience high levels of appreciation overall, especially in private contexts such as among family or friends. Disrespect, however, is most commonly experienced in the workplace. East and West Germans feel equally appreciated in their everyday lives, yet also equally disrespected. Rather, how much appreciation and disrespect a person experiences strongly depends on income, education, and employment status.

These are the results of a study conducted by a team of sociologists at the Otto von Guericke University Magdeburg (OvGU) in Germany. The study is based on the question module "Status Confidence and Anxiety" which was developed for the Innovation Sample of the German Socio-Economic Panel, a long-standing representative survey in Germany. Building on these data, sociologists Prof. Jan Delhey, Dr. Christian Schneickert, and Leonie Steckermeier (MA) investigate who experiences social appreciation and disrespect in Germany, in which contexts and why. According to Prof. Delhey, the study reveals quite surprising results.

Regarding both experiences of appreciation and disrespect in daily life, an individual's position on the socio-economic ladder makes a huge difference: the higher a person's income and level of education, the more they feel recognized and the less disrespected. "An individual's employment status is also crucial," Prof. Jan Delhey continues. "Being unemployed goes hand in hand with a significantly higher risk of experiencing less social recognition and more disrespect." In contrast, a person's regional or ethnic background proves negligible: the study finds no significant differences between East and West Germans, nor between migrants and non-migrants.

"Anyone interested in the unequal distribution of social recognition and disrespect in everyday-life in Germany," Dr. Christian Schneickert concludes, "should devote attention to socio-economic inequalityies rather than to socio-cultural diversity."

In addition, the researchers investigated the consequences of these everyday-life experiences for individuals' evaluations of their own lives and their satisfaction with democracy: The more people feel socially appreciated, the more satisfied they are with their lives as well as with democracy.

The study was carried out as part of the project "Recognition, Depreciation and Status Seeking" at the Chair for Macrosociology at the OVGU and was funded by the German Research Foundation (DFG).

The empirical analysis was based on survey data from a representative sample of the German population covering 3,580 respondents. The data were collected in 2016 as part of the Innovation Sample of the German Socio-Economic Panel. The data provide detailed insights into the distribution and significance of feelings of appreciation and disrespect.

The study has been published in the prestigious German social science journal Kölner Zeitschrift für Soziologie und Sozialpsychologie (KZfSS), and can be downloaded for free.

Credit: 
Otto-von-Guericke-Universität Magdeburg

MU scientists find oldest-known fossilized digestive tract -- 550 million years

image: A fossilized cloudinomorph from the Montgomery Mountains near Pahrump, Nev. This is representative of the fossil that was analyzed in the study.

Image: 
University of Missouri

A 550 million-year-old fossilized digestive tract found in the Nevada desert could be a key find in understanding the early history of animals on Earth.

Over half a billion years ago, life on Earth consisted of simple ocean organisms unlike anything living in today's oceans. Then, beginning about 540 million years ago, animal structures changed dramatically.

During this time, the ancestors of many animal groups we know today appeared, such as primitive crustaceans and worms, yet for years scientists did not know how these two seemingly unrelated communities of animals were connected. Now, an analysis of tubular fossils by scientists led by Jim Schiffbauer at the University of Missouri provides evidence of a 550 million-year-old digestive tract -- one of the oldest known examples of fossilized internal anatomical structures -- and reveals what scientists believe is a possible answer to the question of how these animals are connected.

The study was published in Nature Communications, a journal of Nature.

"Not only are these structures the oldest guts yet discovered, but they also help to resolve the long-debated evolutionary positioning of this important fossil group," said Schiffbauer, an associate professor of geological sciences in the MU College of Arts and Science and director of the X-ray Microanalysis Core facility. "These fossils fit within a very recognizable group of organisms -- the cloudinids -- that scientists use to identify the last 10 to 15 million years of the Ediacaran Period, or the period of time just before the Cambrian Explosion. We can now say that their anatomical structure appears much more worm-like than coral-like."

The Cambrian Explosion is widely considered by scientists to be the point in history of life on Earth when the ancestors of many animal groups we know today emerged.

In the study, the scientists used MU's X-ray Microanalysis Core facility to take a unique analytical approach for geological science -- micro-CT imaging -- that created a digital 3D image of the fossil. This technique allowed the scientists to view what was inside the fossil structure.

"With CT imaging, we can quickly assess key internal features and then analyze the entire fossil without potentially damaging it," said co-author Tara Selly, a research assistant professor in the Department of Geological Sciences and assistant director of the X-ray Microanalysis Core facility.

Credit: 
University of Missouri-Columbia

Mayo Clinic discovers a molecular switch for repairing central nervous system disorders

ROCHESTER, Minn. -- A molecular switch has the ability to turn on a substance in animals that repairs neurological damage in disorders such as multiple sclerosis (MS), Mayo Clinic researchers discovered. The early research in animal models could advance an already approved Food and Drug Administration therapy and also could lead to new strategies for treating diseases of the central nervous system.

Research by Isobel Scarisbrick, Ph.D., published in the Journal of Neuroscience finds that by genetically switching off a receptor activated by blood proteins, named Protease Activated Receptor 1 (PAR1), the body switches on regeneration of myelin, a fatty substance that coats and protects nerves.

"Myelin regeneration holds tremendous potential to improve function. We showed when we block the PAR1 receptor, neurological healing is much better and happens more quickly. In many cases, the nervous system does have a good capacity for innate repair," says Dr. Scarisbrick, principal investigator and senior author. "This sets the stage for development of new clinically relevant myelin regeneration strategies."

Myelin, Thrombin and the Nervous System

Myelin acts like a wire insulator that protects electrical signals sent through the nervous system. Demyelination, or injury to the myelin, slows electrical signals between brain cells, resulting in loss of sensory and motor function. Sometimes the damage is permanent. Demyelination is found in disorders such as MS, Alzheimer's disease, Huntington's disease, schizophrenia and spinal cord injury.

Thrombin is a protein in blood that aids in healing. However, too much thrombin triggers the PAR1 receptor found on the surface of cells, and this blocks myelin production. Oligodendrocyte progenitor cells capable of myelin regeneration are often found at sites of myelin injury, including demyelinating injuries in multiple sclerosis.

"These oligodendroglia fail to differentiate into mature myelin regenerating cells for reasons that remain poorly understood," says Dr. Scarisbrick. "Our research identifies PAR1 as a molecular switch of myelin regeneration. In this study, we demonstrate that blocking the function of the PAR1, also referred to as the thrombin receptor, promotes myelin regeneration in two unique experimental models of demyelinating disease."

The Research

The research focused on two mouse models. One was an acute model of myelin injury and the other studied chronic demyelination, each modeling unique features of myelin loss present in MS, Alzheimer's disease and other neurological disorders. Researchers genetically blocked PAR1 to block the action of excess thrombin.

The research not only discovered a new molecular switch that turns on myelin regeneration, but also uncovered a new interaction between the PAR1 receptor and a very powerful growth system called brain-derived neurotrophic factor (BDNF). BDNF is like a fertilizer for brain cells that keeps them healthy, functioning and growing.

Significantly, the researchers found that a current Food and Drug Administration-approved drug that inhibits the PAR1 receptor also showed ability to improve myelin production in cells tested in the laboratory.

"It is important to say that we have not and are not advocating that patients take this inhibitor at this time," says Dr. Scarisbrick. "We have not used the drug in animals yet, and it is not ready to put in patients for the purpose of myelin repair. Using cell culture systems, we are showing that this has the potential to improve myelin regeneration."

Additional research is needed to verify and advance the findings toward clinical practice.

Credit: 
Mayo Clinic

Researchers develop new protocol to generate intestinal organoids in vitro

(Boston) - Boston researchers have developed a new way to generate groups of intestinal cells that can be used, among other purposes, to make disease models in the lab to test treatments for diseases affecting the gastrointestinal system. Using human induced pluripotent stem cells, this novel approach combined a variety of techniques that enabled the development of three-dimensional groups of intestinal cells called organoids in vitro, which can expand disease treatment testing in the lab using human cells.

Published online in Nature Communications, this process provides a novel platform to improve drug screenings and uncover novel therapies to treat a variety of diseases impacting the intestine, such as inflammatory bowel disease, colon cancer and Cystic Fibrosis.

Researchers at the Center for Regenerative Medicine (CReM) of Boston University and Boston Medical Center used donated human induced pluripotent stem cells (hiPSCs), which are created by reprogramming adult cells into a primitive state. For this study, these cells were pushed to differentiate into intestinal cells using specific growth factors in order to create organoids in a gel. This new protocol allowed the cells to develop without mesenchyme, which, in other protocols, typically provides support for the intestinal epithelial cells to grow. By taking out the mesenchyme, the researchers could study exclusively epithelial cells, which make up the intestinal tract.

In addition, using CRISPR technology, the researchers were able to modify and create a novel iPSC stem cell line that glowed green when differentiated into intestinal cells. This allowed the researchers to follow the process of how intestinal cells differentiate in vitro.

"Generating organoids in our lab allows us to create more accurate disease models, which are used to test treatments and therapies targeted to a specific genetic defect or tissue - and it's all possible without harming the patient," said Gustavo Mostoslavsky, MD, PhD, co-director of CReM and faculty in the gastroenterology section at Boston Medical Center. "This approach allows us to determine what treatments could be most effective, and which are ineffective, against a disease."

Using this new protocol, the researchers generated intestinal organoids from iPSCs containing a mutation that causes Cystic Fibrosis, which typically affects several organs, including the gastrointestinal tract. Using CRISPR technology, the researchers corrected the mutation in the intestinal organoids. The intestinal organoids with the mutation did not respond to a drug while the genetically corrected cells did respond, demonstrating their future potential for disease modeling and therapeutic screening applications.

The protocol developed in this study provides strong evidence to continue using human iPSCs to study development at the cellular level, tissue engineering and disease modeling in order to advance the understanding - and possibilities - of regenerative medicine.

"I hope that this study helps move forward our collective understanding about how diseases impact the gastrointestinal tract at the cellular level," said Mostoslavsky, who also is associate professor of medicine and microbiology at Boston University School of Medicine. "The continual development of novel techniques in creating highly differentiated cells that can be used to develop disease models in a lab setting will pave the way for the development of more targeted approaches to treat many different diseases."

Credit: 
Boston Medical Center

Deep learning differentiates small renal masses on multiphase CT

image: Except for AUC, all values are percentages. Ranges in parentheses are 95% CIs.

Image: 
<em>American Journal of Roentgenology</em> (AJR)

Leesburg, VA, January 10, 2020--A deep learning method with a convolutional neural network (CNN) can support the evaluation of small solid renal masses in dynamic CT images with acceptable diagnostic performance, according to an article published ahead-of-print in the March issue of the American Journal of Roentgenology (AJR).

Between 2012 and 2016, researchers at Japan's Okayama University studied 1807 image sets from 168 pathologically diagnosed small (≤ 4 cm) solid renal masses with four CT phases--unenhanced, corticomedullary, nephrogenic, and excretory--in 159 patients.

Masses were classified as malignant (n = 136) or benign (n = 32) using a 5-point scale, and this dataset was then randomly divided into five subsets.

As lead AJR author Takashi Tanaka explained, "four were used for augmentation and supervised training (48,832 images), and one was used for testing (281 images)."

Using the Inception-v3 CNN architecture, the researchers evaluated the AUC for malignancy and the accuracy at optimal cutoff values of the output data across six different CNN models.

Finding no significant size difference between malignant and benign lesions, Tanaka's team did find that the AUC value of the corticomedullary phase was higher than that of other phases (corticomedullary vs excretory, p = 0.022).

Additionally, the highest accuracy (88%) was achieved in the corticomedullary phase images.

Multivariate analysis revealed that the CNN model of corticomedullary phase was a significant predictor for malignancy, "compared with other CNN models, age, sex, and lesion size," Tanaka concluded.
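The kind of evaluation reported above, computing an AUC from a model's output scores on a held-out test set and then reporting accuracy at an optimal cutoff, can be sketched in plain Python. This is an illustrative toy, not the study's code; the labels and scores below are invented.

```python
# Hypothetical sketch of AUC and optimal-cutoff accuracy evaluation.
# All data here are made up for illustration.

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) formulation:
    the fraction of (positive, negative) pairs the model ranks correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_cutoff_accuracy(labels, scores):
    """Accuracy at the threshold that maximizes it,
    sweeping every observed score as a candidate cutoff."""
    best = 0.0
    for t in sorted(set(scores)):
        preds = [1 if s >= t else 0 for s in scores]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        best = max(best, acc)
    return best

# Toy test set: 1 = malignant, 0 = benign, with model output scores.
labels = [1, 1, 1, 0, 0, 1, 0, 1]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.85, 0.2, 0.4]
print(round(roc_auc(labels, scores), 3))                # → 0.933
print(round(best_cutoff_accuracy(labels, scores), 3))   # → 0.875
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used, but the pair-counting definition above is what such routines compute.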

Credit: 
American Roentgen Ray Society

Technique allows dolphin pregnancy exams to mirror those in humans

image: Ultrasound images of dolphin fetuses showing: (A) eye, (B) stomach, liver, lungs, (C) male genitalia (penis), and (d) female genitalia (vagina).

Image: 
NMMF

Ultrasound has been used for decades to study dolphin health and much of that work has been pioneered by veterinarians at the National Marine Mammal Foundation (NMMF). But in a groundbreaking study just published in Veterinary Radiology & Ultrasound, scientists have developed a new ultrasound technique for evaluating dolphin fetuses at all stages of gestation. This recent advancement in dolphin medicine allows veterinarians to evaluate dolphin pregnancies the same way doctors approach pregnancies in humans.

"We can now re-create the human 20-week fetal ultrasound exam in dolphins, which means we can better understand the health challenges dolphin mothers and their babies are facing," said NMMF Executive Director Dr. Cynthia Smith. "This is a game-changer for the conservation of bottlenose dolphins and other small cetaceans around the world."

To develop the technique, veterinarians followed 16 healthy pregnancies of dolphins in human care and determined normal findings for both the fetus and placenta. The Chicago Zoological Society's radiologist, Dr. Marina Ivančić, collaborated on the project.

"This new technique can be performed in a matter of minutes and provides a wealth of information about the health of the dolphin fetus," said Dr. Ivančić. "We are thrilled to make this technique widely available to veterinarians and radiologists, which has the potential to elevate dolphin medicine globally."

The work was part of an investigation into the long-term health effects of the Deepwater Horizon oil spill, which caused a major reduction in pregnancy success in bottlenose dolphins living within the oil spill footprint. The study was made possible by a research grant from the Gulf of Mexico Research Initiative and is already leading to major breakthroughs in the scientists' ability to understand the reproductive failures.

Future applications of the technique and normal data will help identify fetal abnormalities and maternal illness, helping to determine the cause of reproductive failure in wild and managed dolphin populations.

"This advanced ultrasound technique is allowing us to diagnose problems as early as the first trimester of pregnancy in dolphins," said Dr. Forrest Gomez. "That gives us a chance to determine if there is something that could be done to save the pregnancy, which could prove critical for populations of dolphins and porpoises that are at risk."

The NMMF has also released an important study reporting the normal blood values from dolphins that reproduce successfully. This allows for the early detection of pregnancy problems, which can also help determine the cause of reproductive failures in populations that are struggling to thrive.

Credit: 
National Marine Mammal Foundation

Scientists examine how a gut infection may produce chronic symptoms

image: Along the edge of the small intestine, neurons (green) appear in close proximity to the inflammatory molecule Nlrp6 (pink).

Image: 
Laboratory of Mucosal Immunology

Sometimes the end of an intestinal infection is just the beginning of more misery. Of those who contract traveler's diarrhea, for example, an unlucky few go on to develop irritable bowel syndrome (IBS), a chronic inflammation of the intestinal tract.

Scientists aren't sure exactly how this happens, but some think an infection may contribute to IBS by damaging the gut nervous system. A new Rockefeller study takes a close look at why neurons in the gut die and how the immune system normally protects them.

Conducted with mice, the experiments described recently in Cell offer insight on IBS and could point toward potential new treatment approaches.

Keeping inflammation in check

In a healthy gut, the immune system must strike a careful balance between responding to threats and keeping that response in check to avoid damage.

"Inflammation helps the gut ward off an infection, but too much of it can cause lasting harm," says Daniel Mucida, an associate professor and head of the Laboratory of Mucosal Immunology. "Our work explores the complex mechanisms that prevent inflammatory responses from destroying neurons."

To understand the effects of an infection on the nervous system, Mucida and his colleagues gave mice a weakened form of Salmonella, a bacterium that causes food poisoning, and analyzed neurons within the intestine. They found that infection induced a long-lasting reduction of neurons, an effect they attributed to the fact that these cells express two genes, Nlrp6 and Caspase 11, which can contribute to a specific type of inflammatory response.

This response, in turn, can ultimately prompt the cells to undergo a form of programmed cell death. When the researchers manipulated mice to eliminate these genes specifically in neurons, they saw a decrease in the number of neurons expiring.

"This mechanism of cell death has been documented in other types of cells, but never before in neurons," says Fanny Matheis, a graduate student in the lab. "We believe these gut neurons may be the only ones to die this way."

Macrophages to the rescue

It's not yet clear exactly how inflammation causes neurons to commit cell suicide, yet the scientists already have clues suggesting it might be possible to interfere with the process. The key may be a specialized set of gut immune cells, known as muscularis macrophages.

Previous work in Mucida's lab has shown that these cells express inflammation-fighting genes and collaborate with the neurons to keep food moving through the digestive tract. If these neurons die off, as happens in an infection, a possible result is constipation--one of a number of unpleasant IBS symptoms. In their recent report, the team demonstrates how macrophages come to the neurons' aid during an infection, ameliorating this aspect of the disorder.

Their experiments revealed that macrophages possess a certain type of receptor molecule that receives stress signals released by another set of neurons in response to an infection. Once activated, this receptor prompts the macrophage to produce molecules called polyamines, which the scientists think might interfere with the cell death process.

Getting back to normal

In other experiments, the researchers found that Salmonella infection alters the community of microbes within the guts of mice--and when they restored the animals' intestinal flora back to normal, the neurons recovered.

"Using what we learned about the macrophages, one could think about ways to disrupt the inflammatory process that kills the neurons," says Paul Muller, a postdoctoral fellow in the lab.

For instance, it might be possible to develop better treatments for IBS that work by boosting polyamine production, perhaps through diet, or by restoring gut microbial communities. Since short-term stress responses also appear to have a protective effect, Muller thinks it may also be helpful to target that system.

Credit: 
Rockefeller University

SuperTIGER on its second prowl -- 130,000 feet above Antarctica

image: The Super Trans-Iron Galactic Element Recorder (SuperTIGER) instrument is used to study the origin of cosmic rays. SuperTIGER is a collaboration among Washington University in St. Louis, Goddard Space Flight Center, California Institute of Technology Jet Propulsion Laboratory and the University of Minnesota. The SuperTIGER instrument is carried aloft above Antarctica by a giant 39.5 million-cubic-foot scientific balloon. The balloon flies at a height of about 129,000 feet -- nearly four times the typical cruising altitude of commercial airliners. Here, the instrument is waiting on the launch pad as the balloon inflates.

Image: 
Wolfgang Zober, Washington University in St. Louis.

A balloon-borne scientific instrument designed to study the origin of cosmic rays is taking its second turn high above the continent of Antarctica three and a half weeks after its launch.

SuperTIGER (Super Trans-Iron Galactic Element Recorder) is designed to measure the rare, heavy elements in cosmic rays that hold clues about their origins outside of the solar system. The effort is a collaboration among Washington University in St. Louis, Goddard Space Flight Center, California Institute of Technology Jet Propulsion Laboratory and the University of Minnesota.

The longer the balloon and instrument are up, the better.

"The significance of our observation increases with the number of events we observe essentially linearly with time, so we simply want to have as long a flight as possible to maximize the statistics of the data collected," said Brian Rauch, research assistant professor of physics in Arts & Sciences at Washington University and principal investigator for SuperTIGER. "A day of data is a small increment of progress, and we just have to put our heads down and keep grinding away.

"SuperTIGER flights are marathons, not sprints."

'Fly as long as we can'

On Dec. 31, the balloon completed its first full revolution of Antarctica.

Little more than two weeks prior, Rauch and his team were celebrating a successful launch after a succession of challenging seasons on the ice.

"After three Antarctic seasons -- with 19 launch attempts, two launches and one recovery of the payload from a crevasse field -- it is wonderful to have SuperTIGER-2 finally reach float altitude and begin collecting scientific data. The third season is the charm!" Rauch said in a Dec. 15 news release.

NASA's Balloon Program Office called it a "picture-perfect launch," although the scientists suffered some technical setbacks once the instrument was in the air. There were problems with a power supply, Rauch said, and a computer failure eliminated one of the detector modules early in the flight.

"This only further emphasizes the importance for us to fly as long as we can to make up for the loss in instrument collecting power," he said. "As it is, in this flight we may hope to collect about 40% of the statistics achieved with the first SuperTIGER flight."

The 2012-13 SuperTIGER flight broke scientific ballooning records for longevity -- staying afloat for a remarkable 55 days. The current mission will not challenge that record.

"The way the stratospheric winds are circulating this season, our flight will be terminated when the balloon comes over a suitable location at the end of our second revolution around the continent," Rauch said.

Along for the ride

The balloon that carries SuperTIGER is also transporting four smaller experimental devices that are "piggybacked" onto its core scientific payload. The list includes two experiments by Washington University in St. Louis researchers:

A gamma-ray explorer developed by James H. Buckley, professor of physics in Arts & Sciences. Called APT-Lite, this instrument will be followed by a bigger device called APT, currently under development.
Another instrument, developed by Alex Meshik, research professor of physics, captured stratospheric air during the balloon's ascent. This air will be used for high-precision analyses of noble gas isotopes.

Sun never sets on SuperTIGER

"There are two SuperTIGERs (team members) still on the ice, both from Washington University," Rauch said.

"In order to provide continuous coverage for the duration of the flight, the day is divided into monitoring shifts," he said. "Those of us on the ice get to cover the graveyard shift for the folks in the United States.

"My routine has evolved into my getting up in the mid-afternoon, eating dinner, doing the monitoring shift in our office in Crary Lab, working in the office for another few hours or so, and then going to bed. (Graduate student in physics) Wolfgang Zober's routine is similar, but he usually arrives, eats and gets to the office earlier than me and calls it a 'day' closer to the end of our shift."

"When I'm not monitoring, I go walk on the trails, taking photos of penguins and seals," Zober said. "I also make it a habit of socializing with other science groups to learn about other research being done here."

There are few exceptions to the routine. The researchers did carve out a little time from their monitoring tasks on Jan. 6 to watch the launch of another balloon experiment, the BLAST-TNG mission led by researchers at the University of Pennsylvania. That voyage ended only 15 hours into the flight due to technical issues.

For the SuperTIGER team, there was no stopping for the holidays.

"I managed to convince Wolfgang to take a break on New Year's Eve to experience IceStock while I covered the monitoring shift alone," Rauch said.

"I did step outside just before midnight to see the New Year in."

---
LAUNCH video available: https://youtu.be/yx-ZJpmUr7c
---

Want to follow the mission? Get updates on SuperTIGER as the flight progresses through the Washington University team's Twitter account, @SuperTigerLDB, or by following the Twitter handle @NASAUniverse.

Credit: 
Washington University in St. Louis

Prenatal Exposure to Flame Retardants Linked to Reading Problems

A new study from researchers at Columbia University Vagelos College of Physicians and Surgeons suggests that prenatal exposure to flame retardants may increase the risk of reading problems.

The study was published in the January 2020 print edition of Environment International.

An estimated 2 million children have learning disorders; of these, about 80% have a reading disorder. Genetics account for many, but not all, instances of reading disorders.

In the current study, the researchers hypothesized that in utero exposure to polybrominated diphenyl ethers (PBDEs)--a type of flame retardant that is known to have adverse effects on brain development--might alter the brain processes involved in reading. (While use of PBDEs has been banned, exposure to the compounds is still widespread because they do not degrade easily in the environment.)

The research team analyzed neuro-imaging data from 33 5-year-old children--all novice readers--who were first given a reading assessment to identify reading problems. They also used maternal blood samples, taken during pregnancy, to estimate prenatal exposure to PBDEs.

The researchers found that children with a better-functioning reading network had fewer reading problems. They also showed that children with greater exposure to PBDEs had a less efficient reading network.

However, greater exposure did not appear to affect the function of another brain network involved in social processing that has been associated with psychiatric disorders such as autism spectrum disorder.

"Since social processing problems are not a common aspect of reading disorders, our findings suggest that exposure to PBDEs doesn't affect the whole brain--just the regions associated with reading," says Amy Margolis, PhD, assistant professor of medical psychology in the Department of Psychiatry at Columbia University Vagelos College of Physicians and Surgeons.

Although exposure to PBDEs affected reading network function in the 5-year-olds, it did not have an impact on word recognition in this group. The finding is consistent with a previous study, in which the effects of exposure to the compounds on reading were seen in older children but not in emergent readers. "Our findings suggest that the effects of exposure are present in the brain before we can detect changes in behavior," says Margolis. "Future studies should examine whether behavioral interventions at early ages can reduce the impact of these exposures on later emerging reading problems."

The paper is titled "Functional Connectivity of the Reading Network is Associated with Prenatal Polybrominated Diphenyl Ether Concentrations in a Community Sample of 5 Year-Old Children: A preliminary study."

Additional authors are Sarah Banker (Columbia University Irving Medical Center, New York, NY), David Pagliaccio (CUIMC), Erik De Water (Icahn School of Medicine at Mount Sinai, New York, NY), Paul Curtin (Icahn School of Medicine), Anny Bonilla (Icahn School of Medicine), Julie B. Herbstman (CUIMC), Robin Whyatt (CUIMC), Ravi Bansal (University of Southern California, Los Angeles, CA), Andreas Sjödin (Centers for Disease Control and Prevention, Atlanta, GA), Michael P. Milham (Child Mind Institute, New York, NY), Bradley S. Peterson (USC), Pam Factor-Litvak (CUIMC), Megan K. Horton (Icahn School of Medicine).

This work was supported by funding from the National Institute of Environmental Health Sciences (K23ES026239 to A.E.M., R00 ES020364 to M.K.H., R21 ES016610-01 to R.W.).

The authors report no financial or other conflicts of interest.

Credit: 
Columbia University Irving Medical Center

Lonely in a crowd: Overcoming loneliness with acceptance and wisdom

image: Dilip Jeste, MD, senior associate dean for the Center of Healthy Aging and Distinguished Professor of Psychiatry and Neurosciences at UC San Diego School of Medicine. Photo by Erik Jepson, UC San Diego Publications.

Image: 
Erik Jepson, UC San Diego Publications

By nature, human beings are social creatures. Yet, as we age, personal dynamics and lifestyles change, which can result in loneliness and isolation. With older adults increasingly moving into senior living or retirement communities, researchers at University of California San Diego School of Medicine sought to identify the common characteristics of residents who feel lonely in these environments.

"Loneliness rivals smoking and obesity in its impact on shortening longevity," said senior author Dilip V. Jeste, MD, senior associate dean for the Center of Healthy Aging and Distinguished Professor of Psychiatry and Neurosciences at UC San Diego School of Medicine. "It is a growing public health concern, and it's important that we identify the underlying causes of loneliness from the seniors' own perspectives so we can help resolve it and improve the overall health, well-being and longevity of our aging population."

Jeste noted that there are few published qualitative studies about loneliness among older adults in the independent living sector of senior housing communities, where shared common areas, planned social outings and communal activities are intended to promote socialization and reduce isolation. "So why are many older adults living in this type of housing still experiencing strong feelings of loneliness?" asked Jeste.

The new study, published online in the January 10, 2020 issue of Aging and Mental Health, found that people's experience of living with loneliness is shaped by a number of personal and environmental factors.

Researchers conducted one-and-a-half-hour individual interviews of 30 adults ages 67 to 92, part of an overall study evaluating the physical, mental and cognitive functions of 100 older adults living in the independent living sector of a senior housing community in San Diego.

In this communal setting, 85 percent of the residents reported moderate to severe levels of loneliness. "Loneliness is subjective," said Jeste. "Different people feel lonely for different reasons despite having opportunities and resources for socialization. This is not a one-size-fits-all topic."

Three main themes emerged from the study:

Age-associated losses and inadequate social skills were considered to be primary risk factors for loneliness. "Some residents talked about the loss of spouses, siblings and friends as the cause of their loneliness. Others mentioned how making new friends in a senior community cannot replace deceased friends they grew up with," said first author Alejandra Paredes, PhD, a research fellow in the Department of Psychiatry at UC San Diego School of Medicine.
The feeling of loneliness was frequently associated with a lack of purpose in life. "We heard powerful comments like, 'It's kind of gray and incarcerating,'" said Jeste. "Others expressed a sense of 'not being attached, not having very much meaning and not feeling very hopeful' or 'being lost and not having control.'"
The research team also found that wisdom, including compassion, seemed to be a factor that prevented loneliness. "One participant spoke of a technique she had used for years, saying 'if you're feeling lonely, then go out and do something for somebody else.' That's proactive," said Jeste. Other protective factors were acceptance of aging and comfort with being alone. "One resident told us, 'I've accepted the aging process. I'm not afraid of it. I used to climb mountains. I want to keep moving, even if I have to crawl. I have to be realistic about getting older, but I consider and accept life as a transition,'" Jeste noted. "Another resident responded, 'I may feel alone, but that doesn't mean I'm lonely. I'm proud I can live by myself.'"

According to the National Center for Health Statistics, by 2029, more than 20 percent of the United States population will be over the age of 65. "It is paramount that we address the well-being of our seniors -- they are friends, parents and grandparents of the younger generations," said Jeste. "Our study is relevant to better understand loneliness within senior housing and other settings so we can develop effective interventions."

Credit: 
University of California - San Diego