How one pain suppresses the other

Two consecutive studies on this have been published in the journals Brain Sciences and BMC Neuroscience.

The same stimulus hurts differently

The human perception of pain can vary greatly depending on the situation, so the same pain stimulus can feel more or less painful under different conditions. The body's own pain control system is responsible for this. Researchers investigate this system using a method called Conditioned Pain Modulation, or CPM for short. "This records how strongly a painful stimulus inhibits the experience of another painful stimulus that is presented at the same time," explains Assistant Professor Dr. Oliver Höffken, neurologist at Bergmannsheil.

In the first study, the research team compared an established CPM model with a recently introduced variation. Conditioned Pain Modulation always involves two pain stimuli. The first stimulus, also called the test stimulus, is administered twice: once alone and once in conjunction with the second stimulus, the conditioning stimulus. The test person is asked to assess how painful the test stimulus was on its own and how it felt while the conditioning stimulus was administered.
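As a rough illustration of how a CPM effect can be quantified from such ratings, the rating of the test stimulus given alone can be compared with its rating during the conditioning stimulus. The sketch below is a minimal, hypothetical example; the 0-100 scale and the scoring are illustrative and are not taken from the studies described here.

```python
# Minimal sketch of quantifying a Conditioned Pain Modulation (CPM) effect from
# subjective ratings. The 0-100 scale and the example values are hypothetical,
# not the scoring used in the studies described above.

def cpm_effect(rating_alone: float, rating_during_conditioning: float) -> float:
    """Positive values indicate inhibition: the test stimulus is rated as less
    painful while the conditioning stimulus (e.g. cold water) is applied."""
    return rating_alone - rating_during_conditioning

print(cpm_effect(rating_alone=60, rating_during_conditioning=45))  # -> 15
```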

An objective criterion

In the current work, the team led by Oliver Höffken, Dr. Özüm Özgül and Professor Elena Enax-Krumova compared two different test stimuli: a tried and tested stimulus caused by heat pain and a new one triggered by electrical stimulation of the skin. In both cases the conditioning stimulus was generated by cold water. The electrical stimulation of the skin has a decisive advantage over the previously used heat method: it allows the changes in brain activity triggered by the electrical stimuli of the skin to be measured with the help of EEG recording. This adds an objectively measurable criterion to the subjective pain assessment of the test persons.

Two mechanisms with the same result

In the second study, the researchers used the previously tested CPM model with the electrical stimulation of the skin and compared it to the pain-relieving effect of cognitive distraction. They found that both the CPM method and cognitive distraction can reduce the sensation of pain to a similar degree. However, the two methods showed different results in the measurement of the electrical potentials. "Based on our measurements, we assume that the two pain-relieving effects examined are two different neural mechanisms that just lead to the same effect," says Höffken.

The researchers carried out their studies on healthy volunteers. However, research into the body's own pain inhibition system is also relevant in order to better understand various pain disorders. "In patients with chronic pain, the development of postoperative pain and the transition from acute to chronic pain, changed CPM effects have already been found in the past. In our research group, we therefore use the CPM model as an instrument to investigate mechanisms in the processing of painful information", explains Höffken.

Original publication

Elena Enax-Krumova, Ann-Christin Plaga, Kimberly Schmidt, Özüm S. Özgül, Lynn B. Eitner, Martin Tegenthoff and Oliver Höffken: Painful cutaneous electrical stimulation vs. heat pain as test stimuli in conditioned pain modulation, in: Brain Sciences, 2020, DOI: 10.3390/brainsci10100684

A. T. Lisa Do, Elena Enax-Krumova, Özüm Özgül, Lynn B. Eitner, Stefanie Heba, Martin Tegenthoff, Christoph Maier, Oliver Höffken: Distraction by a cognitive task has a higher impact on electrophysiological measures compared with conditioned pain modulation, in: BMC Neuroscience, 2020, DOI: 10.21203/rs.3.rs-26882/v3

Credit: 
Ruhr-University Bochum

Despite same treatment, obese women face more risks for postpartum hemorrhage complications

image: The study's senior author was Judette Louis, MD, MPH, the James Ingram Endowed Professor and chair of Obstetrics and Gynecology at the University of South Florida Morsani College of Medicine and co-medical director of Women's and Children's Services at Tampa General Hospital.

Image: 
Photo courtesy of USF Health

TAMPA, Fla. (Dec. 21, 2020) -- Postpartum hemorrhage, or excessive bleeding after delivery, is still one of the leading causes of severe maternal injury and death in the United States. And the rise in obesity among pregnant women has been linked to increased rates of this potentially serious, largely preventable obstetric complication.

As part of an academic medical center initiative to improve maternal health, researchers at the University of South Florida Health (USF Health) and Tampa General Hospital (TGH) examined how obesity affected the management and outcomes of postpartum hemorrhage at a tertiary care center. Their findings were published Oct. 14 in the American Journal of Perinatology.

"This study showed that we managed postpartum hemorrhage the same way for women who were obese and those who were not. That's good overall - but the same medical treatment is not always equitable because the obese women still experienced worse outcomes," said study senior author Judette Louis, MD, MPH, the James Ingram Endowed Professor and chair of Obstetrics and Gynecology at the USF Health Morsani College of Medicine and co-medical director of Women's and Children's Services at TGH. "It highlights that certain groups of high-risk obstetric patients, such as obese women, may need some additional support or a different treatment protocol for postpartum hemorrhage."

The researchers conducted a retrospective analysis of all deliveries complicated by postpartum hemorrhage from February 2013 through January 2014 - about 2.6% of the hospital's 9,890 deliveries during that period (a rate consistent with the national average). Controlling for confounding variables, they compared two groups of patients treated for postpartum hemorrhage: obese women (a body mass index of 30 or higher) and nonobese women (BMI characteristic of normal weight or overweight). Both groups were similar in age, race, insurance status, and alcohol and tobacco use.
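For reference, the grouping criterion uses the standard body mass index formula (weight in kilograms divided by height in metres squared) with the cutoff quoted above; the snippet below is only an illustrative sketch, not part of the study's analysis.

```python
# Illustrative grouping by body mass index (BMI = kg / m^2) with the study's
# cutoff of 30; hypothetical helper, not the study's actual analysis code.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def study_group(weight_kg: float, height_m: float) -> str:
    return "obese" if bmi(weight_kg, height_m) >= 30 else "nonobese"

print(study_group(95, 1.65))  # BMI ~34.9 -> "obese"
```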

Among the study's key findings:

Obese patients were more likely to have had cesarean sections, a risk factor for hemorrhage complications, than nonobese patients.

Both groups were equally likely to receive the same medications (carboprost, methylergonovine and misoprostol) to treat excessive blood loss, but obese women tended to receive more than one of these uterotonic agents. The medications are administered to induce contractions when the uterus does not contract enough to shrink to normal size after childbirth. This condition, known as uterine atony, is a primary cause of postpartum hemorrhage.

Despite similar management, obese women were more likely to experience severe hemorrhage-related complications (including shock, renal failure, transfusion-related lung injury, and cardiac arrest), and they were more apt to sustain more than one of these serious complications.

While the need for blood transfusion was similar for both groups, obese women were more likely to have greater blood loss and require more units of transfused blood. "Hemorrhage-related complications are largely driven by blood loss and the number of units of blood transfused," said Dr. Louis, a USF Health maternal-fetal medicine specialist at TGH.

Although obese women were more often transferred to the operating room, the rates of intrauterine pressure balloon tamponade (a device used to promote uterine contraction), interventional radiology procedures, or hysterectomy were no different for obese and nonobese women.

Some basic science and clinical studies investigating uterine contractions during labor indicate obesity can impair uterine tone, so that the reproductive organ may not react as quickly or well to contraction-inducing medications. The underlying reasons for this are undefined, but a disruption of the hormonal balance in obese women may contribute to the impaired uterine response to control bleeding, Dr. Louis said. "Perhaps they need a higher dose of uterotonic agents, or the order in which the medications are administered should be changed to work more effectively for them."

The USF Health-TGH study points to the need for larger, multisite studies to better understand the different responses to treatment protocols for postpartum hemorrhage in obese women, she added. That includes looking into the possible physiological connections between obesity, pharmacokinetics of the treatment (how the body processes medications) and the impact on uterine atony.

"With higher rates of obesity affecting higher numbers of pregnant women each year, it is important to evaluate how this is affecting the management of obstetric complications," the study authors conclude. "This study shows that despite similar (postpartum hemorrhage) management, key differences do exist in outcomes based on obesity status. There are numerous directions for future research... many of which have the potential for significant clinical implications and improvement of maternal outcomes."

Credit: 
University of South Florida (USF Health)

Looking for dark matter near neutron stars with radio telescopes

image: Figure 1 illustrates the CP symmetry operation performed upon a meson particle. We say that the CP symmetry is violated if we observe that the original system (first frame in Fig. 1) decays into a different particle than the CP transformed system (fourth frame in Fig. 1).

Image: 
Kavli IPMU

In the 1970s, physicists uncovered a problem with the Standard Model of particle physics--the theory that describes three of the four fundamental forces of nature (electromagnetic, weak, and strong interactions; the fourth is gravity). They found that, while the theory predicts that a symmetry between particles and forces in our Universe and a mirror version should be broken, the experiments say otherwise. This mismatch between theory and observations is dubbed "the Strong CP problem"--CP stands for Charge+Parity. What is the CP problem, and why has it puzzled scientists for almost half a century?

In the Standard Model, electromagnetism is symmetric under C (charge conjugation), which replaces particles with antiparticles; P (parity), which replaces all the particles with their mirror image counterparts; and T (time reversal), which replaces interactions going forwards in time with ones going backwards in time; as well as under combinations of the symmetry operations CP, CT, PT, and CPT. This means that experiments sensitive to the electromagnetic interaction should not be able to distinguish the original systems from the ones that have been transformed by any of the aforementioned symmetry operations.

In the case of the electromagnetic interaction, the theory matches the observations very well. As anticipated, the problem lies in one of the two nuclear forces--"the strong interaction." As it turns out, the theory allows violations of the combined symmetry operation CP (reflecting particles in a mirror and then changing particles for antiparticles) for both the weak and strong interactions. However, CP violations have so far been observed only for the weak interaction.

More specifically, for the weak interactions, CP violation occurs at approximately the 1-in-1,000 level, and many scientists expected a similar level of violations for the strong interactions. Yet experimentalists have looked for CP violation extensively but to no avail. If it does occur in the strong interaction, it's suppressed by more than a factor of one billion (10⁹).

In 1977, theoretical physicists Roberto Peccei and Helen Quinn proposed a possible solution: they hypothesized a new symmetry that suppresses CP-violating terms in the strong interaction, thus making the theory match the observations. Shortly after, Steven Weinberg and Frank Wilczek--both of whom went on to win the Nobel Prize in physics in 1979 and 2004, respectively--realized that this mechanism creates an entirely new particle. Wilczek ultimately dubbed this new particle the "axion," after a popular dish detergent with the same name, for its ability to "clean up" the strong CP problem.

The axion should be an extremely light particle, be extraordinarily abundant in number, and have no charge. Due to these characteristics, axions are excellent dark matter candidates. Dark matter makes up about 85 percent of the mass content of the Universe, but its fundamental nature remains one of the biggest mysteries of modern science. Finding that dark matter is made of axions would be one of the greatest discoveries of modern science.

In 1983, theoretical physicist Pierre Sikivie found that axions have another remarkable property: In the presence of an electromagnetic field, they should sometimes spontaneously convert to easily detectable photons. What was once thought to be completely undetectable, turned out to be potentially detectable as long as there is high enough concentration of axions and strong magnetic fields.

Some of the Universe's strongest magnetic fields surround neutron stars. Since these objects are also very massive, they could also attract copious numbers of axion dark matter particles. So physicists have proposed searching for axion signals in the surrounding regions of neutron stars. Now, an international research team, including the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) postdoc Oscar Macias, has done exactly that with two radio telescopes--the Robert C. Byrd Green Bank Telescope in the US, and the Effelsberg 100-m Radio Telescope in Germany.

The targets of this search were two nearby neutron stars known to have strong magnetic fields, as well as the Milky Way's center, which is estimated to host half a billion neutron stars. The team sampled radio frequencies in the 1-GHz range, corresponding to axion masses of 5-11 micro electron-volt. Since no signal was seen, the team was able to impose the strongest limits to date on axion dark matter particles of a few micro electron-volt mass.
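The quoted correspondence between axion mass and radio frequency follows from the fact that the photon produced in axion-photon conversion carries essentially the axion's rest energy, so its frequency is ν = m c²/h. The quick check below uses only that textbook relation.

```python
# Photon frequency from axion rest-mass energy, nu = m*c^2 / h, to make the
# mass-frequency correspondence quoted in the text concrete.

H_EV_S = 4.135667696e-15  # Planck constant in eV*s

def conversion_frequency_ghz(mass_micro_ev: float) -> float:
    return mass_micro_ev * 1e-6 / H_EV_S / 1e9

for mass in (5, 11):
    print(f"{mass} micro-eV -> {conversion_frequency_ghz(mass):.2f} GHz")
# 5 micro-eV -> 1.21 GHz, 11 micro-eV -> 2.66 GHz, i.e. the ~GHz radio band
```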

Credit: 
Kavli Institute for the Physics and Mathematics of the Universe

Optoelectronic devices that emit warm and cool white light

image: Monolithic LEDs emit a natural-white light without using phosphors.

Image: 
© 2020 KAUST

The advantages of light-emitting diodes (LEDs), such as their tiny size, low cost and excellent power efficiency, mean they are found everywhere in modern life. A KAUST team has recently developed a way of producing a white-light LED that overcomes some critical challenges.

Blinking away on almost every modern electronic device, LEDs transmit messages in their own distinct shade of red, green or blue. The coloration of an LED comes from a semiconductor inside that emits over a narrow spectrum of optical wavelengths. The inability of LEDs to emit across a wider spectrum restricts their use in lighting applications -- emitting a wider spectrum is necessary to generate white light -- or for displays that require a wide palette of different colors.

One approach to fabricate white-light LEDs is to combine devices of different materials, where each material emits a different color. The emission of red, blue and green from the different materials can be combined to create white light, but this increases the complexity and cost of manufacture of LEDs. Alternatively, a single semiconductor can be used by mixing in a phosphor that absorbs some of the light emitted by the semiconductor and then re-emits it as a different color. However, phosphor degrades over time, limiting the useful lifetime of these devices.

Daisuke Iida and Kazuhiro Ohkawa's team have devised a way to build phosphor-free monolithic white-light LEDs using the semiconductor indium gallium nitride.

The emission color of indium gallium nitride depends on the relative content of the indium and gallium atoms. For example, gallium nitride emits ultraviolet light, but adding indium shifts the emission across the visible spectrum and into the infrared. The emission can be controlled further by sandwiching very thin layers of indium gallium nitride with one composition between two layers of different composition, creating so-called quantum wells.
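The link between composition, band gap, and color can be made concrete with the standard photon-energy relation λ = hc/E (roughly 1240 nm·eV divided by the gap energy). The gap values below are illustrative round numbers, not measured values for these devices; the point is simply that adding indium lowers the gap and shifts emission toward longer wavelengths.

```python
# Emission wavelength from band-gap (photon) energy: lambda = h*c / E.
# The energies below are illustrative only; adding indium to GaN lowers the
# effective gap and shifts emission from the ultraviolet toward the red.

HC_NM_EV = 1239.84  # h*c expressed in nm*eV

def emission_nm(gap_ev: float) -> float:
    return HC_NM_EV / gap_ev

print(round(emission_nm(3.4)))  # ~365 nm: GaN, ultraviolet
print(round(emission_nm(2.7)))  # ~459 nm: blue, modest indium content
print(round(emission_nm(2.0)))  # ~620 nm: red, indium-rich quantum wells
```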

"What is unique about our devices is that we use material defects, or V-pit structures, to enhance the injection of a current into the semiconductor," says Iida. The LEDs designed by the KAUST team included both blue-light emitting quantum wells with a 20 percent indium content and 34 percent indium red quantum wells. Combined, this monolithic LED emits light across the entire visible spectrum. By controlling the current passing through the device, the team could change the emission from a warm white to a natural white and through to a cool white.

"The next step is to improve the emission efficacy of the red emission component," says Iida. "The red emission is a key factor of the high color-rendering LEDs with the natural white emission."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Research analyzes academic abstracts written by students

Abstracts are summaries that introduce scientific articles. Their purpose is to inform readers of the content of the text so that, in a short time, potential readers can get a general idea of the contents and decide whether they are interested in reading the entire document. Formally, an abstract is an essentially informative summary that synthesizes the most important contributions of the article: the topic of study, the methodology applied, and, above all, the results obtained. Undergraduate final year projects (TFG) are academic texts written by students starting out as researchers and are also preceded by an abstract or summary.

Although there are studies on scientific abstracts written by experts, few deal with student productions. An article published recently in the Revista de lingüística teórica y aplicada by Maria Dolors Cañada and Carme Bach of the Gr@el research group at the UPF Department of Translation and Language Sciences, analyses abstracts written by future graduates in Applied Languages for their final year project (TFG), "an academic genre for evaluation as it will be graded by teachers, which does not occur with scientific texts".

The abstracts studied were treated at two levels of analysis: macrotextual (rhetorical moves) and microtextual (metadiscourse markers used to indicate relationships and connections and to preview ideas and phrases in the discourse).

As the authors point out in their article, "the importance of studying these textual products is due to the fact that all curricula include an end of bachelor's or master's degree project. This leads to the emergence of a new genre with a significant presence in written academic practices, which deserves descriptive research like ours, because the implications for the teaching of academic writing are obvious".

The corpus of this study consists of 36 abstracts from TFG by an entire cohort of the bachelor's degree in Applied Languages at UPF, totalling 7,488 words. The students received general instructions on how to perform their TFG, but no special attention was paid to the drafting of the work in general or of the abstract in particular. Abstracts varied in length from a minimum of 83 to a maximum of 327 words.

Helping achieve discourse competence to disseminate research

"Being competent in discourse means having internalized the characteristics of a specific genre not just linguistically but also at sociocultural and at pragmatic level, i.e., directly related to the context in which genre is produced and received", the authors assert. From the point of view of the teaching of writing, helping students become proficient in discourse involves making them aware of the characteristics of the genre.

In many cases, the presentation of the subject expresses the author's identity more as a student than as a researcher: rather than succinctly introducing the area of knowledge they are working in, they display their knowledge extensively, assuming this will earn a positive assessment from the teacher. The study shows that the abstracts analysed are hybrid texts, halfway between academic discourse, produced by students who will be evaluated, and specialist discourse, produced by and targeting expert readers.

Credit: 
Universitat Pompeu Fabra - Barcelona

Research uses a video game to identify attention deficit symptoms

image: Platform video games, like this one with a jumping raccoon, can be used to help diagnose ADHD.

Image: 
UC3M

Adapting a traditional endless runner video game and using a raccoon as the protagonist, researchers from the Universidad Carlos III de Madrid (UC3M) and the Complutense University of Madrid (UCM, in its Spanish acronym), among other institutions, have developed a platform that allows the identification and evaluation of the degree of attention deficit hyperactivity disorder (ADHD) in children and adolescents.

ADHD is a neurodevelopmental disorder with an estimated prevalence of 7.2% in children and adolescents, according to the latest evaluations. It is clinically diagnosed, and this diagnosis is based on the judgement of health care professionals using the patient's medical history, often supported by scales completed by caregivers and/or teachers. No diagnostic tests have been developed for ADHD to date. In a paper recently published in Brain Sciences, this team of researchers proposed using a video game that children are already familiar with to identify the symptoms of ADHD and evaluate the severity of the lack of attention in each case.

In this game genre, the player has a running avatar which they have to use to avoid different obstacles in their way. "In our game, the avatar is a raccoon that has to jump in order to avoid falling into the holes it will encounter on its route," explains David Delgado Gómez, the lead author and professor at the UC3M's Department of Statistics.

"We hypothesise that children diagnosed with ADHD inattentive subtype will make more mistakes by omission and will jump closer to the hole as a result of the symptoms of inattention," says Inmaculada Peñuelas Calvo, another author of the study, psychiatrist at the Jiménez Díaz Foundation University Hospital and professor at the UCM's Department of Personality, Evaluation and Clinical Psychology.

The main benefit of this study is that it allows symptoms of attention deficit to be directly identified, so that the severity of the patient's inattention can be objectively assessed, say the researchers. Therefore, it could be used to supplement the initial diagnosis as well as to assess the evolution of symptoms or even the effectiveness of treatment.

There are also other important advantages, such as the fact that each test would only take 7 minutes to complete and does not require specific hardware, which reduces its cost significantly. In fact, conventional personal computers, tablets, or mobile devices can be used, allowing remote assessments to be done. "Our results indicate that a shorter test may be enough to accurately assess the clinical symptoms of ADHD. This feature makes it particularly attractive in clinical settings where there is a lack of time," the researchers note.

A rapid test that allows early diagnosis

The study was carried out in collaboration with a group of 32 children, between the ages of 8 and 16, diagnosed with ADHD by the Child and Adolescent Psychiatry Unit in the Psychiatry Department at the Jiménez Díaz Foundation University Hospital. As each child was taking the test, supervised by a trained professional, the appropriate caregiver completed the inattention subscale in the attention deficit hyperactivity disorder and normal behaviour symptom classification scale (SWAN), which is an inventory of reports from parents and caregivers developed to evaluate ADHD symptoms.

In the game, the raccoon has to jump over 180 holes that are grouped into 18 blocks. "Each block is identified by the speed of the raccoon, the length of the trunk, and the width of the hole. The length of the trunk and the speed of the avatar determine the time between stimuli, which is about 1.5, 2.5, and 3.5 seconds, while the width of the hole determines how difficult it is to jump over," Inmaculada Peñuelas explains.
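One way to picture the 18 blocks is as a small configuration table built from the factors quoted above. In the sketch below, the field names, the width labels, and the 3 x 3 x 2 layout are hypothetical illustrations; only the roughly 1.5, 2.5, and 3.5 second inter-stimulus intervals come from the article.

```python
# Illustrative parameterization of the 18 blocks (180 holes in total) described
# above. Field names, width labels, and the 3 x 3 x 2 layout are hypothetical;
# only the ~1.5 / 2.5 / 3.5 s inter-stimulus intervals come from the article.
from dataclasses import dataclass
from itertools import product

@dataclass
class Block:
    inter_stimulus_s: float  # set by avatar speed and trunk length
    hole_width: str          # determines how difficult the jump is
    holes: int = 10

intervals = (1.5, 2.5, 3.5)
widths = ("narrow", "medium", "wide")
blocks = [Block(t, w) for t, w in product(intervals, widths)] * 2  # 18 blocks

assert len(blocks) == 18 and sum(b.holes for b in blocks) == 180
```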

Currently, ADHD diagnosis depends mainly on the healthcare professionals' experience and the teacher or caregiver's observation skills. Several studies have determined that these assessments may be altered, by affective factors for example. Therefore, "the development of diagnostic methods such as those proposed in this paper may favour early diagnosis and thus improve these patients' prognosis", David Delgado Gómez concludes.

Researchers from the Rey Juan Carlos University, the Autonomous University of Madrid, CIBER Mental Health, and the Puerta de Hierro Majadahonda University Hospital, as well as the UC3M and the UCM, took part in this research.

Credit: 
Universidad Carlos III de Madrid

Brain stem cells divide over months

image: The picture shows the development over time from the stem cell (in red) via its daughter cells (orange and yellow depending on their stage of development) into new nerve cells (green) that have formed in the adult hippocampus over the course of several months.

Image: 
UZH

Stem cells create new nerve cells in the brain over the entire life span. One of the places this happens is the hippocampus, a region of the brain that plays a significant role in many learning processes. A reduction in the number of newly formed nerve cells has been observed, for example, in the context of depression and Alzheimer's disease, and is associated with reduced memory performance in these conditions.

From stem cell behavior to the activity of genes in individual cells

In a study published in Nature Neuroscience, the group led by Sebastian Jessberger, a professor at the University of Zurich's Brain Research Institute, has shown that stem cells in the hippocampus of mice are active over a period of several months. The researchers, led by PhD candidate Sara Bottes and postdocs Baptiste Jaeger and Gregor Pilz, employed state-of-the-art microscopy and genetic analyses (using single-cell RNA sequencing) of stem cells and their daughter cells to analyze the formation of new nerve cells. This enabled them to observe that specific stem cell populations are active over months and can divide repeatedly. This had already been suspected in earlier studies, but this is the first time there has been direct evidence. The researchers were also able to use single-cell RNA sequencing of stem cells and their daughter cells to demonstrate that stem cells with different division behavior (few cell divisions as opposed to long-lasting stem cell activity) can be differentiated on the basis of their molecular composition and expression of genes.

Harnessing stem cells for therapeutic purposes

"Combining two modern methods - two-photon microscopy and single-cell RNA sequencing - has enabled us to identify precisely the stem cells that can divide over the course of months," explains Jessberger. He adds that the evidence they have now presented of long-lasting stem cell division has implications for future therapeutic approaches: "We now know that there really are stem cells that divide over a period of many months. Single-cell RNA sequencing gives us our first insight into what genes are important in terms of the division behavior of individual cells."

The new findings will form the basis of future endeavors to investigate in detail how specific genes control the activity of stem cells. Jessberger sums up the next research objectives: "Imaging and single-cell RNA sequencing have given us completely new insights that we'll now use to be able to systematically regulate the activity of certain genes in the future. Since we now know that there are stem cells that can divide over a longer period, going forward we want to try to increase the division activity of these cells and thus the formation of new nerve cells, for example in the context of neurodegenerative conditions such as Alzheimer's disease."

Credit: 
University of Zurich

Researchers identify a rare genetic bone disorder through massive sequencing methods

image: Alteration of focal points revealing cell signaling problems in patients with skeletal dysplasia caused by mutations in LAMA5. Green represents immunofluorescence localization of Vinculin (focal adhesions), purple is Phalloidin (actin cytoskeleton) and blue represents nuclei.

Image: 
University of Malaga

Researchers of the "Cell Biology and Physiology-LABRET" group of the University of Malaga (UMA), together with the Networking Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), have described a new genetic skeletal disorder based on a precision medicine strategy.

By using massive sequencing methods (covering all genes), they have identified the mutations that cause a rare bone disorder: mutations in "LAMA5", the gene encoding an extracellular matrix protein found around blood vessels in skeletal tissue.

The disorder consists of extreme bone fragility combined with a lack of mineralization and skeletal deformity, associated with joint dislocation and heart disease, as well as pulmonary insufficiency that causes perinatal mortality (death around the time of birth).

The study was carried out at the Andalusian Centre for Nanomedicine and Biotechnology (BIONAND), in collaboration with the International Skeletal Dysplasia Registry of the University of California (Los Angeles), where the sequencing of affected patients' genes was conducted. The Masaryk University (Czech Republic) also participated in the study.

"Our scientific team has been researching rare genetic syndromes affecting the skeleton for years, with a view to find a medical solution for patients with complicated diagnosis and treatment", explains the researcher of the Department of Cell Biology, Iván Durán, main author of the study, which findings have been published in the scientific journal EBIOMEDICiNE.

According to this expert, precision medicine is the key to uncovering the genetic and molecular factors that produce this type of pathology, and hence to understanding the mechanism that causes it, enabling the development of tailored therapies.

To that end, the UMA researchers also described the disease mechanism by generating cell models based on gene editing, simulating the mutations in LAMA5 to confirm whether these mutations are the cause and to determine the molecular process that triggers the problem. These cell models were developed by gene editing with CRISPR, introducing mutations that produce a null or hypomorphic gene.

New mechanism of disease

"Thanks to these models we uncovered a new signaling pathway governing the skeleton formation -that makes the bone grow and stay healthy-, which means that our work has not only revealed a new disease, but also an unknown mechanism that could be used for common bone conditions", says Durán.

As he clarifies, the presence of LAMA5 among cells involved in the skeleton formation indicates, therefore, that the appearance of signals from special blood vessels could be a highly effective means of bone repair and regeneration.

"Not only do blood vessels provide irrigation to bones, but also convey signals and support niches for stem cells that can be mobilized to induce a regenerative process. It seems that LAMA5 is a key component to support pericyte-like stem cells", he clarifies.

New osteogenic biomaterial

Osteoporosis and osteogenesis imperfecta are diseases that cause bone fragility and affect a significant percentage of the population. In addition, these pathologies often present bone defects that are very difficult to repair. This scientific breakthrough will facilitate the design of new treatments and strategies for all types of bone fragility conditions.

In this sense, the "Cell Biology and Physiology" group of the UMA, which also belongs to the Biomedical Research Institute of Malaga (IBIMA), along with CIBER-BBN and the Cell Therapy Network, progress on a new project to develop an osteogenic biomaterial that would heal complex fractures in individuals with bone fragility and a low capacity of bone regeneration.

Credit: 
University of Malaga

Researchers illuminate neurotransmitter transport using X-ray crystallography and molecular simulations

image: Illustration.

Image: 
Depositphotos

Scientists from the MIPT Research Center for Molecular Mechanisms of Aging and Age-Related Diseases have joined forces with their colleagues from Forschungszentrum Jülich, Germany, and uncovered how sodium ions drive glutamate transport in the central nervous system. Glutamate is the most important excitatory neurotransmitter and is actively removed from the synaptic cleft between neurons by specialized transport proteins called excitatory amino acid transporters (EAATs). The findings are reported in Science Advances.

Glutamate transmits activating signals from one neuron to another. To ensure that glutamatergic signaling is precisely terminated, the neurotransmitter is rapidly removed from the synaptic cleft after its release; this is the task of specialized proteins, the EAAT glutamate transporters.

EAATs are secondary active transporters and use concentration gradients of sodium ions to drive glutamate uptake into cells. To this end, the transporters bind the neurotransmitter together with three sodium ions from the external side of the membrane to shuttle their cargo to the cell's interior. The physiological sodium gradient, with higher ion concentrations in the extracellular than in the intracellular compartment, thus serves as the energy source.

However, it has been unclear how EAATs coordinate the coupled binding of glutamate together with sodium ions and how the ions drive this process. The researchers have now answered this question: High-resolution X-ray crystallography provided incredibly accurate structural snapshots of a sodium-bound glutamate transporter right before the binding of glutamate. Molecular simulations on Jülich supercomputers and functional experiments could then identify how the binding of two sodium ions triggers the binding of glutamate and a third sodium ion (fig. 1).

These results, earlier reported by Forschungszentrum Jülich in a news release, uncover important molecular principles of information processing in the brain and could inform novel therapeutic approaches for ischemic brain diseases such as stroke, where impaired glutamate transport leads to elevated glutamate concentrations. "Our findings provide insights into how neurotransmitter transport works in the mammalian nervous system and what might disrupt this transport, causing problems with memory and learning," commented Kirill Kovalev of the MIPT Center for Molecular Mechanisms of Aging and Age-Related Diseases.

Credit: 
Moscow Institute of Physics and Technology

Corona: How the virus interacts with cells

image: 18 host proteins play an important role during SARS-CoV-2 infection - two of them are particularly interesting. They could open up new ways to treat infections with SARS-CoV-2 and other RNA viruses.

Image: 
SCIGRAPHIX / S. Westermann

SARS-CoV-2 infections pose a global threat to human health and a formidable research challenge. One of the most urgent tasks is to gain a detailed understanding of the molecular interactions between the virus and the cells it infects. It must also be clarified whether these interactions favour the multiplication of the virus or - on the contrary - activate defence mechanisms.

In order to multiply, SARS-CoV-2 uses proteins of the host cell. However, until now there was no detailed information on which part of the human proteome - i.e. the totality of all proteins occurring in human cells - is in direct contact with the viral RNA.

Publication in Nature Microbiology

This void has now been filled. Scientists from the Helmholtz Institute for RNA-based Infection Research (HIRI) Würzburg, the Julius-Maximilians-Universität Würzburg (JMU) and the Broad Institute (Cambridge, USA) have succeeded in creating the first global atlas of direct interactions between the SARS-CoV-2 RNA and the proteome of the human host. In addition, the authors identified important regulators of viral replication. Dr Mathias Munschauer from HIRI and Professor Jochen Bodem from the Institute of Virology and Immunobiology at JMU were responsible for the study. They present the results of their work in the latest issue of the journal Nature Microbiology.

In the biosafety level 3 suite at HIRI, the scientists infected human cells with the new coronavirus, which uses RNA as genetic material. In a second step, they purified the viral RNA and identified the proteins bound to it. "Mass spectrometry allows us to accurately determine the host proteins that directly associate with the viral genome. In this particular case, we were able to perform quantitative measurements to identify the strongest specific binding partners," says Mathias Munschauer.

18 proteins, 2 key factors and 20 potential inhibitors

"The atlas of RNA-protein interactions created in this way offers unique insights into SARS-CoV-2 infections and enables the systematic breakdown of central factors and defence strategies, a crucial prerequisite for the development of new therapeutic strategies," says Jochen Bodem. In total, the scientists identified 18 host proteins that play an important role during SARS-CoV-2 infection.

According to them, the two factors CNBP and LARP1 are particularly interesting. Using genetic tools, the authors identified the exact binding sites of these two host proteins in the viral genome and showed that they can specifically inhibit the replication of the virus. According to Mathias Munschauer, the characterisation of LARP1 as an antiviral factor is a major finding: "The way LARP1 binds to viral RNA is very interesting, because it is similar to the way LARP1 regulates certain cellular messenger RNAs that we already know. This in turn provides insights into possible mechanisms of action."

The multidisciplinary nature of the study also enabled the identification of 20 small molecule inhibitors of host proteins that bind SARS-CoV-2 RNA. The authors show that three out of four inhibitors tested actually inhibit viral replication in different human cell types. This result could open up new ways to treat infections with SARS-CoV-2 and other RNA viruses.

Credit: 
University of Würzburg

Discovery: How Colorado potato beetles beat pesticides

image: Native to the Rocky Mountains, the Colorado potato beetle has now spread to many parts of the world, chowing down on potato leaves, costing farmers millions -- and quickly overcoming most every pesticide thrown in its way. A new UVM study sheds light on how these insects become resistant so fast.

Image: 
Lily Shapiro

The Colorado potato beetle is a notorious pest--and a kind of unstoppable genius.

The modern pesticide era began in the 1860s when Midwest farmers started killing these beetles by spraying them with a paint color called Paris Green that contained copper arsenate. The beetles soon overcame that poison as well as lead arsenate, mercury, DDT, and dieldrin--and over fifty other pesticides. At first, with any new chemical, many beetles are killed--but no chemical works for long. The beetles develop resistance, usually within a few years, and continue merrily chomping their way through vast acres of potatoes in farms and gardens around the world.

Scientists have a poor understanding of how this creature pulls off this trick. Current evolutionary theory, focused on DNA, falls short of explaining the rapid development of pesticide resistance. While the beetle shows a lot of genetic variation, new DNA mutations probably do not show up frequently enough to let it evolve resistance to so many types of pesticides, so fast--over and over.

But now a first-of-its-kind study moves dramatically closer to an explanation.

A team of researchers, led by Prof. Yolanda Chen at the University of Vermont, shows that even small doses of the neonicotinoid pesticide, imidacloprid, can alter how the beetle manages its DNA. To fend off the pesticides, the new research suggests, the beetle may not need to change its underlying genetic code. Instead, the team found that beetles respond by altering the regulation of their DNA, turning certain genes on or off in a process called "DNA methylation." These so-called epigenetic changes allow beetles to quickly ramp up biological defense mechanisms--perhaps putting into overdrive already-existing genes that allow the beetle to tolerate a broad range of toxins found in potato plants.

A flush of enzymes or a faster rate of excretion may let the insect stymie each new pesticide with the same ancient biochemical tools that it uses to overcome natural plant defenses--rather than relying on the ponderous evolutionary process of random mutations appearing in key genes, which would only slowly cause a pesticide to become less effective.

Most important, the new study shows that these changes--triggered by even small doses of the pesticide--can be passed on to descendants across at least two generations. "We found the same DNA methylation patterns in the grandkid generation. That was surprising because they were not exposed to the insecticide," says Chen.

In several other insect species, exposure to pesticides has been shown to change DNA methylation. And some epigenetic changes have been observed to be passed on to future generations of species that reproduce asexually--such as the tiny crustacean Daphnia magna. "But it's long been assumed that epigenetics resets during sexual reproduction," says Kristian Brevik, the lead author on the new study who completed his doctoral degree working in Chen's lab. "That those changes could be transmitted, through multiple rounds of sexual reproduction, to future generations of insects--that's new."

The study was published in the December edition of the journal Evolutionary Applications.

OFF THE TREADMILL?

Over the last half-century, agricultural researchers and chemical companies have spent millions developing innovative chemical compounds to try to kill off this beetle that causes hundreds of millions of dollars of damage--and almost all eventually fail. "Perhaps it's time to get off the pesticide treadmill of trying to introduce ever-more-toxic chemicals--and recognize that evolution happens, regardless of what we throw at them," says Yolanda Chen. "We could be more strategic in understanding how evolutionary processes work--and invest in more ecological approaches that would enable agriculture to be more sustainable."

REVOLUTION IN EVOLUTION

Epigenetics is an increasingly hot field. Basically, it's the study of how environmental stresses--from starvation to air pollution to pesticides--can add chemical tags to an organism's DNA or remove them--flipping a genetic switch that changes its health and behavior.

DNA methylation was first shown to occur in human cancer in 1983, and since the early 2000s the epigenetics revolution in biology has revealed how environmental change can turn certain genes on or off, leading to profound changes in an organism without changing its DNA. And it's well known that many insects in agricultural areas develop pesticide resistance; it's not just Colorado potato beetles. More than six hundred species have developed resistance to over three hundred pesticides, with tens of thousands of reports from around the world. A growing body of research shows that many of these cases involve epigenetic mechanisms.

In their experiment, the UVM scientists, with a colleague from the University of Wisconsin, gathered adult beetles from organic farms in Vermont. They divided up the offspring and exposed them to different doses of the pesticide imidacloprid--some high, some low--some to a less-toxic chemical similar to imidacloprid, and some to just water. After two generations, beetles whose grandparents had been exposed to any level of pesticide showed decreased overall methylation--while the ones exposed to water did not. Many of the sites where the scientists found changes in methylation are in genes associated with pesticide resistance. The parallel response across all the pesticide treatments suggests that "mere exposure to insecticides can have lasting effects on the epigenetics of beetles," says Chen.

It's one thing to suggest that stress changes a particular organism, quite another to suggest that physical characteristics it acquires by stress or behavior can get passed down for numerous generations. A blacksmith who grows strong from a lifetime of hard work should not expect her children to be extraordinarily strong too. So why does some stress lead to lasting change?

The foundations of epigenetics remain mired in controversy, partly because it has been attached to largely discredited theories of "inheritance of acquired characters"--an ancient idea that stretches back to Aristotle and is most strongly associated with Jean-Baptiste Lamarck, the nineteenth-century French naturalist who proposed that organisms pass down characteristics that are used or disused to their offspring.

Although Lamarck's ideas were previously discredited by evolutionary biologists, the epigenetics revolution is making clear that evolution by natural selection doesn't have to rely only on random advantageous mutations showing up in the genetic code. In the case of the Colorado potato beetles studied at UVM, the research suggests that pesticides may flip a whole raft of epigenetic switches, some of which can ramp up production of existing defenses against the toxins--while changes in DNA methylation can unleash portions of the DNA called transposable elements. "These elements have also been called 'jumping genes' and are most closely related to viruses," says Chen, a professor in UVM's Department of Plant and Soil Science and fellow in the Gund Institute for Environment. "Due to their harmful effect on host genomes, they are usually suppressed by DNA methylation." But pesticide exposure, the new research suggests, may let them loose, allowing more mutations associated with pesticide resistance to arise.

In short, the dynamic interplay between epigenetics and genetics points toward an explanation for the largely unexplained reality of rapid evolution and pesticide resistance. How these changes get passed on through multiple generations of sexual recombination remains mysterious--but the new study strongly suggests that they do. "We have more to learn," says Chen, "about how people could manage evolution better."

Credit: 
University of Vermont

Traditional model for disease spread may not work in COVID-19

image: Dr. Arni Rao, a mathematical modeler at the Medical College of Georgia at Augusta University

Image: 
Kim Ratliff, Augusta University photographer

AUGUSTA, Ga. (Dec. 21, 2020) - A mathematical model that can help project the contagiousness and spread of infectious diseases like the seasonal flu may not be the best way to predict the continuing spread of the novel coronavirus, especially during lockdowns that alter the normal mix of the population, researchers report.

Called the R-naught, or basic reproductive number, the model predicts the average number of susceptible people who will be infected by one infectious person. It's calculated using three main factors -- the infectious period of the disease, how the disease spreads and how many people an infected individual will likely come into contact with.

Historically, if the R-naught is larger than one, infections can become rampant and an epidemic or more widespread pandemic is likely. The COVID-19 pandemic had an early R-naught between two and three.

In a letter published in Infection Control and Hospital Epidemiology, corresponding author Dr. Arni S.R. Srinivasa Rao, a mathematical modeler at the Medical College of Georgia at Augusta University, argues that while it's never possible to track down every single case of an infectious disease, the lockdowns that have become necessary to help mitigate the COVID-19 pandemic have complicated predicting the disease's spread.

Rao and his co-authors instead suggest a more dynamic, moment-in-time approach using a model called the geometric mean. That model uses today's numbers to predict tomorrow's numbers. The current number of infections -- in Augusta today, for example -- is divided by the number of predicted infections for tomorrow to develop a more accurate and current reproductive rate.

While this geometric method can't predict long term trends, it can more accurately predict likely numbers for the short term.
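One plausible reading of that day-to-day approach is sketched below: form the day-over-day ratios of reported cases and summarize them with a geometric mean over a short window. The case counts are invented and the estimator is only an illustrative interpretation, not the authors' published method.

```python
# Sketch of a short-term reproduction estimate based on the geometric mean of
# day-over-day case ratios. Case counts are invented; this is an illustrative
# reading of the approach described above, not the authors' published code.
import math

daily_cases = [120, 132, 150, 141, 155, 170, 168]  # hypothetical counts

ratios = [today / yesterday for yesterday, today in zip(daily_cases, daily_cases[1:])]
geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(f"geometric-mean daily growth ratio: {geo_mean:.3f}")
print(f"naive next-day projection: {daily_cases[-1] * geo_mean:.0f}")
```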

"The R-naught model can't be changed to account for contact rates that can change from day to day when lockdowns are imposed," Rao explains. "In the initial days of the pandemic, we depended on these traditional methods to predict the spread, but lockdowns change the way people have contact with each other."

A uniform R-naught is also not possible since the COVID-19 pandemic has varied widely in different areas of the country and world. Places have different rates of infection, on different timelines -- hotspots like New York and California would have higher R-naughts. The R-naught also did not predict the current third wave of the COVID-19 pandemic.

"Different factors continuously alter ground-level basic reproductive numbers, which is why we need a better model," Rao says. Better models have implications for mitigating the spread of COVID-19 and for future planning, the authors say.

"Mathematical models must be used with care and their accuracy must be carefully monitored and quantified," the authors write. "Any alternative course of action could lead to wrong interpretation and mismanagement of the disease with disastrous consequences."

Credit: 
Medical College of Georgia at Augusta University

New optical fiber brings significant improvements to light-based gyroscopes

image: Researchers have incorporated a new type of hollow core optical fiber known as a nodeless antiresonant fiber to boost the performance of resonator fiber optic gyroscopes. These gyroscopes could one day form the basis of navigation technologies that are more compact and more accurate than today's systems.

Image: 
Gregory T. Jasion, Optoelectronics Research Centre, University of Southampton

WASHINGTON -- Researchers have taken an important new step in advancing the performance of resonator fiber optic gyroscopes, a type of fiber optic sensor that senses rotation using only light. Because gyroscopes are the basis of most navigation systems, the new work could one day bring important improvements to these systems.

"High-performance gyroscopes are used for navigation in many types of air, ground, marine and space applications," said Glen A. Sanders, who led the research team from Honeywell International. "Although our gyroscope is still in the early stages of development, if it reaches its full performance capabilities it will be poised to be among the next generation of guidance and navigation technologies that not only push the bounds of accuracy but do so at reduced size and weight."

In The Optical Society (OSA) journal Optics Letters, researchers from Honeywell and the University of Southampton's Optoelectronics Research Centre in the UK describe how they used a new type of hollow core optical fiber to overcome several factors that have limited previous resonator fiber optic gyroscopes. This allowed them to improve the most demanding performance requirement of the gyroscope, its stability, by as much as 500 times over previously published work involving hollow core fibers.

"We hope to see these gyroscopes used in the next-generation of civil aviation, autonomous vehicles and the many other applications in which navigation systems are employed," said Sanders. "Indeed, as we enhance the performance of guidance and navigation systems, we hope to open entirely new capabilities and applications."

Sensing rotation with light

Resonator fiber optic gyroscopes use two lasers that travel through a coil of optical fiber in opposite directions. The ends of the fiber are connected to form an optical resonator so that most of the light will recirculate and take multiple trips around the coil. When the coil is at rest, the light beams traveling in both directions share the same resonance frequency, but when the coil is rotating, the resonance frequencies shift relative to each other in a way that can be used to calculate the direction of movement or orientation for the vehicle or device on which the gyroscope is mounted.
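The size of that rotation-induced splitting follows the textbook Sagnac relation Δf = 4AΩ/(λL), which for a circular path of diameter D reduces to Δf = DΩ/λ. The coil diameter and wavelength in the sketch below are illustrative values, not the design parameters of the gyroscope described here.

```python
# Textbook Sagnac resonance splitting for a ring resonator:
# delta_f = 4*A*Omega / (lambda*L); for a circular path of diameter D this
# reduces to D*Omega/lambda. The coil size and wavelength are illustrative only.
import math

def sagnac_split_hz(coil_diameter_m: float, wavelength_m: float, omega_rad_s: float) -> float:
    area = math.pi * coil_diameter_m ** 2 / 4
    perimeter = math.pi * coil_diameter_m
    return 4 * area * omega_rad_s / (wavelength_m * perimeter)

EARTH_RATE = 7.292e-5  # Earth's rotation rate in rad/s
print(f"{sagnac_split_hz(0.10, 1550e-9, EARTH_RATE):.2f} Hz")  # ~4.7 Hz for a 10 cm coil
```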

Honeywell has been developing resonator fiber optic gyroscope technology for some time because of its potential to deliver high accuracy navigation in a smaller device compared to current sensors. However, it has been challenging to identify an optical fiber that can withstand even the modest laser power levels at the ultra-fine laser linewidths required by these gyroscopes without causing nonlinear effects that degrade the sensor's performance.

"In 2006, we proposed using a hollow core fiber for the resonator fiber optic gyroscope," said Sanders. "Because these fibers confine the light in a central air or gas-filled void, sensors based on them don't suffer from the nonlinear effects that plague sensors based on solid fibers."

Using an even better fiber

In the new work, led by Austin Taranta at the University of Southampton, the researchers wanted to see if an entirely new type of hollow core fiber could bring even more improvements. Known as nodeless antiresonant fiber (NANF), this new class of fibers exhibits even lower levels of nonlinear effects than other hollow core fibers.

NANFs also have low optical attenuation, which improves the quality of the resonator because the light maintains its intensity over longer propagation lengths through the fiber. In fact, these fibers have been shown to have the lowest light loss of any hollow core fiber, and for many parts of the spectrum, the lowest loss of any optical fiber.

For resonator fiber optic gyroscopes, it is crucial that the light travels only in a single path through the fiber. The NANFs help make this possible by eliminating optical errors caused by backscattering, polarization coupling and modal impurities, which are all potential sources of error or extra noise in the gyroscope. Their elimination removes the most significant performance limiters for other fiber technologies.

"Although the backbone of this sensor is the new type of optical fiber, we also worked to greatly reduce noise when sensing the resonance frequency with unprecedented accuracy," said Sanders. "This was crucial for enhancing the performance and moving toward miniaturizing the sensor."

Achieving long-term stability

The Honeywell researchers performed laboratory studies to characterize the performance of the new fiber optic gyroscope sensor under stable rotation conditions, i.e., only in the presence of Earth's rotation. This establishes the instrument's "bias stability". To eliminate noise and disturbances in the free-space optical setup, the gyroscope was mounted on a stable, static pier. By incorporating the NANFs, the researchers were able to demonstrate a long-term bias stability of 0.05 degrees per hour, which is close to the levels required for civil aircraft navigation.

"By demonstrating the high performance of NANFs in this extremely demanding application, we hope to show the exceptional promise of these fibers for use in other precision scientific resonant cavities," said Taranta.

The researchers are now working to make a prototype gyroscope with a more compact and stable configuration. They also plan to incorporate the latest generation NANFs, which exhibit a four times improvement in optical losses, along with greatly improved modal and polarization purity.

Credit: 
Optica

Scientists complete yearlong pulsar timing study after reviving dormant radio telescopes

While the scientific community grapples with the loss of the Arecibo radio telescope, astronomers who recently revived a long-dormant radio telescope array in Argentina hope it can help modestly compensate for the work Arecibo did in pulsar timing. Last year, scientists at Rochester Institute of Technology and the Instituto Argentino de Radioastronomía (IAR) began a pulsar timing study using two upgraded radio telescopes in Argentina that previously lay unused for 15 years.

The scientists are releasing observations from the first year in a new study to be published in The Astrophysical Journal. Over the course of the year, they studied the bright millisecond pulsar J0437-4715. Pulsars are rapidly rotating neutron stars with intense magnetic fields that regularly emit radio waves, which scientists study to look for gravitational waves caused by the mergers of supermassive black holes.

Professor Carlos Lousto, a member of RIT's School of Mathematical Sciences and the Center for Computational Relativity and Gravitation (CCRG), said the first year of observations proved to be very accurate and provided some bounds to gravitational waves, which can help increase the sensitivity of existing pulsar timing arrays. He said that over the course of the next year they plan to study a younger, less stable pulsar that is more prone to glitches. He hopes to leverage machine learning and artificial intelligence to better understand the individual pulses emitted by pulsars and predict when glitches occur.

"Every second of observation has 11 pulses and we have thousands of hours of observation, so it is a lot of data," said Lousto. "What we hope to accomplish is analogous to monitoring the heartbeat one by one to learn to predict when someone is going to have a heart attack."

Lousto said Ph.D. students from RIT's programs in astrophysical sciences and technology, mathematical modeling, and computer science are at the forefront of the analysis. RIT has a remote station called the Pulsar Monitoring in Argentina Data Enabling Network (PuMA-DEN) to control the radio telescopes and store the data collected. He said the opportunities presented by the collaboration are important for the students from the College of Science and Golisano College of Computing and Information Sciences because "the careers in astronomy are changing very quickly, so you have to keep up with new technology and new ideas."

In the longer term, Lousto said RIT and IAR are seeking out other radio telescopes that can be upgraded for pulsar timing studies, further filling the gap left behind by Arecibo. RIT and IAR's observations seek to contribute to the larger efforts of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) and the International Pulsar Timing Array, a collaboration of scientists working to detect and study the impact of low frequency gravitational waves passing between the pulsars and the Earth.

Credit: 
Rochester Institute of Technology

Ecosystem dynamics: Topological phases in biological systems

Physicists at Ludwig-Maximilians-Universitaet (LMU) in Munich have shown that topological phases could exist in biology, and in so doing they have identified a link between solid-state physics and biophysics.

The concept of topological phase transitions has become an important topic in theoretical physics, and was first applied to the characterization of unusual states of matter in the 1980s. The quantum Hall effect (QHE) is one example where ideas drawn from topology have yielded new insights into initially puzzling phenomena. The QHE is observed in atomically thin films. When these, effectively two-dimensional, materials are subjected to a smoothly varying magnetic field, their electrical resistance changes in discrete steps. The significance of such topological states in condensed-matter physics was acknowledged by the award of the 2016 Nobel Prize for Physics to its discoverers. 
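For concreteness, the discrete steps seen in the quantum Hall effect occur at Hall resistances R = h/(νe²) for integer ν, i.e. at integer fractions of the roughly 25.8 kΩ von Klitzing constant; the snippet below simply evaluates those plateau values from fundamental constants.

```python
# Quantized Hall resistance plateaus: R_xy = h / (nu * e^2) for integer nu,
# i.e. integer fractions of the von Klitzing constant (~25.813 kOhm).
H = 6.62607015e-34   # Planck constant, J*s
E = 1.602176634e-19  # elementary charge, C

von_klitzing_ohm = H / E**2
for nu in (1, 2, 3, 4):
    print(f"nu = {nu}: R_xy = {von_klitzing_ohm / nu:.1f} Ohm")
# nu = 1 gives ~25812.8 Ohm; higher plateaus sit at integer fractions of that
```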

Now LMU physicists led by Professor Erwin Frey have used this same topological concept to elucidate the dynamics of a biological model system. "We asked whether the kinds of stepwise topological phase transitions discovered in solid-state physics could be found in biological systems," says Philipp Geiger, a doctoral student in Frey's team and joint first author of the new study together with Johannes Knebel. The model system chosen for investigation was one that Frey's group had previously employed to investigate the population dynamics of ecosystems in which diverse mobile species compete with each other. 

The basic elements used to model this system are rock-paper-scissors (RPS) cycles, which are a classical element of game theory. Each of these elements (or strategies) defeats one of the others, but succumbs to the third. "From this basic model, we built an interaction chain by connecting many such RPS cycles to one another," Geiger explains. "In addition, we made the original model much more abstract in character." 

In their abstract version of the model, in which species compete with their nearest neighbors in dominance relationships governed by RPS rules, the authors observed the emergence of a strong degree of polarization on one side or the other of the interaction lattice. In other words, species in these positions came to dominate the whole system. Whether the evolutionary dynamics of the model led to peak polarization on the left or the right side of the interaction chain was shown to depend solely on the quantitative relationship between just two interaction rates, and the dynamics was otherwise robust against small perturbations in the strengths of interactions.
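A minimal numerical sketch in the spirit of this model, shown below, uses antisymmetric Lotka-Volterra (replicator-type) dynamics on a chain of pairwise dominance interactions with two alternating rates. The chain construction, the rate values, and the integration scheme are illustrative assumptions rather than the authors' exact model; the article's claim is that which end of the chain dominates is set solely by the relationship between the two rates.

```python
# Antisymmetric Lotka-Volterra dynamics, dx_i/dt = x_i * (A x)_i, on a chain
# with two alternating dominance rates r_a and r_b. Illustrative sketch only;
# the coupling of rock-paper-scissors cycles in the published model differs.
import numpy as np

def simulate_chain(n_species=12, r_a=1.0, r_b=0.5, dt=1e-3, steps=200_000):
    A = np.zeros((n_species, n_species))
    for i in range(n_species - 1):
        rate = r_a if i % 2 == 0 else r_b       # two alternating interaction rates
        A[i, i + 1], A[i + 1, i] = rate, -rate  # antisymmetric: species i beats i+1
    x = np.full(n_species, 1.0 / n_species)     # uniform initial abundances
    for _ in range(steps):
        x += dt * x * (A @ x)                   # Euler step; the continuous dynamics conserves total abundance
    return x

print(np.round(simulate_chain(), 3))  # inspect where abundance accumulates along the chain
```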

With the aid of methods drawn from solid-state physics, Frey and his colleagues were able to account for the polarization of the evolutionary dynamics in terms of topological phases, such that changes in polarization could be treated in the same way as phase transitions. "The model shows for the first time that such effects can occur in biology," says Frey. "This study can be viewed as the first step toward the application of the concept of topological phases in biological systems. It is even conceivable that one could make use of topological phases in the context of the analysis of genetic regulatory networks. How such phases can be realized experimentally is an interesting question and a challenging task for future research."

Credit: 
Ludwig-Maximilians-Universität München