Brain

Live mitochondria seen in unprecedented detail: photobleaching in STED microscopy overcome

video: Video 1. Live mitochondria imaged in unprecedented detail -- for an unprecedented length of time -- using the MitoPB Yellow fluorescent marker created by Nagoya University-led researchers. The marker molecule is designed to be absorbed by only certain membranes within each mitochondrion, and retains its fluorescence under the STED microscope for a very long time. This video was shot at 1.5 fps and a resolution of 90 nm. Still images were captured at 60 nm resolution. In response to being deprived of nutrients, mitochondria fuse together and increase the number of cristae. This time-lapse sequence shows events such as two separate mitochondria fusing together to form a single mitochondrion, and cristae within a single mitochondrion fusing together. Note that the outer membranes of the mitochondria are invisible: we are seeing the inner membranes fusing together. More videos and images can be viewed at the paper's PNAS website. https://www.pnas.org/content/116/32/15817/tab-figures-data

Image: 
© ITbM, Nagoya University

Light microscopy is the only way in which we can look inside a living cell, or living tissue, in three dimensions. An electron microscope gives only a two-dimensional view, and the organic sample would quickly burn up under the intense electron beam, so it cannot be observed alive. Moreover, by marking the biomolecules of the structure we are interested in with a specially designed fluorescent molecule, we can distinguish it from its surroundings: this is fluorescence microscopy.

Until the mid-1990s, fluorescence microscopy was hampered by basic physics: due to the diffraction limit, any features on the sample closer together than about 250 nanometres would be blurred together. Viruses and individual proteins are much smaller than this, so they could not be studied this way. But around 1994, in a wonderful lesson teaching us that we must take care when applying fundamental physical principles, Stefan Hell invented Stimulated Emission Depletion (STED) microscopy, which is now one of several optical microscopy approaches that achieve "super-resolution", resolution beyond the diffraction limit. He received the Nobel Prize in Chemistry in 2014 "for the development of super-resolved fluorescence microscopy", together with Eric Betzig and William Moerner.
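That 250-nanometre figure comes from the Abbe diffraction limit. As an illustrative check (the wavelength and numerical aperture below are typical values, not ones quoted in the article):

```latex
% Abbe diffraction limit -- illustrative values, not from the article.
% Resolution d for light of wavelength \lambda focused by an objective
% of numerical aperture NA:
\[
  d = \frac{\lambda}{2\,\mathrm{NA}}
    \approx \frac{550\ \text{nm}}{2 \times 1.1}
    = 250\ \text{nm}
\]
```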

To see why the diffraction limit is a problem, imagine the structure of interest is very small - say, 50 nanometres across, like a virus - and has been marked with a fluorescent marker molecule. Now imagine illuminating it with a laser spot, say, 200 nanometres in diameter. The illuminated marker molecules emit light spontaneously, at random times, by fluorescence, with the probability dropping rapidly with time. The photons from many fluorescing molecules are focussed onto a detector using lenses, creating a single featureless pixel. It is not fully bright, because only a small proportion of the sample in the illuminated circle contains fluorescent molecules. If you were to move the laser 200 nanometres in any direction, to where, in this example, no fluorescent molecules are present, the signal would go dark. So, this rather dim pixel tells us that something is present inside this sample area 200 nanometres in diameter. With this basic approach, the diffraction limit prevents us from forming pixels from smaller areas.

The physical idea of STED microscopy is very simple. With the laser spot illuminating the region around the small fluorescing structure again, suppose you somehow stop light being sent to the detector from as large an area as possible within the spot - leaving a much smaller spot, say, 60 nanometres in diameter. Now if you move the laser 60 nanometres in any direction and the signal goes dark, the pixel in the image represents the presence of structure up to 60 nanometres across. The diffraction limit has been beaten. Of course, one such pixel is featureless, but a sharp image of mitochondria can be built up by scanning across and recording many pixels of varying brightness. (See Figure 1. "Time-gated STED Microscopy" was used to capture most of the images in this paper.)

Stefan Hell's Nobel Prize-winning discovery consists of two insights. First, he thought of the idea of stopping light being sent to the detector from as large an area as possible within an illuminated spot whose size matches the diffraction limit. Second, he figured out how to actually achieve it.

Two lasers illuminate the same spot. The first laser excites electrons in the marker molecules, and these decay spontaneously back to their ground state, each emitting a visible photon of a specific wavelength. (This is fluorescence.) The process is random, with the emission probability decreasing with time fairly quickly, meaning that most photons are emitted within the first few nanoseconds of the sample being illuminated. A second laser, the "STED beam", shaped with a hole in the middle so as not to affect the marker molecules there, is tuned to stimulate emission of a photon by the excited marker molecules in the outer ring. But how are these photons distinguished from photons emitted from the middle?

The emission process from the outer ring is also random but happens much more quickly, the probability decreasing rapidly, meaning that most of these photons are emitted within a nanosecond or so. As the two superimposed beams scan across the sample, by the time the centre of the ring is fluorescing, the surrounding molecules have already been forced into their ground state by emitting a photon - they have been "switched off". The STED microscopy technique relies on clever timing in this way. In principle, the size of the glowing central spot can be made as small as you want, so any resolution is possible. However, the doughnut-shaped "STED beam" would then be delivering energy in the form of concentrated visible laser light to a larger area of the living cell, risking killing it.
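The claim that the glowing central spot can in principle be made arbitrarily small is usually expressed by the standard STED resolution formula (textbook form, not quoted in the article): increasing the STED beam intensity relative to the marker's saturation intensity shrinks the effective spot.

```latex
% Standard STED resolution scaling -- textbook form, not from the article.
% As the STED beam intensity I grows relative to the saturation
% intensity I_s, the effective spot size d shrinks without bound.
\[
  d_{\mathrm{STED}} \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{s}}}
\]
```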

In practice, the process is not ideal, and the resulting image loses some sharpness because some marker molecules in the outer ring are not properly switched off - the process is probabilistic, after all - and when they do fluoresce they contaminate the signal from the centre. However, due to the different timing of spontaneous and stimulated emission, the earliest photons to arrive at the detector are mostly from regions illuminated by the highest STED beam intensity, and the last photons to arrive are most likely from marker molecules located in the central spot. So by waiting a short time (around one nanosecond) before recording the image, most of the photons from the outer ring can be filtered out. This is called "Time-gated STED Microscopy". Further sharpening of the image is achieved through a process called deconvolution.
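A toy simulation makes the gating logic concrete. The lifetimes and gate delay below are illustrative assumptions, not values from the paper; the point is only that an early gate discards most photons from the fast, STED-driven outer ring while keeping most of the slower spontaneous emission from the centre:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lifetimes (illustrative, not from the paper):
# spontaneous fluorescence from the central spot decays over ~3 ns,
# while molecules in the outer ring, driven by the STED beam, emit
# within ~0.3 ns via stimulated emission.
t_centre = rng.exponential(scale=3.0, size=10_000)  # photon arrival times, ns
t_ring = rng.exponential(scale=0.3, size=10_000)    # photon arrival times, ns

gate = 1.0  # ns: discard all photons arriving before the gate opens

kept_centre = np.mean(t_centre > gate)  # fraction of wanted photons kept
kept_ring = np.mean(t_ring > gate)      # fraction of contaminating photons kept

print(f"centre photons kept: {kept_centre:.0%}")  # ~72%
print(f"ring photons kept:   {kept_ring:.0%}")    # ~4%
```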

The invention of super-resolution microscopy heralded a leap forward in the life sciences. Living organisms could be observed at an unprecedented resolution. However, time-lapse sequences of images could not be made over any decent length of time because the marker molecules would degrade under the intense STED beam and stop fluorescing. This is the photobleaching problem. The damaged marker molecules can also become toxic to the cell.

The photobleaching problem solved

Shigehiro Yamaguchi and Masayasu Taki, of Nagoya University's Institute for Transformative Bio-Molecules (ITbM), led a research team that has developed a marker molecule, called "MitoPB Yellow", which is absorbed by the inner membrane of mitochondria - including the cristae, the fold-like structures - and has a long lifetime under a STED beam. The idea for a marker molecule targeting mitochondria came from co-author Chenguang Wang, of the ITbM. Multicolour STED imaging with a single STED laser is also possible, and the researchers expect that fluorescent markers similar to MitoPB Yellow should find a wide range of applications in other super-resolution techniques as well (such as those developed by Eric Betzig and William Moerner).

To demonstrate the practical usefulness of MitoPB Yellow for live-cell imaging, the group placed mitochondria under conditions that are known to cause certain structural changes - but until now these have only been observed using transmission electron microscopy, which cannot be used on live cells. The mitochondria were treated with a reagent that suppresses DNA replication, inducing dysfunction, in order to observe their survival and dying processes.

Then, using Time-gated STED Microscopy, the research team made still images at 60 nanometre resolution (about one thousandth of the width of a human hair), as well as time-lapse image sequences showing the mitochondria responding to a deprivation of nutrients by changing form in order to survive. The long image sequences - of up to 600 images - are the first ever made of mitochondria at the relatively high spatial resolution of 90 nanometres. (See Video 1, which shows a time-lapse sequence recorded over nearly 7 minutes.)

Over a few minutes the inner mitochondrial structure changed dramatically in a number of ways. Initially, elongation and an increase in the number of cristae were seen. One image sequence (see Figure 2a) shows inner membranes of neighbouring mitochondria fusing together - in other words, two mitochondria fusing to make one. Another image sequence (see Figure 2b) shows two cristae within a single mitochondrion apparently fusing together. Elongation and the creation of more cristae are thought to increase the efficiency of energy production (ATP synthesis) while protecting the mitochondrion from "autophagosomal degradation" - a programmed death whose purpose is to remove unnecessary or dysfunctional components from the cell and allow the orderly degradation and recycling of cellular components.

After the initial period of elongation, the inner membranes of some mitochondria split into globules that swelled and lost cristae (see Movie S2); some globules ruptured (Movie S4). Some formed concentric spheres (Figure 1 and Video 1). Throughout, the fluorescence intensity remained the same, and the cristae and membranes remained as sharply imaged as before - which indicates that the mitochondria did not die from toxicity caused by degradation of the marker molecule under the beam. The extremely strong STED laser might have damaged the mitochondria, although exactly why they ruptured is unknown.

In these images, after seeing initial survival responses, we are watching the death of mitochondria under the intense STED beam. A future direction of research will be to reduce the intensity of the STED laser beam by creating a fluorescent marker molecule that glows when illuminated by light of a longer wavelength and therefore lower energy. The mitochondria might then live longer.

However, even with MitoPB Yellow, the dying process - which is not well understood - can be studied. Nobody knows if the morphological (structural) changes observed during the dying process are related to apoptosis (normal, controlled death) or necrosis (death due to injury or malfunction). Apoptosis is known to be triggered by a signalling molecule called cytochrome C: if a reagent can be found that suppresses cytochrome C, then mitochondria - and human cells - could live longer.

Being able to see the processes occurring inside mitochondria should lead to a better way of diagnosing human mitochondrial disease - and perhaps even to a cure.

Credit: 
Nagoya University

Nanovectors could improve the combined administration of antimalarial drugs

Barcelona, August 8, 2019 - Encapsulating two drugs with different properties into nanovesicles surrounded by antibodies can greatly improve their delivery and efficacy, according to a study led by Xavier Fernández Busquets, director of the joint Nanomalaria unit at the Institute for Bioengineering of Catalonia (IBEC) and the Barcelona Institute for Global Health (ISGlobal), an institution supported by "la Caixa".

Combining two drugs that act through different mechanisms is one of the most efficient approaches currently used to treat malaria. However, differences in the drugs' physicochemical properties (solubility, half-life, etc.) often affect treatment efficacy.

To overcome this obstacle, Fernández Busquets and his team have developed a nanovector - consisting of small spheres, or liposomes - that can simultaneously transport compounds that are soluble in water (hydrophilic) and in lipids (lipophilic).

"By encapsulating both drugs in the same nanovector, we make sure that both will persist for the same time in the organism," explains Fernández-Busquets. As proof of concept, the research team introduced the water-soluble drug pyronaridine in the liposome lumen and the lipid-soluble drug atovaquone in its membrane. In addition, they covered the liposome with an antibody that recognises a protein expressed by red blood cells (whether they are infected or not) and gametocytes (the sexual phase of the parasite, responsable for host-to-host transmission). Hence the term immunoliposome.

The results show that both drugs, when encapsulated, inhibited parasite growth in vitro at concentrations that had no effect when used as free drugs. Thanks to the antibodies, the liposomes rapidly bound to the target cells and efficiently delivered the drug.

The authors argue that this strategy could be used in the near future to treat severe cases of malaria admitted to hospital, where intravenous administration of liposomes is feasible. The challenge lies in developing nanovectors suitable for oral administration in order to treat uncomplicated malaria, a line of work the research group is currently exploring.

"Current treatments target mostly the asexual phases of the parasite. This new strategy would also target the sexual phase or gametocyte, the only phase that can be transmitted from humans to mosquitoes, and would thereby contribute to reducing the emergence and spread of antimalarial resistance," adds the researcher.

Credit: 
Barcelona Institute for Global Health (ISGlobal)

A key piece to understanding how quantum gravity affects low-energy physics

Researchers have, for the first time, identified the sufficient and necessary conditions that the low-energy limit of quantum gravity theories must satisfy to preserve the main features of the Unruh effect.

In a new study, led by researchers from SISSA (Scuola Internazionale Superiore di Studi Avanzati), the Complutense University of Madrid and the University of Waterloo, a solid theoretical framework is provided to discuss modifications to the Unruh effect caused by the microstructure of space-time.

The Unruh effect, named after the Canadian physicist William Unruh, who predicted it in 1976, is the prediction that an accelerating observer - someone under propulsion, for example - would observe photons and other particles in seemingly empty space, while an inertial observer would see a vacuum in that same region.

"Inertial and accelerated observers do not agree on the meaning of 'empty space," says Raúl Carballo-Rubio, a postdoctoral researcher at SISSA, Italy. "What an inertial observer carrying a particle detector identifies as a vacuum is not experienced as such by an observer accelerating through that same vacuum. The accelerated detector will find particles in thermal equilibrium, like a hot gas."

"The prediction is that the temperature recorded must be proportional to the acceleration. On the other hand, it is reasonable to expect that the microstructure of space-time or, more generally, any new physics that modifies the structure of quantum field theory at short distances, would induce deviations from this law. While probably anyone would agree that these deviations must be present, there is no consensus on whether these deviations would be large or small in a given theoretical framework. This is precisely the issue that we wanted to understand."

"What we've done is analyzed the conditions to have Unruh effect and found that contrary to an extended belief in a big part of the community thermal response for particle detectors can happen without a thermal state," said Eduardo Martin-Martinez, an assistant professor in Waterloo's Department of Applied Mathematics. "Our findings are important because the Unruh effect is in the boundary between quantum field theory and general relativity, which is what we know, and quantum gravity, which we are yet to understand."

"So, if someone wants to develop a theory of what's going on beyond what we know of quantum field theory and relativity, they need to guarantee they satisfy the conditions we identify in their low energy limits."

The researchers analyzed the mathematical structure of the correlations of a quantum field in frameworks beyond standard quantum field theory. This analysis was then used to identify three conditions that are necessary and sufficient to preserve the Unruh effect. These conditions can be used to determine the low-energy predictions of quantum gravity theories, and the findings of this research provide the tools necessary to make these predictions in a broad spectrum of situations.

Having been able to determine how the Unruh effect is modified by alterations of the structure of quantum field theory, as well as the relative importance of these modifications, the researchers believe the study provides a solid theoretical framework to discuss and perhaps test this particular aspect as one of the possible phenomenological manifestations of quantum gravity. This is particularly important and appropriate even though the effect has not yet been measured experimentally, as it is expected to be verified in the not-so-distant future.

Credit: 
Scuola Internazionale Superiore di Studi Avanzati

Observation-driven research to inform better groundwater management policies

Groundwater maintains vital ecosystems and strongly influences water and energy budgets. Although at least 400 million people in sub-Saharan Africa depend on this valuable resource for their domestic water needs, the processes that sustain it, and their sensitivity to climatic variability, are poorly understood. IIASA contributed to a study that looked into climate impacts on groundwater in light of changing climatic patterns in Africa.

Groundwater is a hidden resource that collects and flows beneath the Earth's surface, filling the porous spaces in soil, sediment, and rocks. It is an important part of the water cycle and a source for aquifers, springs, and wells. It mostly originates from, and is also replenished by, rain, melting snow, and other forms of precipitation that soak into the ground. It plays a central role in sustaining water supplies and livelihoods in sub-Saharan Africa due to its widespread availability, generally high quality, and ability to buffer the impacts of the drought and climate variability that characterize this region. Groundwater levels are governed by a complex interplay between replenishment from rain and other sources and outflows to streams, lakes, oceans, or the atmosphere, all of which are in turn influenced by a variety of factors including climate, geology, land cover and, of course, human abstraction. Demand for groundwater is growing rapidly across the continent, which makes it crucial for policymakers to put mechanisms in place for the sustainable management of this valuable resource into the future.

According to the authors of the study published in the journal Nature, a robust, data-driven understanding of groundwater recharge - and critically its dependence on climate - is fundamentally required to inform water resource decision-making. In addition, the authors emphasize that an improved understanding of groundwater-climate sensitivity is integral to understanding not only today's water-climate-ecological-human interactions across the region, but also those of the past. Observational data on groundwater resources in Africa are, however, sorely lacking, which has caused regional governments to rely heavily on large-scale hydrological models to obtain estimates of potential groundwater resources for their water security assessments. Unfortunately, these models remain unvalidated against groundwater observations, which means that the estimates derived from them carry a high degree of uncertainty.

To address these issues, the researchers set out to collect available groundwater data from nine countries across sub-Saharan Africa, after which they looked into the climate impacts on groundwater, taking into account the changing climatic patterns in recent years. The 14 resulting multi-decadal hydrographs and accompanying precipitation records cover a wide range of climate zones from hyper-arid to humid, as well as a diverse range of geological and landscape settings.

Most of the hydrographs indicate that higher rainfall does not necessarily equate to higher recharge values for groundwater, and that aridity and episodicity play an important role in determining the amount of groundwater replenishment. In this regard, the authors highlight seasonal groundwater-level rises of varying magnitude, showing more gains than losses of groundwater during most years on record. The exceptions to this pattern are Tanzania, Namibia, and South Africa, where multi-year continuous groundwater-level declines, marked by episodic replenishment events, were observed. In addition, long-term rising trends observed in the hydrographs for Niger reflect increases in recharge rates following the clearance of large patches of native vegetation in the 1960s - increases that have not yet equilibrated with rates of net groundwater drainage, owing to the area's long groundwater response times. The absence of long-term trends in other areas indicates a relatively stable balance between long-term rates of groundwater replenishment and outflow.

"Groundwater recharge mechanisms are very complex in Africa. In some sub-Saharan African countries, drying climate trends will lead to challenges for meeting increasing demands for water. Management of groundwater resources in Africa is urgently needed to utilize changing rainfall patterns and the increasing demand for groundwater. This will require observation data and good governance frameworks, which unfortunately is currently often lacking," explains study coauthor and IIASA Acting Water Program Director, Yoshihide Wada.

The researchers say that their data-driven results generally imply a greater resilience to climate change than previously supposed in many locations, at least from a groundwater perspective. They point out, however, that much more observation-driven research is needed to clarify issues around the management of groundwater and to address the balance of change between groundwater and surface water resources.

Credit: 
International Institute for Applied Systems Analysis

New research provides better way to gauge pain in mice

CAMDEN - For decades, biomedical researchers have used mouse behavior to study pain, but some researchers have questioned the accuracy of the interpretations of how mice experience pain.

Now, Rutgers University-Camden neuroscientist Nathan Fried and colleagues from the University of Pennsylvania have developed a method that can more accurately gauge pain in mice, which could lead researchers to discover new ways to treat pain in human patients.

"When I touch the paw of a mouse, it withdraws the paw. That withdrawal movement is the behavior we've relied on for decades to determine if a pain reliever is working. But that withdrawal is seemingly the same no matter if it's a soft brush or a sharp needle," describes Fried. "So if a mouse moves its paw, how can we be sure it's because the mouse is in pain?"

Using slow-motion video, modern neuroscience techniques, and artificial intelligence, Fried and his fellow researchers could zoom in and perform a more detailed analysis of what a mouse is feeling when it withdraws its paw. The researchers created a "mouse pain scale," which they used to assess pain sensation in a graded manner.

"We can actually analyze the quality of the movement in the animal's paw," says Fried, a Rutgers-Camden assistant teaching professor of biology. "By doing that, we can extract much more information from what the animal is actually experiencing. Importantly, instead of simply saying whether the mouse is or is not in pain, now we can assess the degree of pain the mouse is in."

"We need to do a better job in helping chronic pain patients without using opioids," he continues. "Testing pain therapeutics on mice has been very difficult. This new process refines our ability to determine whether a mouse is in pain, which increases our confidence in whether a new therapeutic will work in humans."

One of the major challenges for pain researchers is the subjective experience of pain. Each patient feels pain in very different ways. In describing pain on a scale of one to 10, one person's pain that feels like a seven might be a 10 for someone else. Measuring pain in a mouse, a nonverbal animal, is even more challenging. "Imagine trying to guess how much pain your friend is in by only looking at their behavior," says Fried. "That's what we're trying to do with the mice because they can't describe their pain to us."

The scientists' videos revealed that when they touched the animal's paw with a cotton swab, it lifted its paw and placed it right back down. When a researcher poked the animal with a pinprick, the animal reacted very differently. In the slow-motion video, they could see that the animal moved its paw, shook the paw, squinted its eyes, and pulled its body back or jumped up in the air. All of these movements were impossible to see in real time. It wasn't until they slowed the movements down by recording at 1,000 frames per second that they could see the nuances of the withdrawal.
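To see how such sub-second behavioral features might be condensed into a graded measure, here is a minimal sketch; the feature names, weights, and thresholds are hypothetical placeholders, not the statistical model from the Cell Reports paper:

```python
from dataclasses import dataclass

@dataclass
class WithdrawalFeatures:
    # Features extracted from 1,000-fps video of a single paw withdrawal.
    # All names, units, and weights below are hypothetical placeholders.
    paw_height_mm: float  # peak height of the paw above the platform
    shake_count: int      # number of rapid paw shakes after withdrawal
    eye_squint: bool      # orbital tightening observed
    jump: bool            # whole-body jump observed

def pain_score(f: WithdrawalFeatures) -> float:
    """Toy graded score in [0, 1]; the weights are illustrative assumptions."""
    score = 0.0
    score += min(f.paw_height_mm / 20.0, 1.0) * 0.4  # higher lift -> higher score
    score += min(f.shake_count / 3.0, 1.0) * 0.3
    score += 0.15 if f.eye_squint else 0.0
    score += 0.15 if f.jump else 0.0
    return score

# A cotton-swab touch vs. a pinprick, per the behaviors described above:
print(pain_score(WithdrawalFeatures(2.0, 0, False, False)))  # ~0.04
print(pain_score(WithdrawalFeatures(15.0, 3, True, True)))   # ~0.90
```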

The research, "Development of a Mouse Pain Scale Using Sub-second Behavioral Mapping and Statistical Modeling," led by Fried and Ishmail Abdus-Saboor of the University of Pennsylvania, is published in the journal Cell Reports.

Fried says that other researchers can build on his work by using the new technique in their labs. He envisions publicly available software that researchers can download and use for their own pain studies.

"If we can create open-sourced software," says Fried, "then other labs are more likely to use it. And if we improve the accuracy of our pain measurements in mice, it'll inevitably increase the chances that we'll find new pain therapeutics for humans."

Fried began the research in 2015 as a postdoctoral fellow in the Wenqin Luo Lab at the University of Pennsylvania, and completed the work after arriving at Rutgers University-Camden last year.

Using the method he and his colleagues developed, the Rutgers-Camden scholar is continuing his pain research utilizing fruit flies instead of mice.

"These little creatures can tell us a lot about the mechanisms behind pain," says Fried. "One of the nicest things about using fruit flies is that they are accessible to undergraduates, which allows Rutgers-Camden students to conduct research on a day-to-day basis."

Fried seeks to engage a new generation of scientists by giving Rutgers-Camden undergraduate students a chance to do significant research that he hopes will lead them to a career in science.

Credit: 
Rutgers University

Quantum momentum

Quantum mechanics is an extraordinarily successful way of understanding the physical world at extremely small scales. Through it, a handful of rules can be used to explain the majority of experimentally observable phenomena. Occasionally, however, we come across a problem in classical mechanics that poses particular difficulties for translation into the quantum world. A new study published in EPJ D has provided some insights into one of them: momentum. The authors, theoretical physicists Fabio Di Pumpo and Matthias Freyberger from Ulm University, Germany, present an elegant mathematical model of quantum momentum that is accessible through another classical concept: time-of-flight.

Many people will recall the traditional definition of momentum from high-school physics as being the product of the mass of an object and the velocity at which it is travelling. In quantum theory an object is represented by a wave function and its position cannot be determined unless the wave function is 'collapsed' into a single state. This is the essence of measurement in quantum mechanics.

Classical momentum can be obtained simply by measuring the time an object takes to pass between two stationary detectors ('time-of-flight'), finding the velocity and multiplying by the mass. Di Pumpo and Freyberger have developed a model of the quantum equivalent of this experiment in which the roles of time and distance are reversed: the time points are fixed, and the probabilistic positions of the wave function at each point - and thus the distance between them - are estimated. This approach uses additional quantum systems called pointers, which are coupled to the moving wave packet using a method developed by von Neumann, with measurements made on the pointers rather than on the wave packet itself.
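For contrast, the classical time-of-flight recipe that the quantum scheme inverts takes one line of arithmetic; the mass, detector spacing, and transit time below are arbitrary illustrative values:

```python
# Classical time-of-flight momentum: two fixed detectors a known distance
# apart, one measured transit time, then p = m * d / t.
# The mass, spacing, and time are arbitrary illustrative values.
m = 9.109e-31  # kg (electron mass)
d = 0.10       # m, distance between the two detectors
t = 5.0e-9     # s, measured time of flight

v = d / t      # velocity: 2.0e7 m/s
p = m * v      # momentum: ~1.8e-23 kg m/s
print(p)
```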

Di Pumpo and Freyberger were thus able to derive a single, measurable quantity that is a quantum equivalent of the classical time-of-flight, and to calculate the momentum of a quantum particle quite precisely on this basis. They end the paper by suggesting ways of further improving the accuracy of the measurement.

Credit: 
Springer

Spinning towards robust microwave generation on the nano scale

Spin-torque oscillators (STOs) are nanoscale devices that generate microwaves using changes in magnetic field direction, but those produced by any individual device are too weak for practical applications. Physicists have attempted - and, to date, consistently failed - to produce reliable microwave fields by coupling large ensembles. Michael Zaks from Humboldt University of Berlin and Arkady Pikovsky from the University of Potsdam in Germany have now shown why connecting these devices in series cannot succeed, and, at the same time, suggested other paths to explore. Their work was recently published in EPJ B.

The physics behind spin-torque oscillations is the same as that behind the hard disk drive of the computer on which you are very likely reading this text. It is a quantum mechanical effect known as 'giant magnetoresistance', in which changing the external magnetic field around a stack of layers of alternating ferromagnetic and non-magnetic metals gives rise to substantial changes in electrical resistance.

If the electric current flowing through the stack is strong enough and the magnetic layers are free to rotate, magnetic oscillation occurs and microwaves are generated; this is the STO effect. However, only synchronised oscillations from large ensembles of oscillators can produce microwaves that are sufficiently powerful to be useful. Zaks and Pikovsky's work illustrates why it has proven so difficult to synchronise them.

To do so, the physicists simulated the motion of an ensemble of serially coupled STOs using the equations of non-linear dynamics. Their analysis revealed that the ensembles were always too unstable for the oscillations to remain coherent. In particular, they found that the random fluctuations of electric current that affect all oscillators simultaneously - so-called 'common noise' - do not stabilise the oscillations, as some had predicted. Instead, in some cases, sufficiently strong fluctuations were able to suppress the oscillations altogether.

Zaks and Pikovsky have dubbed this newly discovered phenomenon 'noise-induced oscillation death'. Armed with new theoretical knowledge on this system, they are now investigating other methods for coupling these nanoscale machines to produce robust microwaves on the macro scale.

Credit: 
Springer

What can you do with two omes that you can't do with one?

image: As analytical tools improve, researchers are combining proteomic, genomic, transcriptomic and other omic data to better understand biological systems. The current issue of Molecular and Cellular Proteomics focuses on multiomics tools and research results.

Image: 
Molecular & Cellular Proteomics

What can you learn from two omes that you can't tell from one? You might determine how different bacterial strains in a water sample contribute specific functions to its overall microbiome. You might find that duplication of a section of a chromosome in cancer cells has wide-reaching effects on important proteins--or that it has a smaller effect than expected. First, though, you need to find a way to wrangle gigabytes of data saved in numerous, perhaps incompatible formats.

As high-throughput analytical tools improve, allowing researchers to collect more and more data, the challenge becomes how to interpret it all. When transcriptomic, genomic, metabolomic and proteomic analyses are layered together, parsing out a signal can be a monumental task. The field needs new analytical strategies, and new user-friendly software, to condense RNA counts, genotypes, and mass spectra showing proteins, post-translational modifications, complex carbohydrates and metabolites into coherent, interpretable results.
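As a cartoon of what "layering omes together" means at the data level, here is a minimal sketch joining a transcriptomic table to a proteomic one on a shared gene identifier; the tables, column names, and values are hypothetical:

```python
import pandas as pd

# Hypothetical inputs: a transcriptomic table of RNA counts and a
# proteomic table of protein abundances, keyed by a shared gene ID.
rna = pd.DataFrame({
    "gene_id": ["TP53", "MYC", "EGFR"],
    "rna_count": [1200, 3400, 150],
})
protein = pd.DataFrame({
    "gene_id": ["TP53", "EGFR", "KRAS"],
    "protein_abundance": [8.1, 2.3, 5.6],
})

# An outer join keeps genes observed in either ome; the missing values
# it produces flag where the two layers disagree on coverage.
merged = rna.merge(protein, on="gene_id", how="outer")
print(merged)
```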

Researchers surfing this multiomics wave report a plethora of new tools and approaches this month in a special issue of the journal Molecular & Cellular Proteomics devoted to integrating multiple omics. The issue, edited by Bernhard Kuster of the Technical University of Munich and Bing Zhang of the Baylor College of Medicine, includes sixteen articles that explore ways to combine data from two or more omes at a time.

Credit: 
American Society for Biochemistry and Molecular Biology

Kids might be naturally immunized after C. difficile colonization in infancy

Exposure to C. difficile in infancy produces an immune response that might protect against this gastrointestinal infection later in childhood, according to a study published in the journal Clinical Infectious Diseases. Researchers found that infants who were naturally exposed to C. difficile in the environment and became colonized with the bacteria had antibodies in their blood. Analyses using a state-of-the-art assay revealed that these antibodies neutralized the toxins that cause C. difficile infection, preventing harmful effects to cells exposed to these toxins. This suggests that a natural immunization occurs, although future studies will need to determine whether it would prevent illness years later, after another C. difficile exposure.

"We found an immune response in infants colonized with C. difficile, which might be beneficial as they get older, although we are still studying the extent and duration of this natural immunization," says lead author Larry Kociolek, MD, MSCI, from Stanley Manne Children's Research Institute at Ann & Robert H. Lurie Children's Hospital of Chicago, who is an Assistant Professor in Pediatrics-Infectious Diseases at Northwestern University Feinberg School of Medicine and Associate Medical Director of Infection Prevention and Control at Lurie Children's. "We are optimistic because we know from previous studies that adults with anti-toxin antibodies have a lower risk for illness from C. difficile infection."

C. difficile is a frequent cause of community- and healthcare-associated infection in adults and children. While roughly half of all infants are exposed, they normally do not get sick from these bacteria. Older children and adults usually get diarrhea that needs to be treated with antibiotics. A more severe form of the infection may cause inflammation of the colon that requires surgery and could be fatal. Children tend to have milder symptoms than adults. The pediatric incidence of C. difficile infection peaks in the 1-to-4-year age group and during the teenage years.

"Given our results, we suspect that young children who get sick from C. difficile were probably not exposed as infants and so did not develop immunity," says Dr. Kociolek. "In adolescents, immunity might be waning. If with more research we can show that this is true, then there might be a role for vaccinating susceptible children and teens against C. difficile."

Currently, vaccines against C. difficile toxins are in clinical trials for adults. Pediatric clinical trials will be needed before a vaccine is available for children and adolescents.

Credit: 
Ann & Robert H. Lurie Children's Hospital of Chicago

How do you forecast eruptions at volcanoes that sit 'on the cusp' for decades?

image: This is the Telica Volcano in Nicaragua.

Image: 
Carnegie Institution for Science

Washington, DC--Some volcanoes take their time--experiencing protracted, years-long periods of unrest before eventually erupting. This makes it difficult to forecast when they pose a danger to their surrounding areas, but Carnegie's Diana Roman and Penn State's Peter LaFemina are trying to change that.

"Dormancy, brief unrest, eruption--this is a familiar pattern for many volcanoes, and for many parents," joked Roman. "But for some volcanoes the unrest is anything but brief--potentially lasting for decades.'"

It turns out that these so-called "persistently restless volcanoes" experience three different states of unrest, some of which are more likely to result in explosive eruptions than others, according to Roman and LaFemina's 10-year research project on the Telica Volcano in Nicaragua. Their latest results are published in Geochemistry, Geophysics, Geosystems.

"Persistently restless volcanoes remain right on the cusp of being about to erupt, periodically tipping into outright eruption," explained lead author Roman.

She, LaFemina and an international group of collaborators found that Telica experienced three states of unrest during the decade of their observations--two of which may lead to eruptions.

The first is accompanied by relatively steady release of gas as well as changes in seismic activity. In this state, gas can move easily through and out of the system of cracks and reservoirs underlying the volcano, avoiding pressure accumulations that lead to explosions. But sometimes one of these channels gets sealed off, blocking the release of gas. The second and third states are defined by the strength of the blockage.

The second state is characterized by a series of weak explosions until one comes along that is strong enough to remove the obstruction. However, if the explosions only partially remove the blockage, it can lead to the third, destabilized state, in which pressure rapidly accumulates, driving deformation of the surrounding landscape and large explosions that include ejection of rock fragments. Roman and LaFemina believe that the 2011 eruption of Telica is an example of the former and the 2015 eruption is an example of the latter.

"Over the course of our decade of monitoring Telica, we observed all three of our proposed states of unrest," LaFemina noted. "This reinforces the importance of continuous surveillance efforts."

He and Roman said that their team's findings could lead to forecasting models for persistently restless volcanoes, but first it is critical to establish similar patterns at other persistently restless volcanoes like Telica.

Credit: 
Carnegie Institution for Science

Patterns of substance use and co-use by adolescents

Using in-depth interviews with 13 adolescents (16-19 years of age) who used alcohol and marijuana, this study examines the role that social and physical contexts play in adolescent decision-making about simultaneous use of alcohol and marijuana.

The research findings show that context matters in three ways:

1) context characteristics inform decisions about simultaneous use,

2) context characteristics determine simultaneous use patterns such as the sequence in which substances were used, and

3) simultaneous alcohol and marijuana use occurred in both destination locations and transitional locations.

First, researchers found that adolescents described decisions about which substance to use - or both together - based on how the physiological effects of the substance would fit with the social, physical, and situational characteristics of the context.

For example, marijuana was named as a substance that could be used in situations during which youth had to maintain control or where they were likely to encounter authority figures.

"Because, I don't know, people usually get high because they like the way it enhances everything... [Of country clubs, school events, and malls] I would only smoke weed there, because the only substance I feel where I can fully act like myself and not be obvious would be weed. 'Cause if it's public, you know. Restaurant, same thing. I wouldn't go drunk to a restaurant, but I would go high." (Participant 5, Male, age 17)

Second, the choice of which drug to use - or both together - was related to how adolescents wanted to feel or behave in a particular context.

"Generally, the formula that I kind of go with is there will usually be a pregame. I'll have a couple of shots of something. Usually it's whatever we have on hand, if it's vodka or if it's whiskey. Then we'll walk to the party...Usually at the party, I'll drink mostly hard alcohol, 'cause that's what there is. Then kind of as the night winds down, maybe if there's beer, I'll drink that...Then usually, at the end of the night, me and my friends will smoke [marijuana], because it helps kind of wind down and it's supposed to make the hangover less bad." (Participant 8, Male, age 18)

Finally, adolescents described simultaneous alcohol and marijuana use in two types of contexts: destination and transitional. Destination contexts were places where adolescents stayed for a long period of time. Transitional contexts were on the way to a place, such as in a car.

The authors conclude that interventions designed to reduce simultaneous alcohol and marijuana use could benefit from paying attention to substance use contexts.

Credit: 
Pacific Institute for Research and Evaluation

In the inner depths of the ear: The shape of the cochlea is an indicator of sex

image: Average female (left) and male (right) shapes for the cochlear spiral curve, whose torsion has been coded on a coloured scale. While the two forms are oriented in the same way, the geometric differences are visible.

Image: 
C. Samir, A. Fradi, and J. Braga

The auditory section of the inner ear, or "cochlea," does not have the same shape from birth in males and females. This is due to the torsion of the cochlear spiral, which differs between the sexes, especially at its tip. Demonstrated by a French-South African collaboration--an interdisciplinary effort involving scientists primarily from the CNRS, UT3 Paul Sabatier, and l'Université Clermont Auvergne--these results have helped develop the first reliable method for sex determination, including for children and in cases where DNA is missing or too degraded. Until now, it was impossible to determine the sex of a child from its skeleton, while for adults this could be done reliably only by studying the pelvis, which is not always preserved. Since the cochlea sits within one of the hardest bones in the skull--a bone that is found very frequently at archaeological sites--this technique can determine the sex of very old fossils, even when fragmentary or immature. This research was featured in an article published in Scientific Reports.

This research received support from the CNRS as part of the 80|Prime programme, designed to support and strengthen interdisciplinarity among CNRS institutes.

Credit: 
CNRS

Magnetic plasma pulses excited by UK-size swirls in the solar atmosphere

image: The photospheric and chromospheric images were recorded with the Hinode satellite, while the coloured lines between them visualize magnetic field lines from the researchers' realistic numerical simulations using the Sheffield Advanced Code (SAC). Red and blue curves are swirls detected by the Automated Swirl Detection Algorithm (ASDA) developed by the researchers.

Image: 
Liu et al., Nature Communications, 10:3504, 2019

Research led by the University of Sheffield shows the first observational evidence that ubiquitous swirls in the solar atmosphere could generate short-lived Alfvén pulses

These Alfvén pulses have been found to be generated by prevalent photospheric plasma swirls the size of the UK

An international team of scientists led by the University of Sheffield have discovered previously undetected observational evidence of frequent energetic wave pulses the size of the UK, transporting energy from the solar surface to the higher solar atmosphere.

Magnetic plasma waves and pulses have been widely suggested as one of the key mechanisms which could answer the long-standing question of why the temperature of the solar atmosphere rises dramatically, from thousands to millions of degrees, as you move away from the solar surface.

There have been many theories put forward, including some developed at the University of Sheffield - for example, heating the plasma by magnetic waves or magnetic plasma - but observational validation of the ubiquity of a suitable energy transport mechanism has proved challenging until now.

By developing innovative approaches, applied mathematicians at the Solar Physics and Space Plasma Research Centre (SP2RC) in the School of Mathematics and Statistics at the University of Sheffield, and the University of Science and Technology of China, have discovered unique observational evidence of plentiful energetic wave pulses, named after the Nobel laureate Hannes Alfvén, in the solar atmosphere.

These short-lived Alfvén pulses have been found to be generated by prevalent photospheric plasma swirls about the size of the British Isles, which are estimated to number at least 150,000 in the solar photosphere at any given moment.

Professor Robertus Erdélyi (a.k.a. von Fáy-Siebenbürgen), Head of SP2RC, said: "Swirling motions are everywhere in the universe, from sinking water in domestic taps with a size of centimeters, to tornadoes on Earth and on the Sun, solar jets and spiral galaxies with a size of up to 520,000 light years. This work has shown, for the first time, the observational evidence that ubiquitous swirls in the solar atmosphere could generate short-lived Alfvén pulses.

"The generated Alfvén pulses easily penetrate the solar atmosphere along cylinder-like magnetic flux tubes, a form of magnetism a bit like trees in a forest. The pulses could travel all the way upward and reach the top of the solar chromospheric layers, or, even beyond."

Alfvén modes are currently very hard to observe directly, because they do not cause any local intensity concentrations or rarefactions as they make their journey through a magnetised plasma. They are also hard to distinguish observationally from some other types of magnetic plasma modes, such as the well-known transversal magnetic plasma waves often called kink modes.

"The energy flux carried by the Alfvén pulses we detected now are estimated to be more than 10 times higher than that needed for heating the local upper solar chromosphere," said Dr Jiajia Liu, postdoctoral research associate.

"The chromosphere is a relatively thin layer between the solar surface and the extremely hot corona. The solar chromosphere appears as a red ring around the Sun during total solar eclipses."

Professor Erdélyi added: "It has been a fascinating question for the scientific community for a long while - how the Sun and many other stars supply energy and mass to their upper atmospheres. Our results, as part of an exciting UK-China collaboration, involving our very best early-career scientists like Drs Jiajia Liu, Chris Nelson and Ben Snow, are an important step forward in addressing the supply of the needed non-thermal energy for solar and astrophysical plasma heating.

"We believe, these UK-sized photospheric magnetic plasma swirls are also very promising candidates not just for energy but also for mass transportation between the lower and upper layers of the solar atmosphere. Our future research with my colleagues at SP2RC will now focus on this new puzzle. "

Credit: 
University of Sheffield

Study shows why a common form of immunotherapy fails, and suggests solution

WASHINGTON (August 5, 2019) -- New research has uncovered a mechanism thought to explain why some cancers don't respond to a widely used form of immunotherapy called "checkpoint inhibitors" or anti-PD-1. In addition, the scientists say they have found a way to fix the problem, paving a way to expand the number of patients who may benefit from the treatment.

Immunotherapy, which enables the body's own immune system to attack cancer, has not yet met the promise it holds. While it has been a major advance in the treatment of cancer, up to 85 percent of patients whose cancer is treated with checkpoint inhibitors don't benefit, according to estimates.

In a new study published online July 29 in Nature Immunology, a research team led by Samir N. Khleif, MD, director of The Loop Immuno-Oncology Laboratory at Georgetown Lombardi Comprehensive Cancer Center, shows that the condition of immune cells (T cells) prior to anti-PD-1 therapy is a crucial determinant of the cancer's ability to respond.

"If the immune cells are not in the appropriately activated state, treatment with anti-PD-1 drives these T cells into a dysfunctional, non-reprogrammable state, inducing resistance to further immune therapy," Khleif explains.

In order to prevent the immune system from attacking normal cells, the body has a way of protecting these cells from immune attack. Cancer cells often adopt this system of checkpoints in order to put the brakes on immune surveillance to protect themselves and grow. Checkpoint inhibitors release those brakes.

These inhibitors target molecules such as PD-1 (programmed cell death 1), which sits on the surface of a T cell, and PDL-1 (PD-ligand 1), which is present on tumor cells and binds PD-1. This PD-1/PDL-1 pairing inhibits the normal functioning of killer CD8+ T cells, which would otherwise attack the cancer cell. So drugs, in the form of antibodies that bind to either PD-1 or PDL-1, work to remove that protection, allowing T cells to recognize and attack the tumor.

Khleif says it has been known that the tumors that respond more readily to checkpoint inhibitors are those that have already engaged the immune system, such as melanoma and cancers that express a lot of mutations. The question has been why the agents don't work on immunologically "quiet" tumors. This discovery now sheds light on the issue.

The team also was able to find a strategy to overcome such resistance to immunotherapy.

"When we first activate T cells by using a simple vaccine, or remove the dysfunctional T cells, we found that the checkpoint inhibitor therapy works better," says Khleif, biomedical scholar and professor of oncology at Georgetown Lombardi.

He added that clinical trials are already being developed to confirm in patients these findings, which were made using animal models and patient tumor samples. Cancer vaccines, based on a patient's specific tumor, are being explored as a way to prime the tumors - to invigorate T-cell activity and to enhance PD-1 inhibitors.

"In the past, some of these vaccines have been used after checkpoint immunotherapy. Our findings suggest that the vaccines should be used first, or at least in conjunction with anti-PD-1 therapy," says Khleif.

By examining patient tumor samples from several clinical trials, the researchers have also discovered a signature - a "biomarker" - that identifies patients who would be resistant to the therapy.

"This might provide an easy and cost-effect prediction method of drug response," Khleif says.

Credit: 
Georgetown University Medical Center

Water treatment cuts parasitic roundworm infections affecting 800 million people

image: Water sources may be a significant factor in the transmission of roundworm (usually via fecal contamination). Proper water treatment alone can reduce roundworm transmission by 18 percent.

Image: 
Amy Pickering, Tufts University

MEDFORD/SOMERVILLE, Mass. (August 1, 2019)-- Roundworm infections can be reduced significantly simply by improving the treatment and quality of drinking water in high risk regions, according to an international team of researchers led by Tufts University.

The discovery emerged from a two-year study, published in PLoS Medicine, which examined the effects of water quality, sanitation, handwashing and nutritional interventions on rates of intestinal worm and Giardia infections in rural Kenya. Water treatment alone was sufficient to cause an 18 percent reduction in roundworm (Ascaris) infection rates; the reduction was 22 percent when water treatment was combined with improved sanitation and handwashing with soap. None of the interventions reduced the prevalence of Giardia infections among the young children studied.

Intestinal worm and protozoan infections affect more than 1 billion children worldwide and are associated with stunted growth and impaired cognitive development. These parasites often reside in soil, contaminated drinking water, or on fecal-contaminated surfaces, and commonly infect children in low-resource settings. High re-infection rates have prevented school-based mass drug administration programs from controlling the transmission of these parasitic infections. The study authors hypothesized that improved water quality, sanitation, hygiene and/or nutrition could interrupt the environmental transmission of parasites, but few trials evaluating these interventions have measured actual infections as an outcome. In contrast to aggressive medical treatment programs, water treatment, sanitation, and handwashing approaches represent a sustainable approach to disease control.

"Out of all the interventions we tested, we were extremely surprised that water treatment appeared to be the most effective at reducing roundworm infections. Water treatment is a relatively unexplored strategy for intestinal worm control," said Amy Pickering, assistant professor of civil and environmental engineering at Tufts University, and first author of the study. "At least 800 million people in the world are infected by roundworm (Ascaris lumbricoides), so even a relative reduction of 18 percent from water treatment interventions could have a major beneficial impact. Our study also suggests that water treatment could complement large-scale deworming medication delivery programs in the global effort to eliminate roundworm infections."

With reinfection rates reaching 94 percent after deworming treatment for roundworm infection, a combined approach of mass drug administration and environmental controls (water, sanitation, hygiene) could be critical to gaining an upper hand on these endemic infections, the researchers say.

Credit: 
Tufts University