CU Anschutz scientists reverse deadly impacts of asthma in mice

AURORA, Colo. (Jan. 13, 2021) - Mucus in the lungs can be fatal for asthma patients, but scientists at the University of Colorado Anschutz Medical Campus have broken up those secretions at the molecular level and reversed their often deadly impacts.

In a study published Monday in the journal Nature Communications, the researchers explained how they created an inhaled treatment that broke up excess mucus by chemically reducing its disulfide bonds, opening up the airways of asthmatic mice. The same treatment had similar effects on human mucus samples.

"Currently about 10% of the population has asthma," said the study's lead author Christopher Evans, PhD, professor of Pulmonary Sciences & Critical Care at the CU School of Medicine. "Excessive mucus blocks airflow, causing wheezing, and worsening the effects of inflammation and contraction of the muscles that line the airways."

Yet common asthma treatments like bronchodilators and steroids are rarely effective against mucus. Evans said they hydrate the mucus, making it easier to cough up, but fail to treat the problem at the molecular level.

He and his team targeted macromolecules in mucus called polymeric mucin glycoproteins. In healthy individuals, they help protect the lungs and airways from infection. But when overproduced, they can form gelatinous plugs that block airways, as seen in asthma and other pulmonary conditions.

The researchers tried to shut down this process by breaking up the mucin disulfide bonds that hold these plugs together. They treated asthmatic mice with a chemical known as TCEP (tris(2-carboxyethyl)phosphine), which quickly reversed the disease. It also worked on mucus samples taken from asthma patients.
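
Chemically, what TCEP does to a disulfide cross-link can be sketched as follows (a schematic of the generic phosphine reduction reaction, with R and R' standing for the cross-linked mucin chains; the scheme is standard reagent chemistry, not a figure from the study):

```latex
% Reduction of a disulfide cross-link by TCEP (schematic):
% the phosphine cleaves the S-S bond into two free thiols and is
% itself oxidized to the phosphine oxide in the process.
\mathrm{R{-}S{-}S{-}R'} + \mathrm{P(CH_2CH_2COOH)_3} + \mathrm{H_2O}
\;\longrightarrow\;
\mathrm{R{-}SH} + \mathrm{R'{-}SH} + \mathrm{O{=}P(CH_2CH_2COOH)_3}
```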

"We showed that disrupting mucin disulfide bonds loosens the mucus and reverses the pathological effects of mucus hypersecretion in a mouse allergic asthma model," said study co-author Ana Maria Jaramillo, PhD, a postdoctoral fellow at CU Anschutz. "Loosening the mucus reduces airway inflammation, enhances mucociliary clearance and abolishes airway hyperactivity."

The researchers said that while TCEP itself would likely irritate human lungs, something similar could be added to drugs treating asthma, COPD, cystic fibrosis and other pulmonary diseases, making them much more effective at reducing mucus.

"You can develop safer mucolytic compounds using this kind of strategy," Evans said. "They could help steroids and albuterol penetrate deeper into the lungs and airways. They could be used as an adjunct therapy."

Such new compounds might also be used in treating COPD, pulmonary fibrosis and even infections such as pneumonia or COVID-19 that attack the lungs and airways.

"These findings establish grounds for developing treatments to inhibit effects of mucus hypersecretion in asthma," Evans said. "I believe they have life-saving potential."

Credit: 
University of Colorado Anschutz Medical Campus

Bacteria carried by mosquitoes may protect them against pesticides

image: One of the Culex quinquefasciatus mosquitoes in the study

Image: 
University of Reading

A common bacterial species naturally infecting mosquitoes may actually be protecting them against specific mosquito pesticides, a study has found.

Wolbachia - a bacterium that occurs naturally and spreads between insects - has become more frequently used in recent years as a means of controlling mosquito populations.

Scientists at the University of Reading, INBIOTEC-CONICET and the National University of San Juan in Argentina studied the effect of Wolbachia on a common mosquito species and found those carrying the bacteria were less susceptible to widely used pesticides.

Dr Alejandra Perotti, an Associate Professor in invertebrate biology at the University of Reading, and a co-author of the study, said: "This shows the importance of looking more closely at how bacteria in mosquitoes and pesticides interact, especially at a time when new plans are being formulated for which methods to use, where to use them and which species to target."

Mosquitoes transmit several diseases, such as dengue fever, malaria, Zika and yellow fever, to humans through their bites, and collectively kill more than a million people worldwide every year.

In the new study, published in Scientific Reports, the researchers looked at Culex quinquefasciatus - also known as the southern house mosquito - which had been reared for several years under environmentally controlled conditions at a laboratory in Argentina (the INBIOTEC insectary).

This is one of the most widespread mosquito species in countries with hotter climates. It transmits a wide range of viruses, such as West Nile virus (WNV), St. Louis encephalitis virus (SLEV) and Venezuelan equine encephalitis virus, as well as a variety of parasites (filarial worms), in Central and South America, Africa and Asia.

The team found that mosquito larvae naturally infected by a native Argentinian strain of Wolbachia were less susceptible to three bacterial pesticides (Bacillus thuringiensis israelensis, Bacillus wiedmannii biovar thuringiensis, and Lysinibacillus sphaericus), two of which are commercially available and used in many countries to control mosquito populations.

Credit: 
University of Reading

Blue-light stride in perovskite-based LEDs

image: Researchers at Linköping University, Sweden, have developed efficient blue light-emitting diodes based on halide perovskites.

Image: 
Thor Balkhed

Researchers at Linköping University, Sweden, have developed efficient blue light-emitting diodes based on halide perovskites. "We are very excited about this breakthrough", says Feng Gao, professor at Linköping University. The new LEDs may open the way to cheap and energy-efficient illumination.

Illumination is responsible for approximately 20% of global electricity consumption, a figure that could be reduced to 5% if all light sources consisted of light-emitting diodes (LEDs). The blue-white LEDs currently in use, however, need complicated manufacturing methods and are expensive, which makes it more difficult to achieve a global transition.

LEDs manufactured from halide perovskites could be a cheaper and more eco-friendly alternative for both illumination and LED-based monitors. Perovskites are a family of semiconducting materials defined by their cubic crystal structure. They have good light-emitting properties and are easy to manufacture. Using elements from the halogen group, i.e. fluorine, chlorine, bromine and iodine, perovskites can be given properties that depend on the chemical composition of the crystal.

LEDs for green and red light have already been created with perovskites, but one colour, blue, has so far been lacking, making it impossible to achieve white light.

"Blue light is the key to bringing light-emitting perovskites to practical applications. Our most recent breakthrough is one step on the way", says Feng Gao, professor at the Department of Physics, Chemistry and Biology at Linköping University.

Feng Gao's research group, in collaboration with colleagues in Lund, Great Britain, Germany, China and Denmark, has managed to create halide perovskites that give stable emission in the wavelength range 451-490 nanometres - corresponding to deep blue to sky blue colours. Max Karlsson is a doctoral student at Linköping University and joint first author of the article, now published in Nature Communications. He says:

"Metal-halide perovskites are easily colour-tuneable over the whole visible spectrum by simple alloying. Unfortunately, they exhibit demixing and a blue LED turns green during operation. We have found a method that can prevent this colour shift by controlling the film crystallisation dynamics when creating the perovskite. These findings pave the way for stable perovskite alloys, not only for LEDs but also for solar cells."

The challenge of creating blue light in perovskites is that it requires a chemical composition with a large fraction of chloride, which makes the perovskite unstable. Blue perovskite-based LEDs have previously been created using what is known as the "quantum confinement technique", which gives low-intensity LEDs with poor efficiency. However, stable perovskites with the desired amount of chloride can be created with the aid of the "vapour-assisted crystallisation technique". Furthermore, the Linköping University researchers have achieved an energy efficiency of up to 11% for the blue perovskite-based LEDs.

"We have shown that blue light-emitting diodes based on halide perovskites can be both efficient and stable across a broad spectrum, without using quantum confinement. We have managed to create one of the most efficient blue perovskite-based LEDs so far known", says Weidong Xu, postdoc at Linköping University.

The science of perovskites is a relatively new research field that has aroused major international interest, since it offers great potential for developing cheap and efficient materials. Feng Gao, however, is quick to point out that the work they have done is basic research, and applications are still some way off in the future.

"Perovskite LEDs are a young technology and have some way to go before they see the light of day. Currently, the short lifetime and poor performance of blue LEDs are the main obstacles for perovskite light-emitting diodes before they can start to compete with existing technologies such as light-emitting diodes based on organic and inorganic semiconductors. We will keep working on that to make PeLEDs comparable to the other technologies", says Feng Gao.

Credit: 
Linköping University

A new study identifies possible biomarkers of severe malaria in African children

The levels of small molecules called microRNAs (miRNAs) circulating in blood could help identify early on children with life-threatening forms of malaria, according to a study led by the Barcelona Institute for Global Health (ISGlobal), an institution supported by "la Caixa" Foundation, in collaboration with the Manhiça Health Research Center (CISM) in Mozambique. The results, published in the journal Emerging Infectious Diseases, could also help better understand the mechanisms underlying severe malaria.

Malaria mortality among young African children remains unacceptably high. To improve the outcome, it is important to rapidly identify and treat children with severe forms of the disease. However, at the beginning of the infection, it is not always easy to distinguish early on between uncomplicated and life-threatening disease symptoms. One characteristic of severe malaria is the sequestration of red blood cells infected with the malaria parasite (P. falciparum) in vital organs such as the lungs, kidneys or brain. This leads to organ damage, which in turn results in the release of small molecules called microRNAs (miRNAs) into body fluids, including blood.

"We hypothesised that miRNA levels in plasma would be differently expressed in children with severe and uncomplicated malaria, due to parasite sequestration in vital organs," explains ISGlobal researcher Alfredo Mayor, who coordinated the study. To test this hypothesis, he and his team first used an advanced sequencing technique to identify miRNAs released by human brain endothelial cells when exposed to red blood cells infected by P. falciparum in a dish. They then measured expression of these miRNAs in blood samples from Mozambican children with severe or uncomplicated malaria. They found that six of the identified miRNAs were higher in children with severe malaria. One of these miRNAs, which is expressed by a variety of tissues, was also positively related with the amount of a parasite-derived protein named HRP2. "This suggests that increasing amounts of parasite associated with parasite sequestration may lead to higher levels of secretion of this miRNA by damaged tissues," explains Himanshu Gupta, first author of the study.

"Our results indicate that the different pathological events in severe and uncomplicated malaria lead to differential expression of miRNAs in plasma," says Mayor. "These miRNAs could be used as prognostic biomarkers of disease, but we need larger studies to validate this", he adds. The findings also provide a ground for better understanding the mechanisms underlying severe malaria.

Credit: 
Barcelona Institute for Global Health (ISGlobal)

New molecular structures associated with ALS

Researchers from the University of Seville and the University of Pavia have identified a link between Amyotrophic Lateral Sclerosis (ALS) and the accumulation of DNA-RNA hybrids in the genome. The accumulation of these hybrids causes increased genomic damage and boosts genetic instability. This finding will make it possible to better understand the molecular basis of the disease, as well as to propose new solutions to curb it.

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease of the central nervous system, characterised by progressive degeneration of motor neurons leading to muscle paralysis. Classified as a rare disease, ALS has no cure, despite efforts to understand its molecular basis and research devoted to identifying therapies that can at least slow its evolution.

At the CABIMER centre, the group from the University of Seville led by Professor Andrés Aguilera, in collaboration with the group led by Dr. Cristina Cereda from the Mondino Foundation at the University of Pavia, has identified a link between this disease and the accumulation of DNA-RNA hybrids in the genome. The researchers have observed that patient-derived mutations in TDP-43, a protein of RNA metabolism whose deficiency is associated with ALS, result in the accumulation of these hybrids in both neuronal and non-neuronal cells, thereby causing increased genomic damage and genetic instability.

This discovery has led to a new perception of the disease's molecular basis related to the role of the TDP-43 protein in the cellular cytoplasm. The mutations analysed cause a TDP-43 deficiency in the cell nucleus, thus leading to the accumulation of these aberrant structures in the DNA. This research opens up new avenues of exploration to understand the disease's molecular basis, as well as new approaches to try to slow its evolution.

Credit: 
University of Seville

Copper-indium oxide: A faster and cooler way to reduce our carbon footprint

image: Record-high CO2 conversion rates at relatively low temperatures in a modified chemical-looping version of RWGS using a novel copper-indium oxide

Image: 
Waseda University

With ever-worsening climate change, there is a growing need for technologies that can capture and use atmospheric CO2 (carbon dioxide) and reduce our carbon footprint. Within the realm of renewable energy, CO2-based e-fuels have emerged as a promising technology that attempts to convert atmospheric CO2 into clean fuels. The process involves the production of synthesis gas, or syngas (a mixture of hydrogen and carbon monoxide (CO)). With the help of the reverse water-gas shift (RWGS) reaction, CO2 is broken down into the CO necessary for syngas. While promising in its conversion efficiency, the RWGS reaction requires extremely high temperatures (>700°C) to proceed, while also generating unwanted byproducts.

To tackle these problems, scientists developed a modified chemical-looping version of the RWGS reaction that converts CO2 to CO in a two-step method. First, a metal oxide, used as an oxygen storage material, is reduced by hydrogen. Subsequently, it is re-oxidized by CO2, yielding CO. This method is free of undesirable byproducts, makes gas separation simpler, and can be made feasible at lower temperatures depending on the oxide chosen. Consequently, scientists have been looking for oxide materials that exhibit high oxidation-reduction rates without requiring high temperatures.
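
Written out, the two steps of the chemical loop take the following form (a schematic only, with MO_x standing for the oxygen storage oxide; the net conversion is the same as in conventional RWGS, but the two product gases appear in separate steps):

```latex
% Step 1: hydrogen reduces the oxide, removing lattice oxygen as water
\mathrm{MO}_{x} + \delta\,\mathrm{H}_{2} \longrightarrow \mathrm{MO}_{x-\delta} + \delta\,\mathrm{H_{2}O}
% Step 2: CO2 re-oxidizes the oxide, yielding CO
\mathrm{MO}_{x-\delta} + \delta\,\mathrm{CO}_{2} \longrightarrow \mathrm{MO}_{x} + \delta\,\mathrm{CO}
% Net (sum of both steps): CO2 + H2 -> CO + H2O, with H2O and CO
% produced in separate steps, which is what simplifies gas separation.
```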

In a recent study published in Chemical Science, scientists from Waseda University and ENEOS Corporation in Japan have revealed that a novel indium oxide modified with copper (Cu-In2O3) exhibits a record-breaking CO2 conversion rate of 10 mmol h⁻¹ g⁻¹ at relatively modest temperatures (400-500°C), making it a frontrunner among oxygen storage materials required for low-temperature CO2 conversion. To better understand this behavior, the team investigated the structural properties of the Cu-In oxide along with the kinetics involved in the chemical-looping RWGS reaction.

The scientists carried out X-ray-based analyses and found that the sample initially contained a parent material, Cu2In2O5, which was first reduced by hydrogen to form a Cu-In alloy and indium oxide (In2O3) and then oxidized by CO2 to yield Cu-In2O3 and CO. X-ray data further revealed that the material underwent oxidation and reduction during the reaction, providing the key clue to the scientists. "The X-ray measurements made it clear that the chemically looped RWGS reaction is based on the reduction and oxidation of indium, which leads to the formation and oxidation of the Cu-In alloy," explains Professor Yasushi Sekine of Waseda University, who led the study.

The kinetic investigations provided further insights into the reaction. The reduction step revealed that Cu was responsible for the reduction of indium oxide at low temperatures, while the oxidation step showed that the Cu-In alloy surface preserved a highly reduced state while its bulk was oxidized. This allowed the oxidation to proceed twice as quickly as that of other oxides. The team attributed this peculiar oxidation behavior to a rapid migration of negatively charged oxygen ions from the Cu-In alloy surface to its bulk, which assisted in the preferential bulk oxidation.

The results have understandably excited scientists about the future prospects of copper-indium oxides. "Given the current situation with carbon emission and global warming, a high-performance carbon dioxide conversion process is greatly desired. Although the chemically looped RWGS reaction works well with many oxide materials, our novel Cu-In-oxide here shows a remarkably higher performance than any of them. We hope that this will contribute significantly to reducing our carbon footprint and driving humankind towards a more sustainable future", concludes Sekine.

Credit: 
Waseda University

Workaholism leads to mental and physical health problems

image: Morteza Charkhabi,
Associate Professor at the Institute of Education at the HSE University

Image: 
Morteza Charkhabi

Workaholism, or work addiction risk, is a growing public health concern that can lead to many negative mental and physical health outcomes, such as depression, anxiety or sleep disorders. Perception of work (job demands and job control) may become a major cause of employees' work addiction. An international group of researchers, including an HSE University scientist, explored the link between work addiction risk and health-related outcomes using the framework of the Job Demand-Control model. The results were published in the International Journal of Environmental Research and Public Health.

Workaholics are people who typically work seven or more hours more than others per week. Potential reasons include financial problems, a poor marriage, or pressure from an organization or supervisor. What differentiates workaholism from similar behaviour, such as work engagement? Workaholism is a behavioural disorder: excessive involvement in work even when the employer doesn't require or expect it.

The scientists aimed to demonstrate the extent to which work addiction risk is associated with the perception of work (job demands and job control) and with mental health in the four job categories suggested by Karasek's Job Demand-Control-Support (JDCS) model. The JDCS model assumes four distinct work environments (four quadrants) in which workers may experience different levels of job demands and job control: passive, low-strain, active, and tense/job-strain. Job control is the extent to which an employee feels in control of how work gets done.

"Passive" jobs (low job control, low job demands) might be satisfying to a worker as long as the workers reach the set goal. "Low strain" jobs have high job control and low job demands. Individuals in this category are not particularly at risk of mental health problems, and it corresponds typically to creative jobs such as architects. "Active" workers have high job demands and high job control. They are highly skilled professionals with responsibilities, such as heads or directors of companies. Those highly skilled workers have very demanding tasks but they have high levels of decision latitude to solve problems. Finally, workers at risk of stress-related disorders are those within the "job strain" group (high demand and low control). For example, healthcare workers from emergency departments are typically in job strain because they cannot control the huge workload.

The study was conducted in France, one of the industrialized countries where a growing range of occupations can be studied. The authors collected data from 187 of 1,580 invited French workers (11.8%) who agreed to participate in a cross-sectional study using the WittyFit online software platform. The self-administered questionnaires were Karasek's Job Content Questionnaire, the Work Addiction Risk Test, the Hospital Anxiety and Depression Scale, and socio-demographic questions. The authors divided the participants by occupational group and investigated the link between work addiction risk and mental and physical health outcomes.

'One of the novelties of this research was to introduce vulnerable occupational groups to organizations or job holders. For example, if we find that work addiction risk can be found more in some occupations and may result in negative outcomes for the health situation then we can give this information to decision makers in this organization or, for example, to the ministry of health. And they could intervene to prevent this problem,' explains Morteza Charkhabi, Associate Professor at the Institute of Education at the HSE University.

The results show that high job demands at work are strongly associated with work addiction risk, but the job control level does not play the same role. The prevalence of work addiction risk is higher for active and high-strain workers than for passive and low-strain workers. These two groups of workers appeared to be more vulnerable and can therefore suffer more from the negative outcomes of work addiction risk, in terms of depression, sleep disorders, stress and other health issues.

'We found that job demands could be the most important factor that can develop work addiction risk. So this factor should be controlled or should be investigated by the organization's manager, for example, HR staff, psychologists. Also another conclusion could be the job climate like job demands of each job category can influence the rate of work addiction risk. Thus in this study we actually focused on external factors like job demands not internal factors like the personal characteristics,' adds Morteza Charkhabi.

The researchers found that people with high work addiction risk have twice the risk of developing depression compared to people with low work addiction risk. Sleep quality was lower among workers with high work addiction risk than among workers with low risk. Women also had almost twice the work addiction risk of men.

Credit: 
National Research University Higher School of Economics

Could we harness energy from black holes?

A remarkable prediction of Einstein's theory of general relativity--the theory that connects space, time, and gravity--is that rotating black holes have enormous amounts of energy available to be tapped.

For the last 50 years, scientists have tried to come up with methods to unleash this power. Nobel Prize-winning physicist Roger Penrose theorized that particle disintegration could draw energy from a black hole; Stephen Hawking proposed that black holes could release energy through quantum mechanical emission; and Roger Blandford and Roman Znajek suggested electromagnetic torque as a main agent of energy extraction.

Now, in a study published in the journal Physical Review D, physicists Luca Comisso of Columbia University and Felipe Asenjo of Universidad Adolfo Ibáñez in Chile have found a new way to extract energy from black holes: by breaking and rejoining magnetic field lines near the event horizon, the point beyond which nothing, not even light, can escape the black hole's gravitational pull.

"Black holes are commonly surrounded by a hot 'soup' of plasma particles that carry a magnetic field," said Luca Comisso, research scientist at Columbia University and first author on the study.

"Our theory shows that when magnetic field lines disconnect and reconnect, in just the right way, they can accelerate plasma particles to negative energies and large amounts of black hole energy can be extracted."

This finding could allow astronomers to better estimate the spin of black holes, explain what drives black hole energy emissions, and might even provide a source of energy for the needs of an advanced civilization, Comisso said.

Comisso and Asenjo built their theory on the premise that reconnecting magnetic fields accelerate plasma particles in two different directions. One plasma flow is pushed against the black hole's spin, while the other is propelled in the spin's direction and can escape the clutches of the black hole; the black hole loses energy when the plasma it swallows carries negative energy.

"It is like a person could lose weight by eating candy with negative calories," said Comisso, who explained that essentially a black hole loses energy by eating negative-energy particles. "This might sound weird," he said, "but it can happen in a region called the ergosphere, where the spacetime continuum rotates so fast that every object spins in the same direction as the black hole."

Inside the ergosphere, magnetic reconnection is so extreme that the plasma particles are accelerated to velocities approaching the speed of light.

Asenjo, professor of physics at the Universidad Adolfo Ibáñez and coauthor on the study, explained that the high relative velocity between captured and escaping plasma streams is what allows the proposed process to extract massive amounts of energy from the black hole.

"We calculated that the process of plasma energization can reach an efficiency of 150 percent, much higher than any power plant operating on Earth," Asenjo said. "Achieving an efficiency greater than 100 percent is possible because black holes leak energy, which is given away for free to the plasma escaping from the black hole."

The process of energy extraction envisioned by Comisso and Asenjo might be already operating in a large number of black holes. That may be what is driving black hole flares--powerful bursts of radiation that can be detected from Earth.

"Our increased knowledge of how magnetic reconnection occurs in the vicinity of the black hole might be crucial for guiding our interpretation of current and future telescope observations of black holes, such as the ones by the Event Horizon Telescope," Asenjo said.

While it may sound like the stuff of science fiction, mining energy from black holes could be the answer to our future power needs.

"Thousands or millions of years from now, humanity might be able to survive around a black hole without harnessing energy from stars," Comisso said. "It is essentially a technological problem. If we look at the physics, there is nothing that prevents it."

The study, Magnetic reconnection as a mechanism for energy extraction from rotating black holes, was funded by the National Science Foundation's Windows on the Universe initiative, NASA, and Chile's National Fund for Scientific and Technological Development.

"We look forward to the potential translation of seemingly esoteric studies of black hole astrophysics into the practical realm," Lukin said.

"The ideas and concepts discussed in this work are truly fascinating," said Vyacheslav (Slava) Lukin, a program director at the National Science Foundation. He said NSF aims to catalyze new theoretical efforts based on frontier observations, bringing together theoretical physics and observational astronomy under one roof.

"We look forward to the potential translation of seemingly esoteric studies of black hole astrophysics into the practical realm," he added.

Credit: 
Columbia University

Inferring human genomes at a fraction of the cost promises to boost biomedical research

image: From left to right: Robin Hofmeister, Diogo Ribeiro, Simone Rubinacci and Olivier Delaneau

Image: 
Delaneau Group

Thousands of genetic markers have already been robustly associated with complex human traits, such as Alzheimer's disease, cancer, obesity, or height. To discover these associations, researchers need to compare the genomes of many individuals at millions of genetic locations or markers, and therefore require cost-effective genotyping technologies. A new statistical method, developed by Olivier Delaneau's group at the SIB Swiss Institute of Bioinformatics and the University of Lausanne (UNIL), offers game-changing possibilities. For less than $1 in computational cost, GLIMPSE is able to statistically infer a complete human genome from a very small amount of data. The method offers a first realistic alternative to current approaches relying on a predefined set of genetic markers, and so allows a wider inclusion of underrepresented populations. The study, which suggests a paradigm shift for data generation in biomedical research, is published in Nature Genetics.

A cost-effective approach to probing genetic markers

Low-coverage whole genome sequencing (LC-WGS) followed by genotype imputation is a method by which a whole genome can be inferred statistically from a very low sequencing effort. It has been proposed as a less biased and more powerful alternative to SNP arrays, but its high computational cost has prevented it from becoming a widely used alternative. The team of scientists led by Olivier Delaneau, Group Leader at SIB and UNIL, has developed an open-source software package, called GLIMPSE, that finally overcomes these issues. "GLIMPSE provides a framework that is 10-1,000 times faster, and thus cheaper, than other LC-WGS methods, while being much more accurate for rare genetic markers," explains Olivier Delaneau. "GLIMPSE is able to greatly enhance a low-coverage genome at millions of markers for less than $1 in computational cost, making it the first real alternative to SNP arrays".

From unbiased data to unbiased healthcare

Genome-wide association studies have so far mostly focused on Europeans: 80% of all GWAS participants are individuals of European descent, yet these make up only 16% of the world population. This is an important ethical issue in terms of healthcare inclusiveness and equitable access to the benefits of biomedical research, as the way genetic markers contribute to disease susceptibility varies across human populations. LC-WGS naturally circumvents the bias inherent to pre-established sets of genetic markers (SNP arrays). It can thus be successfully applied to underrepresented populations, as shown in this study for an African-American population as a proof of concept. "In addition to breaking down the financial barrier to enable GWAS studies based on LC-WGS, what is really exciting about this approach is that it enables researchers to efficiently uncover associations in understudied populations," says Simone Rubinacci, Postdoctoral Researcher in Olivier Delaneau's Group and first author of the paper.

Taking advantage of genomes already sequenced

"Our original thinking was: can we make use of the wealth of sequenced genomes to improve those that are newly sequenced? In other words, more for less: this is exactly what GLIMPSE does," explains Diogo Ribeiro, Postdoctoral Researcher in Olivier Delaneau's Group and co-author of the paper. How does it work? By building on the idea that we all share relatively recent common ancestors, from whom small portions of our DNA are inherited. Briefly, GLIMPSE mines large collections of human genomes that have been very accurately sequenced (high-coverage WGS) to identify portions of DNA that are shared with newly sequenced genomes. In this way, GLIMPSE can reliably fill in the gaps in the low-coverage data.

A new paradigm for future genomic studies with far-ranging applications

Made available as part of an open-source suite of tools, GLIMPSE paves the way for wide adoption of low-coverage WGS, promoting a paradigm shift in data generation for future genomic studies. Since the first release of the software as a preprint in April 2020, ongoing research has already started to use the tool, for instance to reconstruct the genomes of people living thousands of years ago from ancient DNA, or of COVID-19 patients from SARS-CoV-2 nasopharyngeal swabs as part of a GWAS study.

Credit: 
Swiss Institute of Bioinformatics

Researchers at Brazil's space institute discover why lightning branches and flickers

image: Analysis of the first super-slow-motion recordings of upward flashes suggests a possible explanation for the formation of luminous structures after electrical discharges split in the atmosphere

Image: 
INPE

Researchers at Brazil's National Space Research Institute (INPE), in partnership with colleagues in the United States, United Kingdom and South Africa, have recorded for the first time the formation and branching of luminous structures by lightning strikes.

Analyzing images captured by a super-slow-motion camera, they discovered why lightning strikes bifurcate and why they sometimes form luminous structures that the human eye perceives as flickering.

The study was supported by the São Paulo Research Foundation (FAPESP). An article outlining its results is published in Scientific Reports.

"We managed to obtain the first optical observation of these phenomena and find a possible explanation for branching and flickering," Marcelo Magalhães Fares Saba, principal investigator for the project, told. Saba is a researcher in INPE's Atmospheric Electricity Group (ELAT).

The researchers used ultra-high-speed digital video cameras to record more than 200 upward flashes during summer thunderstorms in São Paulo City, Brazil, and Rapid City, South Dakota, USA, between 2008 and 2019. Upward lightning strikes start from the top of a tall building or other ground-based structure and propagate upward to the overlying cloud.

The upward flashes they recorded were triggered by positively charged cloud-to-ground lightning discharges, which are much more common, as described by the same INPE research group in a previous study (read more at: https://agencia.fapesp.br/31947).

"Upward lightning originates at the top of a tower or the lightning conductor on a skyscraper, for example, when the storm's electrical field is disturbed by a cloud-to-ground discharge as far away as 60 kilometers," Saba said.

Although the study conditions were very similar in Brazil and the US, luminous structures were observed in only three upward flashes, recorded in the US. These were formed by a positive leader discharge propagating toward the cloud base.

"The advantage of recording images of upward lightning is that they let us see the entire trajectory of these positive leaders from ground to cloud base. Once inside the cloud, they can no longer be seen," Saba said.

The researchers found that a low-luminosity discharge with a structure resembling a paintbrush sometimes forms at the tip of the positive leader. "We observed that this discharge, often referred to as a corona brush, may change direction, split in two, and define the path of the lightning flash and how it branches," Saba said.

When an upward flash branches successfully, it may proceed to the left or right. When branching fails, the corona brush may give rise to very short segments as bright as the leader itself. These segments first appear milliseconds after the corona brush splits, and pulsate as the leader propagates upward toward the cloud base, the videos show.

"The flickers are recurring failed attempts to start a branch," Saba said, adding that the flickers may explain why multiple lightning discharges are frequent, but more studies are needed to verify this theory.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Evolution: Speciation in the presence of gene flow

Spatial isolation is known to promote speciation - but researchers at Ludwig-Maximilians-Universität (LMU) in Munich have now shown that, at least in yeast, the opposite is also true: new ecological variants can also evolve within thoroughly mixed populations.

The idea that speciation is based on the selection of variants that are better adapted to the local environmental conditions is at the heart of Charles Darwin's theory of the origin of species - and it is now known to be a central component of biological evolution, and thus of biodiversity. Geographic isolation of populations is often regarded as a necessary condition for ecotypes to diverge and eventually form new species. When populations of a given species are separated by geographic barriers, favorable mutations that emerge in either can become fixed locally, as mating between the two populations is precluded. Whether or not speciation can occur under conditions in which gene flow between two populations is possible - such that genetic mixing can still occur - remains controversial. In order to resolve the issue, LMU evolutionary biologist Jochen Wolf and his group in cooperation with Simone Immler (University of East Anglia, UK) have used baker's yeast as a model system to experimentally explore what happens when the degree of gene flow between genetically differentiated populations is gradually increased.

"The starting point for this project, which has now been running for 6 years, was a single founder cell, which gave rise to our original population," says Wolf. "We then followed the accumulation of mutations within this population over the course of many generations." Starting from the original ancestor, the scientists first selected cells that floated in a suspension on top or sank to the bottom. In this way, they obtained two populations which were adapted to different 'habitats' - referred to simply as 'top' and 'bottom'. The two behaviors are related to differences in the morphology of the cells and in their propensity to from multi-cellular clusters with one another.

Having obtained these genetically differentiated populations, the researchers proceeded to mix them in various proportions and monitored their subsequent evolution. "We first observed what would be expected according to the classical isolation model, when the top and bottom populations were kept strictly separated from one another," says Wolf. Under these conditions, the two 'geographically' isolated populations continued to adapt to the demands of their respective niches and rapidly diverged from each other, becoming clearly distinct with time. For example, the top cells preferentially reproduced by asexual cell division, and therefore grew at a much higher rate than their bottom counterparts. Owing to the concomitant drop in the frequency of mating, the cells in the upper compartment also produced fewer sexual spores. "This finding confirms that the effects of selection do not remain constant over an organism's life cycle. Instead, selection is associated with 'trade-offs'. In other words, mutations that may be advantageous in one context may be deleterious in another," Wolf explains.

In the next step, Wolf and his colleagues simulated the effects of migration between the two populations. They did so by first adding approximately 1% of the minority population to the dominant fraction, and then progressively increasing the proportion of the former in each succeeding generation until the two populations had been thoroughly mixed. Theoretical models suggest that mixing should lead to a homogenization of the gene pool, and should therefore lead to a reduction in the diversity of the mixed population. This effect was in fact observed at intermediate levels of mixing. Although such mixtures continue to evolve and their members can increase their fitness relative to the ancestral population, distinctly different variants can no longer be discerned within them.

"But to our surprise, when the populations had been thoroughly mixed over time, we found very marked differences in phenotype," says Wolf. "When the tap is turned on fully, so to speak, one suddenly finds that mixtures contain two distinct variants, a generalist and a specialist." The generalist can survive equally well in the top or bottom compartment. This is not true of the specialist. But it divides at a faster rate than the generalist, and can therefore compensate for its lack of versatility. In Wolf's view, the emergence of these two classes can be regarded as the first step in a speciation process which takes place in the presence of maximal gene flow.

In addition to these phenotypic results, the team characterized the full genetic inventory of all populations. These genetic experiments show that adaptation to top and bottom compartments in the absence of gene flow is accompanied by the selection of genetic variants from among those that were already present in the progenitor population. In contrast, the emergence of specialist lineages in 50:50 mixtures is attributable to newly acquired mutations. And such mutations are obviously not in short supply: "The mutations seen in our replicates are completely independent. We very seldom see the same mutation in different samples - yet the phenotypic division between generalists and specialists in completely mixed populations has been observed repeatedly," Wolf says.  

These results are of significance in the context of how populations react to alterations in the character and distribution of variable niches. "It has always been assumed that interruption of gene flow is a prerequisite for adaptive divergence," says Wolf. "But our study shows that, even when populations are highly connected, diverse adaptations can nevertheless emerge, such that all available niches can be filled."

Credit: 
Ludwig-Maximilians-Universität München

NIH scientists study Salmonella swimming behavior for clues to infection

image: Salmonella bacteria (pink), a common cause of foodborne disease, invade a human epithelial cell.

Image: 
NIAID

WHAT:
Salmonella enterica serovar Typhimurium bacteria (S. Typhimurium) commonly cause human gastroenteritis, inflammation of the lining of the intestines. The bacteria live inside the gut and can infect the epithelial cells that line its surface. Many studies have shown that Salmonella use a "run-and-tumble" method of short swimming periods (runs) punctuated by tumbles when they randomly change direction, but how they move within the gut is not well understood.

National Institutes of Health scientists and their colleagues believe they have identified a S. Typhimurium protein, McpC (Methyl-accepting chemotaxis protein C), that allows the bacteria to swim straight when they are ready to infect cells. This new study, published in Nature Communications, describes S. Typhimurium movement and shows that McpC is required for the bacteria to invade surface epithelial cells in the gut.

The study authors suggest that McpC is a potential target for developing new antibacterial treatments to hinder the ability of S. Typhimurium to infect intestinal epithelial cells and colonize the gut. National Institute of Allergy and Infectious Diseases scientists at Rocky Mountain Laboratories in Hamilton, Montana, led the study. Collaborators included groups from the Texas A&M University campuses in College Station and Kingsville.

S. Typhimurium use flagella--long whip-like projections--to move through fluids. When the flagella rotate counterclockwise, they form a rotating bundle behind the bacteria and propel them forward. However, the flagella frequently switch rotation from counterclockwise to clockwise, disrupting the bundle and causing the bacteria to tumble and change direction. Using special microscopes and cameras to observe live S. Typhimurium, the scientists found that bacteria grown under conditions that activate their invasive behavior swam in longer straight runs because the flagella did not switch rotation from counterclockwise to clockwise. Bacteria lacking McpC still demonstrated the "run-and-tumble" method of swimming under these conditions and had an invasion defect in a calf intestine model, indicating that straight swimming is important for efficient invasion of intestinal epithelial cells.
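
A toy simulation makes the benefit of suppressed tumbling concrete (a sketch of the generic run-and-tumble idea, not the study's analysis; the step counts, speeds and tumble probabilities below are arbitrary):

```python
# Toy 2D model contrasting run-and-tumble motion with smooth swimming.
# Fewer tumbles (fewer random direction changes) yield longer net travel,
# which is the intuition for why straight swimming aids invasion.
import math
import random

def net_displacement(tumble_prob, steps=1000, speed=1.0, seed=0):
    """Swimmer moves at constant speed; at each step it tumbles
    (picks a new random heading) with probability tumble_prob."""
    rng = random.Random(seed)
    x = y = 0.0
    angle = 0.0
    for _ in range(steps):
        if rng.random() < tumble_prob:
            angle = rng.uniform(0.0, 2.0 * math.pi)  # tumble: new heading
        x += speed * math.cos(angle)
        y += speed * math.sin(angle)
    return math.hypot(x, y)

# Frequent tumbles (run-and-tumble) vs. suppressed tumbles (smooth swimming,
# as when the flagella keep rotating counterclockwise).
print("run-and-tumble: ", net_displacement(tumble_prob=0.2))
print("smooth swimming:", net_displacement(tumble_prob=0.01))
```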

The researchers hypothesize that controlled smooth swimming could be a widespread bacterial infection strategy. Similar smooth swimming behavior can be seen in unrelated enteric bacteria, such as Vibrio, which can cause infection when undercooked seafood is eaten. These findings may inform the development of novel antibiotics.

Credit: 
NIH/National Institute of Allergy and Infectious Diseases

How to keep drones flying when a motor fails

image: When one rotor fails, the drone begins to spin on itself like a ballerina. (UZH)

Image: 
UZH

As anxious passengers are often reassured, commercial aircraft can easily continue to fly even if one of the engines stops working. But for drones with four propellers - also known as quadcopters - the failure of one motor is a bigger problem. With only three rotors working, the drone loses stability and inevitably crashes unless an emergency control strategy kicks in.

Researchers at the University of Zurich and the Delft University of Technology have now found a solution to this problem: They show that information from onboard cameras can be used to stabilize the drone and keep it flying autonomously after one rotor suddenly gives out.

Spinning like a ballerina

"When one rotor fails, the drone begins to spin on itself like a ballerina," explains Davide Scaramuzza, head of the Robotics and Perception Group at UZH and of the Rescue Robotics grand challenge at NCCR Robotics, which funded the research. "This high-speed rotational motion causes standard controllers to fail unless the drone has access to very accurate position measurements." In other words, once it starts spinning, the drone is no longer able to estimate its position in space and eventually crashes.

One way to solve this problem is to provide the drone with a reference position through GPS. But there are many places where GPS signals are unavailable. In their study, the researchers solved this issue for the first time without relying on GPS, instead using visual information from different types of onboard cameras.

Event cameras work well in low light

The researchers equipped their quadcopters with two types of cameras: standard ones, which record images several times per second at a fixed rate, and event cameras, which are based on independent pixels that are only activated when they detect a change in the light that reaches them.
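
The principle behind such event pixels can be sketched in a few lines (a minimal illustrative model, assuming a per-pixel log-intensity contrast threshold; real event cameras implement this asynchronously in hardware):

```python
# Minimal sketch of the event-camera principle: a pixel fires an event only
# when its log-intensity changes by more than a contrast threshold since the
# pixel's last event. Illustrative only.
import math

CONTRAST_THRESHOLD = 0.2  # assumed per-pixel log-intensity threshold

def generate_events(frames):
    """frames: list of 2D lists of positive intensities at successive times.
    Returns (t, x, y, polarity) tuples, as an event camera would emit them."""
    events = []
    ref = [[math.log(v) for v in row] for row in frames[0]]  # last-event level
    for t, frame in enumerate(frames[1:], start=1):
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                delta = math.log(v) - ref[y][x]
                if abs(delta) >= CONTRAST_THRESHOLD:
                    events.append((t, x, y, +1 if delta > 0 else -1))
                    ref[y][x] = math.log(v)  # reset the reference level
    return events

# A static pixel produces no events; only the brightening pixel fires.
frames = [[[100, 100]], [[100, 125]], [[100, 160]]]
print(generate_events(frames))  # -> [(1, 1, 0, 1), (2, 1, 0, 1)]
```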

The research team developed algorithms that combine information from the two sensors and use it to track the quadrotor's position relative to its surroundings. This enables the onboard computer to control the drone as it flies - and spins - with only three rotors. The researchers found that both types of cameras perform well in normal light conditions. "When illumination decreases, however, standard cameras begin to experience motion blur that ultimately disorients the drone and crashes it, whereas event cameras also work well in very low light," says first author Sihao Sun, a postdoc in Scaramuzza's lab.

Increased safety to avoid accidents

The problem addressed by this study is a relevant one, because quadcopters are becoming widespread and rotor failure may cause accidents. The researchers believe that this work can improve quadrotor flight safety in all areas where GPS signal is weak or absent.

https://youtu.be/Ww8u0KH7Ugs

Credit: 
University of Zurich

The meat of the matter: Environmental dissemination of beef cattle agrochemicals

A recent Point of Reference article, "The meat of the matter: Environmental dissemination of beef cattle agrochemicals," published in Environmental Toxicology and Chemistry, points to synthetic chemical cocktails being emitted from cattle feed yards into the environment and examines how they can impact our ecosystem and our health.

Industrial meat production facilities have a bad reputation for their impact on the environment. Concentrated animal feeding operations (CAFOs) are known for releasing greenhouse gases that contribute to global warming and for discharging manure into watersheds, which affects water quality. A less publicized impact of modern beef production is the excessive use of pharmaceuticals and pesticides, which end up in the environment. The animal production agriculture sector holds the record as the single greatest consumer of antimicrobials. Dust from feed yards typically contains antibiotics, synthetic steroids (growth hormones) and pesticides. At a time when honeybee population decline is a hot topic, it is striking that the dust emitted each day from feed yards in the US alone theoretically contains enough permethrin to kill over a billion honeybees per day.

Since many feed yards, especially those in the US, occur in arid to semiarid regions, chemicals associated with manure particles can be transported on windborne dust over large distances. As a result, open-air beef cattle feed yards may collectively represent one of the largest unconstrained and unrecognized sources of pesticide and antimicrobial emissions on earth. Humans and wildlife, including birds and mammals, may be exposed to these chemicals either directly or indirectly.

A limited number of laws and regulations addressing odors, dust emissions, and water contamination have arisen in response to complaints from people living in the vicinity of feed yards. However, no regulations address agrochemical content of feed yard dust emissions. Environmental Impact Assessment guidance for the FDA New Animal Drug Application process does not recognize particulate-driven aerial transport of drugs into the environment. Globally, regulations on the use of agrochemicals in beef production vary considerably.

The need to produce affordable, readily available, and nutritious beef is steadily increasing, but it should not be met at the expense of environmental and human health. Feed yard waste management strategies, when used appropriately, can mitigate the introduction of agrochemicals into the environment, but implementation can be costly. Potential ameliorative approaches include reduced reliance on veterinary pharmaceuticals that can adversely impact local and distant ecosystems, development of alternative green chemistries, integrated pest management strategies, effective waste treatment and management, and a broader understanding of the impacts of aerial agrochemical dissemination.

Credit: 
Society of Environmental Toxicology and Chemistry

Infection biology: How one pathogen evades the immune system

Our immune system is never idle. Its task is to detect and eliminate invasive pathogens, and it has no time to lose. The adaptive immune system identifies infectious organisms by recognizing foreign proteins on the surfaces of bacteria, viruses and unicellular protozoans. The interaction of these antigens with immune cells triggers a series of downstream events, which in most cases leads to the elimination of the pathogen.

But pathogenic organisms have developed strategies that enable them to escape detection by the immune system, and the strategies employed by distantly related organisms are often remarkably similar to each other. One way of confusing the immune system is to increase the structural heterogeneity of the antigens it encounters. In bacteria, pathogenic yeasts and parasites, this can be done by randomly activating different members of gene families that code for non-identical versions of the proteins expressed on their surfaces. This strategy essentially allows the infectious agent to duck under the immune system's radar. By doing so, it significantly increases the likelihood that the invader will survive to establish an infection and improves its chances of being transmitted to new hosts. If pathogens alter their surface proteins too rarely - or too often - the white blood cells that are responsible for recognizing them have a much easier task. Nicolai Siegel (Professor of Molecular Parasitology at LMU) and his group, in collaboration with colleagues at the University of Dundee, have now elucidated an important step in the mechanism that controls surface-antigen variation.

The experimental model: Trypanosomes

While Siegel's team is part of the department of Experimental Parasitology and affiliated with the Faculty of Veterinary Medicine at LMU, it makes use of laboratories located in the Physiological Chemistry section of the Biomedical Center (Faculty of Medicine). "This arrangement greatly facilitates scientific discussion and interdisciplinary exchanges," he says.

His team works with the unicellular organism Trypanosoma brucei. There are several reasons for this. T. brucei causes sleeping sickness. It is transmitted by the tsetse fly, and it presents a threat to millions of people in 36 African countries south of the Sahara. From a scientific point of view, however, this species has become a model system for the study of antigen variation in pathogens, and has therefore been widely studied.

The genome of T. brucei includes more than 2000 genes that code for variant forms of the major protein expressed on its surface. In each individual cell, only one of these genes is activated - and it directs the production of a single surface-protein variant. "The pathogen must therefore ensure that only one of these genes - not a few, and certainly not all - is expressed at any given time," Siegel explains. "We have now identified the mechanism that guarantees that the product of only one of these genes is expressed."

Notably, T. brucei does not possess complex arrays of regulatory genomic sequences - such as enhancers - which are involved in determining the set of genes that are transcribed from the genomic DNA into messenger RNAs (mRNAs) at any given moment. These mRNAs subsequently direct the synthesis of the corresponding proteins. "The control mechanism that we have discovered appears to achieve the required selectivity by differentially regulating mRNA maturation," Siegel says. This in turn is accomplished by chemically modifying specific mRNAs, which prevents them from being rapidly destroyed.
 

The authors of the new study have identified a three-dimensional structure in the nucleus of T. brucei that serves as a separate compartment, in which the mRNA molecules that encode the cell's single surface protein variant are modified. As a result, they avoid rapid destruction, and therefore survive long enough to produce the protein in the required amount. Conversely, when one of the proteins that contribute to the assembly of this compartment was inactivated, several different surface antigens were synthesized at the same time.

"So we now know why only one surface antigen is successfully expressed," says Siegel. Moreover, these new results have implications that transcend their importance for basic research. "If we could control the process that leads to the switching of surface antigens, it might be possible to inhibit it," he muses. And indeed, in the medium term, he sees in this possibility a new approach to the elimination - by the body's immune system - of pathogens that depend on this form of antigenic variation.

Credit: 
Ludwig-Maximilians-Universität München