
Lightning strikes played a vital role in life's origins on Earth

image: An illustration of early Earth, as it would have looked around 4 billion years ago

Image: 
Lucy Entwisle

Lightning strikes were just as important as meteorites in creating the perfect conditions for life to emerge on Earth, geologists say.

Minerals delivered to Earth in meteorites more than 4 billion years ago have long been advocated as key ingredients for the development of life on our planet.

Scientists had believed that only minimal amounts of these minerals were also created on early Earth by its billions of lightning strikes.

But now researchers from the University of Leeds have established that lightning strikes were just as significant as meteorites in performing this essential function and allowing life to manifest.

They say this shows that life could develop on Earth-like planets through the same mechanism at any time if atmospheric conditions are right. The research was led by Benjamin Hess during his undergraduate studies at the University of Leeds in the School of Earth and Environment.

Mr Hess and his mentors were studying an exceptionally large and pristine sample of fulgurite - a rock created when lightning strikes the ground. The sample was formed when lightning struck a property in Glen Ellyn, Illinois, USA, in 2016, and was donated to the geology department at nearby Wheaton College.

The Leeds researchers were initially interested in how fulgurite is formed but were fascinated to discover in the Glen Ellyn sample a large amount of a highly unusual phosphorus mineral called schreibersite.

Phosphorus is essential to life and plays a key role in all life processes, from movement to growth and reproduction. Most of the phosphorus present on early Earth's surface was locked in minerals that cannot dissolve in water, but schreibersite can.

Mr Hess, now a PhD student at Yale University, Connecticut, USA, said: "Many have suggested that life on Earth originated in shallow surface waters, following Darwin's famous 'warm little pond' concept.

"Most models for how life may have formed on Earth's surface invoke meteorites which carry small amounts of schreibersite. Our work finds a relatively large amount of schreibersite in the studied fulgurite.

"Lightning strikes Earth frequently, implying that the phosphorus needed for the origin of life on Earth's surface does not rely solely on meteorite hits.

"Perhaps more importantly, this also means that the formation of life on other Earth-like planets remains possible long after meteorite impacts have become rare."

The team estimates that phosphorus minerals created by lightning strikes surpassed those delivered by meteorites around 3.5 billion years ago, which is about the age of the earliest known microfossils, making lightning strikes significant in the emergence of life on the planet.

Furthermore, lightning strikes are far less destructive than meteorite impacts, meaning they were much less likely to disrupt the delicate evolutionary pathways along which life could develop.

The research, titled "Lightning strikes as a major facilitator of prebiotic phosphorus reduction on early Earth," is published today in Nature Communications.

The School of Earth and Environment funded the project under a scheme which enables undergraduate-led research using high-end analytical facilities.

Dr Jason Harvey, Associate Professor of Geochemistry in Leeds' School of Earth and Environment, and Sandra Piazolo, Professor of Structural Geology and Tectonics in the School of Earth and Environment, mentored Mr Hess in the research project.

Dr Harvey said: "The early bombardment is a once-in-a-solar-system event. As planets reach their mass, the delivery of more phosphorus from meteors becomes negligible.

"Lightning, on the other hand, is not such a one-off event. If atmospheric conditions are favourable for the generation of lightning, elements essential to the formation of life can be delivered to the surface of a planet.

"This could mean that life could emerge on Earth-like planets at any point in time."

Professor Piazolo said: "Our exciting research opens the door to several future avenues of investigation, including the search for and in-depth analysis of fresh fulgurite in early Earth-like environments; in-depth analysis of the effect of flash heating on other minerals to recognize such features in the rock record; and further analysis of this exceptionally well-preserved fulgurite to identify the range of physical and chemical processes within it.

"All these studies will help us to increase our understanding of the importance of fulgurite in changing the chemical environment of Earth through time."

Credit: 
University of Leeds

Practical nanozymes discovered to fight antimicrobial resistance

Nanozymes, a class of inorganic particles with efficient enzyme-like catalytic activity, have been proposed as promising antimicrobials against bacteria. They kill bacteria efficiently thanks to their production of reactive oxygen species (ROS).

Despite this advantage, nanozymes are generally toxic to bacteria and mammalian cells alike; that is, they also harm our own cells. This is mainly because ROS are intrinsically unable to distinguish bacteria from mammalian cells.

In a study published in Nature Communications, a research team led by XIONG Yujie and YANG Lihua from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS) proposed a method to construct nanozymes that are efficient yet minimally toxic.

The researchers showed that nanozymes that generate surface-bound ROS selectively kill bacteria while leaving mammalian cells unharmed.

The selectivity is attributed, on the one hand, to the surface-bound nature of the ROS generated by the team's nanozymes and, on the other hand, to an unexpected antidote role of endocytosis, a cellular process that is common in mammalian cells but absent in bacteria.

Moreover, the researchers tested several nanozymes that generate surface-bound ROS but differ in chemical composition and physical structure, and found that their antibacterial behaviors were similar. This suggests that selectively killing bacteria while sparing mammalian cells is a general property of nanozymes that produce surface-bound ROS.

Antimicrobial resistance (AMR), the process by which our drugs against germs gradually become less effective, poses a threat to global health.

Credit: 
University of Science and Technology of China

How bacterial traffic jams lead to antibiotic-resistant, multilayer biofilms

image: Swarming Bacillus subtilis cells form a multilayered biofilm in a dish when exposed to the antibiotic kanamycin (pictured here on the right-hand side of the dish)

Image: 
Grobas et al. (CC BY 4.0)

The bacterial equivalent of a traffic jam causes multilayered biofilms to form in the presence of antibiotics, shows a study published today in eLife.

The study reveals how the collective behaviour of bacterial colonies may contribute to the emergence of antibiotic resistance. These insights could pave the way to new approaches for treating bacterial infections that help thwart the emergence of resistance.

Bacteria can acquire resistance to antibiotics through genetic mutations. But they can also defend themselves via collective behaviours such as joining together in a biofilm - a thin, slimy film made up of many bacteria that is less susceptible to antibiotics. Swarms of bacteria can also undergo a phenomenon similar to human traffic jams called 'motility-induced phase separation', in which they slow down when there are large numbers of bacteria crammed together.

"In our study, we wanted to see whether swarming bacteria can use physical interactions such as motility-induced phase separation to overcome certain stresses including exposure to antibiotics," says first author Iago Grobas, a PhD student at Warwick Medical School, University of Warwick, UK.

In their study, Grobas and colleagues exposed a colony of a common environmental bacterium called Bacillus subtilis to an antibiotic called kanamycin in a dish in the laboratory. They recorded a time-lapse video of the bacteria's behaviour and found that they formed biofilms in the presence of the drug.

Specifically, the team showed that the biofilm forms because bacteria begin to group together a distance away from the antibiotic, giving way to multiple layers of swarming bacteria.

"The layers build up through a physical mechanism whereby groups of cells moving together collide with each other," Grobas explains. "The collision generates enough stress to pile up the cells, which then move slower, attracting more cells through a mechanism similar to motility-induced phase separation. These multiple layers then lead to biofilm formation."
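The crowding mechanism Grobas describes, in which cells slow down where they are packed together and the slowdown then traps more cells, can be illustrated with a classic minimal model from statistical physics: an exclusion process on a ring with one "slow" site. This is a toy sketch of jamming in general, not the model used in the study; the lattice size, hop probabilities, and slow-site position below are arbitrary choices for illustration.

```python
import random

# Toy "bacterial traffic jam": particles hop clockwise on a ring but can
# never enter an occupied site. One slow site (hop success only 10%)
# plays the role of a region where movement is hindered; a dense jam
# builds up behind it, as in motility-induced slowdown.
random.seed(0)
L = 100
slow_site = 50
occ = [i % 2 == 0 for i in range(L)]      # start at 50% occupancy

def sweep(occ):
    for _ in range(L):
        i = random.randrange(L)
        j = (i + 1) % L
        if occ[i] and not occ[j]:
            p = 0.1 if j == slow_site else 1.0   # entering the slow site rarely succeeds
            if random.random() < p:
                occ[i], occ[j] = False, True

for _ in range(2000):
    sweep(occ)

upstream = sum(occ[slow_site - 20:slow_site]) / 20       # just behind the blockage
downstream = sum(occ[slow_site + 1:slow_site + 21]) / 20  # just past it
print(upstream, downstream)  # density piles up behind the slow site
```

In this sketch the jam emerges purely from exclusion and a local slowdown, with no biology at all, which is the point of the physical analogy: once cells crowd and slow, further accumulation is self-reinforcing.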

Next, the team tested a strategy to stop this formation and thereby prevent antibiotic resistance from occurring in this way. They found that splitting a single dose in two steps without changing the total amount of antibiotics strongly reduced the emergence of a biofilm.

The authors say further research is now needed to determine if bacteria that are harmful to humans use similar behaviours to survive antibiotic exposure. If they do, then future treatments should take these behaviours into account in order to reduce antibiotic resistance.

"Our discoveries question the way we use antibiotics and show that increasing the dosage is not always the best way to stop biofilm development," says co-senior author Munehiro Asally, Associate Professor at the School of Life Sciences, University of Warwick. "The timing of the bacteria's exposure to drugs is also important."

"These insights could lead us to rethink the way antibiotics are administered to patients during some infections," concludes co-senior author Marco Polin, Associate Professor at the Department of Physics, University of Warwick, and a researcher at the Mediterranean Institute for Advanced Studies (IMEDEA), Mallorca.

Credit: 
eLife

Combination therapy may provide significant protection against lethal influenza

image: These findings provide important clues regarding the inflammatory cellular signature at different time intervals during severe influenza infection. Temporal dynamics in hematoxylin and eosin-stained lung sections or modified Giemsa-stained bronchoalveolar lavage cells from control and lethal influenza-challenged mice were evaluated at 1, 3, 5, 6, and 8 dpi. Initial epithelial cytopathic effects were prominently seen at 1 dpi (open arrow), which spread to alveoli and were followed by the severe inflammatory phase (3-5 dpi). By 6 dpi, significant vascular leakage and collapse of the alveoli became prominent, and animals died between 8-9 dpi with severe hemorrhagic exudates (hash) with abundant proteinaceous material in the alveolar spaces (asterisk). CON-Control; DPI-Days post infection; M-macrophage; L-lymphocyte; N-neutrophil, EP-epithelial cell. Representative image was selected from 5 mice per group.

Image: 
The American Journal of Pathology

Philadelphia, March 16, 2021 - A significant proportion of hospitalized patients with influenza develop complications of acute respiratory distress syndrome, driven by virus-induced cytopathic effects as well as exaggerated host immune response. Reporting in The American Journal of Pathology, published by Elsevier, investigators have found that treatment with an immune receptor blocker in combination with an antiviral agent markedly improves survival of mice infected with lethal influenza and reduces lung pathology in swine-influenza-infected piglets. Their research also provides insights into the optimal timing of treatment to prevent acute lung injury.

Previously, the investigators found that an excessive influx of neutrophils (infection-fighting immune cells) and the networks they create to kill pathogens, known as neutrophil extracellular traps (NETs), contribute to acute lung injury in influenza infection. Formation of NETs by activated neutrophils occurs via a cell death mechanism called NETosis, and the released NETs contain chromatin fibers that harbor toxic components.

A mouse model, commonly used in exploring influenza pathophysiology and drug therapies, was used in the current study. Because mice are not natural hosts for influenza, further validation in larger animals is necessary before testing in humans. Therefore, researchers also tested piglets infected with swine influenza virus. The animals were treated with a combination of a CXCR2 antagonist, SCH527123, together with an antiviral agent, oseltamivir.

The combination of SCH527123 and oseltamivir significantly improved survival in mice compared to either of the drugs administered alone. The combination therapy also reduced pulmonary pathology in piglets.

"Combination therapy reduces lung inflammation, alveolitis, and vascular pathology, indicating that aberrant neutrophil activation and release in NETs exacerbate pulmonary pathology in severe influenza," explains lead investigator Narasaraju Teluguakula, PhD, Center for Veterinary Health Sciences, Oklahoma State University, Stillwater, OK, USA. "These findings support the evidence that antagonizing CXCR2 may alleviate lung pathology and may have significant synergistic effects with antiviral treatment to reduce influenza-associated morbidity and mortality."

It can be challenging to balance the suppression of excessive neutrophil influx without compromising the beneficial host immunity conferred by neutrophils. Therefore, the researchers examined the temporal dynamics of NETs release in correlation with pathological changes during the course of infection in mice. During the early inflammatory phase, three to five days post infection, significant neutrophil activation and NETs release with relatively few hemorrhagic lesions was observed. In the late hemorrhagic exudative phase, significant vascular injury with declining neutrophil activity was seen.

Dr. Teluguakula also emphasizes that these findings provide the first evidence to support the strategy of testing combination therapy in a large animal influenza model. "In view of the close similarities in pulmonary pathology and immune responses between swine and humans, pig-influenza pneumonia models can serve as a common platform in understanding pathophysiology and host-directed drug therapies in human influenza infections and may be useful in advancing the translational impact of drug treatment studies in human influenza infections."

Credit: 
Elsevier

New imaging technology could help predict heart attacks

image: A new technique known as intravascular laser speckle imaging could one day be used to detect coronary plaques that are likely to lead to a heart attack. The researchers developed a small diameter intravascular catheter that incorporates a small-diameter fiber bundle, polarizer and GRIN lens to image the reflected speckle patterns onto a CMOS sensor.

Image: 
Seemantini Nadkarni, Wellman Center for Photomedicine

WASHINGTON -- Researchers have developed a new intravascular imaging technique that could one day be used to detect coronary plaques that are likely to lead to a heart attack. Heart attacks are often triggered when an unstable plaque ruptures and then blocks a major artery that carries blood and oxygen to the heart.

"If unstable coronary plaques could be detected before they rupture, pharmacological or other treatments could be initiated early to prevent heart attacks and save lives," said research team leader Seemantini Nadkarni from the Wellman Center for Photomedicine at Massachusetts General Hospital. "Our new imaging technique represents a major step toward achieving this."

In The Optical Society (OSA) journal Biomedical Optics Express, the researchers report a preclinical demonstration of their new intravascular laser speckle imaging (ILSI) technique in a living animal model. They show, for the first time, that ILSI can identify the distinct mechanical features of plaques that are most likely to rupture under physiological conditions of cardiac motion, blood flow and breathing.

"Reducing mortality from heart attacks in the general population requires a comprehensive screening strategy to identify at-risk patients and detect high-risk vulnerable plaques while they can be treated," said Nadkarni. "By providing the unique capability to measure mechanical stability -- a critical metric in detecting unstable plaques -- ILSI is poised to provide a new approach for coronary assessment."

Capturing mechanical stability of plaques
Although intravascular technologies have been developed to evaluate microstructural features of unstable plaques, recent studies have shown that mechanical features, in addition to microstructural and compositional features, influence plaque rupture.

"Measurement of the plaque mechanical properties is crucial in identifying unstable plaques with a propensity for rupture and subsequent heart attack," said Nadkarni. "ILSI provides the unique capability to quantify an index of mechanical properties of coronary plaques, thus providing a direct assessment of mechanical stability."

To estimate mechanical properties, ILSI uses laser speckle patterns that are formed when laser light is scattered from tissue. When viewed with a high-speed camera, the speckles fluctuate in time due to the viscoelastic properties of the plaque. This allows the researchers to measure and discriminate the mechanical properties of unstable plaques, which tend to be rich in lipids.

"For this new study, we developed a small diameter intravascular catheter that incorporates an optical fiber that delivers light to the coronary artery wall," said Nadkarni. "We also used a small-diameter fiber bundle, polarizer and GRIN lens to image the reflected speckle patterns onto a CMOS sensor."

For preclinical testing, the researchers evaluated the ability of their ILSI instrument to detect unstable plaques in a human coronary-to-swine xenograft model. This model system uses human coronary arteries that are sutured onto the beating heart of an anesthetized living pig. They assessed the mechanical properties of plaque inside the arteries by calculating the rate, or time constant, of fluctuations in the intensity of the speckle pattern and then compared their results with histopathological findings.
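The core measurement here, extracting a time constant from the decay of speckle intensity correlations, can be sketched numerically. The example below uses idealized synthetic decay curves (the numerical tau values are invented for illustration; the actual ILSI processing chain and calibration are more involved):

```python
import numpy as np

def time_constant(t, g2):
    """Estimate tau from an exponential decay g2(t) = exp(-t / tau)
    using a log-linear least-squares fit."""
    slope, _ = np.polyfit(t, np.log(g2), 1)
    return -1.0 / slope

t = np.linspace(0.001, 0.1, 200)      # lag times in seconds

# Softer, lipid-rich (unstable) plaque -> faster speckle fluctuations
# -> faster decorrelation -> smaller time constant. Stiffer (stable)
# plaque decorrelates more slowly.
g2_unstable = np.exp(-t / 0.005)
g2_stable = np.exp(-t / 0.040)

tau_u = time_constant(t, g2_unstable)
tau_s = time_constant(t, g2_stable)
print(tau_u, tau_s)   # the unstable plaque yields the shorter time constant
```

The fitted time constant is the quantity the researchers compared against histopathology: lower values flag the mechanically soft, lipid-pool plaques most prone to rupture.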

"The time constants in unstable plaques were significantly and distinctly lower than other stable plaques in the coronary wall," said Nadkarni. "These results demonstrated the exquisite diagnostic sensitivity and specificity of ILSI for detecting human lipid pool plaques that were most likely to rupture under physiological conditions."

The researchers say that the new technique could be easily integrated with other intracoronary technologies such as optical coherence tomography or intravascular ultrasound to combine the mechanical findings from ILSI with morphological information to improve the evaluation of plaque stability.

The researchers plan to continue to evaluate the capability of their ILSI instrument for rapid assessment of the coronary vasculature in live animals. Once these preclinical studies are complete, they will assess the safety of the catheter for use in humans and then begin the process of gaining regulatory approval for clinical use.

Credit: 
Optica

Patients value staff dedication most when evaluating substance use treatment facilities

Machine learning can be used to comb through online reviews of substance use treatment facilities to home in on qualities that are important to patients but remain hard to capture via formal means, such as surveys, researchers from the Perelman School of Medicine at the University of Pennsylvania show. The researchers found that professionalism and staff dedication to patients were two of the top qualities that could be attributed to either a negative or positive review of the facility. Findings from this study were published today in the Journal of General Internal Medicine.

"Searching for - and connecting with - therapy can be very difficult and confusing. Many individuals start their search online, where they are likely to see an online review accompanying other information about a treatment facility," said the study's lead author, Anish Agarwal, MD, a clinical innovation manager in the Penn Medicine Center for Digital Health and an assistant professor of Emergency Medicine. "These online reviews can provide commentary on what is driving positive, or negative, patient experiences throughout recovery, but they must be accurately identified. Through machine learning, we've shown that this is possible, and we hope such findings can be used to improve patient-centered addiction care."

Currently, there are no nationwide measures of quality to evaluate and compare facilities that treat substance use disorder. In the past, Agarwal - along with this study's co-author, Sharath Guntuku, PhD, a researcher in the Center for Digital Health and an assistant professor of Computer Science, and senior author Raina Merchant, MD, the director of the Center for Digital Health - analyzed reviews on Google and Yelp to see whether a national survey offered by the Substance Abuse and Mental Health Services Administration (SAMHSA) to inventory services being offered was also able to gauge patient satisfaction (it largely didn't). So the team set out to use a similar technique to gauge what drives positive or negative experiences with facilities through the unfiltered lens of Yelp.

"We felt that this would provide a great deal of insight into the patient experience," Guntuku said. "Tapping into user-generated reviews offers a way to understand their narrative."

To accomplish this, the researchers pulled reviews of SAMHSA-recognized facilities that had been reviewed at least five times, amounting to more than 500 facilities across the United States. The text of the reviews was then run through a natural language processing algorithm powered by machine learning, which identified topics within the reviews that the researchers could then categorize thematically.

Overall, the researchers classified 16 recurring themes in reviews. When it came to the positive reviews, the top themes were "long-term recovery," "dedicated staff," and "dedication to patients." The top three themes found in negative reviews were "professionalism," "phone communication," and "overall communication."

The fact that the top themes for both positive and negative reviews had to do with the conduct and commitment of facility staff does not come as a great surprise.

"Dedication and professionalism are critical to the recovery experience," Agarwal said. "Having trusted and approachable staff who care about individuals is the crux of all health care, but it is likely underscored in addiction."

Other top themes on the positive side were "group therapy experience" and "inpatient rehabilitation," while "wait times in facility" and "management" rounded out the top five most prevalent themes among negative reviews.

"We are still relatively early in this research, but we're getting a direct look into some core values that substance use treatment facilities could use to guide and improve their treatment," Merchant said. "This feedback is tremendously valuable, and we're showing that it can be distilled effectively into key themes through the use of machine learning."

Agarwal said he and the other researchers are hoping to explore more ways that patients, families, and their support networks think about substance use treatment and health care as a whole.

"In today's digital world, there is a robust and enormous amount of interconnectedness which we can harness to drive forward quality care and support health systems in learning from their patients," Agarwal said.

Credit: 
University of Pennsylvania School of Medicine

New study predicts changing Lyme disease habitat across the West Coast

FLAGSTAFF, Ariz. -- March 16, 2021 -- The findings of a recent analysis conducted by the Translational Genomics Research Institute (TGen), an affiliate of City of Hope, suggest that ecosystems suitable for harboring ticks that carry debilitating Lyme disease could be more widespread than previously thought in California, Oregon and Washington.

Bolstering the research were the efforts of an army of "citizen scientists" who collected and submitted 18,881 ticks over nearly three years through the Free Tick Testing Program created by the Bay Area Lyme Foundation, which funded the research, producing a wealth of data for scientists to analyze.

This new study builds on initial research led by the late Nate Nieto, Ph.D., at Northern Arizona University, and Daniel Salkeld, Ph.D., of Colorado State University.

This immense sample collection represented many times the number of ticks professional biologists could gather through field surveys, and it was assembled in far less time and at a fraction of the cost. This kind of citizen participation -- which in the future could include smart-phone apps and photography -- could become "a powerful tool" for tracking other animal- and insect-borne infectious diseases important for monitoring human and environmental health, according to study results published in the scientific journal PLOS ONE.

This study expands on previous work in California and is the first study to produce high resolution distributions of both actual and potential tick habitat in Oregon and Washington.

"This study is a great example of how citizen scientists can help -- whether tracking climate change, fires, habitat changes or species distribution shifts -- at a much finer scale than ever before," said Tanner Porter, Ph.D., a TGen Research Associate and lead author of the study.

Specifically, Dr. Porter said the findings of this study could help raise awareness among physicians across the West, and throughout the nation, that tick-borne diseases are possible throughout a wider expanse than ever thought before.

Lyme disease is caused by a bacterium, Borrelia burgdorferi (sensu lato), which is carried by ticks; in this study specifically, the western black-legged tick, Ixodes pacificus. These ticks also carry pathogens associated with relapsing fever and anaplasmosis, which like Lyme disease can cause fever, headache, chills and muscle aches. Some patients with Lyme disease may experience a rash that may look like a red oval or bull's-eye.

If not treated promptly, Lyme disease can progress to a debilitating stage, becoming difficult and sometimes impossible to cure. This may include inflammation of the heart and brain.

Lyme disease is the most common tickborne illness in the U.S., annually causing an estimated 500,000 infections, according to the CDC. However, even the most commonly used diagnostic test for Lyme disease misses up to 70% of early-stage cases. There is no treatment that works for all patients.

"We hope this study data encourages residents of California, Oregon and Washington to take precautions against ticks in the outdoors, and helps to ensure that local healthcare professionals will consider diagnoses of Lyme when patients present with symptoms," said Linda Giampa, Executive Director of the Bay Area Lyme Foundation.

Citizen scientists were encouraged to mail in ticks collected off individuals' bodies, pets and clothing. They noted the time and place where the ticks were discovered, and described activities involved, the surrounding environment, and in many cases specific GPS coordinates.

Field studies could take decades to produce the same amount of data, said Dr. Porter, adding, "this citizen science technique could allow for real-time distribution monitoring of ticks and other relevant species, an important consideration with emerging pathogens, changing land-use patterns, and climate change."

Credit: 
The Translational Genomics Research Institute

Is there life on Mars today, and where?

image: FIGURE 1: Mars Biosphere Engine. a, The zonally averaged Mars elevation from MOLA8 shows how the formation of the planetary crustal dichotomy has driven hydrology and energy flux throughout geological times, creating conditions for an origin of life, the formation of habitats, and dispersal pathways. While conditions do not allow sustained surface water in the present day, recent volcanic activity and subsurface water reservoirs may maintain habitats and dispersal pathways for an extant biosphere. The origin(s) of methane emissions remain enigmatic, their spatial distribution overlapping with areas of magma and water/ice accumulations at the highland/lowlands boundary. b, Young volcanoes in Coprates Chasma, Valles Marineris estimated to be 200-400 million years old by Brož et al. (2017). c, Regions of subglacial water (blue) detected at the base of the south polar layered deposits by the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS) instrument.

Image: 
Credit Image: (b) NASA-JPL/MRO-University of Arizona (c) Lauro et al., (2020)

March 16, 2021, Mountain View, CA - In a comment published today in Nature Astronomy, Dr. Nathalie Cabrol, Director of the Carl Sagan Center for Research at the SETI Institute, challenges assumptions about the possibility of modern life on Mars held by many in the scientific community.

As the Perseverance rover embarks on a journey to seek signs of ancient life in the 3.7-billion-year-old Jezero crater, Cabrol theorizes that not only could life still be present on Mars today, but it could also be much more widespread and accessible than previously believed. Her conclusions are based on years of exploration of early Mars analogs in extreme environments in the Chilean altiplano and the Andes, funded by the NASA Astrobiology Institute. It's essential, she argues, that we consider microbial habitability on Mars through the lens of a 4-billion-year-old environmental continuum rather than through frozen environmental snapshots, as we tend to do. Also critical is to remember that, by all terrestrial standards, Mars became an extreme environment very early.

In extreme environments, while water is an essential condition, it is far from being enough. What matters most, Cabrol says, is how extreme environmental factors such as a thin atmosphere, UV radiation, salinity, aridity, temperature fluctuations and many more interact with each other, not water alone. "You can walk on the same landscape for miles and find nothing. Then, maybe because the slope changes by a fraction of a degree, the texture or the mineralogy of the soil is different because there is more protection from UV, all of a sudden, life is here. What matters in extreme worlds to find life is to understand the patterns resulting from these interactions." "Follow the water" is good. "Follow the patterns" is better.

This interaction unlocks life's distribution and abundance in those landscapes. That does not necessarily make it easier to find, as the last refuges for microbes in extreme environments can be at the micro- to nanoscale within the cracks in crystals. On the other hand, observations made in terrestrial analogs suggest that these interactions considerably expand the potential territory for modern life on Mars and could bring it closer to the surface than long theorized.

If Mars still harbors life today, which Cabrol thinks it does, to find it we must take the approach of Mars as a biosphere. As such, its microbial habitat distribution and abundance are tightly connected not only to where life could theoretically survive today but also where it was able to disperse and adapt over the entire history of the planet, and the keys to that dispersion lie in early geological times. Before the Noachian/Hesperian transition, 3.7-3.5 billion years ago, rivers, oceans, wind, dust storms would have taken it everywhere across the planet. "Importantly, dispersal mechanisms still exist today, and they connect the deep interior to the subsurface," Cabrol says.

But a biosphere cannot run without an engine. Cabrol proposes that the engine sustaining modern life on Mars still exists, that it is over 4 billion years old, and that it has since migrated out of sight, underground.

If this is correct, these observations may modify our definition of what we call "Special Regions" to include the interaction of extreme environmental factors as a critical element, one that potentially expands their distribution in substantial ways and could make us rethink how to approach them. The issue here, says Cabrol, is that we do not yet have global environmental data at the scale and resolution that matter for understanding modern microbial habitability on Mars. As human exploration gives us a deadline to retrieve pristine samples, Cabrol suggests options regarding the search for extant life, including the types of missions that could fulfill objectives critical to astrobiology, human exploration, and planetary protection.

Credit: 
SETI Institute

What happened to Mars's water? It is still trapped there

image: While it was previously suspected that most of Mars's water was lost to space, a significant portion--between 30 and 99 percent--has been lost to hydration of the crust, according to a new study. Some water was released from the interior via volcanism, but not enough to replenish the planet's once significant supply. Evidence for the water's fate was found in the ratio of deuterium to hydrogen in the planet's atmosphere and rocks.

Image: 
Caltech

Billions of years ago, the Red Planet was far more blue; according to evidence still found on the surface, abundant water flowed across Mars, forming pools, lakes, and deep oceans. The question, then, is where did all that water go?

The answer: nowhere. According to new research from Caltech and JPL, a significant portion of Mars's water--between 30 and 99 percent--is trapped within minerals in the planet's crust. The research challenges the current theory that the Red Planet's water escaped into space.

The Caltech/JPL team found that around four billion years ago, Mars was home to enough water to have covered the whole planet in an ocean about 100 to 1,500 meters deep, a volume roughly equivalent to half of Earth's Atlantic Ocean. But by a billion years later, the planet was as dry as it is today. Previously, scientists seeking to explain what happened to the flowing water on Mars had suggested that it escaped into space, a victim of Mars's low gravity. Though some water did indeed leave Mars this way, it now appears that such an escape cannot account for most of the water loss.
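The quoted figures can be checked with back-of-the-envelope arithmetic. A minimal sketch follows; Mars's mean radius and the Atlantic's total volume below are approximate reference values, not numbers from the study:

```python
import math

MARS_RADIUS_KM = 3389.5        # approximate mean radius of Mars
ATLANTIC_VOLUME_KM3 = 3.1e8    # approximate volume of the Atlantic Ocean

# Surface area of a sphere: 4 * pi * r^2 (~1.44e8 km^2 for Mars)
surface_area_km2 = 4 * math.pi * MARS_RADIUS_KM ** 2

# Volume of a global water layer 100 m (0.1 km) to 1,500 m (1.5 km) deep
low_km3 = surface_area_km2 * 0.1    # ~1.4e7 km^3
high_km3 = surface_area_km2 * 1.5   # ~2.2e8 km^3

# Half the Atlantic (~1.55e8 km^3) indeed falls inside this range.
half_atlantic_km3 = ATLANTIC_VOLUME_KM3 / 2
```

With these round numbers, half the Atlantic sits comfortably within the 100-1,500 m global-layer range, consistent with the comparison in the text.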

"Atmospheric escape doesn't fully explain the data that we have for how much water actually once existed on Mars," says Caltech PhD candidate Eva Scheller (MS '20), lead author of a paper on the research that was published by the journal Science on March 16 and presented the same day at the Lunar and Planetary Science Conference (LPSC). Scheller's co-authors are Bethany Ehlmann, professor of planetary science and associate director for the Keck Institute for Space Studies; Yuk Yung, professor of planetary science and JPL senior research scientist; Caltech graduate student Danica Adams; and Renyu Hu, JPL research scientist. Caltech manages JPL for NASA.

The team studied the quantity of water on Mars over time in all its forms (vapor, liquid, and ice) and the chemical composition of the planet's current atmosphere and crust through the analysis of meteorites as well as using data provided by Mars rovers and orbiters, looking in particular at the ratio of deuterium to hydrogen (D/H).

Water is made up of hydrogen and oxygen: H2O. Not all hydrogen atoms are created equal, however. There are two stable isotopes of hydrogen. The vast majority of hydrogen atoms have just one proton within the atomic nucleus, while a tiny fraction (about 0.02 percent) exist as deuterium, or so-called "heavy" hydrogen, which has a proton and a neutron in the nucleus.

The lighter-weight hydrogen (also known as protium) has an easier time escaping the planet's gravity into space than its heavier counterpart. Because of this, the escape of a planet's water via the upper atmosphere would leave a telltale signature on the ratio of deuterium to hydrogen in the planet's atmosphere: there would be an outsized portion of deuterium left behind.

However, the loss of water solely through the atmosphere cannot explain both the observed deuterium to hydrogen signal in the Martian atmosphere and large amounts of water in the past. Instead, the study proposes that a combination of two mechanisms--the trapping of water in minerals in the planet's crust and the loss of water to the atmosphere--can explain the observed deuterium-to-hydrogen signal within the Martian atmosphere.
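The logic of the two mechanisms can be illustrated with a toy Rayleigh-distillation model. This is a sketch under stated assumptions, not the study's actual model: the fractionation factor is an illustrative placeholder, and crustal hydration is idealized as removing water without changing the D/H ratio.

```python
def dh_enrichment(fraction_escaped, alpha=0.32):
    """Toy Rayleigh model of deuterium enrichment.

    fraction_escaped: fraction of the original water lost to space.
    alpha: illustrative fractionation factor (< 1 means the escaping
           flux carries proportionally less deuterium than the reservoir).
    Water sequestered in crustal minerals is assumed to leave the D/H
    ratio unchanged, so only escape appears in the formula.
    Returns the factor by which the reservoir's D/H ratio has risen.
    """
    remaining = 1.0 - fraction_escaped
    return remaining ** (alpha - 1.0)

# If nearly all the loss went to space, D/H is strongly enriched...
escape_only = dh_enrichment(0.90)
# ...but if the crust absorbed half of the same total loss, the
# observed enrichment is much milder.
escape_plus_crust = dh_enrichment(0.45)
```

This is why, for a fixed observed D/H signal, a large crustal sink allows the initial water inventory to have been much bigger than atmospheric escape alone would imply.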

When water interacts with rock, chemical weathering forms clays and other hydrous minerals that contain water as part of their mineral structure. This process occurs on Earth as well as on Mars. Because Earth is tectonically active, old crust continually melts into the mantle and forms new crust at plate boundaries, recycling water and other molecules back into the atmosphere through volcanism. Mars, however, is mostly tectonically inactive, and so the "drying" of the surface, once it occurs, is permanent.

"Atmospheric escape clearly had a role in water loss, but findings from the last decade of Mars missions have pointed to the fact that there was this huge reservoir of ancient hydrated minerals whose formation certainly decreased water availability over time," says Ehlmann.

"All of this water was sequestered fairly early on, and then never cycled back out," Scheller says. The research, which relied on data from meteorites, telescopes, satellite observations, and samples analyzed by rovers on Mars, illustrates the importance of having multiple ways of probing the Red Planet, she says.

Ehlmann, Hu, and Yung previously collaborated on research that seeks to understand the habitability of Mars by tracing the history of carbon, since carbon dioxide is the principal constituent of the atmosphere. Next, the team plans to continue to use isotopic and mineral composition data to determine the fate of nitrogen and sulfur-bearing minerals. In addition, Scheller plans to continue examining the processes by which Mars's surface water was lost to the crust using laboratory experiments that simulate Martian weathering processes, as well as through observations of ancient crust by the Perseverance rover. Scheller and Ehlmann will also aid in Mars 2020 operations to collect rock samples for return to Earth that will allow the researchers and their colleagues to test these hypotheses about the drivers of climate change on Mars.

Credit: 
California Institute of Technology

Going back in time restores decades of quiet corn drama

image: Using a chronosequence of corn lines, University of Illinois researchers found that decades of breeding and reliance on chemical fertilizers prevent modern corn from recruiting nitrogen-fixing microbes.

Image: 
Alonso Favela, University of Illinois.

URBANA, Ill. - Corn didn't start out as the powerhouse crop it is today. No, for most of the thousands of years it was undergoing domestication and improvement, corn grew humbly within the limits of what the environment and smallholder farmers could provide.

For its fertilizer needs, early corn made friends with nitrogen-fixing soil microbes by leaking an enticing sugary cocktail from its roots. The genetic recipe for this cocktail was handed down from parent to offspring to ensure just the right microbes came out to play.

But then the Green Revolution changed everything. Breeding tools improved dramatically, leading to faster-growing, higher-yielding hybrids than the world had ever seen. And synthetic fertilizer application became de rigueur.

That's the moment corn left its old microbe friends behind, according to new research from the University of Illinois. And it hasn't gone back.

"Increasing selection for aboveground traits, in a soil setting where we removed all reliance on microbial functions, degraded microbial sustainability traits. In other words, over the course of half a century, corn breeding altered its microbiome in unsustainable ways," says Angela Kent, professor in the Department of Natural Resources and Environmental Sciences at the University of Illinois and co-author of a new study in the International Society of Microbial Ecology Journal.

Kent, along with co-authors Alonso Favela and Martin Bohn, found modern corn varieties recruit fewer "good" microbes - the ones that fix nitrogen in the soil and make it available for crops to take up - than earlier varieties. Instead, throughout the last several decades of crop improvement, corn has been increasingly recruiting "bad" microbes. These are the ones that help synthetic nitrogen fertilizers and other sources of nitrogen escape the soil, either as potent greenhouse gases or in water-soluble forms that eventually end up in the Gulf of Mexico and contribute to oxygen-starved "dead zones."

"When I was first analyzing our results, I got a little disheartened," says Favela, a doctoral student in the Program in Ecology, Evolution, and Conservation Biology at Illinois and first author on the study. "I was kind of sad we had such a huge effect on this plant and the whole ecosystem, and we had no idea we were even doing it. We disrupted the very root of the plant."

To figure out how the corn microbiome has changed, Favela recreated the history of corn breeding from 1949 to 1986 by planting a chronological sequence of 20 off-patent maize lines in a greenhouse.

"We have access to expired patent-protected lines that were created during different time periods and environmental conditions. We used that understanding to travel back in time and look at how the associated microbiome was changing chronologically," he says.

As a source of microbes, Favela inoculated the pots with soil from a local ag field that hadn't been planted with corn or soybeans for at least two years. Once the plants were 36 days old, he sequenced the microbial DNA he collected from soil adhering to the roots.

"We characterized the microbiome and microbial functional genes related to transformations that occur in the nitrogen cycle: nitrogen fixation, nitrification, and denitrification," he says. "We found more recently developed maize lines recruited fewer microbial groups capable of sustainable nitrogen provisioning and more microbes that contribute to nitrogen losses."

Kent says breeding focused on aboveground traits, especially in a soil context flooded with synthetic nitrogen fertilizers, may have tweaked the sugary cocktail roots exude to attract microbes.

"Through that time period, breeders weren't selecting for maintenance of microbial functions like nitrogen fixation and nitrogen mineralization because we had replaced all those functions with agronomic management. As we started selecting for aboveground features like yield and other traits, we were inadvertently selecting against microbial sustainability and even actively selecting for unsustainable microbiome features such as nitrification and denitrification," she says.

Now that it's clear something has changed, can breeders bring good microbes back in corn hybrids of the future?

Bohn, corn breeder and associate professor in the Department of Crop Sciences at Illinois, thinks it's very possible to "rewild" the corn microbiome. For him, the answer lies in teosinte, a wild grass most people would have to squint pretty hard at to imagine as the parent of modern corn.

Like wild things everywhere, teosinte evolved in the rich context of an entire ecosystem, forming close relationships with other organisms, including soil microbes that made soil nutrients easier for the plant to access. Bohn thinks it should be possible to find teosinte genes responsible for creating the root cocktail that attracts nitrogen-fixing microbes. Then, it's just a matter of introducing those genes into novel corn hybrids.

"I never thought we would go back to teosinte because it's so far removed from what we want in our current agricultural landscape. But it may hold the key not only for encouraging these microbial associations; it also may help corn withstand climate change and other stresses," Bohn says. "We actually need to go back to teosinte and start investigating what we left behind so we can bring back these important functions."

Bringing back the ability for corn to recruit its own nitrogen fixation system would allow producers to apply less nitrogen fertilizer, leading to less nitrogen loss from the system overall.

"Farmers don't always know how much nitrogen they will need, so, historically, they've dumped as much as possible onto the fields. If we bring these characteristics back into corn, it might be easier for them to start rethinking the way they manage nitrogen," Bohn says.

Kent adds that a little change could go a long way.

"If we could reduce nitrogen losses by even 10% across the growing region of the Midwest, that would have huge consequences for the environmental conditions in the Gulf of Mexico," she says.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Standing out from the crowd

Corporate strategies should be as unique as possible, that is, highly specific to each individual company. This enables companies to compete successfully in the long term. However, the capital market and others, including analysts, often react negatively to unique strategies, because companies that deviate from typical industry standards are more complex to evaluate. This regularly discourages companies from pursuing unique strategies, even when these would benefit the company in the long term. This contradiction is known as the "uniqueness paradox". A research team from the Universities of Göttingen and Groningen has investigated the influence of different types of investors on the extent of the paradox and thus on the choice of unique strategies. The results were published in Strategic Management Journal.

The researchers analyzed data from around 900 listed US companies over a period of twelve years. The focus was on US companies because of the larger number of companies listed on the stock exchange there. The main findings: a company's investors have significant influence on the uniqueness of its chosen strategy, and "dedicated" investors (e.g. pension funds) in particular push strategies toward greater uniqueness. Their approach is characterized by a long-term perspective and a willingness to look closely at the company's strategies. They can help resolve the uniqueness paradox because they recognize the value of unique strategies for long-term corporate development. "They are more likely to encourage management to implement these strategies," says co-author Professor Michael Wolff, who holds the Chair of Management and Controlling at the University of Göttingen. "This is especially true for industries where companies are more difficult for the capital market to assess because, for example, corporate profits vary widely."

The research team derives specific recommendations for action from the study. Firstly, the study shows companies the great importance of continuous and, above all, detailed strategy communication with investors. In particular, it is important to explain the reasons in a transparent and coherent manner for following a different course from the strategies of other companies in the same industry. Secondly, the study shows investors how important their function as owners is in actively engaging with corporate strategy in order to encourage management to operate in the long term. Thirdly, the study makes it clear that investors who have so far acted very passively should be made more accountable by institutions that regulate the capital market to engage more intensively with companies and their strategies.

Credit: 
University of Göttingen

The city formula

(Vienna, March 17, 2021) When complex systems double in size, many of their parts do not. Characteristically, some aspects will grow by only about 80 percent, others by about 120 percent. The astonishing uniformity of these two growth rates is captured by so-called "scaling laws." Scaling laws are observed everywhere in the world, from biology to physical systems. They also apply to cities. Yet, while a multitude of examples show their presence, the reasons for their emergence are still a matter of debate.

A new publication in the Journal of The Royal Society Interface now provides a simple explanation for urban scaling laws: Carlos Molinero and Stefan Thurner of the Complexity Science Hub Vienna (CSH) derive them from the geometry of a city.

Scaling laws in cities

An example of an urban scaling law is the number of gas stations: If a city with 20 gas stations doubles its population, the number of gas stations does not increase to 40, but only to 36. This growth rate of about 0.80 per doubling applies to much of the infrastructure of a city. For example, the energy consumption per person or the land coverage of a town rises by only 80 percent with each doubling. Since this growth is slower than what is expected from doubling, it is called sub-linear growth.

On the other hand, cities show more-than-doubling rates in more socially driven contexts. People in larger cities earn consistently more money for the same work, make more phone calls, and even walk faster than people in smaller towns. This super-linear growth rate is around 120 percent for every doubling.

Remarkably, these two growth rates, 0.8 and 1.2, show up over and over again in literally dozens of city-related contexts and applications. However, so far it has not really been understood where these numbers come from.
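Both regimes follow a power law of the form Y ∝ P^β, with β ≈ 0.8 for infrastructure and β ≈ 1.2 for socioeconomic quantities. A minimal sketch using the approximate exponents quoted above:

```python
def scaled_quantity(y0, pop_ratio, beta):
    """Scale a city-level quantity y0 when the population grows by
    pop_ratio, assuming a power law Y ~ P**beta."""
    return y0 * pop_ratio ** beta

# Sub-linear infrastructure (beta ~ 0.8): doubling a city with
# 20 gas stations yields roughly 35, not 40.
stations = scaled_quantity(20, 2, 0.8)

# Super-linear socioeconomics (beta ~ 1.2): total wages more than
# double, so the per-capita wage rises by about 15 percent.
per_capita_wage_factor = scaled_quantity(1.0, 2, 1.2) / 2
```

Note that per-capita quantities scale as P^(beta - 1), which is why sub-linear exponents mean savings per person and super-linear exponents mean gains per person as cities grow.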

It's all in the geometry

Stefan Thurner and former CSH researcher Carlos Molinero, who worked on this publication during his time in Vienna, now show that these scaling laws can be explained by the spatial geometry of cities. "Cities are always built in a way that infrastructure and people meet," says Molinero, an urban science expert. "We therefore think that scaling laws must somehow emerge from the interplay between the places people live in, and the spaces they use to move through a city -- basically its streets."

"The innovative finding of this paper is how the spatial dimensions of a city relate to each other," adds complexity researcher and physicist Stefan Thurner.

Fractal geometry

To come to this conclusion, the researchers first mapped in three dimensions where people live. They used open data for the height of buildings in more than 4,700 cities in Europe. "We know most of the buildings in 3D, so we can estimate how many floors a building has and how many people live in it," says Thurner. The scientists assigned a dot to every person living in a building. Together, these dots form a sort of "human cloud" within a city.

Clouds are fractals. Fractals are self-similar, meaning that if you zoom in, their parts look very similar to the whole. Using the human cloud, the researchers were able to determine the fractal dimension of a city's population: They retrieved a number that describes the human cloud in every city. Similarly, they calculated the fractal dimension of cities' road networks.
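A standard way to estimate such a fractal dimension is box counting: cover the point cloud with grids of shrinking cell size and measure how the number of occupied cells grows. The sketch below is a generic illustration of the technique, not the paper's exact procedure:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a 2-D point cloud.

    For each cell size eps, count the grid cells containing at least
    one point; the dimension is the slope of log(count) vs. log(1/eps).
    """
    counts = []
    for eps in scales:
        occupied = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# Sanity check: points filling a unit square should come out close to 2.
rng = np.random.default_rng(0)
square = rng.random((20000, 2))
dim = box_counting_dimension(square, scales=[0.2, 0.1, 0.05, 0.025])
```

Applied once to a city's "human cloud" and once to its street network, two such estimates give the pair of fractal dimensions whose ratio the authors relate to the scaling exponent.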

"Although these two numbers vary widely from city to city, we discovered that the ratio between the two is a constant," Thurner says. The researchers identified this constant as the "sublinear scaling exponent."

Aside from the elegance of the explanation, the finding has potential practical value, as the scientists point out. "At first sight this looks like magic, but it makes perfect sense if one takes a closer look," Thurner says. "It's this scaling exponent that determines how the properties of a city change with its size, and that is relevant because many cities around the world are growing rapidly."

A formula for sustainable urban planning

The number of people living in cities worldwide is expected to roughly double in the next 50 to 80 years. "Scaling laws show us what this doubling means in terms of wages, crime, inventiveness or resources needed per person -- all this is important information for urban planners," Thurner points out.

To know the scaling exponent of a particular city could help urban planners keep the gigantic resource demands of urban growth at bay. "We can now think specifically about how to get this number as small as possible, for example through clever architectural solutions and radically different approaches to mobility and infrastructure construction," says Thurner. "The smaller the scaling exponent, the higher the resource efficiency of a city," he concludes.

Credit: 
Complexity Science Hub

How sperm remember

It has long been understood that a parent's DNA is the principal determinant of health and disease in offspring. Yet inheritance via DNA is only part of the story; aspects of a father's lifestyle, such as diet, excess weight and stress levels, have been linked to health consequences for his offspring. This occurs through the epigenome - heritable biochemical marks associated with the DNA and the proteins that bind it. But how this information is transmitted at fertilization, along with the exact mechanisms and molecules in sperm involved in the process, has been unclear until now.

A new study from McGill, published recently in Developmental Cell, has made a significant advance in the field by identifying how environmental information is transmitted by non-DNA molecules in the sperm. It is a discovery that advances scientific understanding of the heredity of paternal life experiences and potentially opens new avenues for studying disease transmission and prevention.

A paradigm shift in understanding of heredity

"The big breakthrough with this study is that it has identified a non-DNA based means by which sperm remember a father's environment (diet) and transmit that information to the embryo," says Sarah Kimmins, PhD, the senior author on the study and the Canada Research Chair in Epigenetics, Reproduction and Development. The paper builds on 15 years of research from her group. "It is remarkable, as it presents a major shift from what is known about heritability and disease from being solely DNA-based, to one that now includes sperm proteins. This study opens the door to the possibility that the key to understanding and preventing certain diseases could involve proteins in sperm."

"When we first started seeing the results, it was exciting, because no one has been able to track how those heritable environmental signatures are transmitted from the sperm to the embryo before," adds PhD candidate Ariane Lismer, the first author on the paper. "It was especially rewarding because it was very challenging to work at the molecular level of the embryo, just because you have so few cells available for epigenomic analysis. It is only thanks to new technology and epigenetic tools that we were able to arrive at these results."

Changes in sperm proteins affect offspring

To determine how information that affects development gets passed on to embryos, the researchers manipulated the sperm epigenome by feeding male mice a folate-deficient diet and then tracing the effects on particular groups of molecules in proteins associated with DNA.

They found that diet-induced changes to a certain group of molecules (methyl groups) attached to histone proteins (which are critical in packing DNA into cells) led to alterations in gene expression in embryos and to birth defects of the spine and skull. What was remarkable was that the changes to the methyl groups on the histones in sperm were transmitted at fertilization and remained in the developing embryo.

"Our next steps will be to determine if these harmful changes induced in the sperm proteins (histones) can be repaired. We have exciting new work that suggest that this is indeed the case," adds Kimmins. "The hope offered by this work is that by expanding our understanding of what is inherited beyond just the DNA, there are now potentially new avenues for disease prevention which will lead to healthier children and adults."

Credit: 
McGill University

New study points to novel drug target for treating COVID-19

image: Researchers from Cleveland Clinic's Florida Research and Innovation Center (FRIC) have identified a potential new target for anti-COVID-19 therapies. Their findings were published in Nature Microbiology.

Image: 
Cleveland Clinic

March 16, 2021, PORT ST. LUCIE, FL: Researchers from Cleveland Clinic's Florida Research and Innovation Center (FRIC) have identified a potential new target for anti-COVID-19 therapies. Their findings were published in Nature Microbiology.

Led by FRIC scientific director Michaela Gack, Ph.D., the team discovered that a coronavirus enzyme called PLpro (papain-like protease) blocks the body's immune response to the infection. More research is necessary, but the findings suggest that therapeutics that inhibit the enzyme may help treat COVID-19.

"SARS-CoV-2 - the virus that causes COVID-19 - has evolved quickly against many of the body's well-known defense mechanisms," Gack said. "Our findings, however, offer insights into a never-before characterized mechanism of immune activation and how PLpro disrupts this response, enabling SARS-CoV-2 to freely replicate and wreak havoc throughout the host. We discovered that inhibiting PLpro may help rescue the early immune response that is key to limiting viral replication and spread."

One of the body's frontline immune defenses is a class of receptor proteins, including one called MDA5, that identify invaders by foreign patterns in their genetic material. When the receptors recognize a foreign pattern, they become activated and kick-start the immune system into antiviral mode. This is done in part by increasing the downstream expression of proteins encoded by interferon-stimulated genes (ISGs).

In this study, Gack and her team identified a novel mechanism that leads to MDA5 activation during virus infection. They found that ISG15 must physically bind to specific regions in the MDA5 receptor - a process termed ISGylation - in order for MDA5 to effectively activate and unleash antiviral actors against invaders. They showed that ISGylation helps to promote the formation of larger MDA5 protein complexes, which ultimately results in a more robust immune response against a range of viruses.

"While discovery of a novel mechanism of immune activation is exciting on its own," Gack said, "we also discovered a bit of bad news, which is that SARS-CoV-2 also understands how the mechanism works, considering it has already developed a strategy to block it."

The research team shows that the coronavirus enzyme PLpro physically interacts with the receptor MDA5 and inhibits the ISGylation process.

"We're already looking forward to the next phase of study to investigate whether blocking PLpro's enzymatic function, or its interaction with MDA5, will help strengthen the human immune response against the virus," Gack said. "If so, PLpro would certainly be an attractive target for future anti-COVID-19 therapeutics."

Credit: 
Cleveland Clinic

Sleep troubles may complicate the grieving process

Those who have persistent trouble sleeping may have an especially difficult grieving process after the death of a loved one, a new study co-authored by a University of Arizona researcher finds.

Most people who lose a close friend or family member will experience sleep troubles as part of the grieving process, as the body and mind react to the stress of the event, said study co-author Mary-Frances O'Connor, a professor in the UArizona Department of Psychology.

But O'Connor and her collaborators found that those who had persistent sleep challenges before losing someone were at higher risk for developing complicated grief after a loss. Complicated grief is characterized by a yearning for a lost loved one so intense and persistent that it disrupts a person's daily functioning. It occurs in 7-10% of bereaved people, O'Connor said.

"We know that, for many people, experiencing the death of a loved one is followed by sleep disruption - not surprisingly, given how stressful it is to lose a loved one," said O'Connor, who directs the university's Grief, Loss and Social Stress Laboratory. "We also know that people who have a more prolonged grief disorder tend to have persistent sleep problems. That led us to ask: What if the reverse is possible? Could it be that people who have had sleep disruption and then experience the death of a loved one are more likely to develop complicated grief?"

O'Connor and her collaborators at Erasmus University Medical Center in the Netherlands and the Phoenix VA Health Care System looked at data from the multiyear Rotterdam Study, which followed a group of middle-aged and older adults over time and looked at various aspects of their physical and mental health.

Participants in the study were asked, among other things, to keep sleep diaries documenting the quality of their sleep. They also were asked to wear a wristwatch monitor, called an actigraph, that objectively measures how long it takes a person to fall asleep, how often a person wakes during the night and how much of the time spent in bed is awake versus asleep.

In addition, participants were asked in interviews if they were still grieving the loss of someone who died in recent months or years, and they completed follow-up assessments of their grief symptoms.

The researchers compared study participants' initial responses to what they said approximately six years later, focusing specifically on participants who experienced the loss of a loved one between the first interview and the follow-up.

"What we saw was that if at the first time point you had sleep disruption - both objective and self-reported - you were more likely to be in the complicated grief group than the non-complicated grief group at the second time point," O'Connor said. "So, poor sleep might not only accompany grief but also be a risk factor for developing complicated grief after a loss."

The researchers' findings are published in the Journal of Psychiatric Research.

Sleep is critical for both physical and mental health, which could be why it impacts the grieving process, O'Connor said.

"We know that sleep is important for processing emotional events that happen during the daytime," she said. "Sleep also helps us to rest and restore our physical body, and grief is a very stressful experience for the body. Being able to rest and restore probably helps us wake up the next day a little more physically prepared to deal with the grief."

O'Connor says temporary sleep disturbances prior to the death of a loved one - such as stress-induced sleeplessness while caring for a sick family member - are not of as much concern. What is of more concern is a persistent sleep issue, which is more likely to put a person at risk for complicated grief.

O'Connor suggests health care and other support professionals consider sleep history when treating a bereaved person.

"Because grief is such a disruptive and difficult event, doctors often, I think, forget to ask about history when considering how to intervene, rather than just about what's going on during this intense moment," she said. "When physicians and the helping professions are working with bereaved people, they should ask about the history of sleep problems they've had, and not just what sleep problems they're having right now."

Credit: 
University of Arizona