New COVID-19 model shows little benefit in vaccinating high-risk individuals first

BROOKLYN, New York, Tuesday, January 19, 2021 - The World Health Organization reports that as of January 19, 2021, there are approximately 94 million cases of COVID-19 globally, with over 2 million deaths. In the face of these numbers -- driven in part by an aggressive resurgence of the virus in the U.S. -- health authorities face a delicate balancing act: how to enact policies that keep citizens safe while doing the least possible damage to quality of life and local economies, especially in smaller cities and towns, where short supplies of intensive care units and tight budgets make the line between precautionary measures and normalcy even thinner.

A new theory and simulation platform that can create predictive models based on aggregated data from observations taken across multiple strata of society could prove invaluable.

Developed by a research team led by Maurizio Porfiri, Institute Professor at the NYU Tandon School of Engineering, the novel open-source platform comprises an agent-based model (ABM) of COVID-19 for the entire town of New Rochelle, located in Westchester County in New York State.

In the paper "High-Resolution Agent-Based Modeling of COVID-19 Spreading in a Small Town," published in Advanced Theory and Simulations, the team trains its system, developed at the resolution of a single individual, on the city of New Rochelle -- one of the first outbreaks registered in the United States.

The ABM replicates, geographically and demographically, the town structure obtained from U.S. Census statistics and superimposes a high-resolution -- both temporal and spatial -- representation of the epidemic at the individual level, considering physical locations as well as unique features of communities, like human behavioral trends or local mobility patterns.
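
The published model itself is open source and far more detailed; purely as an illustration of what individual-level, agent-based modeling means, a toy sketch in Python might give each resident a household and a workplace and update infection states day by day (every parameter below is invented for illustration and is not taken from the study):

import random

random.seed(1)

N_PEOPLE, N_HOUSEHOLDS, N_WORKPLACES = 1000, 350, 60
P_TRANSMIT, INFECTIOUS_DAYS = 0.03, 7   # made-up parameters

class Person:
    def __init__(self, pid):
        self.id = pid
        self.household = random.randrange(N_HOUSEHOLDS)
        self.workplace = random.randrange(N_WORKPLACES)
        self.state = "S"            # S = susceptible, I = infected, R = recovered
        self.days_infected = 0

people = [Person(i) for i in range(N_PEOPLE)]
for seed_case in random.sample(people, 5):   # a handful of initial infections
    seed_case.state = "I"

def mix(location_attr):
    """Expose susceptibles to infected agents sharing the same location."""
    groups = {}
    for p in people:
        groups.setdefault(getattr(p, location_attr), []).append(p)
    for members in groups.values():
        n_infected = sum(1 for p in members if p.state == "I")
        risk = 1 - (1 - P_TRANSMIT) ** n_infected
        for p in members:
            if p.state == "S" and random.random() < risk:
                p.state = "I"

for day in range(60):
    mix("workplace")    # daytime contacts
    mix("household")    # evening contacts
    for p in people:
        if p.state == "I":
            p.days_infected += 1
            if p.days_infected >= INFECTIOUS_DAYS:
                p.state = "R"

print(sum(p.state != "S" for p in people), "of", N_PEOPLE, "residents ever infected")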

Among the study's findings is that prioritizing vaccination of high-risk individuals has only a marginal effect on the number of COVID-19 deaths. To obtain significant improvements, a very large fraction of the town's population would, in fact, have to be vaccinated. Importantly, the benefits of the restrictive measures in place during the first wave greatly surpass those of any of these selective vaccination scenarios. Even with a vaccine available, social distancing, masks, and mobility restrictions will remain key tools in the fight against COVID-19.

Porfiri pointed out that focusing on a city of New Rochelle's size was crucial to the research because most cities in the U.S. have comparable population sizes and concentrations.
"We chose New Rochelle not only because of its place in the COVID timeline, but because agent-based modelling for mid-size towns is relatively unexplored despite the U.S. being largely composed of such towns and small cities," he said.

Supported by expert knowledge and informed by officially reported COVID-19 data, the model incorporates detailed elements of pandemic spread within a statistically realistic population. Along with testing, treatment, and vaccination options, the model also accounts for the burden of other illnesses with symptoms similar to those of COVID-19.

Unique to the model is the possibility to explore different testing approaches -- in hospitals or drive-through facilities -- and vaccination strategies that could prioritize vulnerable groups.
"We think decision making by public authorities could benefit from this model, not only because it is 'open source,' but because it offers a 'fine-grain' resolution at the level of the individual and a wide range of features," noted Porfiri.

The research team included Zhong-Ping Jiang, professor of electrical and computer engineering; post-docs Agnieszka Truszkowska, who led the implementation of the computational framework for the project, and Brandon Behring; and graduate student Jalil Hasanyan; as well as Lorenzo Zino from the University of Groningen, Sachit Butail from Southern Illinois University, Emanuele Caroppo from the Università Cattolica del Sacro Cuore, and Alessandro Rizzo from Turin Polytechnic, who is also a visiting professor of mechanical and aerospace engineering at NYU Tandon.

Credit: 
NYU Tandon School of Engineering

NASA explores solar wind with new view of small sun structures

image: Scientists used image processing on high-resolution images of the Sun to reveal distinct "plumelets" within structures on the Sun called solar plumes. The full-disk Sun and the left side of the inset image were captured by NASA's Solar Dynamics Observatory in a wavelength of extreme ultraviolet light and processed to reduce noise. The right side of the inset has been further processed to enhance small features in the images, revealing the edges of the plumelets in clear detail. These plumelets could help scientists understand how and why disturbances in the solar wind form.

Image: 
NASA/SDO/Uritsky, et al.

Scientists have combined NASA data and cutting-edge image processing to gain new insight into the solar structures that create the Sun's flow of high-speed solar wind, detailed in new research published today in The Astrophysical Journal. This first look at relatively small features, dubbed "plumelets," could help scientists understand how and why disturbances form in the solar wind.

The Sun's magnetic influence stretches billions of miles, far past the orbit of Pluto and the planets, defined by a driving force: the solar wind. This constant outflow of solar material carries the Sun's magnetic field out into space, where it shapes the environments around Earth, other worlds, and in the reaches of deep space. Changes in the solar wind can create space weather effects that influence not only the planets, but also human and robotic explorers throughout the solar system -- and this work suggests that relatively small, previously-unexplored features close to the Sun's surface could play a crucial role in the solar wind's characteristics.

"This shows the importance of small-scale structures and processes on the Sun for understanding the large-scale solar wind and space weather system," said Vadim Uritsky, a solar scientist at the Catholic University of America and NASA's Goddard Space Flight Center, who led the study.

Like all solar material, which is made up of a type of ionized gas called plasma, the solar wind is controlled by magnetic forces. And the magnetic forces in the Sun's atmosphere are particularly complex: The solar surface is threaded through with a constantly-changing combination of closed loops of magnetic field and open magnetic field lines that stretch out into the solar system.

It's along these open magnetic field lines that the solar wind escapes from the Sun into space. Areas of open magnetic field on the Sun can create coronal holes, patches of relatively low density that appear as dark splotches in certain ultraviolet views of the Sun. Often, embedded within these coronal holes are geysers of solar material that stream outward from the Sun for days at a time, called plumes. These solar plumes appear bright in extreme ultraviolet views of the Sun, making them easily visible to observatories like NASA's Solar Dynamics Observatory satellite and other spacecraft and instruments. As regions of particularly dense solar material in open magnetic field, plumes play a large role in creating the high-speed solar wind -- meaning that their attributes can shape the characteristics of the solar wind itself.

Using high-resolution observations from NASA's Solar Dynamics Observatory satellite, or SDO, along with an image processing technique developed for this work, Uritsky and collaborators found that these plumes are actually made up of much smaller strands of material, which they call plumelets. While the entirety of the plume stretches out across about 70,000 miles in SDO's images, the width of each plumelet strand is only a few thousand miles across, ranging from around 2,300 miles at the smallest to around 4,500 miles in width for the widest plumelets observed.

Though earlier work has hinted at structure within solar plumes, this is the first time scientists have observed plumelets in sharp focus. The techniques used to process the images reduced the "noise" in the solar images, creating a sharper view that revealed the plumelets and their subtle changes in clear detail.
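
The team's processing pipeline is described in the paper; purely as a generic illustration of the kind of operation involved -- suppressing pixel noise and then amplifying small-scale structure against the large-scale background -- an unsharp-mask-style filter on an image array could be sketched in Python as follows (this is not the authors' method, the parameters are arbitrary, and numpy and scipy are assumed to be available):

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_small_features(image, noise_sigma=1.0, background_sigma=15.0, gain=3.0):
    """Lightly denoise, subtract a smooth background, then amplify the fine residual."""
    denoised = gaussian_filter(image, sigma=noise_sigma)            # suppress pixel noise
    background = gaussian_filter(denoised, sigma=background_sigma)  # large-scale glow
    detail = denoised - background                                  # small-scale structure
    return background + gain * detail

# Synthetic stand-in for an SDO extreme-ultraviolet frame:
frame = np.random.poisson(100, size=(512, 512)).astype(float)
sharpened = enhance_small_features(frame)   # same shape, with fine structure boosted relative to the background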

Their work, focused on a solar plume observed on July 2-3, 2016, shows that the plume's brightness comes almost entirely from the individual plumelets, without much additional fuzz between structures. This suggests that plumelets are not merely features within the larger system of a plume, but rather the building blocks of which plumes are made.

"People have seen structure in and at the base of plumes for a while," said Judy Karpen, one of the authors of the study and chief of the Space Weather Laboratory in the Heliophysics Science Division at NASA Goddard. "But we've found that the plume itself is a bundle of these denser, flowing plumelets, which is very different from the picture of plumes we had before."

They also found that the plumelets move individually, each oscillating on its own -- suggesting that the small-scale behavior of these structures could be a major driver behind disruptions in the solar wind, in addition to their collective, large-scale behavior.

Searching for plumelet signatures

The processes that create the solar wind often leave signatures in the solar wind itself -- changes in the wind's speed, composition, temperature, and magnetic field that can provide clues about the underlying physics on the Sun. Solar plumelets may also leave such fingerprints, revealing more about their exact role in the solar wind's creation, even though finding and interpreting them can be its own complex challenge.

One key source of data will be NASA's Parker Solar Probe, which has flown closer to the Sun than any other spacecraft and will reach distances as close as 4 million miles from the solar surface by the end of its mission. The spacecraft captures high-resolution measurements of the solar wind as it swings by the Sun every few months. Its observations, closer to the Sun and more detailed than those from prior missions, could reveal plumelet signatures.

In fact, one of Parker Solar Probe's early and unexpected findings might be connected to plumelets. During its first solar flyby in November 2018, Parker Solar Probe observed sudden reversals in the magnetic field direction of the solar wind, nicknamed "switchbacks." The cause and the exact nature of the switchbacks is still a mystery to scientists, but small-scale structures like plumelets could produce similar signatures.

Finding signatures of the plumelets within the solar wind itself also depends on how well these fingerprints survive their journey away from the Sun -- or whether they would be smudged out somewhere along the millions of miles they travel from the Sun to our observatories in space.

Evaluating that question will rely on remote observatories, like ESA and NASA's Solar Orbiter, which has already taken the closest-ever images of the Sun, including a detailed view of the solar surface -- images that will only improve as the spacecraft gets closer to the Sun. NASA's upcoming PUNCH mission -- led by Craig DeForest, one of the authors on the plumelets study -- will study how the Sun's atmosphere transitions to the solar wind and could also provide answers to this question.

"PUNCH will directly observe how the Sun's atmosphere transitions to the solar wind," said Uritsky. "This will help us understand if the plumelets can survive as they propagate away from the Sun -- if can they actually be injected into the solar wind."

Credit: 
NASA/Goddard Space Flight Center

Constructing termite turrets without a blueprint

image: The interior of a termite nest shows complex, interconnecting floors and ramps.

Image: 
(Image courtesy of Guy Theraulaz/Harvard SEAS)

Following a series of studies on termite mound physiology and morphogenesis over the past decade, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences have now developed a mathematical model to help explain how termites construct their intricate mounds.

The research is published in the Proceedings of the National Academy of Sciences.

"Termite mounds are amongst the greatest examples of animal architecture on our planet," said L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics and lead author of the study. "What are they for? How do they work? How are they built? These are the questions that have puzzled many scientists for a long time."

In previous research, Mahadevan and his team showed that day-to-night temperature variations drive convective flow in the mound that not only ventilates the colony but also moves pheromone-like cues around, which trigger building behavior in termites.

Here, the team zoomed in further to understand how termites build the intricately connected floors in individual mounds without a plan or a planner. With experimentalists from the University of Toulouse, France, led by Guy Theraulaz, the researchers mapped the interior structures of two nests using CT scans and quantified the spacing and arrangement of floors and ramps. Adding to the complexity of the nests is the fact that termites build not only simple ramps to connect floors but also spiral ramps, like the ramps in parking garages, that connect multiple floors.

Using these visualizations and incorporating the previous findings on how factors such as daily temperature shifts and pheromone flows drive building, OEB graduate student Alexander Heyde and Mahadevan constructed a mathematical framework to explain the layout of the mound.

Heyde and Mahadevan thought of each component of the mound -- the air, the mud and the termites -- as intermixed fluids that vary in space and time.

"We can think of the collection of hundreds of thousands of termites as a fluid that can sense its environment and act upon it," said Heyde. "Then you have a real fluid, air, transporting pheromones through that environment, which drives new behaviors. Finally, you have mud, which is moved around by the termites, changing the way in which the pheromones flow. Our mathematical framework provided us with clear predictions for the spacing between the layers, and showed the spontaneous formation of linear and helical ramps."

"Here is an example where we see that the usual division between the study of nonliving matter and living matter breaks down," said Mahadevan. "The insects create a micro-environment, a niche, in response to pheromone concentrations. This change in the physical environment changes the flow of pheromones, which then changes the termite behaviors, linking physics and biology through a dynamic architecture that modulates and is modulated by behavior. "

In addition to partially solving the mystery of how these mounds work and are built, the research may well have implications for swarm intelligence in a range of other systems and even understanding aspects of tissue morphogenesis.

Credit: 
Harvard John A. Paulson School of Engineering and Applied Sciences

Exploration of toxic Tiger Rattlesnake venom advances use of genetic science techniques

video: A research team led by the University of South Florida studied the genome of the Tiger Rattlesnake.

Image: 
Michael P. Hogan, Florida State University

The Tiger Rattlesnake possesses the simplest, yet most toxic venom of any rattlesnake species, and now new research from a team led by a University of South Florida biologist can explain the genetics behind the predator's fearsome bite.

In a study published in the latest edition of the Proceedings of the National Academy of Sciences, USF Department of Integrative Biology Assistant Professor Mark Margres and colleagues across the southeastern United States have sequenced the genome of the Tiger Rattlesnake to understand the genotype underlying the venom trait. Despite the simplicity of the Tiger Rattlesnake's venom, Margres says it is roughly 40 times more toxic than that of the Eastern Diamondback Rattlesnake found here in Florida.

Their work is the most complete characterization of the venom gene-regulatory network to date, and its identification of the key mechanisms behind the particularly toxic venom will help scientists answer a wide array of genetic questions.

"Simple genotypes can produce complex traits," Margres said. "Here, we have shown the opposite is also true - a complex genotype can produce simple traits."

Margres collaborated with colleagues at Clemson University, Florida State University and the University of South Alabama on the project, which sought to explain whether trait differences derive from differences in the number of genes, their sequence, or how they are regulated. Their work is only the second time a rattlesnake genome has been decoded.

An organism's genotype is the set of genes it carries, and its phenotype is all of its observable characteristics, which can be influenced by its genes, the environment in which it lives, and other factors. Evolutionary biologists work to understand how genes influence the variation in phenotype among otherwise similar organisms. In this case, they looked at why different species of rattlesnakes differ in venom composition and toxicity.

Tiger Rattlesnakes are native to the Sonoran Desert of southern Arizona and northern Mexico, where the relatively small pit viper preys on lizards and rodents. While some species of rattlesnakes have complex venoms that are the result of scores of genes, Margres said the Tiger Rattlesnake's venom is quite simple - as few as 15 of its 51 toxin-producing genes actively drive the production of the proteins and peptides that attack its prey's nervous system, force blood pressure to drop and cause blood clotting to cease.

The team found that the number of venom genes greatly exceeds the number of proteins produced in the simple phenotype, indicating that a complex process lies at the heart of the toxic venom and that the Tiger Rattlesnake even has toxin genes to spare.

"Only about half of the venom genes in the genotype were expressed," Margres said. "To me, the interesting part is why are the non-expressed genes still present? These genes can make functional toxins, they just don't. That needs to be explored further."

Beyond understanding this one species of venomous snake, Margres said the research will help advance genetic science by showing that techniques more commonly used in genetic research on mice and fruit flies can also work when applied to less-studied organisms like snakes. The team used genetic sequencing techniques that are common in human genetics research and, in doing so, opened the door for scientists to understand the genotype-phenotype relationship in many other organisms.

Another potential side benefit of the research, Margres said, is that snake venom is used in medicine for humans to combat stroke and high blood pressure. The more scientists understand about venom, the better medical engineering can apply that knowledge in drug discovery and development.

Credit: 
University of South Florida

Rush researchers demonstrate success with new therapy for COVID-19

A new therapy developed by researchers at Rush University Medical Center is showing success as a way to prevent COVID-19 symptoms in mice.

In a study published in the Journal of Neuroimmune Pharmacology, mouse models of COVID-19 showed positive results when a small peptide was introduced nasally. The peptide proved effective in reducing fever, protecting the lungs, improving heart function and reversing cytokine storm -- a condition in which an infection triggers the immune system to flood the bloodstream with inflammatory proteins. The researchers also report success in preventing the disease from progressing.

"This could be a new approach to prevent SARS-CoV-2 infection and protect COVID-19 patients from breathing problems and cardiac issues," said Kalipada Pahan, PhD, the Floyd A. Davis Professor of Neurology at Rush and a Research Career Scientist at the Jesse Brown VA Medical Center. "Understanding the mechanism is proving important to developing effective therapies for COVID-19."

Many COVID-19 patients in the ICU suffer from cytokine storm that affects lungs, heart and other organs. Although anti-inflammatory therapies such as steroids are available, very often these treatments cause immunosuppression.

"Since SARS-CoV-2 binds to angiotensin-converting enzyme 2 (ACE2) for entering into the cells, we have designed a hexapeptide corresponding to the ACE2-interacting domain of SARS-CoV-2 (AIDS) to inhibit the binding of virus with ACE-2," Pahan said. "AIDS peptide inhibits cytokines produced by only SARS-CoV-2 spike protein, not other inflammatory stimuli, indicating that AIDS peptide would not cause immunosuppression. We found that after intranasal treatment, AIDS peptide reduces fever, protects lungs, normalizes heart function, and enhances locomotor activities in a mouse model of COVID-19."

Although a vaccine is available, COVID-19 could still become a seasonal and opportunistic infection. For example, despite flu vaccination, about 40,000 to 50,000 people die each year in the United States from the flu.

Therefore, a specific medicine that reduces SARS-CoV-2-related inflammation and addresses the respiratory and cardiac issues of COVID-19 will be necessary for better management of the disease even in the post-vaccine era.

"If our AIDS peptide results can be replicated in COVID-19 patients, it would be a remarkable advance in controlling this devastating pandemic," Pahan said.

Common symptoms of COVID-19 are fever, cough, and shortness of breath. With a mortality rate of around 4-5 percent, it is 10 times more lethal than the flu. While anyone is susceptible to COVID-19, those over 60 years of age and those with preexisting conditions, such as hypertension, obesity, asthma, or diabetes, are more vulnerable to severe symptoms. Currently it appears that COVID-19 is more lethal in men than in women. To date, about 2 million people have died worldwide due to COVID-19.

Credit: 
Rush University Medical Center

Sequencing of wastewater useful for control of SARS-CoV-2

Washington, DC - January 19, 2021 - Viral genome sequencing of wastewater can detect new SARS-CoV-2 variants before they are detected by local clinical sequencing, according to a new study reported in mBio, an open-access journal of the American Society for Microbiology. The ability to track SARS-CoV-2 mutations in wastewater could be particularly useful for tracking new variants, like the B.1.1.7 strain that is now widespread in the U.K. and has already been introduced in the U.S.

"SARS CoV-2 virus is excreted by individuals that are infected by COVID -19and the fecal waste ends up in the wastewater systems. By sampling wastewater, we can get information on infections for a whole population. Some wastewater systems serve several thousand people. Some serve hundreds of thousands of people," said principal study investigator Kara Nelson, PhD, professor of civil and environmental engineering, in the College of Engineering at the University of California-Berkeley. "Sampling wastewater is a very efficient way to get information. It is also a less biased source of information, because we can get information from all individuals in the sewershed, whether or not they are being tested in a clinic. We know that there are individuals that have asymptomatic infections that may never get tested."

In the new study, researchers developed and used a novel method for sampling wastewater. When researchers sequence RNA concentrated and extracted from wastewater samples, many different strains may be present, because many individuals contribute to the sample. However, it is challenging to distinguish the SARS-CoV-2 genetic signal from the billions of bacteria and viruses people excrete every day: researchers must identify SARS-CoV-2 amidst a whole soup of other genomic material.

"The way that we need to process the sequence information is complex. One contribution of this paper is the ability to prepare samples for sequencing from wastewater. Instead of directly sequencing everything present, we used an enrichment approach where you first try to enrich the RNA that you are interested in," said Dr. Nelson. "Then we developed a novel bioinformatic analysis approach which was sensitive enough to detect a single nucleotide difference. You can't get any more sensitive than that."

The researchers sequenced RNA directly from sewage collected by municipal utility districts in the San Francisco Bay Area to generate complete and nearly complete SARS-CoV-2 genomes. The researchers found that the major consensus SARS-CoV-2 genotypes detected in the sewage were identical to clinical genomes from the region. While the observed wastewater variants were more similar to local California patient-derived genotypes than they were to those from other regions, they also detected single nucleotide variants that had only been reported from elsewhere in the United States or globally. Thus, the researchers found that wastewater sequencing can provide evidence for recent introductions of viral lineages before they are detected by local clinical sequencing. By understanding which strains of SARS-CoV-2 are present in populations over time, researchers can gain insight into how transmission is occurring and whether new variants, like B.1.1.7, are dominating transmission.

"Of everyone who gets tested, only a fraction of those samples even get sequenced. When you are sampling the wastewater, you get a more comprehensive and less biased data on your population," said Dr. Nelson. "It appears that we might be able to get an earlier signal in the wastewater if a new variant shows up compared to only relying on the sequencing of clinical samples. Just knowing that SARS-CoV-2 is present in a population is the first step in providing information to help control the spread of the virus, but knowing which variants are present provides additional but very useful information."

Credit: 
American Society for Microbiology

Inflamed environment is C. diff paradise

A new study from North Carolina State University shows that the inflammation caused by Clostridioides difficile (C. diff) infection gives the pathogen a two-fold advantage: it creates an inhospitable environment for competing bacteria and provides nutrients that enable C. diff to thrive.

C. diff is a bacterium that causes diarrhea, often with severe or even fatal consequences. As part of its growth cycle, C. diff produces two toxins which cause inflammation and damage the lining of the gut.

"C. diff thrives when other microbes in the gut are absent - which is why it is more prevalent following antibiotic therapy," says Casey Theriot, associate professor of infectious disease at NC State and corresponding author of the research. "But when colonizing the gut, C. diff also produces two large toxins, TcdA and TcdB, which cause inflammation. We wanted to know if these inflammation-causing toxins actually give C. diff a survival benefit - whether the pathogen can exploit an inflamed environment in order to thrive."

Theriot and former NC State postdoctoral researcher Josh Fletcher led a team that studied two varieties of C. diff - one that produced the toxins and a genetically modified strain that did not - both in vitro and in a mouse model. In both models, toxin-producing C. diff was associated with increased inflammation and cellular damage. Genetic analysis found that C. diff in an inflamed environment expressed more genes related to carbohydrate and amino acid metabolism. Finally, in vitro experiments demonstrated that C. diff was able to utilize amino acids from collagen for growth.

"C. diff's toxins damage the cells that line the gut," Theriot says. "These cells contain collagen, which is made up of amino acids and peptides. When collagen is degraded by toxins, C. diff responds by turning on expression of genes that can use these amino acids for growth."

The researchers also noted that an inflamed environment suppressed the numbers of other microbes in the gut. So the toxins play a dual role: by causing inflammation, C. diff both removes competition for resources and creates more resources for its own growth.

"I always found it interesting that C. diff causes such intense inflammation," Fletcher says. "Our research shows that this inflammation may contribute to the persistence of C. diff in the gut environment, prolonging infection."

Credit: 
North Carolina State University

Getting under your skin: Molecular research builds new understanding of skin regeneration

As the air continues to dry and temperatures drop, the yearly battle against dry hands and skin has officially begun. New research from Northwestern University has found new evidence deep within the skin about the mechanisms controlling skin repair and renewal.

Skin's barrier function gives it the unique ability to fight winter woes and retain water for our bodies. The outer layer of the skin, the epidermis, is constantly turning over to replace dead or damaged cells, creating new cells to reinforce the barrier function and heal damage. The gene regulatory mechanisms that control epidermis turnover remain incompletely understood.

"Every month we're covered with a new layer of epidermis," said Northwestern's Xiaomin Bao, who led the study. "The next question is what does that process involve?"

The paper will be published Jan. 19 in the journal Nature Communications.

Bao is an assistant professor of molecular biosciences in the Weinberg College of Arts and Sciences with a joint appointment in the Department of Dermatology at the Feinberg School of Medicine.

Genetic 'junk'

The scientific community has developed a wide breadth of knowledge about proteins, the workhorses of various cellular activities. However, proteins are only encoded by less than 2% of the human genome. Many mysteries remain about the nature of introns, non-coding segments of DNA that make up 24% of the human genome.

Despite the general belief that introns are nothing but "genetic junk," they actually play critical roles in modulating RNA transcription throughout a tissue's lifespan. RNA transcription is the first step of gene expression, in which the information in DNA is copied into RNA, which is then used as a template for synthesizing the proteins that drive the specific function of a cell. Depending on where transcription terminates -- in an intron or at the end of a gene -- an epidermal stem cell will either remain a stem cell or become a specialized cell that contributes to barrier function. Bao said that while it's widely assumed that transcription ends at the end of a gene, her lab's research found data that conflict with this view.

"We found a lot of sites where transcription terminates--not just at the end of a gene, but often within an intron in the middle of a gene," Bao said. "Even the same genes may have different transcription termination patterns in epidermal stem cells versus terminally differentiated cells."

The finding may apply to many more self-renewing regenerative systems in the human body. Future work could also have implications for carcinoma research.

Technology critical to discovery of phenomenon

Skin cells are gaining popularity with researchers in part due to their regenerative properties and readiness to grow in cultures. This allows researchers to apply a variety of state-of-the-art technologies. By growing skin cells and regenerating skin tissue in a petri dish, the Bao Lab can experiment with this fast-growing tissue to determine molecular mechanisms and regulatory elements within DNA.

"Technology development is a key driver that allowed us to uncover this new phenomenon," Bao said.

The team used a novel genomic technique that precisely maps where transcription stops. The integration of proteomic approaches identified RNA-binding proteins that read specific regulatory sequences in the introns. The team further leveraged CRISPR technology to delete genomic sequences in the intron, which provided direct evidence demonstrating the essential roles of introns in modulating gene expression.

Before this research, the mechanisms by which introns govern the switch between a skin stem cell state and a terminally differentiated state (for example, a cell that participates in forming the skin barrier) were unknown. Most studies ignored introns, despite their accounting for 10 to 20 times more sequence than the protein-coding regions (exons) in the human genome.

The study shows that different genes may involve different sets of RNA-binding proteins to recognize the regulatory sequences in their introns. These RNA-binding proteins help maturing RNA "decide" whether to terminate transcription early within an intron or, during differentiation, to ignore those termination sites as protein availability changes.

"We are only beginning to appreciate the roles of intron in human health and diseases," Bao said.

Results of the study could have wider impacts because, according to Bao, the processes regulating skin cells are almost definitely not restricted to skin cells. Future research on other systems, including other epithelial tissues, will likely uncover similar patterns.

"We are very hopeful that what we've found is the first step to knowing what we have ignored in the past," Bao said. "With the contribution of the non-coding genome and in this case, particularly the contribution of introns, this information is revelatory to gene expression. My students also want to know more about the RNA binding proteins that provide specificity in governing which site to use to terminate transcription."

Credit: 
Northwestern University

Stop global roll out of 5G networks until safety is confirmed, urges expert

We should err on the side of caution and stop the global roll out of 5G (fifth generation) telecoms networks until we are certain this technology is completely safe, urges an expert in an opinion piece published online in the Journal of Epidemiology & Community Health.

There is no link between 5G and COVID-19, despite what conspiracy theorists have suggested.

But the transmitter density required for 5G means that more people will be exposed to radio frequency electromagnetic fields (RF-EMFs), and at levels that, emerging evidence suggests, are potentially harmful to health, argues Professor John William Frank of the Usher Institute, University of Edinburgh.

The advent of 5G technology has been hailed by governments and certain vested interests as transformative, promising clear economic and lifestyle benefits, through massively boosting wireless and mobile connectivity at home, work, school and in the community, he says.

But it has become the subject of fierce controversy, fuelled by four key areas of scientific uncertainty and concern.

The lack of clarity about precisely what technology is included in 5G, and a growing but far from comprehensive body of laboratory research indicating the biologically disruptive potential of RF-EMFs;

An almost total lack (as yet) of high quality epidemiological studies of the impact on human health from 5G EMF exposure;

Mounting epidemiological evidence of such effects from previous generations of RF-EMF exposure at lower levels;

Persistent allegations that some national telecoms regulatory authorities have not based their RF-EMF safety policies on the latest science, amid potential conflicts of interest.

5G uses much higher frequency (3 to 300 GHz) radio waves than previous generations, and it makes use of very new -- and, in terms of safety, relatively unevaluated -- supportive technology to enable this higher data transmission capacity, points out Professor Frank.

Its inherent fragility means that transmission-boosting 'cell' antennae are generally required every 100-300 m -- which is far more spatially dense than the transmission masts required for older 2G, 3G and 4G technology, using lower frequency waves, he says.

A dense transmission network is also required to achieve the 'everywhere/anytime' connectivity promised by 5G developers.

Existing 4G systems can service up to 4,000 radio frequency-using devices per square kilometre; 5G systems will connect up to one million devices per square kilometre -- greatly increasing the speed of data transfer (by a factor of 10) and the volume of data transmitted (by a factor of 1,000), he explains.

While several major reviews of the existing evidence on the potential health harms of 5G have been published over the past decade, these have been of "varying scientific quality," suggests Professor Frank.

And they have not stopped the clamour from "a growing number of engineers, scientists, and doctors internationally...calling on governments to raise their safety standards for RF-EMFs, commission more and better research, and hold off on further increases in public exposure, pending clearer evidence of safety," he writes.

Permitted maximum safety limits for RF-EMF exposure vary considerably around the world, he points out.

What's more, '5G systems' is not a consistently defined term, comprising quite different specific technologies and components.

"It is highly likely that each of these many forms of transmission causes somewhat different biological effects--making sound, comprehensive and up-to-date research on those effects virtually impossible," he explains.

Recent reviews of lab data on RF-EMFs indicate that exposures can produce wide-ranging effects, including reproductive, fetal, oncological, neuropsychiatric, skin, eye and immunological effects. But there is absolutely no evidence to suggest that 5G is implicated in the spread of COVID-19, as some conspiracy theorists have suggested, he emphasises.

"There are knowledgeable commentators' reports on the web debunking this theory, and no respectable scientist or publication has backed it," he says, adding: "the theory that 5G and related EMFs have contributed to the pandemic is baseless."

But for the current 5G roll-out, there's a sound basis for invoking 'the precautionary principle' because of significant doubts about the safety of a new and potentially widespread human exposure, which should be reason enough "to call a moratorium on that exposure, pending adequate scientific investigation of its suspected adverse health effects," he says.

There is no compelling public health or safety rationale for the rapid deployment of 5G, he insists. The main gains being promised are either economic, and then possibly for some more than for others, or related to increased consumer convenience, he suggests.

"Until we know more about what we are getting into, from a health and ecological point of view, those putative gains need to wait," he concludes.

Credit: 
BMJ Group

A 'super-puff' planet like no other

image: Artistic rendition of the exoplanet WASP-107b and its star, WASP-107. Some of the star's light streams through the exoplanet's extended gas layer.

Image: 
ESA/Hubble, NASA, M. Kornmesser.

The core mass of the giant exoplanet WASP-107b is much lower than what was thought necessary to build up the immense gas envelope surrounding giant planets like Jupiter and Saturn, astronomers at Université de Montréal have found.

This intriguing discovery by Ph.D. student Caroline Piaulet of UdeM's Institute for Research on Exoplanets (iREx) suggests that gas-giant planets form a lot more easily than previously believed.

Piaulet is part of the groundbreaking research team of UdeM astrophysics professor Björn Benneke that in 2019 announced the first detection of water on an exoplanet located in its star's habitable zone.

Published today in the Astronomical Journal with colleagues in Canada, the U.S., Germany and Japan, the new analysis of WASP-107b's internal structure "has big implications," said Benneke.

"This work addresses the very foundations of how giant planets can form and grow," he said. "It provides concrete proof that massive accretion of a gas envelope can be triggered for cores that are much less massive than previously thought."

As big as Jupiter but 10 times lighter

WASP-107b was first detected in 2017 around WASP-107, a star about 212 light years from Earth in the Virgo constellation. The planet is very close to its star -- over 16 times closer than the Earth is to the Sun. As big as Jupiter but 10 times lighter, WASP-107b is one of the least dense exoplanets known: a type that astrophysicists have dubbed "super-puff" or "cotton-candy" planets.

Piaulet and her team first used observations of WASP-107b obtained at the Keck Observatory in Hawai'i to assess its mass more accurately. They used the radial velocity method, which allows scientists to determine a planet's mass by observing the wobbling motion of its host star due to the planet's gravitational pull. They concluded that the mass of WASP-107b is about one tenth that of Jupiter, or about 30 times that of Earth.

The team then did an analysis to determine the planet's most likely internal structure. They came to a surprising conclusion: with such a low density, the planet must have a solid core of no more than four times the mass of the Earth. This means that more than 85 percent of its mass is included in the thick layer of gas that surrounds this core. By comparison, Neptune, which has a similar mass to WASP-107b, only has 5 to 15 percent of its total mass in its gas layer.
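
Those two rounded figures are enough for a quick back-of-the-envelope check of the 85 percent claim: with a total mass of about 30 Earth masses and a core of at most 4 Earth masses, the gas envelope must account for at least (30 - 4) / 30, or roughly 87 percent of the planet's mass, consistent with the more detailed analysis.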

"We had a lot of questions about WASP-107b," said Piaulet. "How could a planet of such low density form? And how did it keep its huge layer of gas from escaping, especially given the planet's close proximity to its star?

"This motivated us to do a thorough analysis to determine its formation history."

A gas giant in the making

Planets form in the disc of dust and gas that surrounds a young star, called a protoplanetary disc. Classical models of gas-giant planet formation are based on Jupiter and Saturn. In these models, a solid core at least 10 times more massive than the Earth is needed to accumulate a large amount of gas before the disc dissipates.

Without a massive core, gas-giant planets were not thought able to cross the critical threshold necessary to build up and retain their large gas envelopes.

How, then, to explain the existence of WASP-107b, which has a much less massive core? McGill University professor and iREx member Eve Lee, a world-renowned expert on super-puff planets like WASP-107b, has several hypotheses.

"For WASP-107b, the most plausible scenario is that the planet formed far away from the star, where the gas in the disc is cold enough that gas accretion can occur very quickly," she said. "The planet was later able to migrate to its current position, either through interactions with the disc or with other planets in the system."

Discovery of a second planet, WASP-107c

The Keck observations of the WASP-107 system cover a much longer period of time than previous studies have, allowing the UdeM-led research team to make an additional discovery: the existence of a second planet, WASP-107c, with a mass of about one-third that of Jupiter, considerably more than WASP-107b's.

WASP-107c is also much farther from the central star; it takes three years to complete one orbit around it, compared to only 5.7 days for WASP-107b. Also interesting: the eccentricity of this second planet is high, meaning its trajectory around its star is more oval than circular.

"WASP-107c has in some respects kept the memory of what happened in its system," said Piaulet. "Its great eccentricity hints at a rather chaotic past, with interactions between the planets which could have led to significant displacements, like the one suspected for WASP-107b."

Several more questions

Beyond its formation history, there are still many mysteries surrounding WASP-107b. Studies of the planet's atmosphere with the Hubble Space Telescope published in 2018 revealed one surprise: it contains very little methane.

"That's strange, because for this type of planet, methane should be abundant," said Piaulet. "We're now reanalysing Hubble's observations with the new mass of the planet to see how it will affect the results, and to examine what mechanisms might explain the destruction of methane."

The young researcher plans to continue studying WASP-107b, hopefully with the James Webb Space Telescope set to launch in 2021, which will provide a much more precise idea of the composition of the planet's atmosphere.

"Exoplanets like WASP-107b that have no analogue in our Solar System allow us to better understand the mechanisms of planet formation in general and the resulting variety of exoplanets," she said. "It motivates us to study them in great detail."

Credit: 
University of Montreal

Successive governments' approach to obesity policies has destined them to fail

Successive governments' approach to obesity policies has destined them to fail, say researchers.

Government obesity policies in England over the past three decades have largely failed because of problems with implementation, lack of learning from past successes or failures, and a reliance on trying to persuade individuals to change their behaviour rather than tackling unhealthy environments. This is the conclusion of new research by a team at the University of Cambridge funded by the NIHR School for Public Health Research.

The researchers say their findings may help to explain why, after nearly thirty years of government obesity policies, obesity prevalence in England has not fallen and substantial inequalities persist. According to a report by NHS Digital in May 2020, 67% of men and 60% of women live with overweight or obesity, including 26% of men and 29% of women who suffer clinical obesity. More than a quarter of children aged two to 15 years live with obesity or overweight and the gap between the least and most deprived children is growing.

Successive governments have tried to tackle the obesity problem: in research published today in The Milbank Quarterly, Dolly Theis and Martin White in the Centre for Diet and Activity Research (CEDAR) at the University of Cambridge identified 14 government-led obesity strategies in England from 1992-2020. They analysed these strategies - which contained 689 wide-ranging policies - to determine whether they have been fit for purpose in terms of their strategic focus, content, basis in theory and evidence, and implementation viability.

Seven of the strategies were broad public health strategies containing obesity as well as non-obesity policies such as on tobacco smoking and food safety. The other seven contained only obesity-related policies, such as on diet and/or physical activity. Twelve of the fourteen strategies contained obesity reduction targets. However, only five of these were specific, numerical targets rather than statements such as 'aim to reduce obesity'.

Theis said: "In almost 30 years, successive UK governments have proposed hundreds of wide-ranging policies to tackle obesity in England, but these are yet to have an impact on levels of obesity or reduce inequality. Many of these policies have largely been flawed from the outset and proposed in ways that make them difficult to implement. What's more, there's been a fairly consistent failure to learn from past mistakes. Governments appear more likely to publish another strategy containing the same, recycled policies than to implement policies already proposed.

"If we were to produce a report card, overall we might only give them 4 out of 10: could do much better."

Theis and White identified seven criteria necessary for effective implementation, but found that only 8% of policies fulfilled all seven criteria, while the largest proportion of policies (29%) did not fulfil a single one of the criteria. Fewer than a quarter (24%) included a monitoring or evaluation plan, just 19% cited any supporting scientific evidence, and less than one in ten (9%) included details of likely costs or an allocated budget.

The lack of such basic information as the cost of implementing policies was highlighted in a recent National Audit Office report on the UK Government's approach to tackling childhood obesity in England, which found that the Department of Health and Social Care did not know how much central government spent tackling childhood obesity.

"No matter how well-intended and evidence-informed a policy, if it is nebulously proposed without a clear plan or targets it makes implementation difficult and it is unlikely the policy will be deemed successful," added Theis. "One might legitimately ask, what is the purpose of proposing policies at all if they are unlikely to be implemented?"

Thirteen of the 14 strategies explicitly recognised the need to reduce health inequality, including one strategy that was fully focused on reducing inequality in health. Yet the researchers say that only 19% of policies proposed were likely to be effective in reducing inequalities because of the measures proposed.

UK governments have to date largely favoured a less interventionist approach to reducing obesity, regardless of political party, prioritising provision of information to the public in their obesity strategies, rather than more directly shaping the choices available to individuals in their living environments through regulation or taxes. The researchers say that governments may have avoided a more deterrence-based, interventionist approach for fear of being perceived as 'nannying' - or because they lacked knowledge about what more interventionist measures are likely to be effective.

There is, however, evidence to suggest that policymaking is changing. Even though the current UK government still favours a less interventionist approach, more recent strategies have contained some fiscal and regulatory policies, such as banning price promotions of unhealthy products, banning unhealthy food advertisements and the Soft Drinks Industry Levy. This may be because the government has come under increasing pressure and recognises that previous approaches have not been effective, that more interventionist approaches are increasingly acceptable to the public, and because evidence to support regulatory approaches is mounting.

The researchers found little attempt to evaluate the strategies and build on their successes and failures. As a result, many policies proposed were similar or identical over multiple years, often with no reference to their presence in a previous strategy. Only one strategy (Saving Lives, published in 1999) commissioned a formal independent evaluation of the previous government's strategy.

"Until recently, there seems to have been an aversion to conducting high quality, independent evaluations, perhaps because they risk demonstrating failure as well as success," added White. "But this limits a government's ability to learn lessons from past policies. This may be potentially compounded by the often relatively short timescales for putting together a strategy or implementing policies.

"Governments need to accompany policy proposals with information that ensures they can be successfully implemented, and with built-in evaluation plans and time frames. Important progress has been made with commissioning evaluations in the last three years. But, we also need to see policies framed in ways that make them readily implementable. We also need to see a continued move away from interventions that rely on individual's changing their diet and activity, and towards policies that change the environments that encourage people to overeat and to be sedentary in the first place."

Living with obesity or excess weight is associated with long-term physical, psychological and social problems. Related health problems, such as type-2 diabetes, cardiovascular disease and cancers, are estimated to cost NHS England at least £6.1 billion per year and the overall cost of obesity to wider society in England is estimated to be £27 billion per year. The COVID-19 pandemic has brought to light additional risks for people living with obesity, such as an increased risk of hospitalisation and more serious disease.

Credit: 
University of Cambridge

Inexpensive battery charges rapidly for electric vehicles, reduces range anxiety

image: A thermally modulated battery for mass-market electric vehicles -- with no range anxiety, unsurpassed safety, low cost, and no cobalt -- is being developed by a team of Penn State engineers.

Image: 
Chao-Yang Wang's lab, Penn State

Range anxiety, the fear of running out of power before being able to recharge an electric vehicle, may be a thing of the past, according to a team of Penn State engineers who are looking at lithium iron phosphate batteries that have a range of 250 miles with the ability to charge in 10 minutes.

"We developed a pretty clever battery for mass-market electric vehicles with cost parity with combustion engine vehicles," said Chao-Yang Wang, William E. Diefenderfer Chair of mechanical engineering, professor of chemical engineering and professor of materials science and engineering, and director of the Electrochemical Engine Center at Penn State. "There is no more range anxiety and this battery is affordable."

The researchers also say that the battery should be good for 2 million miles in its lifetime.

They report today (Jan. 18) in Nature Energy that the key to long-life and rapid recharging is the battery's ability to quickly heat up to 140 degrees Fahrenheit, for charge and discharge, and then cool down when the battery is not working.

"The very fast charge allows us to downsize the battery without incurring range anxiety," said Wang.

The battery uses a self-heating approach previously developed in Wang's center. The self-heating battery uses a thin nickel foil with one end attached to the negative terminal and the other extending outside the cell to create a third terminal. Once electrons flow, they rapidly heat the nickel foil through resistance heating and warm the inside of the battery. Once the battery's internal temperature reaches 140 degrees F, the switch opens and the battery is ready for rapid charge or discharge.
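
For a rough sense of scale (every number below is an assumption chosen to illustrate the resistive-heating idea, not a figure from the paper), the time to reach the working temperature follows from energy = mass x specific heat x temperature rise, divided by the heating power of the foil:

# All values are hypothetical, for illustration only.
cell_mass_kg = 1.0                   # assumed mass of one cell
specific_heat_j_per_kg_k = 1000.0    # order-of-magnitude value for a lithium-ion cell
delta_t_k = 60.0 - 25.0              # room temperature (25 C) up to 140 F (= 60 C)
heating_power_w = 200.0              # assumed power dissipated in the nickel foil

energy_needed_j = cell_mass_kg * specific_heat_j_per_kg_k * delta_t_k
time_s = energy_needed_j / heating_power_w
print(f"roughly {time_s:.0f} s (about {time_s / 60:.1f} min) to reach the working temperature")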

Wang's team modeled this battery using existing technologies and innovative approaches. They suggest that with this self-heating method, they can use low-cost materials for the battery's cathode and anode and a safe, low-voltage electrolyte. The cathode is thermally stable lithium iron phosphate, which does not contain any expensive and critical materials like cobalt. The anode is made of very-large-particle graphite, a safe, light and inexpensive material.

Because of the self-heating, the researchers said they do not have to worry about uneven deposition of lithium on the anode, which can cause lithium spikes that are dangerous.

"This battery has reduced weight, volume and cost," said Wang. "I am very happy that we finally found a battery that will benefit the mainstream consumer mass market."

According to Wang, these smaller batteries can deliver substantial energy and power upon heating -- 40 kilowatt-hours of energy and 300 kilowatts of power. An electric vehicle with this battery could go from zero to 60 miles per hour in 3 seconds and would drive like a Porsche, he said.

"This is how we are going to change the environment and not contribute to just the luxury cars," said Wang. "Let everyone afford electric vehicles."

Credit: 
Penn State

New discovery in breast cancer treatment

image: Androgen counterbalances estrogen-driven breast cancer

Image: 
University of Adelaide

Researchers at the University of Adelaide have found new evidence about the positive role of androgens in breast cancer treatment with immediate implications for women with estrogen receptor-driven metastatic disease.

Published today in Nature Medicine, the international study conducted in collaboration with the Garvan Institute of Medical Research, looked at the role of androgens - commonly thought of as male sex hormones but also found at lower levels in women - as a potential treatment for estrogen receptor positive breast cancer.

Watch a video explainer about the new study at https://youtu.be/NYalzv4C35U

In normal breast development, estrogen stimulates and androgen inhibits growth at puberty and throughout adult life. Abnormal estrogen activity is responsible for the majority of breast cancers, but the role of androgen activity in this disease has been controversial.

Androgens were historically used to treat breast cancer, but knowledge of hormone receptors in breast tissue was rudimentary at the time and the treatment's efficacy misunderstood. Androgen therapy was discontinued due to virilising side effects and the advent of anti-estrogenic endocrine therapies.

While endocrine therapy is standard-of-care for estrogen receptor positive breast cancer, resistance to these drugs is the major cause of breast cancer mortality.

Professor Wayne Tilley, Director of the Dame Roma Mitchell Cancer Research Laboratories, and Associate Professor Theresa Hickey, Head of the Breast Cancer Group, who led the study, say the need for alternative treatment strategies has renewed interest in androgen therapy for breast cancer.

However, previous studies had produced conflicting evidence on how best to therapeutically target the androgen receptor for treatment of breast cancer, which caused widespread confusion and hampered clinical application.

Using cell-line and patient-derived models, a global team, including researchers at the University of Adelaide and the Garvan Institute, demonstrated that androgen receptor activation by natural androgen or a new androgenic drug had potent anti-tumour activity in all estrogen receptor positive breast cancers, even those resistant to current standard-of-care treatments. In contrast, androgen receptor inhibitors had no effect.

"This work has immediate implications for women with metastatic estrogen receptor positive breast cancer, including those resistant to current forms of endocrine therapy,'' said Associate Professor Theresa Hickey.

Professor Tilley added: "We provide compelling new experimental evidence that androgen receptor stimulating drugs can be more effective than existing (e.g. Tamoxifen) or new (e.g. Palbociclib) standard-of-care treatments and, in the case of the latter, can be combined to enhance growth inhibition.

"Moreover, currently available selective androgen receptor activating agents lack the undesirable side effects of natural androgens, and can confer benefits in women, including promotion of bone, muscle and mental health."

Associate Professor Elgene Lim, a breast oncologist and Head of the Connie Johnson Breast Cancer Research Lab at the Garvan Institute, said: "The new insights from this study should clarify the widespread confusion over the role of the androgen receptor in estrogen receptor driven breast cancer. Given the efficacy of this treatment strategy at multiple stages of disease in our study, we hope to translate these findings into clinical trials as a new class of endocrine therapy for breast cancer."

Dr Stephen Birrell, a breast cancer specialist and pioneer in androgens and women's health who was part of the Adelaide based team, pointed out that this seminal finding has application beyond the treatment of breast cancer, including breast cancer prevention and treatment of other disorders also driven by estrogen.

Chloe Marshall, 33, is facing a breast cancer recurrence while pregnant with her second child. She said endocrine therapy has terrible side effects and that there is an urgent need for better options to prevent and treat breast cancer recurrence.

"I was diagnosed with a hormone positive breast cancer in July 2017 and subsequently found out I carried the BRACA gene,'' she said.

"I underwent a double mastectomy and neo adjuvant chemotherapy followed by two years of hormone suppressive treatment. The hormone suppressive treatment that I experienced was one of the hardest parts of having cancer. The impact it has on your mind/life/body is incredibly challenging.

"Now, three years later, I find myself with a recurrent cancer while 25 weeks pregnant. The thought of having hormone suppressive treatment for a further five to ten years is overwhelming.

"I think this study will help patients like myself have hope that there is another answer to life after the cancer diagnosis."

Credit: 
University of Adelaide

How cells move and don't get stuck

image: Cancer cells moving on glycoprotein strips: these strips act like splints that make it possible to better control and study the cells' movement.

Image: 
Rädler Lab, Ludwig Maximilians Universität München

Cell velocity, or how fast a cell moves, is known to depend on how sticky the surface is beneath it, but the precise mechanisms of this relationship have remained elusive for decades. Now, researchers from the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) and Ludwig Maximilians Universität München (LMU) have figured out the precise mechanics and developed a mathematical model capturing the forces involved in cell movement. The findings, reported in the journal Proceedings of the National Academy of Sciences (PNAS), provide new insight for developmental biology and potential cancer treatment.

Cell movement is a fundamental process, especially critical during development when cells differentiate into their target cell type and then move to the correct tissue. Cells also move to repair wounds, while cancer cells crawl to the nearest blood vessel to spread to other parts of the body.

"The mathematical model we developed can now be used by researchers to predict how different cells will behave on various substrates," says Professor Martin Falcke, who heads MDC's Mathematical Cell Physiology Lab and co-led the research. "Understanding these basic movements in precise detail could provide new targets to interrupt tumor metastasis."

Teaming up to pin down

The finding comes thanks to experimental physicists at LMU teaming up with theoretical physicists at MDC. The experimentalists, led by Professor Joachim Rädler, tracked how quickly more than 15,000 cancer cells moved along narrow lanes on a sticky surface, where the stickiness alternated between low and high. This allowed them to observe what happens as the cell transitions between stickiness levels, which is more representative of the dynamic environment inside the body.

Then, Falcke and Behnam Amiri, co-first author of the paper and a Ph.D. student in Falcke's lab, used the large dataset to develop a mathematical equation that captures the elements shaping cell motility.

"Previous mathematical models trying to explain cell migration and motility are very specific, they only work for one feature or cell type," Amiri says. "What we tried to do here is keep it as simple and general as possible."

The approach worked even better than expected: the model matched the data gathered at LMU and held true for measurements of several other cell types taken over the past 30 years. "This is exciting," Falcke says. "It's rare that you find a theory explaining such a large spectrum of experimental results."

Friction is key

When a cell moves, it pushes out its membrane in the direction of travel, expanding an internal network of actin filaments as it goes, and then peels off its back end. How fast this happens depends on adhesion bonds that form between the cell and the surface beneath it. When there are no bonds, the cell can hardly move because the actin network doesn't have anything to push off against. The reason is friction: "When you are on ice skates you cannot push a car, only when there is enough friction between your shoes and the ground can you push a car," Falcke says.

As the number of bonds increases, creating more friction, the cell can generate more force and move faster, until the surface is so sticky that it becomes much harder to pull off the back end, slowing the cell down again.
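
This biphasic trend can be illustrated with a toy expression, sketched below in Python. It is not the model published in PNAS; both the functional form and the constants are arbitrary choices that only reproduce the qualitative rise-then-fall of velocity with stickiness.

    # Toy illustration of the biphasic velocity-versus-adhesion behavior described
    # above. This is NOT the published PNAS model; the functional form and the
    # constants are arbitrary and only reproduce the qualitative trend.
    def toy_velocity(bonds, propulsion=1.0, saturation=1.0, rear_drag=0.05):
        """Velocity rises as friction lets actin push, then falls as the rear end sticks."""
        traction = propulsion * bonds / (saturation + bonds)  # propulsion saturates with bond number
        peel_penalty = 1.0 / (1.0 + rear_drag * bonds ** 2)   # rear end gets harder to peel off
        return traction * peel_penalty

    for n in (0.1, 0.5, 1, 2, 5, 10, 20):
        print(f"bond density {n:5.1f} -> relative velocity {toy_velocity(n):.3f}")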

Slow, but not stuck

The researchers investigated what happens when the front and rear ends of the cell experience different levels of stickiness. They were particularly curious to figure out what happens when it is stickier under the back end of the cell than the front, because that is when the cell could potentially get stuck, unable to generate enough force to pull off the back end.

This might have been the case if the adhesion bonds were more like screws, holding the cell to the substrate. At first, Falcke and Amiri included this type of "elastic" force in their model, but the equation only worked with friction forces.

"For me, the most challenging part was to wrap my mind around this mechanism working only with friction forces," Falcke says, because there is nothing for the cell to firmly latch onto. But it is the friction-like forces that allow the cell to keep moving, even when bonds are stronger in the back than the front, slowly peeling itself off like scotch tape. "Even if you pull just a little with a weak force, you are still able to peel the tape off - very slowly, but it comes off," Falcke says. "This is how the cell keeps itself from getting stuck."

The team is now investigating how cells move in two dimensions, including how they make hard right and left turns, and U-turns.

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Synthesis of potent antibiotic follows unusual chemical pathway

image: The synthesis of the potent antibiotic thiostrepton uses a radical SAM protein TsrM, whose crystal structure is shown at left while bound to an iron-sulfur cluster and cobalamin. New images of this crystal structure allowed researchers from Penn State to infer the chemical steps during the antibiotic's synthesis (right), as a methyl group moves from a molecule called S-adenosyl-L-methionine (SAM) to the cobalamin in TsrM to the substrate tryptophan.

Image: 
Booker Lab, Penn State

UNIVERSITY PARK, Pa. -- Images of a protein involved in creating a potent antibiotic reveal the unusual first steps of the antibiotic's synthesis. The improved understanding of the chemistry behind this process, detailed in a new study led by Penn State chemists, could allow researchers to adapt this and similar compounds for use in human medicine.

"The antibiotic thiostrepton is very potent against Gram-positive pathogens and can even target certain breast cancer cells in culture," said Squire Booker, a biochemist at Penn State and investigator with the Howard Hughes Medical Institute. "While it has been used topically in veterinary medicine, so far it has been ineffective in humans because it is poorly absorbed. We studied the first steps in thiostrepton's biosynthesis in hopes of eventually being able to hijack certain processes and make analogs of the molecule that might have better medicinal properties. Importantly, this reaction is found in the biosynthesis of numerous other antibiotics, and so the work has the potential to be far reaching."

The first step in thiostrepton's synthesis involves a process called methylation. A molecular tag called a methyl group, which is important in many biological processes, is added to a molecule of tryptophan, the reaction's substrate. One of the major systems for methylating compounds that are not particularly reactive, like tryptophan, involves a class of enzymes called radical SAM proteins.

"Radical SAM proteins usually use an iron-sulfur cluster to cleave a molecule called S-adenosyl-L-methionine (SAM), producing a "free radical" or an unpaired electron that helps move the reaction forward," said Hayley Knox, a graduate student in chemistry at Penn State and first author of the paper. "The one exception that we know about so far is the protein involved in the biosynthesis of thiostrepton, called TsrM. We wanted to understand why TsrM doesn't do radical chemistry, so we used an imaging technique called X-ray crystallography to investigate its structure at several stages throughout its reaction."

In all radical SAM proteins characterized to date, SAM binds directly to the iron-sulfur cluster, which helps to fragment the molecule to produce the free radical. However, the researchers found that the site where SAM would typically bind is blocked in TsrM.

"This is completely different from any other radical SAM protein," said Booker. "Instead, the portion of SAM that binds to the cluster associates with the tryptophan substrate and plays a key role in the reaction, in what is called substrate-assisted catalysis."

The researchers present their results in an article appearing Jan. 18 in the journal Nature Chemistry.

In solving the structure, the researchers were able to infer the chemical steps during the first part of thiostrepton's biosynthesis, when tryptophan is methylated. In short, the methyl group from SAM transfers to a part of TsrM called cobalamin. Then, with the help of an additional SAM molecule, the methyl group transfers to tryptophan, regenerating free cobalamin and producing the methylated substrate, which is required for the next steps in synthesizing the antibiotic.

"Cobalamin is the strongest nucleophile in nature, which means it is highly reactive," said Knox. "But the substrate tryptophan is weakly nucleophilic, so a big question is how cobalamin could ever be displaced. We found that an arginine residue sits under the cobalamin and destabilizes the methyl-cobalamin, allowing tryptophan to displace cobalamin and become methylated."

Next the researchers plan to study other cobalamin-dependent radical SAM proteins to see if they operate in similar ways. Ultimately, they hope to find or create analogs of thiostrepton that can be used in human medicine.

"TsrM is clearly unique in terms of known cobalamin-dependent radical SAM proteins and radical SAM proteins in general," said Booker. "But there are hundreds of thousands of unique sequences of radical SAM enzymes, and we still don't know what most of them do. As we continue to study these proteins, we may be in store for many more surprises."

Credit: 
Penn State