How does the brain process fear?

image: Scientists know that fear memories in mice are made in the amygdala, an almond-shaped structure deep in the brain. New research shows that the fear circuit extends far beyond the amygdala, including to the globus pallidus, a regulator of movement

Image: 
Li lab/CSHL, 2020

When a frightful creature startles you, your brain may activate its fear-processing circuitry, sending your heart racing to help you escape the threat. It's also the job of the brain's fear-processing circuits to help you learn from experience to recognize which situations are truly dangerous and to respond appropriately--so if the scare comes from a costumed goblin, you'll probably recover quickly.

In more dire circumstances, however, the brain's fear response can be critical for survival. "Being able to fear is the ability to sense the danger and is the driving force to figure out a way to escape or fight back," said Cold Spring Harbor Laboratory Professor Bo Li.

[VIDEO: Watching a mouse think about fear and pleasure - https://www.youtube.com/watch?v=B7MujP-Z0ds]

Li's team is probing the brain circuits that underlie fear, using sophisticated neuroscience tools to map their connections and tease out how specific components contribute to learning fear. A deeper understanding of these circuits could lead to better ways to control the overactive or inappropriate fear responses experienced by people with anxiety disorders.

Many of their studies begin with the amygdala, an almond-shaped structure that is considered the hub for fear processing in the brain. While the amygdala was once thought to be devoted exclusively to processing fear, researchers are now broadening their understanding of its role. Li's team has found that the amygdala is also important for reward-based learning, and as they trace its connections to other parts of the brain, they are uncovering additional complexity. Li said:

"It is important for formation of fearful memory, but it's also important for interacting with other brain systems in a different behavior context. We think that this circuit that we discovered that plays a role in regulating fearful memory is only a tip of the iceberg. It is indeed important for regulating fearful memory, but probably is also involved in more complex behavior."

Li and his colleagues were surprised recently to find that the amygdala communicates with a part of the brain best known for its role in controlling movement. The structure, called the globus pallidus, was not known to be involved in fear processing or memory formation. But when the researchers interfered with signaling between the amygdala and the globus pallidus in the brains of mice, they found that the animals failed to learn that a particular sound cue signaled an unpleasant sensation. Based on their experiments, this component of the fear-processing circuitry might be important for alerting the brain "which situations are worth learning from," Li said.

Li's team and collaborators at Stanford University reported their recent findings in the Journal of Neuroscience. For more of Li's research on how fear is processed in the brain, check out this video of his talk at "Life Science Across the Globe".

Credit: 
Cold Spring Harbor Laboratory

New study shows that football fixture pile-ups are forcing players and coaches to change

image: Fixture congestion is affecting top-level football

Image: 
Emilio Garcia @piensaenpixel

Dr Liam Harper has co-authored a new paper on fixture congestion and performance with colleagues Dr Richard Page of Edge Hill University and Ross Julian from the University of Münster in Germany.

Published in the journal Sports Medicine, the findings of their systematic review and meta-analysis include that while modern players can still cover great distances in games, they are conserving energy for the kind of intense bursts that occur at key moments.

Three days between games testing players and coaches to the limit

Their research also shows how coaches, confronted with schedules that see players involved in domestic, European and international games in quick succession, are resting some players in certain positions more than others, with possible consequences for a player's performance and risk of injury.

"There aren't any differences in total distances covered between a congested period and a non-congested period," says Dr Harper, Senior Lecturer in Sport Exercise and Nutrition Sciences. "It seems that players can maintain that physical performance in terms of distance covered no matter how many games they've played.

"But total distance is just one gross measure of performance. Sprints and high intensity running - typically considered as over 15 km/h - are usually linked to notable actions in a game, with a German study showing that 45% of goals are preceded by a sprint. It seems that players perform fewer high intensity runs when they only have three days between games. There's more walking and jogging."

Coronavirus adds complication to congested schedule

Fixture congestion leading to tired players is often blamed for poor performances by international teams in tournaments like the World Cup, but the current coronavirus pandemic has added more complications. COVID-19 saw domestic schedules and European competitions come to a sudden halt in March, with the Premier League and the Championship managing to finish the 2019/20 season under 'Project Restart' in the summer.

The 2020/21 campaign got under way after a short break, with international fixtures adding to the demands of domestic and European fixtures for many elite-level players. Spurs have a 21-day spell where they play seven games in the Premier League and the Europa League between blocks of international fixtures which will involve many of their squad - a typical cycle for many clubs.

Dr Harper, whose doctoral thesis looked at the effects of extra time on players' performance, adds that, "Players are adopting pacing strategies; they reduce low intensity actions to save themselves for sprints and high intensity runs."

If it's Thursday, it must be the Europa League...

A typical week for a club also involved in Europe will see a Premier League game on a Sunday, followed by a Champions League game on Wednesday and then back to domestic matters on the Saturday. For those in the Europa League, the cycle is typically Sunday-Thursday-Sunday.

"Some variances preclude making robust conclusions," Dr Harper continues, "but it seems to be that less than four days does have an impact on some aspects of physical performance."

A 2012 study by leading Dutch coach Raymond Verheijen showed that teams with two days' preparation were 40% less likely to win their next game than teams with three days to get ready. As part of the work towards the new paper, Dr Harper also commissioned Huddersfield undergraduate Sam Jones to assess how two clubs apiece in the Premier League, the Championship and Spain's La Liga managed their squads when they had less than 96 hours between games.

"Some coaches rotate key players; the percentage of those who play 75-90 mins decreases when European games are played midweek between domestic matches.

"Perhaps this shows that coaches recognise the need for players to rest, maybe have them off the bench for 30 minutes rather than play the full match. However, it might also mean that players are not available due to fatigue and injury.

"Wide midfielders and strikers in particular tend to be rotated more frequently during periods of fixture congestion. But central defenders tend not to be subbed off. This may be due to the fact they typically cover less distance and less high-intensity actions than other positions, but that does not mean they are not susceptible to tiredness and injury arising from fixture congestion.

"Indeed, whilst outside the scope of our recent research, large scale studies involving multiple clubs competing in the UEFA Champions League has shown that players are at greater risk of soft tissue injuries during fixture congestion".

Club vs. country increasing the load

Another factor is that breaks in club fixtures are now not just for 'meaningless' international friendlies. European Nations League games mean more competitive matches with future prospects hanging on them, increasing the chances that players will be playing demanding top-level games for their countries just a couple of days before they are in action for their clubs.

Liverpool manager Jurgen Klopp called the Nations League "senseless" in 2018, and Jose Mourinho called Spurs' eight games in 21 days start to this season "a joke". Mourinho's side conceded three goals in the last 10 minutes of their first match after the October international break - with three Spurs defenders having played most of two international games apiece in the preceding week and a half before playing all 90 minutes of the 3-3 draw with West Ham.

Dr Harper's study points towards players being just below their best due to their busy schedules, and this being crucial at the key moments in games - a missed tackle, or being caught out of position and allowing an opponent a chance to score. A study involving Liverpool looked at where players were positioned in relation to where they should be, and found differences for players with games in quick succession.

"In a congestion cycle, synchronisation between players might reduce. For example, the distance between a right back and a right winger might become greater. Then, the right back doesn't recover as quickly, is more exposed to counter attacks, and opponents might find more space on their left flank. The defensive midfielder might have to work more, and the team gets pulled around.

"We need to see more of this - it's key information for coaches in a congested cycle. Is their system going to be different because their players are going to be tired, and so do they need to adapt? Preparation for every game is going to be different according to the strengths and weaknesses of the opposition. A combination of fatigue and less time to work on tactics and shape during fixture congestion makes the coach's job even harder."

Credit: 
University of Huddersfield

Blue phosphorus: How a semiconductor becomes a metal

image: The international team modelled a two-layer buckled honeycomb structure of blue phosphorus by means of highly precise calculations on high-performance computers. The compound is very stable and due to the very small distance between the two layers, it has metallic properties.

Image: 
Copyright: Jessica Arcudia

The results of these investigations were published as a highlight article in the current issue of the journal Physical Review Letters.

The chemical element phosphorus is considered one of the most essential elements for life. Phosphorus compounds are deeply involved in the structure and function of organisms. Every human carries about one kilogram of it in the body. But even outside our bodies we are surrounded by phosphates and phosphonates every day: in our food, in detergents, fertilizers or in medicines.

Phosphorus occurs in several modifications that have extremely different properties. Under normal conditions, a distinction is made between white, purple, red and black phosphorus. In 2014, a team from Michigan State University, USA, computationally predicted "blue phosphorus", which was produced experimentally two years later.

Blue phosphorus is a so-called two-dimensional (2D) material. Due to its single-layer honeycomb-like structure, it is reminiscent of what is probably the best known 2D material: graphene. Analogous to its famous forerunner, it was then also called blue phosphorene. This novel semiconductor material has since been investigated as an extremely promising candidate for optoelectronic devices.

The Dresden chemist Prof Thomas Heine, in cooperation with Mexican scientists, has now made a unique discovery: by applying a topological concept, they computationally identified a remarkably stable two-layer buckled honeycomb structure of blue phosphorene through highly precise calculations on high-performance computers. To the scientists' surprise, the very small distance between the two layers gives this compound metallic properties.

Like all electronic components, optoelectronic devices must be supplied with power, which usually enters the material via metal electrodes. At the metal-semiconductor interface, energy losses are inevitable, an effect known as the Schottky barrier. Blue phosphorus is semiconducting as a single layer but is predicted to be metallic as a double layer. Metallic 2D materials are very rare, and this is the first time a pure elemental material has been discovered that exhibits a semiconductor-metal transition from the monolayer to the double layer. An electronic or optoelectronic component for use in transistors or photocells can thus be realized from only one chemical element. Since there is no interface between semiconductor and metal in such devices, the Schottky barrier is greatly reduced and a higher efficiency can be expected.

"Imagine you put two layers of paper on top of each other and suddenly the double sheet shines metallically like gold foil. This is exactly what we predict for blue phosphorene. This work underlines the importance of interdisciplinarity in basic research. Using a topological-mathematical model and theoretical chemistry, we were able to design a new material on the computer and predict its physical properties. Applications in the field of nano- and optoelectronics are expected," explains Prof Heine.

For these promising results in basic research, first author Jessica Arcudia from Mexico has already been awarded the LatinXChem poster prize and the ACS Presidential Award. The young chemist was a guest student in Thomas Heine's research group in 2018, where her doctoral supervisor, Prof Gabriel Merino, had also worked.

Credit: 
Technische Universität Dresden

Corn and other crops are not adapted to benefit from elevated carbon dioxide levels

image: In their recent paper, scientists analyzed 49 species of grass crops and found that by rebalancing the leaves' resources, plants would better thrive in today's climate. Pictured: The WEST project sorghum diversity panel field trials from 2017.

Image: 
WEST project

The U.S. backs out of the Paris climate agreement even as carbon dioxide (CO2) levels continue to rise. Through photosynthesis, plants are able to turn CO2 into yield. Logic tells us that more CO2 should boost crop production, but a new review from the University of Illinois shows that some crops, including corn, are adapted to a pre-industrial environment and cannot distribute their resources effectively to take advantage of extra CO2.

Most plants (including soybeans, rice, canola, and all trees) are C3 because they fix CO2 first into a carbohydrate containing three carbon atoms. Corn, sorghum, and sugarcane belong to a special group of plants known as C4, so-called because they first fix CO2 into a four-carbon carbohydrate during photosynthesis. On average, C4 crops are 60 percent more productive than C3 crops.

When crops are grown in elevated CO2 that mimics future atmospheric conditions, research shows that C3 crops become more productive, while some experiments suggest that C4 crops would be no more productive in a higher CO2 world.

"As scientists, we need to think several steps ahead to anticipate what the Earth will look like five to 30 years from now, and how we can design crops to perform well under those conditions," said Charles Pignon, a former postdoctoral researcher at Illinois. "We decided that a literature review and a retrospective analysis of biochemical limitations in photosynthesis would be able to give us some insight into why C4 crops might not respond and how we might alter this."

The literature review, published in Plant, Cell & Environment, was supported by Water Efficient Sorghum Technologies (WEST), a research project that aimed to develop bioenergy crops that produce more biomass with less water, with funding from the Advanced Research Projects Agency-Energy (ARPA-E).

The team assembled a dataset of photosynthesis measurements from 49 C4 species, including the crops that could reveal photosynthetic limitations. The consistent pattern that emerged was that at low CO2--well below what plants would have experienced before the industrial revolution--C4 photosynthesis was limited by the activity of the enzyme that fixes CO2. However, at today's CO2 levels, C4 photosynthesis was limited by the capacity to provide the three-carbon molecule that accepts the fourth CO2.

"This finding is analogous to a car assembly line where the supply of engines is outpacing the supply of chassis to accept them," said co-author Stephen Long, the Stanley O. Ikenberry Chair Professor of Plant Biology and Crop Sciences. "We need to engineer these plants to better balance their resources in one or both of two ways."

First, the authors suggest that C4 crops need to cut back on the amount of the enzyme used to fix CO2 and re-invest the saved resources into making more of the CO2 acceptor molecule.

Second, they need to restrict the supply of CO2 into the leaf by reducing the number of pores (stomata) on the leaf surface. "Lowering the CO2 within the leaf would re-optimize the biochemistry without lowering the rate of photosynthesis, and with fewer stomata, less water would be lost, so we are increasing the crop's water use efficiency," Long said.

The WEST project concluded in 2019. These proposed changes to C4 crops are now being pursued through the Center for Advanced Bioenergy and Bioproducts Innovation (CABBI), which is supported by the Department of Energy.

Credit: 
Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign

Water striders learn from experience how to jump up safely from water surface

You probably do not find it surprising that humans, dogs, or cats can adjust their behavior based on experience. For instance, we all move more slowly after slipping and falling on the ice while learning to skate. A new study shows that water striders can do that too.

It was known that water striders jump upwards from the water surface without breaking it (https://www.eurekalert.org/pub_releases/2016-12/lobe-jws120816.php; https://www.eurekalert.org/pub_releases/2015-08/lobe-wsj080515.php; https://www.youtube.com/watch?v=4Sr0im-umSU; https://www.youtube.com/watch?v=GLy7Obl6jLc; https://www.youtube.com/watch?v=8sjSmX5pNw8). But how do they do it? How do they know when the water surface would break? A multidisciplinary team of scientists from Korea, the USA and Poland found that water striders adjust their jumping behavior by modifying their leg movements based on experience acquired during frequent jumping, and that these modifications depend on the body weight of the animal. Biologists from Korea and Poland led by Sang-im Lee (Laboratory of Integrative Animal Ecology, DGIST, Korea) and Piotr Jablonski (Laboratory of Behavioral Ecology and Evolution, Seoul National University and Museum & Institute of Zoology, Polish Academy of Sciences) teamed up with biologists from the USA led by Hangkyo Lim (Notre Dame of Maryland University) and engineers led by Ho-Young Kim (Seoul National University) to provide experimental proof, at least for female water striders.

The scientists used high-speed video to measure the jumping behavior of males and females of a common Korean water strider species, Gerris latiabdominis. They compared individuals with experimentally added weight (heavy individuals) with control individuals (light individuals). Half of the heavy and half of the light individuals were prompted to jump every day for three days by gently poking them for several hours per day, while the other half did not experience frequent jumping during this period. The scientists then measured jumping behavior at the very beginning of the experiment and again after the three days, and found that the heavy females (those with the added weight) changed their jumping behavior differently than the females without any extra weight. Additionally, the adjustments to a light vs heavy body were only detectable in individuals that had experienced frequent jumping. "We observed adjustments in the speed of leg movements and in the jump velocity - the final outcome of the jump," says Minjung Baek, the first author of the study. Females experiencing lighter body weight during the three days of frequent jumping moved their midlegs more slowly than those not prompted to jump frequently. On the other hand, females experiencing heavier body weight during frequent jumping moved their legs faster, as if they "knew" that in order to produce a fast jump they needed to "work harder" because of the added weight. But they made these adjustments "carefully", so that the surface tension of the water was not broken. Males did not show such a clear difference in adjusting their jumping behavior to their body weight.

A question arises: why do we see this difference between males and females? "In water striders including this species, mating males may ride on top of females for hours or even for days, and this means that during mating a female must sustain the weight of the mating male in addition to her own body weight on the water surface" says Dr. Piotr Jablonski, who studied mating behavior of water striders in the past. "Males do not experience such repeated temporary increases in the perceived body weight and therefore they did not need the ability to use their experience to adjust their jumping behavior" - adds Dr. Sang-im Lee, who seeks ecological explanations to biological phenomena by integrating biology, physics and engineering.

This interesting result shows how the plastic responses of animals can evolve depending on environmental conditions; in this case, optimal behavioral adjustments were made through personal experience. There are many examples of animals, including insects, adjusting their behavior to changing environmental conditions through developmental or behavioral plasticity, but this study clearly shows that they can also do so through personal experience, just as we do.

Credit: 
Laboratory of Behavioral Ecology and Evolution at Seoul National University

Reducing global food system emissions key to meeting climate goals

Reducing fossil fuel use is essential to stopping climate change, but that goal will remain out of reach unless global agriculture and eating habits are also transformed, according to new research from the University of Minnesota and University of Oxford.

A paper published Thursday in the journal Science reveals that emissions from global food production alone could lead to a global temperature increase of more than 1.5°C by mid-century and of nearly 2°C by the end of the century, even if emissions from fossil fuels were to end immediately. The study also identifies the need for large and rapid improvements in farming practices, as well as changes in what we eat and in how much food we waste, to help achieve the Paris Agreement's goal of limiting global temperature increases to 1.5°C or 2°C.

"Our work shows that food is a much greater contributor to climate change than is widely known. Fortunately, we can fix this problem by using fertilizer more efficiently, by eating less meat and more fruits, vegetables, whole grains and nuts and by making other important changes to our food system," said Jason Hill, professor in the Department of Bioproducts and Biosystems Engineering in the University of Minnesota's College of Food, Agricultural and Natural Resource Sciences and College of Science and Engineering.

The study determined that, if left unchanged, future greenhouse gas emissions from food production would alone lead to the world warming by 1.5°C by 2050 and by 2°C by the end of the century compared to pre-industrial levels. The authors projected future emissions using expected trends in population growth, dietary changes and the additional amount of land required to feed the world.

"There are at least five different changes that would allow us to prevent this agriculturally-driven climate change" said David Tilman, Regents professor in the Department of Ecology, Evolution, and Behavior in the College of Biological Sciences.

"These are farming more efficiently, helping farmers in low-income countries increase their yields, eating healthier foods, avoiding overeating and wasting less food. Even partially adopting several of these five changes would solve this problem as long as we start right now."

The paper points to recent research that shows all five strategies are readily achievable and have many benefits beyond controlling climate change, such as improving human health, reducing water pollution, improving air quality, preventing species extinctions and improving farm profitability.

"Discussions on mitigating climate change typically focus on reducing greenhouse gas emissions from burning fossil fuels, for instance, from transportation or energy production. However, our research emphasizes the importance of also reducing emissions from the global food system," said Michael Clark, researcher in the Oxford Martin School and Nuffield Department of Population Health at the University of Oxford.

The research makes clear that reducing greenhouse gas emissions from food systems will require coordinated action across sectors and between national governments, and that action will need to start soon, with policies fully adopted by 2050, to achieve the climate goals.

Credit: 
University of Minnesota

Clay subsoil at Earth's driest place may signal life on Mars

ITHACA, N.Y. - Earth's most arid desert may hold a key to finding life on Mars.

Diverse microbes discovered in the clay-rich, shallow soil layers in Chile's dry Atacama Desert suggest that similar deposits below the Martian surface may contain microorganisms, which could be easily found by future rover missions or landing craft.

Led by Cornell University and Spain's Centro de Astrobiología, scientists now offer a planetary primer to identifying microbial markers in shallow rover digs in Martian clay, in their work published Nov. 5 in Scientific Reports.

In that dry Atacama environment, the scientists found layers of wet clay about a foot below the surface.

"The clays are inhabited by microorganisms," said corresponding author Alberto G. Fairén, a visiting scientist in the Department of Astronomy at Cornell University. "Our discovery suggests that something similar may have occurred billions of years ago - or it still may be occurring - on Mars."

If microbes existed on Mars in the past, their biomarkers likely would be preserved there, Fairén said. "If microbes still exist today," he said, "the latest possible Martian life still may be resting there."

The red planet will see rovers cruising across its surface in the next few years. NASA's Perseverance rover will land on Mars in February 2021; Europe's Rosalind Franklin rover will arrive in 2023. Both missions will seek microbial biomarkers in the clay below the planet's surface.

"This paper helps guide the search," Fairén said, "to inform where we should look and which instruments to use on a search for life."

In the Yungay region of the Atacama Desert, the scientists found that the clay layer, a previously unreported habitat for microbial life, is inhabited by at least 30 salt-loving species of metabolically active bacteria and archaea (single-celled organisms).

The researchers' Atacama discovery reinforces the notion that early Mars may have had a similar subsurface with protected habitable niches, particularly during the first billion years of its history.

"That's why clays are important," he said. "They preserve organic compounds and biomarkers extremely well and they are abundant on Mars."

Credit: 
Cornell University

De novo protein decoys block COVID-19 infection in vitro and protect animals in vivo

Neoleukin Therapeutics, Inc. ("Neoleukin") (NASDAQ:NLTX), a biopharmaceutical company utilizing sophisticated computational methods to design de novo protein therapeutics, today announced the publication in Science of research describing novel molecules designed to treat or prevent infection by the virus that causes COVID-19, SARS-CoV-2. This report details the creation of de novo protein decoys that were specifically designed to bind the SARS-CoV-2 spike protein with high affinity, preventing its association with the viral receptor hACE2, which is required for infection. The manuscript titled "De novo design of potent and resilient hACE2 decoys to neutralize SARS-CoV-2" is available online via Science First Release.

As reported, the optimized, hyperstable proteins act as decoys that bind to the virus and block cellular entry. The lead molecule, NL-CVX1 (CTC-445.2d), is shown to prevent infection of multiple human cell lines and to protect hamsters from serious consequences of SARS-CoV-2 infection. Prophylactic intranasal administration of the protein decoy led to survival of all hamsters challenged with a lethal dose of SARS-CoV-2.

"Our de novo proteins are designed to mimic the natural SARS-CoV-2 receptor, making them intrinsically resistant to viral mutation," said Daniel-Adriano Silva, Ph.D., Vice President Head of Research, who led the discovery effort at Neoleukin. "We believe the development of NL-CVX1 is the fastest development of a therapeutic de novo protein from concept to preclinical validation, and it represents our most sophisticated design to date."

"The rapid development of this targeted protein demonstrates the potential of our de novo protein design platform and our team of scientists to address a broad spectrum of important biological problems," said Jonathan Drachman, M.D., Chief Executive Officer of Neoleukin. "NL-CVX1 is designed to be stable and could potentially be administered by intranasal spray or inhalation to prevent and treat infection in the lungs and upper airways by SARS-CoV-2. We are currently evaluating the possibility of advancing this molecule to clinical trials in humans."

Credit: 
Rathbun Communications, Inc.

Nanobodies that neutralize SARS-CoV-2

Two separate studies have identified nanobodies - which could be produced less expensively than monoclonal antibodies - that bind tightly to the SARS-CoV-2 spike protein and efficiently neutralize SARS-CoV-2 in cells. "The combined stability, potency, and diverse epitope engagement of our ... nanobodies ... provide a unique potential prophylactic and therapeutic strategy to limit the continued toll of the COVID-19 pandemic," write the authors of one paper (Michael Schoof et al.).

In the battle against COVID-19, monoclonal antibodies that bind to the spike protein of the SARS-CoV-2 virus are being explored as potential therapeutics. These show promise but must be produced in mammalian cells and need to be delivered intravenously. By contrast, single-domain antibodies called nanobodies can be produced in bacteria or yeast, and their stability gives the potential for aerosol delivery.

In two separate studies, Michael Schoof et al. and Yufei Xiang et al. describe the identification of nanobodies that efficiently neutralize SARS-CoV-2. Schoof and colleagues screened a yeast surface display of synthetic nanobodies, while Xiang and colleagues screened anti-spike nanobodies produced by a llama. Both papers describe nanobodies that bind tightly to the spike and efficiently neutralize SARS-CoV-2 in cells. Xiang et al. note that the thermostable nanobodies they developed can be rapidly produced in bulk from microbes. "We envision that the nanobody technology described here will contribute to curbing the current pandemic and possibly a future event," they say.

Credit: 
American Association for the Advancement of Science (AAAS)

Stable protein decoy neutralized SARS-CoV-2 in cells and protected hamsters from viral challenge

Researchers have designed a protein "decoy" that mimics the interface where the SARS-CoV-2 spike protein binds a human cell, one version of which could neutralize virus infection in cells and protect hamsters from viral challenge. The SARS-CoV-2 virus enters human cells when the spike protein binds to the human ACE2 receptor. While neutralizing antibodies to the spike protein have been isolated, the spike can develop "escape mutations" that help it evade them. A pressing need, therefore, is to develop therapeutics that can be more resistant to SARS-CoV-2 mutational escape. Here, to address this challenge, Thomas Linsky and colleagues developed a computational protein design strategy that enabled the rapid design of stable de novo protein "decoys" that replicate the protein receptor interface in hACE2 where SARS-CoV-2 binds. After using their approach to generate approximately 35,000 computational decoys, the researchers selected the top-ranking designs for further testing, identifying one particularly strong candidate. Administering a version of it prevented infection of multiple human cell lines by SARS-CoV-2. In a Syrian hamster model, a single prophylactic dose administered 12 hours before viral challenge allowed all animals to survive the lethal dose, with modest weight loss. Because the decoy replicates the spike protein target interface in hACE2, it is intrinsically resilient to viral mutational escape, the authors say.

Credit: 
American Association for the Advancement of Science (AAAS)

Shifts in water temperatures affect eating habits of larval tuna at critical life stage

image: Larval tuna.

Image: 
Cedric Guigand

NEWPORT, Ore. - Small shifts in ocean temperature can have significant effects on the eating habits of blackfin tuna during the larval stage of development, when finding food and growing quickly are critical to long-term survival, a new study from Oregon State University researchers has found.

In a year of warmer water conditions, larval blackfin tuna ate less and grew more slowly, in part because fewer prey were available, compared to the previous year, when water conditions were one to two degrees Celsius cooler, the researchers found.

The findings provide new insight into the relationship between larval tuna growth and environmental conditions, as well as the broader impacts of climate change on marine fish populations. As the climate continues to warm, over the long term, increasing water temperatures may interact with changing food webs to pose critical problems for fish populations, the researchers said.

"There was a drastic difference in the fish between the two years. It was obvious tuna in one year had very full guts with much bigger prey," said Miram Gleiber, the study's lead author. Gleiber worked on the project as part of her doctoral dissertation at Oregon State and has since completed her Ph.D.

"This gives us a better understanding of how these fish are surviving in this vulnerable early life stage as temperatures change. It's not just the temperature change that is important, but the impacts on prey are also important."

The study results were published today in the ICES Journal of Marine Science. Co-authors are Su Sponaugle, a professor of integrative biology at OSU's Hatfield Marine Science Center, and Robert Cowen, director of the Hatfield Marine Science Center.

Blackfin tuna are among the smallest tuna species and are one of the most common larval fish found in the Straits of Florida, a region with a high diversity of fish species. Blackfin tunas' diets are similar to those of other, more commercially popular tuna species such as bluefin and albacore, which makes them a good model for studying how tuna respond to constraints in the food web, such as those induced by warming temperatures.

"It's common for fish to produce lots of eggs," Sponaugle said. "But past research has shown that any small change that occurs early in a fish's life has big implications down the road. What happens during this larval stage of development can greatly influence the whole population."

Researchers collected hundreds of samples of larval blackfin tuna throughout the Straits of Florida during research cruises in 2014 and 2015. During those cruises, the researchers also documented the prey environment for the fish using an imaging system that measures distribution of zooplankton - on which the blackfin feed - in the sampling area.

Researchers also collected data on water temperature, which averaged about 1.2 degrees Celsius higher in 2015 than in 2014.

"Anecdotally, even before we started our data collection and analysis, you could see there were about 10 times more blackfin tuna in 2014 than in 2015," Gleiber said. "This particular tuna is known to be abundant in this area, so one of the questions we wanted to answer was why were there so many more in one year compared to the other? We thought it might be related to their diet."

Gleiber spent about a year painstakingly dissecting and analyzing the stomach contents of hundreds of larval tuna, which were 3 to 10 millimeters in size, to determine what and how much they were eating. She also removed and studied the fishes' otoliths - small ear stones that researchers can use to determine the age and growth rate of the fish.

"These otoliths are like tiny onions, depositing a little material every day in concentric layers smaller than the width of a human hair. We can use this record just like tree rings to estimate fish age and daily growth," Sponaugle said.
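The tree-ring analogy above can be made concrete with a minimal sketch. The increment widths below are invented illustrative values, not data from the study; the point is only that counting the daily rings gives age, and their widths give a daily growth rate.

```python
# Hypothetical sketch of otolith-based ageing: each daily increment adds one
# concentric ring, so the ring count gives age in days and the mean increment
# width approximates daily otolith growth. Widths (in micrometers) are invented.

def age_and_growth(increment_widths_um):
    """Return (age in days, mean daily increment width in micrometers)."""
    age_days = len(increment_widths_um)
    mean_daily_growth = sum(increment_widths_um) / age_days
    return age_days, mean_daily_growth

widths = [0.4, 0.5, 0.7, 0.8, 1.0, 1.1, 1.3]  # one value per daily ring
age, growth = age_and_growth(widths)
print(age, round(growth, 2))  # 7 0.83
```

In practice, researchers relate increment widths to somatic growth through a calibration for the species and life stage, a step omitted here.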

Fish generally tend to grow faster in warmer temperatures and tuna are a fast-growing species, which heightens their need for adequate food sources.

But a comparison of the two years of samples, including stomach contents, growth rate data from the ear stones and information about available prey from the imaging system, showed that in the warmer water conditions, the blackfin ate less, ate different prey that were smaller in size and grew more slowly.

"It's not just the temperature that determines their growth. Growth has to be supported with prey availability, as well," Gleiber said.

Although the study was based on only two years of data, the results indicate a clear relationship between water temperature, the abundance of zooplankton prey and larval fish growth and survival.

"We saw these big differences with just a degree or two of temperature change," Sponaugle said. "That's a concern because if prey are not available or are reduced in numbers in warmer conditions, these young tuna just won't survive."

Credit: 
Oregon State University

Immunotherapy may work better in stomach cancer when combined with chemo, given earlier

image: Stomach cancer cells from a mouse model; PDL1 molecules are stained pink.

Image: 
Wang lab, Columbia University Irving Medical Center

NEW YORK, NY (Nov. 5, 2020)--Immunotherapy for stomach cancer may work better if the therapy is delivered earlier in the course of disease and in combination with standard chemotherapy, a new study from researchers at Columbia University Vagelos College of Physicians and Surgeons suggests. 

The study, in mice, was published online in the journal Gastroenterology.

"Patients with advanced stomach cancer have limited treatment options," says Woosook Kim, PhD, first author of the paper who was an associate research scientist at Columbia when the study was done. "Many are not eligible for surgical resection, and response to radiotherapy or chemotherapy is often low."

Many cancers express proteins that prevent our immune cells from attacking the tumor. Immunotherapies block these proteins, thereby unleashing the immune cells. Immunotherapies that block proteins called PD1 and PDL1 have been approved for patients with advanced stomach cancer as a second- or third-line treatment after chemotherapy, but response rates are low. 

To better understand why immunotherapies don't work well in advanced stomach cancer, the Columbia researchers looked closely at the microenvironment around the tumors in mice that develop this disease.

The researchers discovered that mice with more advanced disease had an abundance of myeloid-derived suppressor cells (MDSCs) that also express PDL1 proteins, which appear to overpower the immunotherapy. 

When immunotherapy was given to mice with these advanced tumors, the cancer was unaffected. Only when immunotherapy was given to the mice early, before tumors formed and prior to the accumulation of MDSCs, could cancer progression be slowed.  

Combining immunotherapy with standard chemotherapy also shrank larger stomach tumors, because the chemotherapy killed many of the MDSCs.

"Our study suggests that adding chemotherapy to immunotherapy may improve responsiveness in part through the targeting of MDSCs," says Timothy Wang, MD, the Dorothy L. and Daniel H. Silberberg Professor of Medicine at Columbia University Vagelos College of Physicians and Surgeons and study leader. "While we do not have enough information to determine if the level of MDSCs may predict response to this dual regimen, our findings show that administering immunotherapy in combination with chemotherapy earlier in the course of the disease, when MDSC levels are much lower, may boost response rates in stomach cancer," says Wang. 

Credit: 
Columbia University Irving Medical Center

Keeping our cool

Fossil fuel burning accounts for the majority of global greenhouse gas emissions, and to the world's credit, several countries are working to reduce their use and the heat-trapping emissions that ensue. The goal is to keep global temperatures under a 1.5° to 2°C increase above preindustrial levels -- the upper limits of the Paris Climate Agreement.

If we stopped burning all fossil fuels this minute, would that be enough to keep a lid on global warming?

According to UC Santa Barbara ecology professor David Tilman, petroleum energy sources are only part of the picture. In a paper published in the journal Science, Tilman and colleagues predict that even in the absence of fossil fuels, cumulative greenhouse gas emissions could still cause global temperatures to exceed climate change targets in just a few decades.

The source? Our food system.

"Global food demand and the greenhouse gases associated with it are on a trajectory to push the world past the one-and-a-half degree goal, and make it hard to stay under the two degree limit," said Tilman, who holds a dual appointment at UCSB's Bren School of Environmental Science & Management and at the University of Minnesota. The world's growing population as well as its diet are driving food production practices that generate and release massive and increasing amounts of carbon dioxide, methane and other greenhouse gases into the atmosphere. According to the paper, left unchecked, agricultural emissions alone could exceed the 1.5°C limit by about 2050.

These findings are especially concerning given that we haven't stopped using fossil fuels, Tilman said. And with a 1°C average increase in global temperature since 1880, we've got only a slim margin before global warming results in widespread sea level rise, ocean acidification, biodiversity loss and other effects that will change life as we know it.

"All it would take for us to exceed the two degree warming limit is for food emissions to remain on their path and one additional year of current fossil fuel emissions," Tilman said. "And I guarantee you, we're not going to stop fossil fuel emissions in a year."

Reducing the emissions from food production, "will likely be essential" to keeping the planet livable in its current state, according to the scientists.

Seeds of Solutions

"It's well known that agriculture releases about 30% of all greenhouse gases," Tilman said. Major sources include deforestation and land clearing, fertilizer overuse and gassy livestock, all of which are increasing as the global population increases. In "high-yield" countries such as the U.S., which have the benefit of large-scale modern agriculture, intensive animal farming and heavy-handed fertilizer use are major contributors of greenhouse gases. Meanwhile, in "low-yield" countries such as those in sub-Saharan Africa, population growth and increasing affluence are driving demand for more food and a shift toward more "urban" diets that are richer in meat and meat products, Tilman explained.

"Their demand for food is going up, but the farmers don't have the resources to have high yields, so they just clear more and more land," he said.

And yet, it isn't as though we can just stop producing food, which is perhaps the main reason why agricultural emissions have received less attention than fossil fuels as a target for reduction, according to the researchers.

"You can't look at agriculture as if we can somehow get rid of it," said Tilman, whose research focuses on the environmental impacts of agriculture, as well as the links between diet, environment and health. "We need it; it's essential for society."

But, according to the paper's authors, global warming does not have to be an unavoidable impact of feeding the world. Through early and widespread adoption of several feasible food system strategies, it is possible to limit emissions from agriculture in a way that keeps us from exceeding the 2°C limit by the end of the century while feeding a growing population.

The most effective, according to the paper, is a switch toward more plant-rich diets, which aren't just healthier overall, but also reduce the demand for beef and other ruminant meats. That, in turn, reduces the pressure to clear for grazing land or produce the grains and grasses (more farming, more fertilizer) required to feed them.

"We're not saying these diets have to be vegetarian or vegan," Tilman said. Reducing red meat consumption to once a week, getting more protein from sources such as chicken or fish, and eating more fruits and vegetables, in conjunction with decreasing fossil fuel use, could help keep the planet livably cool in the long run.

Another strategy: ease up on fertilizer.

"Many countries have high yields because from 1960 until now they have been using more and more fertilizer," he said. "But recent research has shown that almost all of these countries are actually using much more than they need to attain the yield they have." A drop of roughly 30% in fertilizer use would not only save farmers money for the same yield, it would also prevent the release of nitrous oxide that occurs when excess fertilizer goes unused.

"About 40% of all future climate warming from agriculture may come from nitrous oxide from fertilizer," Tilman added. "So adding the right amount of fertilizer has a large benefit for climate change and would save farmers money."

Other strategies the researchers explored included adjusting global per capita calorie consumption to healthy levels; improving yields to help meet demand where it may reduce the pressure to clear more land; and reducing food waste by half.

"The nice thing is that we can do each of these things sort of halfway and still solve the problem," Tilman said. The sooner we employ these strategies, the closer we can get to keeping the Earth cool and avoiding the wholesale changes we would have to adopt if we wait too much longer, he added.

"I'm optimistic," he said. "We have a viable path for achieving global environmental sustainability and better lives for all of us."

Credit: 
University of California - Santa Barbara

Scientists work to shed light on Standard Model of particle physics

image: Typical magnetic field variations as mapped by the trolley at different positions in the Muon g-2 experiment's storage ring, shown at the parts-per-million level.

Image: 
Argonne National Laboratory

Mapping the magnetic field for Fermilab’s Muon g-2 experiment

As scientists await the highly anticipated initial results of the Muon g-2 experiment at the U.S. Department of Energy’s (DOE) Fermi National Accelerator Laboratory, collaborating scientists from DOE’s Argonne National Laboratory continue to employ and maintain the unique system that maps the magnetic field in the experiment with unprecedented precision.

Argonne scientists upgraded the measurement system, which uses an advanced communication scheme and new magnetic field probes and electronics to map the field throughout the 45-meter circumference ring in which the experiment takes place.

“There was a large deviation between Brookhaven’s measurement and the theoretical prediction, and if we confirm this discrepancy, it will signal the existence of undiscovered particles.” — Simon Corrodi, postdoctoral appointee in Argonne’s HEP division

The experiment, which began in 2017 and continues today, could be of great consequence to the field of particle physics. As a follow-up to a past experiment at DOE’s Brookhaven National Laboratory, it has the power to affirm or discount the previous results, which could shed light on the validity of parts of the reigning Standard Model of particle physics.

High-precision measurements of important quantities in the experiment are crucial for producing meaningful results. The primary quantity of interest is the muon’s g-factor, a property that characterizes magnetic and quantum mechanical attributes of the particle.

The Standard Model predicts the value of the muon’s g-factor very precisely. “Because the theory so clearly predicts this number, testing the g-factor through experiment is an effective way to test the theory,” said Simon Corrodi, a postdoctoral appointee in Argonne’s High Energy Physics (HEP) division. “There was a large deviation between Brookhaven’s measurement and the theoretical prediction, and if we confirm this discrepancy, it will signal the existence of undiscovered particles.” 

Just as the Earth’s rotational axis precesses — meaning the poles gradually travel in circles — the muon’s spin, a quantum version of angular momentum, precesses in the presence of a magnetic field. The strength of the magnetic field surrounding a muon influences the rate at which its spin precesses. Scientists can determine the muon’s g-factor using measurements of the spin precession rate and the magnetic field strength.
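In simplified, textbook form, the relationship the paragraph above describes can be written out explicitly. These are idealized relations, not the collaboration's full analysis, which includes relativistic beam-dynamics corrections:

```latex
% Anomalous precession: the difference between the spin and cyclotron
% frequencies of a muon with charge e and mass m_\mu in a field B.
\omega_a \;=\; \omega_{\text{spin}} - \omega_{\text{cyclotron}}
        \;=\; \frac{g-2}{2}\,\frac{eB}{m_\mu}
        \;=\; a_\mu\,\frac{eB}{m_\mu},
\qquad
g \;=\; 2\left(1 + \frac{m_\mu\,\omega_a}{eB}\right)
```

Measuring the precession frequency ω_a and the field B thus pins down the g-factor, which is why the field map must be so precise.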

The more precise these initial measurements are, the more convincing the final result will be. The scientists are on their way to achieving field measurements accurate to 70 parts per billion. This level of precision enables the final calculation of the g-factor to be four times as precise as the result of the Brookhaven experiment. If the experimentally measured value differs significantly from the expected Standard Model value, it may indicate the existence of unknown particles whose presence disturbs the local magnetic field around the muon.

Trolley ride

During data collection, a magnetic field causes a beam of muons to travel around a large, hollow ring. To map the magnetic field strength throughout the ring with high resolution and precision, the scientists designed a trolley system to drive measurement probes around the ring and collect data.

The University of Heidelberg developed the trolley system for the Brookhaven experiment, and Argonne scientists refurbished the equipment and replaced the electronics. In addition to 378 probes that are mounted within the ring to constantly monitor field drifts, the trolley holds 17 probes that periodically measure the field with higher resolution.

“Every three days, the trolley goes around the ring in both directions, taking around 9,000 measurements per probe and direction,” said Corrodi. “Then we take the measurements to construct slices of the magnetic field and then a full, 3D map of the ring.”
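A hypothetical sketch of the slicing step Corrodi describes: readings taken at known azimuths around the ring are grouped into angular bins, and each bin's readings are averaged into one slice of the map. The azimuths, field offsets, and bin width below are invented for illustration; this is not the collaboration's analysis code.

```python
# Bin trolley readings (azimuth in degrees, field offset in ppm) into
# azimuthal "slices" by averaging all readings that fall in the same bin.

def slice_map(readings, bin_deg=10):
    """Return {bin_index: mean field offset} for each populated angular bin."""
    sums, counts = {}, {}
    for azimuth, field in readings:
        b = int(azimuth // bin_deg) % (360 // bin_deg)
        sums[b] = sums.get(b, 0.0) + field
        counts[b] = counts.get(b, 0) + 1
    return {b: round(sums[b] / counts[b], 6) for b in sorted(sums)}

# Four invented readings spanning two 10-degree bins:
readings = [(1.0, 0.2), (4.0, 0.4), (12.0, -0.1), (15.0, -0.3)]
print(slice_map(readings))  # {0: 0.3, 1: -0.2}
```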

The scientists know the exact location of the trolley in the ring from a new barcode reader that records marks on the bottom of the ring as it moves around.

The ring is filled with a vacuum to facilitate controlled decay of the muons. To preserve the vacuum within the ring, a garage connected to the ring and vacuum stores the trolley between measurements. Automating the process of loading and unloading the trolley into the ring reduces the risk of the scientists compromising the vacuum and the magnetic field by interacting with the system. They also minimized the power consumption of the trolley’s electronics in order to limit the heat introduced to the system, which would otherwise disrupt the precision of the field measurement.

The scientists designed the trolley and garage to operate in the ring’s strong magnetic field without influencing it. “We used a motor that works in the strong magnetic field and with minimal magnetic signature, and the motor moves the trolley mechanically, using strings,” said Corrodi. “This reduces noise in the field measurements introduced by the equipment.”

The system uses the least amount of magnetic material possible, and the scientists tested the magnetic footprint of every single component using test magnets at the University of Washington and Argonne to characterize the overall magnetic signature of the trolley system.

The power of communication

Of the two cables pulling the trolley around the ring, one of them also acts as the power and communication cable between the control station and the measurement probes.

To measure the field, the scientists send a radio frequency through the cable to the 17 trolley probes. The radio frequency causes the spins of the water molecules inside each probe to rotate in the magnetic field. The radio frequency is then switched off at just the right moment, causing the spins to precess. This approach is called nuclear magnetic resonance (NMR).

The frequency at which the probes’ spins precess depends on the magnetic field in the ring, and a digitizer on board the trolley converts the analog radio frequency into multiple digital values communicated through the cable to a control station. At the control station, the scientists analyze the digital data to construct the spin precession frequency and, from that, a complete magnetic field map.
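The conversion from precession frequency to field follows the standard NMR relation f = γB/2π, where γ is the proton gyromagnetic ratio. A minimal sketch of that conversion, assuming a nominal field of 1.45 T (an illustrative value, not one stated in the text):

```python
import math

# Standard NMR relation between a proton's precession frequency and the
# magnetic field: f = gamma * B / (2*pi). The 1.45 T field below is an
# assumed illustrative value, not taken from the article.

GAMMA_P = 2.6752218744e8  # proton gyromagnetic ratio, rad s^-1 T^-1 (CODATA)

def frequency_from_field(b_tesla):
    """Proton precession frequency (Hz) in a field of b_tesla."""
    return GAMMA_P * b_tesla / (2 * math.pi)

def field_from_frequency(f_hz):
    """Invert the NMR relation to recover the field from a measured frequency."""
    return 2 * math.pi * f_hz / GAMMA_P

f = frequency_from_field(1.45)
print(round(f / 1e6, 2))                  # 61.74 (MHz)
print(round(field_from_frequency(f), 6))  # 1.45 (T)
```

Reaching the 70 parts-per-billion goal quoted above means resolving this ~62 MHz frequency to a few hertz.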

During the Brookhaven experiment, all signals were sent through the cable simultaneously. However, due to the conversion from analog to digital signal in the new experiment, much more data has to travel over the cable, and this increased rate could disturb the very precise radio frequency needed for the probe measurement. To prevent this disturbance, the scientists separated the signals in time, switching between the radio frequency signal and data communication in the cable.

“We provide the probes with a radio frequency through an analog signal,” said Corrodi, “and we use a digital signal for communicating the data. The cable switches between these two modes every 35 milliseconds.”

The tactic of switching between signals traveling through the same cable is called “time-division multiplexing,” and it helps the scientists reach specifications for not only accuracy, but also noise levels. An upgrade from the Brookhaven experiment, time-division multiplexing allows for higher-resolution mapping and new capabilities in magnetic field data analysis.
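The alternating schedule Corrodi describes can be sketched in a few lines. This is an illustrative model of time-division multiplexing, not the experiment's firmware; only the 35 ms slot length comes from the text.

```python
# Illustrative time-division multiplexing: one shared cable alternates
# between analog RF excitation and digital data readout every 35 ms.

SLOT_MS = 35

def mode_at(t_ms, start_mode="rf"):
    """Return which mode ("rf" or "data") owns the cable at time t_ms."""
    slot_index = t_ms // SLOT_MS
    modes = ("rf", "data") if start_mode == "rf" else ("data", "rf")
    return modes[slot_index % 2]

# The cable starts in RF mode, hands over to data after 35 ms, and so on.
print(mode_at(0))   # rf
print(mode_at(40))  # data
print(mode_at(70))  # rf
```

Because the two signal types never occupy the cable at the same time, the noisy digital traffic cannot corrupt the precise analog RF needed for the probe measurement.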

Upcoming results

Both the field mapping NMR system and its motion control were successfully commissioned at Fermilab and have been in reliable operation during the first three data-taking periods of the experiment.

The scientists have achieved unprecedented precision for field measurements, as well as record uniformity of the ring’s magnetic field, in this Muon g-2 experiment. Scientists are currently analyzing the first round of data from 2018, and they expect to publish the results by the end of 2020.

The scientists detailed the complex setup in a paper, titled “Design and performance of an in-vacuum, magnetic field mapping system for the Muon g-2 experiment,” published in the Journal of Instrumentation.

Credit: 
DOE/Argonne National Laboratory

Technique to regenerate optic nerve offers hope for future glaucoma treatment

Scientists have used gene therapy to regenerate damaged nerve fibres in the eye, in a discovery that could aid the development of new treatments for glaucoma, one of the leading causes of blindness worldwide.

Axons - nerve fibres - in the adult central nervous system (CNS) do not normally regenerate after injury and disease, meaning that damage is often irreversible. However, over the past decade there have been a number of discoveries that suggest it may be possible to stimulate regeneration.

In a study published today in Nature Communications, scientists tested whether the gene responsible for the production of a protein known as Protrudin could stimulate the regeneration of nerve cells and protect them from cell death after an injury.

The team, led by Dr Richard Eva, Professor Keith Martin and Professor James Fawcett from the John van Geest Centre for Brain Repair at the University of Cambridge, used a cell culture system to grow brain cells in a dish. They then injured their axons using a laser and analysed the response to this injury using live-cell microscopy. The researchers found that increasing the amount or activity of Protrudin in these nerve cells vastly increased their ability to regenerate.

Nerve cells in the retina, known as retinal ganglion cells, extend their axons from the eye to the brain through the optic nerve in order to relay and process visual information. To investigate whether Protrudin might stimulate repair in the injured CNS in an intact organism, the researchers used a gene therapy technique to increase the amount and activity of Protrudin in the eye and optic nerve. When they measured the amount of regeneration a few weeks after a crush injury to the optic nerve, the team found that Protrudin had enabled the axons to regenerate over large distances. They also found that the retinal ganglion cells were protected from cell death.

The researchers showed that this technique may help protect against glaucoma, a common eye condition. In glaucoma, the optic nerve that connects the eye to the brain is progressively damaged, often in association with elevated pressure inside the eye. If not diagnosed early enough, glaucoma can lead to loss of vision. In the UK, around one in 50 people over the age of 40, and one in ten people over the age of 75, are affected by glaucoma.

To demonstrate this protective effect of Protrudin against glaucoma, the researchers used a whole retina from a mouse eye and grew it in a cell-culture dish. Usually around half of retinal neurons die within three days of retinal removal, but the researchers found that increasing or activating Protrudin led to almost complete protection of retinal neurons.

Dr Veselina Petrova from the Department of Clinical Neurosciences at the University of Cambridge, the study's first author, said: "Glaucoma is one of the leading causes of blindness worldwide. The causes of glaucoma are not completely understood, but there is currently a large focus on identifying new treatments by preventing nerve cells in the retina from dying, as well as trying to repair vision loss through the regeneration of diseased axons through the optic nerve.

"Our strategy relies on using gene therapy - an approach already in clinical use - to deliver Protrudin into the eye. It's possible our treatment could be further developed as a way of protecting retinal neurons from death, as well as stimulating their axons to regrow. It's important to point out that these findings would need further research to see if they could be developed into effective treatments for humans."

Protrudin normally resides within the endoplasmic reticulum, a network of tiny membrane structures within our cells. In this study, the team showed that the endoplasmic reticulum found in axons appears to provide materials and other cellular structures important for growth and survival in order to support the process of regeneration after injury. Protrudin stimulates transport of these materials to the site of injury.

Dr Petrova added: "Nerve cells in the central nervous system lose the ability to regenerate their axons as they mature, so have very limited capacity for regrowth. This means that injuries to the brain, spinal cord and optic nerve have life-altering consequences.

"The optic nerve injury model is often used to investigate new treatments for stimulating CNS axon regeneration, and treatments identified this way often show promise in the injured spinal cord. It's possible that increased or activated Protrudin might be used to boost regeneration in the injured spinal cord."

Credit: 
University of Cambridge