
Noble metal clusters can enhance performance of catalysts and save resources

image: Schematic representation of a noble metal catalyst with inactive single atoms (left) and active clusters (right; noble metal: white; carrier metal: yellow; oxygen: red).

Image: 
Florian Maurer, KIT

Billions of noble metal catalysts are used worldwide for producing chemicals, generating energy, and cleaning the air. However, the noble metals required are expensive and their availability is limited. To optimize the use of these resources, catalysts based on single metal atoms have been developed. A research team at Karlsruhe Institute of Technology (KIT) has demonstrated that noble metal atoms may assemble into clusters under certain conditions. These clusters are more reactive than the single atoms and, hence, remove exhaust gases far more effectively. The results are reported in Nature Catalysis (DOI: 10.1038/s41929-020-00508-7).

Noble metal catalysts are used for a wide range of reactions. Among other applications, they are used in nearly all combustion processes to reduce pollutant emissions. Often, they consist of very small particles of the active component, such as a noble metal, applied to a carrier material. These so-called nanoparticles are composed of several thousand metal atoms. "But only the atoms on the outside are active in the reaction, while most atoms remain unused," explains Professor Jan-Dierk Grunwaldt from KIT's Institute for Chemical Technology and Polymer Chemistry (ITCP). By changing the operating conditions, the structure of such a catalyst, and hence its activity, may change. "At high temperatures in the exhaust gas system of a car, which are reached during a longer drive on a motorway, for instance, interaction between the noble metal and the carrier may lead to the formation of single atoms, i.e. isolated, separate metal atoms on the carrier," Grunwaldt says. "Such single-atom catalysts might be expected to reach a very high utilization rate of the noble metal component, because all atoms can theoretically participate in the reaction." Contrary to this expectation, however, the team of Grunwaldt, in cooperation with Professors Christof Wöll from KIT's Institute of Functional Interfaces and Felix Studt from KIT's Institute of Catalysis Research and Technology, has found that these atoms first have to form noble metal clusters under reaction conditions to become active.

The researchers specifically induced the formation of single atoms and examined their structure thoroughly during the reaction. With the help of highly specialized spectroscopy and theoretical calculations, which were used for the first time ever for this class of catalysts, the team succeeded in explaining why platinum atoms frequently have low activity. "To convert pollutants, they usually have to react with oxygen in the catalyst. For this, both components must be available at the same time and place, which cannot be achieved for isolated platinum atoms, as the oxygen for the required reaction is much too strongly bound to the carrier component - in our case cerium oxide," says Florian Maurer from ITCP, one of the main authors of the study. "After breaking the platinum-cerium oxide bonds, platinum atoms can move across the carrier surface. In the next step, these platinum atoms form small platinum clusters, on which the reaction takes place much faster than on single atoms."

Clusters Have an Optimal Structure for High Activity

The team's studies prove that neither nanoparticles nor isolated atoms reach the highest activity. "The optimum lies in between. It is reached by small noble metal clusters," Grunwaldt says. "Stabilizing these noble metal clusters might be the key to substantially reduce the consumption of noble metals when producing catalysts. For years, increasingly fine distribution of the noble metal component has been one of the main strategies in designing new catalysts. Our experiments have now revealed the limits in the atomic range." The results of the study will now be used for knowledge-based design and development of catalysts of enhanced stability and long-term activity. This will be a major focus of the work of the Karlsruhe Exhaust Gas Center of KIT, whose Scientific Director, Dr. Maria Casapu, is co-author of the study.

Credit: 
Karlsruher Institut für Technologie (KIT)

Tunable free-electron X-ray radiation from van der Waals materials

Researchers at the Technion - Israel Institute of Technology have developed precise radiation sources that may replace the expensive and cumbersome facilities currently used for such tasks. The suggested apparatus produces controlled radiation with a narrow spectrum that can be tuned with high resolution, at a relatively low energy investment. The findings are likely to lead to breakthroughs in a variety of fields, including the analysis of chemicals and biological materials, medical imaging, X-ray equipment for security screening, and other uses of accurate X-ray sources.

Published in the journal Nature Photonics, the study was led by Professor Ido Kaminer and his master's student Michael Shentcis as part of a collaboration with several research institutes at the Technion: the Andrew and Erna Viterbi Faculty of Electrical Engineering, the Solid State Institute, the Russell Berrie Nanotechnology Institute (RBNI), and the Helen Diller Center for Quantum Science, Matter and Engineering.

The researchers' paper reports an experimental observation that provides the first proof-of-concept for theoretical models developed over the last decade in a series of foundational articles. The first article on the subject also appeared in Nature Photonics. Written by Prof. Kaminer during his postdoc at MIT, under the supervision of Prof. Marin Soljacic and Prof. John Joannopoulos, that paper showed theoretically how two-dimensional materials can create X-rays. According to Prof. Kaminer, "that article marked the beginning of a journey towards radiation sources based on the unique physics of two-dimensional materials and their various combinations -- heterostructures. We have built on the theoretical breakthrough from that article to develop a series of follow-up articles, and now, we are excited to announce the first experimental observation on the creation of X-ray radiation from such materials, while precisely controlling the radiation parameters."

Two-dimensional materials are unique artificial structures that took the scientific community by storm around the year 2004 with the development of graphene by physicists Andre Geim and Konstantin Novoselov, who later won the Nobel Prize in Physics in 2010. Graphene is an artificial structure of a single atomic thickness made from carbon atoms. The first graphene structures were created by the two Nobel laureates by peeling off thin layers of graphite, the "writing material" of the pencil, using adhesive tape. The two scientists and subsequent researchers discovered that graphene has unique and surprising properties that differ from those of graphite: immense strength, almost complete transparency, electrical conductivity, and a light-transmitting capability that allows radiation emission - an aspect related to the present article. These unique features make graphene and other two-dimensional materials promising for future generations of chemical and biological sensors, solar cells, semiconductors, monitors, and more.

Another Nobel laureate who should be mentioned before returning to the present study is Johannes Diderik van der Waals, who won the Nobel Prize in Physics exactly one hundred years earlier, in 1910. The materials now named after him - vdW materials - are the focus of Prof. Kaminer's research. Graphene is also an example of a vdW material, but the new study now finds that other advanced vdW materials are more useful for the purpose of producing X-rays. The Technion researchers have produced different vdW materials and sent electron beams through them at specific angles that led to X-ray emission in a controlled and accurate manner. Furthermore, the researchers demonstrated precise tunability of the radiation spectrum at unprecedented resolution, utilizing the flexibility in designing families of vdW materials.

The new article by the research group contains experimental results and new theory that together provide a proof-of-concept for an innovative application of two-dimensional materials as a compact system that produces controlled and accurate radiation.

"The experiment and the theory we developed to explain it make a significant contribution to the study of light-matter interactions and pave the way for varied applications in X-ray imaging (medical X-ray, for example), X-ray spectroscopy used to characterize materials, and future quantum light sources in the X-ray regime," said Prof. Kaminer.

Credit: 
Technion-Israel Institute of Technology

Hubble observes spectacular supernova time-lapse

image: Pictured here is part of the captivating galaxy NGC 2525. Located nearly 70 million light-years from Earth, this galaxy lies in the constellation of Puppis in the southern sky. Together with the constellations Carina and Vela, Puppis makes up an image of the Argo, the ship of ancient Greek mythology.

On the left, a brilliant supernova is clearly visible in the image. The supernova is formally known as SN2018gv and was first spotted in mid-January 2018. The NASA/ESA Hubble Space Telescope captured the supernova in NGC 2525 as part of one of its major investigations: measuring the expansion rate of the Universe, which can help answer fundamental questions about our Universe's very nature. Supernovae like this one can be used as cosmic tape measures, allowing astronomers to calculate the distance to their galaxies.

ESA/Hubble has now published a unique time-lapse of this galaxy and its fading supernova.

Image: 
ESA/Hubble & NASA, A. Riess and the SH0ES team; acknowledgment: Mahdi Zamani

The NASA/ESA Hubble Space Telescope has tracked the fading light of a supernova in the spiral galaxy NGC 2525, located 70 million light-years away. Supernovae like this one can be used as cosmic tape measures, allowing astronomers to calculate the distance to their galaxies. Hubble captured these images as part of one of its major investigations, measuring the expansion rate of the Universe, which can help answer fundamental questions about our Universe's very nature.

The supernova, formally known as SN2018gv, was first spotted in mid-January 2018. The NASA/ESA Hubble Space Telescope began observing the brilliant supernova in February 2018 as part of the research program led by Nobel laureate Adam Riess of the Space Telescope Science Institute (STScI) and Johns Hopkins University in Baltimore, USA. The Hubble images center on the barred spiral galaxy NGC 2525, which is located in the constellation of Puppis in the Southern Hemisphere.

The supernova is captured by Hubble in exquisite detail within this galaxy in the left portion of the image. It appears as a very bright star located on the outer edge of one of its beautiful swirling spiral arms. This new and unique time-lapse of Hubble images created by the ESA/Hubble team shows the once bright supernova initially outshining the brightest stars in the galaxy, before fading into obscurity during the year of observations. This time-lapse consists of observations taken over the course of one year, from February 2018 to February 2019.

"No Earthly fireworks display can compete with this supernova, captured in its fading glory by the Hubble Space Telescope," shared Riess of this new time-lapse of the supernova explosion in NGC 2525.

Supernovae are powerful explosions that mark the end of a star's life. The type of supernova seen in these images, known as a Type Ia supernova, originates from a white dwarf in a close binary system accreting material from its companion star. If the white dwarf reaches a critical mass (1.44 times the mass of our Sun), its core becomes hot enough to ignite carbon fusion, triggering a thermonuclear runaway that fuses large amounts of oxygen and carbon in a matter of seconds. The energy released tears the star apart in a violent explosion, ejecting matter at speeds up to 6% of the speed of light and emitting huge amounts of radiation. Type Ia supernovae consistently reach a peak brightness of about 5 billion times that of our Sun before fading over time.

Because supernovae of this type all reach the same peak brightness, they are useful tools for astronomers, known as 'standard candles', which act as cosmic tape measures. By comparing the known actual brightness of the supernova with its apparent brightness in the sky, astronomers can calculate the distance to these grand spectacles and therefore to their galaxies. Riess and his team combined the distance measurements from the supernovae with distances calculated using variable stars known as Cepheid variables. Cepheid variables pulsate in size, causing periodic changes in brightness. As this period is directly related to the star's brightness, astronomers can calculate the distance to them, allowing Cepheids to act as another standard candle on the cosmic distance ladder.
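To illustrate the standard-candle arithmetic, the distance follows from the difference between the observed apparent magnitude and the known absolute magnitude via the distance modulus. The short sketch below uses illustrative numbers only (a typical Type Ia peak absolute magnitude of about -19.3 and an assumed apparent magnitude), not measured values for SN2018gv.

    # Distance from a standard candle via the distance modulus:
    # m - M = 5 * log10(d / 10 pc), so d = 10 ** ((m - M + 5) / 5) parsecs.
    def distance_parsecs(apparent_mag, absolute_mag):
        """Luminosity distance in parsecs for a standard candle."""
        return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

    # Hypothetical example: peak absolute magnitude ~ -19.3 (typical for Type Ia),
    # observed apparent magnitude 12.4 (assumed purely for illustration).
    d_pc = distance_parsecs(12.4, -19.3)
    print(f"{d_pc:.2e} pc, roughly {d_pc * 3.26e-6:.0f} million light-years")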

Riess and his team are interested in accurately measuring the distance to these galaxies, since that helps them better constrain the expansion rate of the Universe, known as the Hubble constant. This value describes how fast a galaxy recedes from us as a function of its distance, with more distant galaxies moving away from us faster. Since its launch, the NASA/ESA Hubble Space Telescope has helped dramatically improve the precision of the Hubble constant. Results from the same observing program led by Riess have now reduced the uncertainty of their measurement of the Hubble constant to an unprecedented 1.9% [1]. Further measurements of NGC 2525 will contribute to their goal of reducing the uncertainty down to 1%, pinpointing how fast the Universe is expanding. A more accurate Hubble constant may uncover clues about the invisible dark matter and the mysterious dark energy responsible for accelerating the Universe's rate of expansion. Together, this information can help us understand the history and future fate of our Universe.
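The relation the team is calibrating is Hubble's law, v = H0 × d. The tiny sketch below uses assumed round numbers (H0 of 70 km/s/Mpc and an NGC 2525-like distance) purely to show how a 1.9% uncertainty in H0 carries through to anything inferred from it.

    # Hubble's law: recession velocity v = H0 * d (illustrative values only).
    H0 = 70.0        # km/s per megaparsec, assumed round value
    d_mpc = 21.5     # roughly the distance of an NGC 2525-like galaxy in megaparsecs

    v = H0 * d_mpc   # recession velocity in km/s
    print(f"recession velocity ~ {v:.0f} km/s")
    # A 1.9% uncertainty in H0 propagates directly into the inferred velocity (or,
    # run the other way, into distances inferred from measured velocities).
    print(f"+/- {0.019 * v:.0f} km/s at 1.9% uncertainty")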

A supermassive black hole is also known to be lurking at the centre of NGC 2525. Nearly every galaxy contains a supermassive black hole, which can range in mass from hundreds of thousands to billions of times the mass of the Sun.

Credit: 
ESA/Hubble Information Centre

Gene expression altered by direction of forces acting on cell

CHAMPAIGN, Ill. -- Tissues and cells in the human body are subjected to a constant push and pull - strained by other cells, blood pressure and fluid flow, to name a few. The type and direction of the force on a cell alter gene expression by stretching different regions of DNA, researchers at the University of Illinois Urbana-Champaign and collaborators in China found in a new study.

The findings could provide insights into physiology and diseases such as fibrosis, cardiovascular disease and malignant cancer, the researchers said.

"Force is everywhere in the human body, and both external and internal forces can influence your body far more than you may have thought," said study leader Ning Wang, a professor of mechanical science and engineering at Illinois. "These strains profoundly influence cellular behaviors and physiological functions, which are initiated at the level of gene expression."

The effects of physical forces and signals on cells, tissues and organs have been less studied than those of chemical signals and responses, yet physical forces play an important role in how cells function and respond to their environment, Wang said.

Most studies seeking to understand the mechanics of cells apply force using a microscope cantilever probe to tap a cell's surface or a focused laser beam to move a tiny particle across the surface. However, these techniques can only apply force along a single direction. This incomplete picture leaves fundamental questions unanswered, Wang said - for example, how the response to shear stress from blood flow differs from the response to stretching from blood pressure.

Wang and his collaborators developed a method that allows them to move a magnetic bead in any direction, giving them a picture of the ways forces act on a cell in 3D. They call it three-dimensional magnetic twisting cytometry.

They found that the force from the magnetic bead caused a rapid increase in expression for certain genes, but the amount of the increase depended on the direction the bead moved. When the bead rolled along the long axis of the cell, the increase was the lowest, but when the force was applied perpendicularly - across the short axis of the cell - gene activity increased the most. When the bead was moved at a 45-degree angle or rotated in the same plane as the cell to induce shear stress, the response was intermediate.

"These observations show that gene upregulation and activation are very sensitive to the mode of the applied force, when the magnitude of the force remains unchanged," Wang said.

In further experiments, the researchers found that the reason for the difference lies in the way the forces are relayed to the cell's nucleus, where DNA is housed. Cells have a network of support structures called the cytoskeleton, whose main force-bearing elements are long fibers of the protein actin. When the fibers bend under a force, they relay that force to the nucleus and stretch the chromosomes.

These actin fibers run lengthwise along the cell. So when the force strains them widthwise, they deform more, stretching the chromosomes more and causing greater gene activity, the researchers found. They published their findings in the journal Nature Communications.

"A stress fiber is like a tense violin string. When a stress is applied across the short axis of the cell, it's just like when a person plucks a violin string vertically from the string's direction to produce a louder, more forceful sound," Wang said.

The researchers' next step will be to create disease models to see how different forces might help explain the mechanism of certain diseases, and to identify possible therapeutic targets or applications.

"In certain diseases, such as aortic valve calcification, arterial atherosclerosis, liver fibrosis or malignant tumors, these cellular responses and adaptation go awry, causing the tissues and organs to function abnormally," Wang said. "This is the first time that the mechanism of living cells' different biological responses to the direction of forces at the level of genes has been revealed, so perhaps with our three-dimensional approach we can understand these diseases better."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Our health: New focus on the synergy effect of nanoparticles

Nanoparticles are used in a wide range of products and manufacturing processes because the properties of a material can change dramatically when it is reduced to nano-form.

They can be used, for example, to purify wastewater and to transport medicine around the body. They are also added to, for example, socks, pillows, mattresses, phone covers and refrigerators to supply the items with an antibacterial surface.

Much research has been done on how nanoparticles affect humans and the environment, and a number of studies have shown that nanoparticles can disrupt or damage our cells.

This is confirmed by a new study that also looked at how cells react when exposed to more than one kind of nanoparticle at the same time.

The lead author of the study is Barbara Korzeniowska from the Department of Biochemistry and Molecular Biology at SDU. The head of research is Professor Frank Kjeldsen from the same department.

His research into metal nanoparticles is supported by a European Research Grant of DKK 14 million.

"Throughout a lifetime, we are exposed to many different kinds of nano-particles, and we should investigate how the combination of different nano-particles affects us and also whether an accumulation through life can harm us," says Barbara Korzeniowska.

She became interested in the subject when her young daughter was given a rubber duck to play with in the bathtub one day.

"It turned out that it had been treated with nano-silver, probably to keep it free of bacteria. But small children put their toys in their mouths, and she could thus ingest nano-silver. That is highly worrying when research shows that nano-silver can damage human cells," she says.

In her new study, she looked at nano-silver and nano-platinum. She has investigated their individual effect and whether exposure of both types of nanoparticles results in a synergy effect in two types of brain cells.

"There are almost no studies of the synergy effects of nanoparticles, so it is important to get started on these studies," she says.

She chose nano-silver because it is already known to damage cells, and nano-platinum because it is considered bio-inert, i.e. it has minimal interaction with human tissue.

The nanoparticles were tested on two types of brain cells: astrocytes and endothelial cells. Astrocytes are support cells in the central nervous system which, among other things, help supply the nervous system with nutrients and repair damage to the brain. Endothelial cells line the inside of the blood vessels and transport substances from the bloodstream to the brain.

When the endothelial cells were exposed to nano-platinum, nothing happened. When exposed to nano-silver, their ability to divide deteriorated. When exposed to both nano-silver and nano-platinum, the effect was amplified, and they died in large numbers. Furthermore, their defense mechanisms decreased, and they had difficulty communicating with each other.

"So even though nano-platinum alone does no harm, something drastic happens when it is combined with a different kind of nanoparticle," says Frank Kjeldsen.

The astrocytes were hardier and reacted "only" with an impaired ability to divide when exposed to both types of nanoparticles.

An earlier study, conducted by Frank Kjeldsen, has shown a dramatic synergy effect of silver nanoparticles and cadmium ions, which are found naturally all around us on Earth.

In that study, which used intestinal cells, 72% of the cells died when they were exposed to both nano-silver and cadmium ions. When exposed to nano-silver only, 25% died; when exposed to cadmium ions only, 12% died.

Read more about it here: https://www.sdu.dk/da/om_sdu/fakulteterne/naturvidenskab/nyheder2018/2018_08_16_nanocadmium

We are involuntarily exposed

"Little is known about how large the concentrations of nanoparticles used in industrial products are. We also do not know what particle sizes they use - size also affects whether the particles can enter a cell," says Barbara Korzeniowska, and continues:

"But we know that a lot of people are involuntarily exposed to nanoparticles, and that there can be lifelong exposure."

The new study is available here: https://onlinelibrary.wiley.com/doi/abs/10.1002/ppsc.202000135

There are virtually no restrictions on adding nanoparticles to products. In the EU, however, manufacturers must have an approval if they want to use nanoparticles in products with antibacterial properties. In Denmark, they must also declare nano-content in such products on the label.

Credit: 
University of Southern Denmark

NASA finds Hurricane Marie rapidly intensifying

image: On Oct. 1 at 4:10 a.m. EDT (0910 UTC), NASA's Aqua satellite analyzed Hurricane Marie's cloud top temperatures and found the strongest storms (yellow) were around Marie's center of circulation. Temperatures in those areas were as cold as minus 80 degrees Fahrenheit (minus 62.2 Celsius). Strong storms (red) with cloud top temperatures as cold as minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius) surrounded the center.

Image: 
NASA/NRL

NASA infrared imagery revealed that Hurricane Marie is rapidly growing stronger. The imagery showed powerful thunderstorms circling the eye of the hurricane as it moved through the Eastern Pacific Ocean.

NOAA's National Hurricane Center (NHC) expects Marie to become a major hurricane late on Oct. 1.  

Infrared Imagery Reveals a More Powerful Marie

One of the ways NASA researches tropical cyclones is using infrared data that provides temperature information. Cloud top temperatures identify where the strongest storms are located. The stronger the storms, the higher they extend into the troposphere, and the colder the cloud top temperatures.

On Oct. 1 at 4:10 a.m. EDT (0910 UTC), NASA's Aqua satellite analyzed the storm using the Moderate Resolution Imaging Spectroradiometer, or MODIS, instrument. MODIS measured Hurricane Marie's cloud top temperatures and found the strongest storms were around Marie's center of circulation. Temperatures in those areas were as cold as minus 80 degrees Fahrenheit (minus 62.2 Celsius). Strong storms with cloud top temperatures as cold as minus 70 degrees Fahrenheit (minus 56.6 degrees Celsius) surrounded the center.

NASA research has shown that cloud top temperatures that cold indicate strong storms that have the capability to create heavy rain.

At 5 a.m. EDT on Oct 1, NHC Hurricane Specialist Andrew Latto noted, "Recent microwave data and satellite images indicate that Marie has become much better organized over the past several hours, with a nearly completely closed eye noted in a (12:51 a.m. EDT) 0451Z AMSU composite microwave overpass."

NASA then provides data to tropical cyclone meteorologists so they can incorporate it in their forecasts.

Marie's Status on Oct. 1

At 5 a.m. EDT (0900 UTC), the center of Hurricane Marie was located near latitude 14.8 degrees north and longitude 118.1 degrees west, about 775 miles (1,245 km) southwest of the southern tip of Baja California, Mexico. Marie is moving toward the west near 17 mph (28 kph), and this general motion is expected to continue through tonight, followed by a gradual turn toward the west-northwest with decreasing forward speed.

Maximum sustained winds are near 90 mph (150 kph) with higher gusts. Hurricane-force winds extend outward up to 15 miles (30 km) from the center and tropical-storm-force winds extend outward up to 70 miles (110 km). The estimated minimum central pressure is 983 millibars.
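The advisory mixes imperial and metric figures; the small helper below simply reproduces the standard conversions (it is not part of any NHC or NASA tooling, and the advisory values themselves are rounded).

    # Unit conversions of the kind used in the advisory above.
    def f_to_c(deg_f):
        return (deg_f - 32.0) * 5.0 / 9.0

    def mph_to_kph(mph):
        return mph * 1.609344

    print(f"-80 F  -> {f_to_c(-80):.1f} C")        # cloud-top temperature, about -62.2 C
    print(f"90 mph -> {mph_to_kph(90):.0f} kph")   # max sustained winds, ~145 kph before rounding
    print(f"775 mi -> {775 * 1.609344:.0f} km")    # distance from Baja California, ~1,247 km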

Marie's Forecast

Rapid strengthening is forecast by the National Hurricane Center. Marie is expected to become a major hurricane by tonight with some additional strengthening possible through Friday. Marie is then forecast to begin weakening this weekend.

Credit: 
NASA/Goddard Space Flight Center

Sensor with 100,000 times higher sensitivity could bolster thermal imaging

image: Army-funded researchers developed a graphene bolometer sensor with 100,000 times higher sensitivity than currently available commercial sensors. Detecting microwave radiation is key to thermal imaging, electronic warfare, radio communications and radar, but detection sensitivity limits the performance of these systems.

Image: 
Raytheon BBN Technologies

RESEARCH TRIANGLE PARK, N.C. -- Army-funded research developed a new microwave radiation sensor with 100,000 times higher sensitivity than currently available commercial sensors. Researchers said better detection of microwave radiation will enable improved thermal imaging, electronic warfare, radio communications and radar.

Researchers published their study in the peer-reviewed journal Nature. The team includes scientists from Harvard University, The Institute of Photonic Sciences, Massachusetts Institute of Technology, Pohang University of Science and Technology, and Raytheon BBN Technologies. The Army, in part, funded the work to fabricate this bolometer by exploiting the giant thermal response of graphene to microwave radiation.

"The microwave bolometer developed under this project is so sensitive that it is capable of detecting a single microwave photon, which is the smallest amount of energy in nature," said Dr. Joe Qiu, program manager for solid-state electronics and electromagnetics, Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "This technology will potentially enable new capabilities for applications such as quantum sensing and radar, and ensure the U.S. Army maintains spectral dominance in the foreseeable future."

The graphene bolometer sensor detects electromagnetic radiation by measuring the temperature rise as photons are absorbed into the sensor. Graphene is a two-dimensional material just one atom thick. The researchers achieved high bolometer sensitivity by incorporating graphene into the microwave antenna.
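For a sense of scale, the energy deposited by a single microwave photon is E = hν. The figures below are generic (a representative 10 GHz photon chosen for illustration), not values taken from the paper.

    # Energy of a single microwave photon, E = h * f.
    PLANCK_H = 6.62607015e-34      # Planck constant in joule-seconds

    def photon_energy(freq_hz):
        return PLANCK_H * freq_hz  # energy in joules

    # Representative microwave frequency, for illustration only.
    print(f"single 10 GHz photon: {photon_energy(10e9):.2e} J")   # about 6.6e-24 J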

A key innovation in this advance is measuring the temperature rise with a superconducting Josephson junction while maintaining high coupling of the microwave radiation into the graphene through an antenna, researchers said. The coupling efficiency is essential for high-sensitivity detection because "every precious photon counts."

A Josephson junction is a quantum mechanical device made of two superconducting electrodes separated by a barrier (a thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.).

In addition to being thin, graphene has a very special band structure in which the valence and conduction bands meet at only one point, known as the Dirac point.

"The density of states vanishes there so that when the electrons receive the photon energy, the temperature rise is high while the heat leakage is small," said Dr. Kin Chung Fong, Raytheon BBN Technologies.

With the increased sensitivity of bolometer detectors, this research has found a new pathway to improve the performance of systems that detect electromagnetic signals, such as radar, night vision, LIDAR (Light Detection and Ranging) and communications. It could also enable new applications in quantum information science and thermal imaging, as well as the search for dark matter.

The part of the research conducted at MIT included work from the Institute for Soldier Nanotechnologies. The U.S. Army established the institute in 2002 as an interdisciplinary research center to dramatically improve protection, survivability and mission capabilities of the Soldier and of Soldier-supporting platforms and systems.

Credit: 
U.S. Army Research Laboratory

Pain relief caused by SARS-CoV-2 infection may help explain COVID-19 spread

video: An animated look at the science behind how SARS-CoV-2 infection induces analgesia.

Image: 
University of Arizona Health Sciences/Debra Bowles

SARS-CoV-2, the virus that causes COVID-19, can relieve pain, according to a new study by University of Arizona Health Sciences researchers.

The finding may explain why nearly half of people who get COVID-19 experience few or no symptoms, even though they are able to spread the disease, according to the study's corresponding author Rajesh Khanna, PhD, a professor in the College of Medicine - Tucson's Department of Pharmacology.

"It made a lot of sense to me that perhaps the reason for the unrelenting spread of COVID-19 is that in the early stages, you're walking around all fine as if nothing is wrong because your pain has been suppressed," said Dr. Khanna. "You have the virus, but you don't feel bad because you pain is gone. If we can prove that this pain relief is what is causing COVID-19 to spread further, that's of enormous value."

The paper, "SARS-CoV-2 Spike protein co-opts VEGF-A/Neuropilin-1 receptor signaling
to induce analgesia," will be published in PAIN, the journal of the International Association for the Study of Pain.

The U.S. Centers for Disease Control and Prevention released updated data Sept. 10 estimating 50% of COVID-19 transmission occurs prior to the onset of symptoms and 40% of COVID-19 infections are asymptomatic.

"This research raises the possibility that pain, as an early symptom of COVID-19, may be reduced by the SARS-CoV-2 spike protein as it silences the body's pain signaling pathways," said UArizona Health Sciences Senior Vice President Michael D. Dake, MD. "University of Arizona Health Sciences researchers at the Comprehensive Pain and Addiction Center are leveraging this unique finding to explore a novel class of therapeutics for pain as we continue to seek new ways to address the opioid epidemic."

Viruses infect host cells through protein receptors on cell membranes. Early in the pandemic, scientists established that the SARS-CoV-2 spike protein uses the angiotensin-converting enzyme 2 (ACE2) receptor to enter the body. But in June, two papers posted on the preprint server bioRxiv pointed to neuropilin-1 as a second receptor for SARS-CoV-2.

"That caught our eye because for the last 15 years my lab has been studying a complex of proteins and pathways that relate to pain processing that are downstream of neuropilin," said Dr. Khanna, who is affiliated with the UArizona Health Sciences Comprehensive Pain and Addiction Center and is a member of the UArizona BIO5 Institute. "So we stepped back and realized this could mean that maybe the spike protein is involved in some sort of pain processing."

Many biological pathways signal the body to feel pain. One is through a protein named vascular endothelial growth factor-A (VEGF-A), which plays an essential role in blood vessel growth but also has been linked to diseases such as cancer, rheumatoid arthritis and, most recently, COVID-19.

Like a key in a lock, when VEGF-A binds to the receptor neuropilin, it initiates a cascade of events resulting in the hyperexcitability of neurons, which leads to pain. Dr. Khanna and his research team found that the SARS-CoV-2 spike protein binds to neuropilin in exactly the same location as VEGF-A.

With that knowledge, they performed a series of experiments in the laboratory and in rodent models to test their hypothesis that the SARS-CoV-2 spike protein acts on the VEGF-A/neuropilin pain pathway. They used VEGF-A as a trigger to induce neuron excitability, which creates pain, then added the SARS-CoV-2 spike protein.

"Spike completely reversed the VEGF-induced pain signaling," Dr. Khanna said. "It didn't matter if we used very high doses of spike or extremely low doses - it reversed the pain completely."

Dr. Khanna is teaming up with UArizona Health Sciences immunologists and virologists to continue research into the role of neuropilin in the spread of COVID-19.

In his lab, he will be examining neuropilin as a new target for non-opioid pain relief. During the study, Dr. Khanna tested existing small molecule neuropilin inhibitors developed to suppress tumor growth in certain cancers and found they provided the same pain relief as the SARS-CoV-2 spike protein when binding to neuropilin.

"We are moving forward with designing small molecules against neuropilin, particularly natural compounds, that could be important for pain relief," Dr. Khanna said. "We have a pandemic, and we have an opioid epidemic. They're colliding. Our findings have massive implications for both. SARS-CoV-2 is teaching us about viral spread, but COVID-19 has us also looking at neuropilin as a new non-opioid method to fight the opioid epidemic."

Credit: 
University of Arizona Health Sciences

Researchers hear more crickets and katydids 'singing in the suburbs'

image: Collection methods used to monitor populations of katydids, such as this rattler round-winged katydid, can be challenging because many katydid species live high in trees, according to D.J. McNeil, the study's lead author.

Image: 
D.J. McNeil, Penn State

UNIVERSITY PARK, Pa. -- The songs that crickets and katydids sing at night to attract mates can help in monitoring and mapping their populations, according to Penn State researchers, whose study of Orthoptera species in central Pennsylvania also shed light on these insects' habitat preferences.

"We were surprised to find more species in suburban areas than in either urban or rural areas," said the study's lead researcher, D.J. McNeil, postdoctoral fellow in Penn State's Insect Biodiversity Center and the Department of Entomology.

The study was the first to show that the use of aural point count surveys -- a method commonly used by wildlife biologists to study birds and other vertebrates by listening to their songs -- can be effective in exploring the population dynamics of night-singing insect species, the researchers said.

"Insect populations are showing declines globally, and several studies have indicated that Orthopterans, such as grasshoppers, crickets and katydids, are among the most threatened insect groups," said study co-author Christina Grozinger, Publius Vergilius Maro Professor of Entomology, Penn State College of Agricultural Sciences. "Having a nondestructive way to monitor and map these species is vital for understanding how to conserve and expand their populations."

McNeil explained that Orthoptera species -- such as those in suborder Ensifera, which consists of crickets and katydids -- are known to be highly sensitive to variation in habitat conditions. Since they feed on plants, these species also can be affected negatively when insecticides and herbicides are applied to vegetation.

McNeil noted that few efficient, standardized monitoring protocols exist for Ensifera, and many involve lethal trapping or time-intensive collection efforts such as mark-recapture. In addition, other collection methods such as sweep-netting are challenging in densely vegetated habitats, especially for katydid species, many of which live high in trees. The researchers pointed out that the conspicuous stridulations, or mating calls, produced by singing Ensifera make them excellent candidates for aural population surveys.

"Although researchers have used acoustic sampling methods for crickets and katydids in the past, these methods often require specialized audio gear and complex machine-learning algorithms to disentangle the insects' calls from the background noise," McNeil said. "This is very expensive, and it requires a very high-tech skill set. Developing a simple and efficient monitoring protocol can greatly improve our ability to study and understand Ensifera population ecology."

To address this need, McNeil drew on his background in ornithology -- his doctoral research focused on birds -- to develop a protocol that required only a human being to conduct aural point count surveys.

"You can identify birds by their calls really easily, and I came to realize that this was true for crickets and katydids," he said. "For example, one cricket species makes a particular type of chirp, and another one has a different pattern. So, over the course of a few years, I've taught myself the different breeding calls of the crickets and katydids, and I've reached the point where I can confidently identify a large portion of the species that we have in this region."

The researchers defined the study area by selecting a central point in downtown State College, Pennsylvania, and plotting four transects extending 10 kilometers east, west, north and south. Along each transect, they selected 10 points, about 1 km apart, as survey locations. The resulting 41 roadside sampling points encompassed deciduous forest, row-crop agricultural fields, pastures, and varying degrees of urban and suburban cover types.

"That allowed us not only to get different habitat types, but to capture a smooth gradient across the entire spectrum of what a cricket might experience," McNeil said.

McNeil conducted all of the surveys by standing stationary for three minutes at each location and using a checklist to record the number of Ensifera species detected. Because the study focused on crickets and katydids that sing mostly after dark, sampling was performed between sunset and midnight, and each location was sampled five times from July to November in 2019, a time of year that includes the seasonal singing periods for most local Ensifera. He then used occupancy modeling approaches to map the species distributions across the urban-to-rural gradient.
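Occupancy models of this kind estimate two quantities from repeated visits: the probability that a survey point is occupied (psi) and the probability of detecting the species on a visit when it is present (p). The sketch below is a generic single-season, constant-detection occupancy likelihood fit by maximum likelihood; the detection histories are invented, and this is not the authors' actual analysis code.

    # Minimal single-season occupancy model fit by maximum likelihood.
    # Each row is one survey point; each entry is 1 if the species was heard
    # on that visit, 0 otherwise (made-up data for illustration).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit   # inverse-logit

    detections = np.array([
        [1, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0],
    ])

    def neg_log_likelihood(params, y):
        psi, p = expit(params)               # occupancy and per-visit detection probability
        k = y.shape[1]                       # visits per site
        d = y.sum(axis=1)                    # detections per site
        seen = psi * p**d * (1 - p)**(k - d)             # sites detected at least once
        never = psi * (1 - p)**k + (1 - psi)             # sites never detected
        return -np.sum(np.log(np.where(d > 0, seen, never)))

    fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(detections,))
    psi_hat, p_hat = expit(fit.x)
    print(f"estimated occupancy ~ {psi_hat:.2f}, detection probability per visit ~ {p_hat:.2f}")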

The findings, reported Sept. 29 in the Journal of Insect Conservation, provided the first quantitative glimpse into the habitat needs for a variety of night-singing Orthoptera in eastern North America, the researchers said. For example, some species preferred agriculturally dominated landscapes, some preferred urban habitats, and others were found across all areas surveyed.

"We found that intermediate levels of urbanization, such as what you'd find in suburban areas, hosted the highest number of species, perhaps because areas with intermediate levels of disturbance host the greatest number of habitat niches and can support more species than heavily disturbed or totally undisturbed ecosystems," McNeil said.

"We hope that this study inspires people to listen carefully to the diverse insect songs in their backyards at night and think about ways to improve the habitat for these important species," Grozinger said.

Credit: 
Penn State

Tool helps clear biases from computer vision

image: In one data set, REVISE uncovered a potential gender bias in images containing people (red boxes) and the musical instrument organ (blue boxes). Analyzing the distribution of inferred 3-D distances between the person and the organ showed that males tended to be featured as actually playing the instrument, whereas females were often merely in the same space as the instrument.

Image: 
Princeton Visual AI Lab

Researchers at Princeton University have developed a tool that flags potential biases in sets of images used to train artificial intelligence (AI) systems. The work is part of a larger effort to remedy and prevent the biases that have crept into AI systems that influence everything from credit services to courtroom sentencing programs.

Although the sources of bias in AI systems are varied, one major cause is stereotypical images contained in large sets of images collected from online sources that engineers use to develop computer vision, a branch of AI that allows computers to recognize people, objects and actions. Because the foundation of computer vision is built on these data sets, images that reflect societal stereotypes and biases can unintentionally influence computer vision models.

To help stem this problem at its source, researchers in the Princeton Visual AI Lab have developed an open-source tool that automatically uncovers potential biases in visual data sets. The tool allows data set creators and users to correct issues of underrepresentation or stereotypical portrayals before image collections are used to train computer vision models. In related work, members of the Visual AI Lab published a comparison of existing methods for preventing biases in computer vision models themselves, and proposed a new, more effective approach to bias mitigation.

The first tool, called REVISE (REvealing VIsual biaSEs), uses statistical methods to inspect a data set for potential biases or issues of underrepresentation along three dimensions: object-based, gender-based and geography-based. A fully automated tool, REVISE builds on earlier work that involved filtering and balancing a data set's images in a way that required more direction from the user. The study was presented Aug. 24 at the virtual European Conference on Computer Vision.

REVISE takes stock of a data set's content using existing image annotations and measurements such as object counts, the co-occurrence of objects and people, and images' countries of origin. From these measurements, the tool exposes patterns that differ from the median distributions in the data set.
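The kind of tally the tool automates can be pictured with a toy example: count how often each object co-occurs with each annotated gender label and compare the rates. This is only a simplified illustration of the idea, with invented annotations, not REVISE's actual code or output.

    # Toy co-occurrence check over image annotations (invented data).
    from collections import Counter

    annotations = [
        {"objects": {"flower", "table"}, "gender": "female"},
        {"objects": {"flower", "podium"}, "gender": "male"},
        {"objects": {"organ"}, "gender": "male"},
        {"objects": {"organ", "bench"}, "gender": "female"},
        {"objects": {"flower"}, "gender": "female"},
    ]

    pair_counts = Counter()
    gender_counts = Counter()
    for record in annotations:
        gender_counts[record["gender"]] += 1
        for obj in record["objects"]:
            pair_counts[(obj, record["gender"])] += 1

    # Report, for each object, the fraction of images of each gender label it appears in;
    # large gaps between the rates are candidates for closer inspection.
    for obj in sorted({obj for obj, _ in pair_counts}):
        rates = {g: pair_counts[(obj, g)] / gender_counts[g] for g in gender_counts}
        print(obj, rates)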

For example, in one of the tested data sets, REVISE showed that images including both people and flowers differed between males and females: Males more often appeared with flowers in ceremonies or meetings, while females tended to appear in staged settings or paintings. (The analysis was limited to annotations reflecting the perceived binary gender of people appearing in images.)

Once the tool reveals these sorts of discrepancies, "then there's the question of whether this is a totally innocuous fact, or if something deeper is happening, and that's very hard to automate," said Olga Russakovsky, an assistant professor of computer science and principal investigator of the Visual AI Lab. Russakovsky co-authored the paper with graduate student Angelina Wang and Arvind Narayanan, an associate professor of computer science.

For example, REVISE revealed that objects including airplanes, beds and pizzas were more likely to be large in the images including them than a typical object in one of the data sets. Such an issue might not perpetuate societal stereotypes, but could be problematic for training computer vision models. As a remedy, the researchers suggest collecting images of airplanes that also include the labels mountain, desert or sky.

The underrepresentation of regions of the globe in computer vision data sets, however, is likely to lead to biases in AI algorithms. Consistent with previous analyses, the researchers found that for images' countries of origin (normalized by population), the United States and European countries were vastly overrepresented in data sets. Beyond this, REVISE showed that for images from other parts of the world, image captions were often not in the local language, suggesting that many of them were captured by tourists and potentially leading to a skewed view of a country.

Researchers who focus on object detection may overlook issues of fairness in computer vision, said Russakovsky. "However, this geography analysis shows that object recognition can still be quite biased and exclusionary, and can affect different regions and people unequally," she said.

"Data set collection practices in computer science haven't been scrutinized that thoroughly until recently," said co-author Angelina Wang, a graduate student in computer science. She said images are mostly "scraped from the internet, and people don't always realize that their images are being used [in data sets]. We should collect images from more diverse groups of people, but when we do, we should be careful that we're getting the images in a way that is respectful."

"Tools and benchmarks are an important step ... they allow us to capture these biases earlier in the pipeline and rethink our problem setup and assumptions as well as data collection practices," said Vicente Ordonez-Roman, an assistant professor of computer science at the University of Virginia who was not involved in the studies. "In computer vision there are some specific challenges regarding representation and the propagation of stereotypes. Works such as those by the Princeton Visual AI Lab help elucidate and bring to the attention of the computer vision community some of these issues and offer strategies to mitigate them."

A related study from the Visual AI Lab examined approaches to prevent computer vision models from learning spurious correlations that may reflect biases, such as overpredicting activities like cooking in images of women, or computer programming in images of men. Visual cues such as the fact that zebras are black and white, or basketball players often wear jerseys, contribute to the accuracy of the models, so developing effective models while avoiding problematic correlations is a significant challenge in the field.

In research presented in June at the virtual Conference on Computer Vision and Pattern Recognition, electrical engineering graduate student Zeyu Wang and colleagues compared four different techniques for mitigating biases in computer vision models.

They found that a popular technique known as adversarial training, or "fairness through blindness," harmed the overall performance of image recognition models. In adversarial training, the model cannot consider information about the protected variable -- in the study, the researchers used gender as a test case. A different approach, known as domain-independent training, or "fairness through awareness," performed much better in the team's analysis.

"Essentially, this says we're going to have different frequencies of activities for different genders, and yes, this prediction is going to be gender-dependent, so we're just going to embrace that," said Russakovsky.

The technique outlined in the paper mitigates potential biases by considering the protected attribute separately from other visual cues.
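As a rough sketch of what domain-independent training can look like in practice (assuming a PyTorch-style setup; this is not the authors' released code, and the tiny backbone below is a stand-in rather than a real vision model), the classifier keeps one output head per protected group during training and combines the heads at inference, so the attribute is modeled explicitly rather than hidden from the network.

    import torch
    import torch.nn as nn

    class DomainIndependentClassifier(nn.Module):
        """One classifier head per protected group over a shared feature extractor."""
        def __init__(self, backbone, feat_dim, num_classes, num_groups):
            super().__init__()
            self.backbone = backbone
            self.heads = nn.ModuleList(
                [nn.Linear(feat_dim, num_classes) for _ in range(num_groups)]
            )

        def forward(self, x, group=None):
            feats = self.backbone(x)
            logits = torch.stack([head(feats) for head in self.heads], dim=1)  # (B, G, C)
            if group is not None:
                # Training: use the head that matches each example's group label.
                return logits[torch.arange(x.size(0)), group]                  # (B, C)
            # Inference: combine heads so no protected attribute is needed.
            return logits.sum(dim=1)                                           # (B, C)

    # Illustrative usage with a toy backbone and random inputs.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
    model = DomainIndependentClassifier(backbone, feat_dim=64, num_classes=10, num_groups=2)
    images = torch.randn(4, 3, 32, 32)
    groups = torch.tensor([0, 1, 1, 0])
    train_logits = model(images, group=groups)   # paired with an ordinary cross-entropy loss
    test_logits = model(images)                  # protected attribute not required at test time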

"How we really address the bias issue is a deeper problem, because of course we can see it's in the data itself," said Zeyu Wang. "But in in the real world, humans can still make good judgments while being aware of our biases" -- and computer vision models can be set up to work in a similar way, he said.

Credit: 
Princeton University, Engineering School

Bright light bars big-eyed birds from human-altered landscapes

image: In a study of 240 bird species, Florida Museum of Natural History researchers found strong links between eye size, light and habitat use. The findings suggest eye size could be an important predictor for how sensitive certain bird species may be to habitat disturbance.

Image: 
Ian Ausprey/Florida Museum

GAINESVILLE, Fla. -- New research shows the glaring light in human-altered landscapes, such as livestock pastures and crop fields, can act as a barrier to big-eyed birds, potentially contributing to their decline.

Florida Museum of Natural History researchers found strong links between bird eye size, habitat and foraging technique. Birds that kept to the shade of the forest had larger eyes than those that inhabited the canopy, and birds with relatively small eyes were more numerous in agricultural settings.

The findings suggest eye size is an overlooked, but important trait in determining birds' vulnerability to changes in their habitat and could help inform future research on their sensitivity to other bright environments, such as cities.

"Many bird species literally disappear from highly disturbed, anthropogenic habitats such as agricultural landscapes," said lead author Ian Ausprey, a Ph.D. student in the Florida Museum's Ordway Lab of Ecosystem Conservation and a National Geographic Explorer. "That's probably due to many reasons, but this paper suggests light could be part of that."

Despite numerous studies on how light influences the makeup of plant communities, little research has focused on how it drives the ecology of vertebrates. Ausprey said while some of the study's results may seem like "a no-brainer," it is the first to document the relationships between light, eye size and how birds navigate their world.

Light is especially key for birds, which use their vision to detect food. Big eyes house more photoreceptors and are a common feature in birds of prey such as owls and raptors, enabling them to resolve images at longer distances and in darker settings.

But large eyes can also be susceptible to overexposure and glare in bright environments. Previous research has shown too much light can overwhelm birds, causing them to alter their feeding behavior and diminish their alertness to threats.

For four years, Ausprey and fellow University of Florida Ph.D. student Felicity Newell, a study co-author, surveyed birds in the cloud forests of northern Peru, part of the tropical Andes, a global biodiversity hotspot. In these forests, light is structured on a vertical gradient, powerful at the canopy and increasingly weaker as it filters down to the darkest parts of the understory. Gaps in the canopy open up patches of startling brightness, changing light intensity "over infinitesimally small scales," Ausprey said. "You can go from being very dark to very bright within inches."

The swift, dramatic changes in the landscape are mirrored in its variety of birds: A difference of 1,000 feet in elevation can uncover a completely distinct avian community.

The region is also home to small-scale farms with livestock pastures and vegetable fields, often interspersed with islands of remaining forest. The broad range of ambient light, from the deep, dark forest interior to wide open country, made an ideal model system for measuring birds' use of light, Newell said.

Ausprey and Newell measured eye size relative to body size in 240 species that make up the cloud forest bird community of Amazonas, their study region.
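One common way to express eye size "relative to body size" is to regress log eye size on log body size across species and treat the residuals as relative eye size. The sketch below illustrates that general approach with invented numbers; it is not necessarily the exact procedure used in the study.

    # Relative eye size as residuals from a log-log regression across species
    # (invented values; the real study covered 240 species).
    import numpy as np

    body_mass_g = np.array([12.0, 25.0, 48.0, 110.0, 300.0])
    eye_diameter_mm = np.array([5.1, 6.0, 7.4, 9.0, 12.5])

    slope, intercept = np.polyfit(np.log(body_mass_g), np.log(eye_diameter_mm), 1)
    expected = slope * np.log(body_mass_g) + intercept
    relative_eye_size = np.log(eye_diameter_mm) - expected   # positive = larger-eyed than expected

    for mass, res in zip(body_mass_g, relative_eye_size):
        print(f"{mass:6.1f} g body mass -> relative eye size {res:+.3f}")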

They found the largest-eyed insect-eating birds were "far-sighted" species, those that nab prey on the wing, such as flycatchers. Eye size in "near-sighted" species that hunt in the dimly lit understory increased the closer to the ground they lived. One such big-eyed species is the rufous-vented tapaculo, Scytalopus femoralis, a bird only found in Peru. Ausprey said the species behaves much like a mouse, scuttling across the forest floor in search of insects in mossy logs and under tree roots.

For bird groups that eat fruit, seeds and nectar - food items that don't require capture - eye size did not vary based on which part of the forest they inhabited.

The researchers also attached tiny light-sensing backpacks to 71 birds representing 15 focal species. The sensors tracked the intensity of light the birds encountered over a period of days, providing a first look at their light "micro-environments."

Of these 15 species, the bird that inhabited the darkest environment was the rusty-tinged antpitta, Grallaria przewalskii, another species exclusive to Peru, which spends much of its life walking along the forest floor. The blue-capped tanager, Thraupis cyanocephala, lived in the brightest environments.

The researchers also found that eye size was correlated with the abundance of a species in agricultural settings, with smaller-eyed birds being more common, suggesting that birds better adapted to the dark forest understory would struggle to adjust to the flood of light in a field, Ausprey said.

Preliminary results from subsequent research suggest these patterns hold at a global scale. The trend might also carry over into urban areas, which "are basically extreme forms of agricultural landscapes in some ways," he said.

In fact, the rufous-collared sparrow, Zonotrichia capensis, the bird most commonly found in agricultural fields, is also the most abundant species in Latin American cities, Newell said.

The study is the first to emerge from Ausprey and Newell's project, which examined how climate and land use influence cloud forest birds.

"This study makes excellent use of emerging technologies to answer one of the major questions in ecology - how do light levels affect the niches of birds and their vulnerability to habitat modification," said Scott Robinson, Ordway Eminent Scholar at the Florida Museum.

But the technology required a bit of MacGyvering: The light sensors don't directly transmit data, meaning Ausprey had to figure out a way to get them back. The solution was to superglue a radio tag to the delicate sensor and use a surgical adhesive to attach the packet to a bird's back, sticking long enough to get meaningful information, but detaching after a few days. Ausprey would then clamber over steep ridges and through thick shrubs and bamboo, antenna in hand, to retrieve it.

They also had to select the bird species that would cooperate: Large tanagers, toucans and woodcreepers were excluded due to their strong bills and proclivity for aggressive behavior. Even so, three of the expensive, imported sensors wound up chewed and destroyed.

"When you work with technology in the field, you have to have a strong stomach for tragedy," Ausprey said.

Ausprey and Newell expressed thanks to the large team - about 100 people - of field assistants, hosts, nature reserve staff and community members that contributed to the project.

Ausprey is also a fellow with the UF Biodiversity Institute.

Credit: 
Florida Museum of Natural History

Why do people respond differently to the same drug?

Scientists at Scripps Research have comprehensively mapped how a key class of proteins within cells regulates signals coming in from cell surface receptors.

The study reveals, among other things, that people commonly have variants in these proteins that cause their cells to respond differently when the same cell receptor is stimulated--offering a plausible explanation for why people's responses to the same drugs can vary widely.

The findings, published October 1 in Cell, set the stage for a better understanding of the complex roles these proteins, known as RGS proteins, play in health and disease. That in turn could lead to new treatment approaches for a range of conditions.

"Before you can fix things, you need to know how they're broken and how they work normally, and in this study that's essentially what we've done for these important regulatory proteins," says study senior author Kirill Martemyanov, PhD, professor and chair of the Department of Neuroscience at Scripps Research's Florida campus.

A reset button for cell receptors

RGS proteins, discovered about 25 years ago, provide an essential "braking" function for a large family of cellular receptors called G-protein-coupled receptors. GPCRs, as they're known, control hundreds of important functions on cells throughout the body, and have been implicated in dozens of diseases, from heart problems to vision impairments and mood disorders. Accordingly, GPCRs comprise the largest single category of drug targets--more than a third of FDA-approved drugs treat diseases by binding to GPCRs and modifying their activities.

When GPCRs are activated by hormones or neurotransmitters, they initiate signaling cascades within their host cells, via signal-carrying proteins called G-proteins. RGS (Regulator of G-Protein Signaling) proteins work by deactivating G-proteins, shutting off this signaling cascade. This shutoff mechanism limits G-protein signaling to a brief time window and allows cells to reset and accept new incoming signals. Without it, the GPCR-initiated signal stays on inappropriately and functional signaling becomes dysfunctional.

"One condition I studied earlier in my career involves the loss of RGS regulation in light-detecting cells in the retina," Martemyanov says. "Patients born with this condition can't stop perceiving light, even when they go into a dark room, and they can't track moving objects very well because they lack the normal visual refresh rate. It's easy to imagine how devastating it would be if you had a similar loss of RGS regulation in the heart or the brain where timing is so important."

Scanning the 'barcodes' for clues

Researchers have evaluated some RGS proteins individually, but in the new study, Martemyanov and colleagues painstakingly covered all 20 of the RGS proteins found in human cells, studying how each one selectively recognizes and regulates its G-protein counterparts. In so doing the researchers essentially created a roadmap for how GPCR signals are routed in cells.
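
One rough way to picture such a routing map in software is sketched below in Python. All protein names and pairings are invented placeholders, not results from the study, and the representation is our illustration of the idea rather than the authors' method: each RGS protein is paired with the set of G-alpha subunits it can deactivate, and a lookup returns the regulators available for a given signal.

# Minimal sketch of a selectivity map. Every entry below is a hypothetical
# placeholder used only to illustrate the data structure, not study data.
rgs_selectivity = {
    "RGS_A": {"Galpha_1", "Galpha_2"},
    "RGS_B": {"Galpha_2"},
    "RGS_C": {"Galpha_3", "Galpha_4"},
}

def regulators_of(g_alpha: str) -> list[str]:
    """Return the RGS proteins able to shut off signaling carried by a given G-alpha."""
    return [rgs for rgs, targets in rgs_selectivity.items() if g_alpha in targets]

# Example: which (hypothetical) RGS proteins would reset a receptor signaling via Galpha_2?
print(regulators_of("Galpha_2"))  # ['RGS_A', 'RGS_B']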

"This selective recognition of G-protein subunits turns out to be performed by a few elements in each RGS protein--elements organized in a pattern resembling a barcode," says study first author Ikuo Masuho, PhD, staff scientist in the Martemyanov lab.

In an analysis of the genomes of more than 100,000 people, the researchers showed in general how mutations and common variations in RGS barcode regions can disrupt RGS proteins' recognition of G-proteins or even cause them to recognize the wrong G-proteins. The team also demonstrated a particular example, showing how mutations in the RGS protein known as RGS16, which have been linked to insomnia, cause it to lose its usual recognition of G-proteins.

"It's clear that genetic variation in the RGS barcode regions has the potential to disrupt normal GPCR signaling, to cause disease or to create more subtle differences or traits," Martemyanov says. "For example, it may help explain why different individuals treated with the same GPCR-targeting drug often differ widely in their responses."

Martemyanov and his team found that RGS proteins' barcode regions and the G-proteins they regulate are constantly evolving. They were able to reconstruct less refined, "ancestral" RGS proteins, based on analyses of different species. From these findings they were able to devise principles for crafting "designer" RGS proteins that regulate a desired set of G-proteins.

The same principles could guide the development of drugs targeting RGS proteins for therapeutic benefits, a major ongoing effort in the GPCR field. Treatments that put corrective new RGS proteins in cells might be another avenue, Martemyanov says.

Credit: 
Scripps Research Institute

Decent living for all does not have to cost the Earth

Global energy consumption in 2050 could be reduced to the levels of the 1960s and still provide a decent standard of living for a population three times larger, according to a new study.

The study, led by the University of Leeds, estimated the energy needed to provide everyone with decent living standards in 2050 - meaning all basic human needs such as shelter, mobility, food and hygiene are met, along with access to modern, high-quality healthcare, education and information technology.

The findings, published in the journal Global Environmental Change, reveal that decent living standards could be provided to the entire global population of 10 billion people expected by 2050 for less than 40% of today's global energy. This is roughly 25% of the consumption forecast by the International Energy Agency if current trends continue.

This level of global energy consumption is roughly the same as that during the 1960s, when the population was only three billion.
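
As a rough illustration of what these ratios imply, the short Python sketch below works through the arithmetic. The absolute figure assumed for today's global final energy consumption is a round placeholder, not a number taken from the study; only the quoted ratios and populations come from the text above.

# Back-of-the-envelope check of the ratios quoted above. The value assumed for
# today's global final energy use is a placeholder, not a figure from the study.
todays_final_energy_EJ = 400.0    # assumed round figure for today's global final energy
decent_living_share = 0.40        # "less than 40% of today's global energy"
iea_forecast_share = 0.25         # "roughly 25%" of the IEA's 2050 forecast

decent_living_EJ = todays_final_energy_EJ * decent_living_share
implied_iea_2050_EJ = decent_living_EJ / iea_forecast_share

# 1 EJ = 1e9 GJ, so dividing EJ by population in billions gives GJ per person.
per_capita_1960s_GJ = decent_living_EJ / 3.0    # same total shared by 3 billion people
per_capita_2050_GJ = decent_living_EJ / 10.0    # same total shared by 10 billion people

print(f"Decent-living total:          ~{decent_living_EJ:.0f} EJ/yr")
print(f"Implied IEA 2050 forecast:    ~{implied_iea_2050_EJ:.0f} EJ/yr")
print(f"Per person, 1960s population: ~{per_capita_1960s_GJ:.0f} GJ/yr")
print(f"Per person, 2050 population:  ~{per_capita_2050_GJ:.0f} GJ/yr")

Whatever absolute figure is assumed, serving ten billion people with a 1960s-sized energy budget means each person uses roughly a third of the energy an average person did in the 1960s, which is why the authors stress efficiency and equity as preconditions.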

The authors emphasise that achieving this would require sweeping changes in current consumption, widespread deployment of advanced technologies, and the elimination of mass global inequalities.

However, the findings not only show that the energy required to provide a decent living could likely be met entirely by clean sources; they also offer a firm rebuttal to reactionary claims that reducing global consumption to sustainable levels requires an end to modern comforts and a 'return to the dark ages'.

The authors' tongue-in-cheek response to the critique that sweeping energy reform would require us all to become 'cave dwellers' was: "Yes, perhaps, but these are rather luxurious caves with highly-efficient facilities for cooking, storing food and washing clothes; comfortable temperatures maintained throughout the year, computer networks -- among other things -- not to mention the larger caves providing universal healthcare and education to all 5-19 year olds."

The study calculated minimum final energy requirements, both direct and indirect, to provide decent living standards. Final energy is that delivered to the consumer's door, for example, heating, electricity or the petrol that goes into a car, rather than the energy embedded in fuels themselves - much of which is lost at power stations in the case of fossil fuels.

The team built a final energy model based on a list of basic material needs that underpin human well-being, previously developed by Narasimha Rao and Jihoon Min.

The study compared current final energy consumption across 119 countries to the estimates of final energy needed for decent living and found the vast majority of countries are living in significant surplus. In countries that are today's highest per-capita consumers, energy cuts of nearly 95% are possible while still providing decent living standards to all.

Study lead author Dr Joel Millward-Hopkins from the School of Earth and Environment at Leeds said: "Currently, only 17% of global final energy consumption is from non-fossil fuel sources. But that is nearly 50% of what we estimate is needed to provide a decent standard of living for all in 2050."

"Overall, our study is consistent with the long-standing arguments that the technological solutions already exist to support reducing energy consumption to a sustainable level. What we add is that the material sacrifices needed for these reductions are far smaller than many popular narratives imply."

Study co-author Professor Julia Steinberger, leader of the Living Well Within Limits project at the University of Leeds and professor at the Université de Lausanne in Switzerland, said: "While government officials are levelling charges that environmental activists 'threaten our way of life', it is worth re-examining what that way of life should entail. There has been a tendency to simplify the idea of a good life into the notion that more is better.

"It is clearly within our grasp to provide a decent life for everyone while still protecting our climate and ecosystems."

Study co-author Professor Narasimha Rao from Yale University said: "This study also confirms our earlier findings at a global scale that eradicating poverty is not an impediment to climate stabilization, rather it's the pursuit of unmitigated affluence across the world."

Study co-author Yannick Oswald, PhD researcher at the School of Earth and Environment at Leeds said: "To avoid ecological collapse, it is clear that drastic and challenging societal transformations must occur at all levels, from the individual to institutional, and from supply through to demand."

Credit: 
University of Leeds

Tumor progression depends on the tumor microenvironment

image: The tumor microenvironment (TME) consists of various components, including cancer cells, tumor vessels, cancer-associated fibroblasts (CAFs), and inflammatory cells. These components interact with each other via various cytokines, including TGF-β and TNF-α, which often induce tumor progression. In the present study, we showed that TGF-β and TNF-α induce the endothelial-to-mesenchymal transition (EndMT), in which endothelial cells (ECs) acquire mesenchymal phenotypes. The ECs that have undergone EndMT, in turn, secrete TGF-β2 and Activin themselves. The irreversible and long-lasting effects of these cytokines result in the stabilization of the mesenchymal phenotypes of ECs. The secreted cytokines derived from the ECs also induce the epithelial-to-mesenchymal transition (EMT) of epithelial cancer cells, which contributes to tumor progression.

Image: 
Department of Biochemistry, TMDU

Researchers from Tokyo Medical and Dental University (TMDU) and Niigata University identify a novel mechanism by which tumors progress

Tokyo, Japan - Tumor cells constantly interact with the cellular environment they live in, affecting tumor progression and metastasis. In a new study, researchers from Tokyo Medical and Dental University (TMDU) and Niigata University discovered that the proteins transforming growth factor-β (TGF-β) and tumor necrosis factor alpha (TNF-α) promote the development of cancer-associated fibroblasts, which in turn contribute to tumor progression.

Tumor cells live within a specific tumor microenvironment, in which they are surrounded by other cell types, including those comprising blood vessels, cancer-associated fibroblasts (CAFs) and inflammatory cells. The tumor microenvironment consists of a network of molecules that provide structural and biochemical support to all cells. One such molecule is TGF-β, which is known to induce endothelial-mesenchymal transition (EndMT), a process that converts endothelial cells, which form the inner lining of blood vessels, into CAFs, which modulate tumor progression.

"We have previously shown that CAFs derived from EndMT promote tumor formation and progression. However, we do not know much about the molecular mechanisms underlying these interactions," says corresponding author of the study Tetsuro Watabe (TMDU). "The goal of our study was to investigate how the tumor microenvironment contributes to tumor progression."

To achieve their goal, the researchers focused on the protein TNF-α, a cytokine secreted by inflammatory cells. Because tumors are often infiltrated by inflammatory cells, the tumor microenvironment contains high levels of TNF-α. To understand the roles of TNF-α in TGF-β-induced EndMT, the researchers treated human endothelial cells with TGF-β, TNF-α, or both. TGF-β robustly induced EndMT, as shown by increased expression of various CAF markers and a shift of the endothelial cells towards a CAF-like morphology. Interestingly, TNF-α further enhanced these effects of TGF-β.

Because one of the main biological functions of CAFs is to secrete proteins into the tumor microenvironment and induce tumor progression, the researchers next cultured human oral cancer cells in the presence of the proteins secreted by EndMT-derived CAFs. They found that the oral cancer cells underwent epithelial-mesenchymal transition (EMT). Since EMT is a hallmark of tumor progression and metastasis, these results demonstrated that the proteins secreted by CAFs contribute to tumor progression. The researchers also found that these effects of CAFs on tumor progression were suppressed by inhibition of TGF-β, suggesting that TGF-β protein secreted from CAFs induced EMT.

"These are striking results that identify a molecular mechanism underlying the role of the tumor microenvironment in tumor biology. We hope that our findings will aid in the development of novel cancer therapies," says first and co-corresponding author Yasuhiro Yoshimatsu (Niigata University).

Credit: 
Tokyo Medical and Dental University

'Social cells' related to social behavior identified in the brain

image: A. Imaging using a miniaturized fluorescence microscope. B. Changes in GCaMP6f fluorescence in six cells. C. Reconstructed images of the GRIN lens implanted in the AI. D. Traces of 2 Social-ON cells and 1 Social-OFF cell. E. Pie charts showing the subcategories of neurons.

Image: 
© 2020 Miura et al.

A research team led by Professor TAKUMI Toru of Kobe University's Graduate School of Medicine (also a Senior Visiting Scientist at the RIKEN Center for Biosystems Dynamics Research) has identified 'social cells' in the brain that are related to social behavior. The cells were identified via Ca imaging (*1) conducted using a microendoscope (*2). It is expected that further research will illuminate the neural network for decision-making in social behavior.

These research findings were published in the American scientific journal 'PLOS Biology' on September 21.

Main Points

Using a microendoscope, the research team discovered social cells in the cerebral insular cortex (*3) of mice that are related to social behavior.

The insular cortex acts as an interface between social and emotional modules in the 'social decision-making network'.

It is known that the insular cortex is altered in various neuropsychiatric disorders such as schizophrenia and autism. The results of this study provide insight into the social functions of the insular cortex at a cellular level.

Research Background

Social distancing is one of the methods that is being endorsed to prevent widespread novel coronavirus infection. It has been said that a physical distance of 2 meters is necessary to prevent virus transmission. How are distances between people, socially distanced exchanges and societal interactions determined?

This kind of social behavior is a complex behavioral pattern consisting of 'sensory inputs', 'internal states' and 'decision making' that span the stages from 'sensory processing' to 'behavior modification'. The behavioral pattern sequence begins with individuals using multimodal (*4) sensory cues (such as sight, smell, hearing and touch) to process information about another individual. This information is then referenced against (social) internal states (such as motive, arousal, emotion, interoception and memory) and behavior is decided upon (for example, investigative, mating, aggressive, nurturing or dominating behaviors).

It is understood that regions of the brain such as the amygdala, pituitary, mesencephalon and the prefrontal cortex are involved in social behavior. However, the detailed neural networks of these regions are not yet well understood, particularly at a cellular level.

Methods to map neural activity across the entire brain have been developed. These include functional MRI and mapping the expression of immediate-early genes, such as c-fos. In addition, specific neural network regions in freely moving mice have been monitored by methods including fiber photometry and Ca imaging conducted using a miniaturized fluorescence microscope affixed to the rodent's head (as in this study), in addition to observations conducted with a two-photon microscope on mice with their heads fixed in place.

Research Methodology and Findings

A test mouse was placed in a 'home cage' and its behavior towards either a stranger mouse (that it had never come into contact with before) or a static object placed in the same cage was observed (Figure 1A). Both the duration and frequency of interactions between the test mouse and stranger mouse were significantly greater than the test mouse's interactions with the static object (Figure 1B). The test mouse demonstrated social behaviors, including contact with the nose, body and anus of the stranger mouse (Figure 1C).

The research team injected Ca indicator (GCaMP) (*5) into the insular cortex of mice using an adeno-associated virus vector. Next a GRIN lens (*6) was inserted into the agranular insular cortex (AI). This enabled the research team to use Ca imaging conducted via the miniaturized fluorescence microscope fixed to each mouse's head to record the neural activity of freely moving mice (Figure 2A).

Cellular recordings of the nerve cells (neurons) of nine mice were analyzed. Through this analysis, two types of neurons were identified: Social-ON cells, which are activated during social interactions, and Social-OFF cells, which are active when there is no social interaction. Out of a total of 737 neurons, Social-ON cells accounted for 22.8% (168 cells) and Social-OFF cells made up 1.4% (10 cells). When the test mouse was stationary during social interaction, 60.1% of the 168 Social-ON cells were active, whereas 7.1% were active when the mouse was moving during social interaction. In addition, the percentage of active Social-ON cells correlated with the type of contact with the stranger mouse (nose-to-nose contact: 35.7% of Social-ON cells active; contact with body: 20.2%; contact with anus: 5.4%) (see Figure 2E).
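
For readers who want to retrace the arithmetic, the short Python snippet below recomputes the reported percentages from the cell counts given above; the counts themselves are taken directly from the text.

# Recompute the reported percentages from the cell counts given in the text.
total_neurons = 737
social_on = 168
social_off = 10

print(f"Social-ON:  {social_on / total_neurons:.1%}")   # ~22.8%
print(f"Social-OFF: {social_off / total_neurons:.1%}")  # ~1.4%

# Shares of the 168 Social-ON cells active in each situation, converted back
# into approximate cell counts for a sense of scale.
for label, share in [("stationary during interaction", 0.601),
                     ("moving during interaction", 0.071),
                     ("nose-to-nose contact", 0.357),
                     ("contact with body", 0.202),
                     ("contact with anus", 0.054)]:
    print(f"{label}: ~{share * social_on:.0f} cells")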

Next, a different test was conducted in which a mouse was placed in a rectangular chamber with two smaller partitioned sections (Sections A and B) at either end (see Figure 3A). For the control experiment, Sections A and B were left empty. For the 1st Interaction experiment, a static object was placed in Section A and a stranger mouse was placed in Section B. For the 2nd Interaction, the positions of the object and stranger mouse were reversed. In each experiment, the test mouse's behavior was observed for 4 minutes and the duration of social interaction (contact) was measured. In the control experiment, there was no difference in the time spent in Sections A and B. However, in both the 1st and 2nd Interaction experiments, the test mouse had a longer contact time with the section that contained the stranger mouse (1st Interaction: Section B; 2nd Interaction: Section A) (Figure 3B).

Moreover, the researchers were able to identify Social-ON cells (that responded to the presence of the stranger mouse) and Social-OFF cells (active during periods of no social interaction) through analysis of the neurons inside the AI. These cells reacted to the stranger mouse, regardless of whether it was placed in Sections A or B (Figure 3C).

Further Research

The insular cortex is anatomically situated to integrate multimodal sensory social cues into the 'Social decision-making network', which consists of the 'social behavior network' and the mesolimbic reward system. This research succeeded in directly observing AI activity during social interaction at a single cell level, providing new insight into the cellular basis of the insular cortex's social functions. In other words, the researchers identified a large number of Social-ON cells and a smaller number of Social-OFF cells that demonstrated opposing activity during social interactions. Further research to identify and manipulate the activity of Social-ON and Social-OFF cells' projection targets is expected to advance our understanding of them at a circuitry level.

Furthermore, this discovery regarding the characteristics of AI neurons implies that the insular cortex can induce a bottom-up process in response to salient stimuli (strong sensory stimuli). This result corresponds with knowledge accumulated so far regarding how the insular cortex acts as an interface between social and emotional modules in the 'social decision-making network'.

In addition, it is known that the structure and function of the insular cortex is altered in various neuropsychiatric disorders such as schizophrenia and autism. The results of this study provide insight into the social functions of the insular cortex at a cellular level. Next, further analysis will be carried out using animal models of these neuropsychiatric disorders.

Acknowledgements

Part of this study received the following funding: a Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (S), a JSPS Grant-in-Aid for Scientific Research on Innovative Areas 'Dynamic regulation of brain function by Scrap & Build system', and a research grant from the Takeda Science Foundation.

Credit: 
Kobe University