Tech

Low-carbon policies can be 'balanced' to benefit small firms and average households

Some of the low-carbon policy options currently used by governments may be detrimental to the households and small businesses less able to manage added short-term costs from energy price hikes, according to a new study.

However, it also suggests that this menu of decarbonising policies, from quotas to feed-in tariffs, can be designed and balanced to benefit local firms and lower-income families - vital for achieving 'Net Zero' carbon and a green recovery.

University of Cambridge researchers combed through thousands of studies to create the most comprehensive analysis to date of widely used types of low-carbon policy, and compared how they perform in areas such as cost and competitiveness.

The findings are published today in the journal Nature Climate Change. The researchers also poured all their data into an interactive online tool that allows users to explore evidence around carbon-reduction policies from across the globe.

"Preventing climate change cannot be the only goal of decarbonisation policies," said study lead author Dr Cristina Peñasco, a public policy expert from the University of Cambridge.

"Unless low-carbon policies are fair, affordable and economically competitive, they will struggle to secure public support - and further delays in decarbonisation could be disastrous for the planet."

Peñasco authored the review with Prof. Laura Diaz Anadon, Director of Cambridge's Centre for Environment, Energy and Natural Resource Governance (C-EENRG), and Prof. Elena Verdolini from the RFF-CMCC European Institute on Economics and the Environment (EIEE), the Euro-Mediterranean Centre on Climate Change, and the University of Brescia. Anadon and Verdolini lead the work package of the EU project INNOPATHS that funded the work.

Around 7,000 published studies were whittled down to over 700 individual findings. These results were coded to allow comparison - with over half the studies analysed "blind" by different researchers to avoid bias.

The ten policy "instruments" covered in the study include forms of investment - targeted R&D funding, for example - as well as financial incentives including different kinds of subsidies, taxes, and the auctioning of energy contracts.

The policies also include market interventions - e.g. emissions permits; tradable certificates for clean or saved energy - and efficiency standards, such as those for buildings.

Researchers looked at whether each policy type had a positive or negative effect in various environmental, industrial and socio-economic areas.

When it came to "distributional consequences" - the fairness with which the costs and benefits are spread - the mass of evidence suggests that the impact of five of the ten policy types is far more negative than positive.

"Small firms and average households have less capacity to absorb increases in energy costs," said co-author Laura Diaz Anadon, Professor of Climate Change Policy.

"Some of the investment and regulatory policies made it harder for small and medium-size firms to participate in new opportunities or adjust to changes.

"If policies are not well designed and vulnerable households and businesses experience them negatively, it could increase public resistance to change - a major obstacle in reaching net zero carbon," said Anadon.

For example, feed-in tariffs pay renewable electricity producers above market rates. But these costs may bump energy prices for all if they get passed on to households - leaving the less well-off spending a larger portion of their income on energy.

Renewable electricity traded as 'green certificates' can redistribute wealth from consumers to energy companies - with 83% of the available evidence suggesting they have a "negative impact", along with 63% of the evidence for energy taxes, which can disproportionately affect rural areas.
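To make those evidence shares concrete, here is a minimal, purely illustrative sketch of how coded findings can be tallied into percentages like the ones quoted above. The records, counts and instrument names below are invented stand-ins for the study's 700+ coded findings, not its actual data:

```python
from collections import Counter

# Invented stand-ins for coded findings; each record pairs a policy
# instrument with the coded direction of its distributional impact.
findings = [
    ("green certificates", "negative"), ("green certificates", "negative"),
    ("green certificates", "negative"), ("green certificates", "negative"),
    ("green certificates", "negative"), ("green certificates", "positive"),
    ("energy taxes", "negative"), ("energy taxes", "negative"),
    ("energy taxes", "positive"),
]

def negative_share(instrument: str) -> float:
    """Share of an instrument's coded findings that report a negative impact."""
    tally = Counter(direction for name, direction in findings if name == instrument)
    total = sum(tally.values())
    return tally["negative"] / total if total else float("nan")

for instrument in sorted({name for name, _ in findings}):
    print(f"{instrument}: {negative_share(instrument):.0%} negative")
```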

However, the vast tranche of data assembled by the researchers reveals how many of these policies can be designed and aligned to complement each other, boost innovation, and pave the way for a fairer transition to zero carbon.

For example, tailoring feed-in tariffs (FiTs) to be "predictable yet adjustable" can benefit smaller and more dispersed clean energy projects - improving market competitiveness and helping to mitigate local NIMBY ("not in my backyard") opposition.

Moreover, revenues from environmental taxes could go towards social benefits or tax credits - e.g. reducing corporate tax for small firms and lowering income taxes - providing what researchers call a "double dividend": stimulating economies while reducing emissions.

The researchers argue that creating a "balance" of well-designed and complementary policies can benefit different renewable energy producers and "clean" technologies at various stages.

Government funding for research and development (R&D) that targets small firms can help attract other funding streams - boosting both eco-innovation and competitiveness. When combined with R&D tax credits, it predominantly supports innovation in startups rather than in established corporations.

Government procurement, using tiered contracts and bidding, can also improve innovation and market access for smaller businesses in "economically stressed" areas. This could aid the "levelling up" between richer and poorer regions as part of any green recovery.

"There is no one-size-fits-all solution," said Peñasco. "Policymakers should deploy incentives for innovation, such as targeted R&D funding, while also adapting tariffs and quotas to benefit those across income distributions.

"We need to spur the development of green technology at the same time as achieving public buy-in for the energy transition that must start now to prevent catastrophic global heating," she said.

Peñasco and Anadon contributed to the recent report from Cambridge Zero - the University's climate change initiative. In it, they argue for piloting a UK government research programme akin to ARPA in the US, but focused on new net-zero technologies.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Timing is of the essence when treating brain swelling in mice

image: NIH researchers tracked the progression of brain blood vessel repair after injury. At day 1 after injury (left panel), the brain had areas of bleeding and broken vessels. Ten days later (right panel) the vessels were almost completely rebuilt.

Image: 
McGavern Lab/NINDS

Researchers from the National Institutes of Health have discovered "Jekyll and Hyde" immune cells in the brain that ultimately help with brain repair, but that early after injury can lead to fatal swelling, suggesting that timing may be critical when administering treatment. These dual-purpose cells, called myelomonocytic cells, are carried to the brain by the blood and are just one type of brain immune cell that NIH researchers tracked in real time as the brain repaired itself after injury. The study, published in Nature Neuroscience, was supported by the National Institute of Neurological Disorders and Stroke (NINDS) Intramural Research Program at NIH.

"Fixing the brain after injury is a highly orchestrated, coordinated process, and giving a treatment at the wrong time could end up doing more harm than good," said Dorian McGavern, Ph.D., NINDS scientist and senior author of the study.

Cerebrovascular injury, or damage to brain blood vessels, can occur following several conditions including traumatic brain injury or stroke. Dr. McGavern, along with Larry Latour, M.D., NINDS scientist, and their colleagues, observed that a subset of stroke patients developed bleeding and swelling in the brain after surgical removal of the blood vessel clot responsible for the stroke. The swelling, also known as edema, results in poor outcomes and can even be fatal as brain structures become compressed and further damaged.

To understand how vessel injury can lead to swelling and to identify potential treatment strategies, Dr. McGavern and his team developed an animal model of cerebrovascular injury and used state-of-the-art microscopic imaging to watch how the brain responded to the damage in real-time.

Immediately after injury, brain immune cells known as microglia quickly mobilize to stop blood vessels from leaking. These "first responders" of the immune system extend out and wrap their arms around broken blood vessels. Dr. McGavern's group discovered that removing microglia causes irreparable bleeding and damage in the brain.

A few hours later, the damaged brain is invaded by circulating peripheral monocytes and neutrophils (together known as myelomonocytic cells). As myelomonocytic cells move from the blood into the brain, each one opens a small hole in the vasculature, letting a mist of fluid into the brain. When thousands of these cells rush in simultaneously, a lot of fluid comes in all at once and results in swelling.

"The myelomonocytic cells at this stage of repair mean well, and do want to help, but they enter the brain with too much enthusiasm. This can lead to devastating tissue damage and swelling, especially if it occurs around the brain stem, which controls vital functions such as breathing," said Dr. McGavern.

After this initial surge, the monocytic subset of immune cells enter the brain at a slower, less damaging rate and get to work repairing the vessels. Monocytes work together with repair-associated microglia to rebuild the damaged vascular network, which is reconnected within 10 days of injury. The monocytes are required for this important repair process.

In the next set of experiments, Dr. McGavern and his colleagues tried to reduce secondary swelling and tissue damage by using a combination of therapeutic antibodies that stop myelomonocytic cells from entering the brain. The antibodies blocked two different adhesion molecules that myelomonocytic cells use to attach to inflamed blood vessels. These were effective at reducing brain swelling and improving outcomes when administered within six hours of injury.

Interestingly, the therapeutic antibodies did not work if given after six hours or if they were given for too long. In fact, treating mice over a series of days with these antibodies inhibited the proper repair of damaged blood vessels, leading to neuronal death and brain scarring.

"Timing is of the essence when trying to prevent fatal edema. You want to prevent the acute brain swelling and damage, but you do not want to block the monocytes from their beneficial repair work," said Dr. McGavern.

Plans are currently underway for clinical trials to see if administering treatments at specific timepoints will reduce edema and brain damage in a subset of stroke patients. Future research studies will examine additional aspects of the cerebrovascular repair process, with the hope of identifying other therapeutic interventions to promote reparative immune functions.

Credit: 
NIH/National Institute of Neurological Disorders and Stroke

Vermont's BIPOC drivers are most likely to have a run-in with police, study shows

image: University of Vermont Economics professor Stephanie Seguino is co-author of the new study that shows widespread racial bias persists among Vermont police departments after examining more than 800,000 vehicular stops over five years.

Image: 
Ian Thomas Jansen-Lonnquist

New research examining more than 800,000 traffic stops in Vermont over the course of five years substantiates the term "driving while Black and Brown."

Compared to white drivers, Black and Latinx drivers in Vermont are more likely to be stopped, ticketed, arrested, and searched, yet they are less likely than white drivers to be found with contraband. The report finds evidence not only of racial disparities but also of racial bias in policing. What's more, a number of these gaps widened over the years examined in the report. With such comprehensive data encompassing the state of Vermont, the authors also found that Vermont police stop cars at a rate of 255 per 1,000 residents, nearly three times the national average of 86 stops per 1,000 residents.

The report "Trends in Racial Disparities in Vermont Traffic Stops, 2014-19" -- led by University of Vermont Economics professor and Gund Fellow Stephanie Seguino with Cornell University's Nancy Brooks and data analyst Pat Autilio -- is a comprehensive review of racial disparities in the state's vehicular stops, tickets, arrests, searches, and contraband. As well, it analyzes the impact of 2018 legalization of cannabis -- previously considered contraband -- on these numbers.

Using data from 79 law enforcement agencies across the state, the report builds on Seguino and Brooks' past studies of traffic stops to include 50 more agencies and additional years of data. The authors first reported in 2017 that in 2015, for every white driver arrested, nearly two Black drivers were arrested. That statistic has remained roughly the same into 2019.

This study also provides a breakdown of the data by agency, revealing wide variation across those 79 agencies and regions.

For example, on average, Black drivers are about 3.5 times more likely and Hispanic drivers 3.9 times more likely to be searched during a stop than white drivers. But in Brattleboro, Black drivers are almost 9 times more likely to be searched than white drivers; in Shelburne, 4.4 times more likely; in South Burlington, 3.9 times more likely; in Vergennes, 3.8 times more likely; in Burlington, 3.6 times more likely; and in Rutland, 3.45 times more likely. This compares to Stowe, where Black drivers are less likely to be searched than white drivers.
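As a purely illustrative aside, the two kinds of statistics reported here - stop rates per 1,000 residents and search-rate ratios - reduce to simple arithmetic. The counts below are invented so the outputs land near the quoted figures; they are not taken from the report's data:

```python
# Hypothetical sketch of the report's two headline statistics. Counts are
# invented to land near the quoted figures (255 stops per 1,000 residents;
# a 3.5x Black/white search-rate ratio); the study itself used ~800,000
# real stop records from 79 Vermont agencies.

def stops_per_1000(stops: int, residents: int) -> float:
    """Traffic stops per 1,000 residents."""
    return 1000 * stops / residents

def search_rate_ratio(searches_a: int, stops_a: int,
                      searches_b: int, stops_b: int) -> float:
    """How many times higher group A's per-stop search rate is than group B's."""
    return (searches_a / stops_a) / (searches_b / stops_b)

print(stops_per_1000(159_000, 623_000))         # ~255 per 1,000
print(search_rate_ratio(35, 1000, 10, 1000))    # 3.5x
```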

The study finds that legalization of cannabis did little to narrow the Black and Latinx search rate disparities with white drivers. Even after legalization in 2018, Black drivers are 3 times as likely to be searched as white drivers, and Latinx drivers, 2.6 times more likely.

The researchers note there is a consistent issue with the quality of traffic stop data: some agencies don't always comply with the requirement to report the driver's race during a stop. "In fact," Autilio notes, "in more than a dozen agencies, the percentage of reports that exclude the driver's race is double the percentage of reports that indicate a BIPOC driver."

"This is concerning because the purpose of the legislation requiring agencies to collect data on traffic stops is to identify and track racial disparities in traffic policing," Brooks says. In a small state with few BIPOC communities, "just a small number of stops missing race of the driver can undermine the quality of the data and the ability to detect racial disparities."

Previous years' reports have been shared with Vermont's law enforcement officials, spurring racial bias training in some agencies and disputes in others. Seguino was recently unanimously appointed to the Burlington Police Commission for her work and expertise.

"Though this work is challenging, not the least because of the resistance of some law enforcement agencies to acknowledging these troubling racial disparities, we believe it important to continue to provide solid evidence on which to assess racial bias in policing," Seguino said. Brooks added, "Our hope is that our analysis is useful to law enforcement agencies committed to bias free policing. And we hope community members find these data helpful in holding their local law enforcement agencies accountable."

Credit: 
University of Vermont

Scientists shed light on how and why some people report "hearing the dead"

image: The Fox sisters: Kate (1838-92), Leah (1814-90) and Margaret (or Maggie) (1836-93). Lithograph after a daguerreotype by Appleby. Published by N. Currier, New York. In 1848, two sisters from upstate New York, Maggie and Kate Fox, reported hearing 'rappings' and 'knocks' that they interpreted as communication coming from a spirit in their house. These events and these sisters would eventually be considered the originators of Spiritualism.

Image: 
N. Currier, New York

Spiritualist mediums might be more prone to immersive mental activities and unusual auditory experiences early in life, according to new research.

This might explain why some people and not others eventually adopt spiritualist beliefs and engage in the practice of 'hearing the dead', the study led by Durham University found.

Mediums who "hear" spirits are said to be experiencing clairaudient communications, rather than clairvoyant ("seeing") or clairsentient ("feeling" or "sensing") communications.

The researchers conducted a survey of 65 clairaudient spiritualist mediums from the Spiritualists' National Union and 143 members of the general population in the largest scientific study into the experiences of clairaudient mediums.

They found that these spiritualists have a proclivity for absorption - a trait linked to immersion in mental or imaginative activities or experience of altered states of consciousness.

Mediums are also more likely to report experiences of unusual auditory phenomena, like hearing voices, often occurring early in life.

Many who experience absorption or hearing voices encounter spiritualist beliefs when searching for the meaning behind, or supernatural significance of, their unusual experiences, the researchers said.

The findings are published in the journal Mental Health, Religion and Culture. The research is part of Hearing the Voice - an interdisciplinary study of voice-hearing based at Durham University and funded by the Wellcome Trust.

Spiritualism is a religious movement based on the idea that human souls continue to exist after death and communicate with the living through a medium or psychic.

Interest in Spiritualism is increasing in Britain with several organisations supporting, training, and offering the services of practising mediums. One of the largest, the SNU, claims to serve at least 11,000 members through its training college, churches, and centres.

Through their study, the researchers gathered detailed descriptions of the way that mediums experience spirit 'voices', and compared levels of absorption, hallucination-proneness, aspects of identity, and belief in the paranormal.

They found that 44.6 per cent of spiritualist participants reported hearing the voices of the deceased on a daily basis, with 33.8 per cent reporting an experience of clairaudience within the last day.

A large majority (79 per cent) said that experiences of auditory spiritual communication were part of their everyday lives, taking place both when they were alone and when they were working as a medium or attending a spiritualist church.

Although spirits were primarily heard inside the head (65.1 per cent), 31.7 per cent of spiritualist participants said they experienced spirit voices coming from both inside and outside the head.

When rated on scales of absorption, as well as how strongly they believe in the paranormal, spiritualists scored much more highly than members of the general population.

Spiritualists were less likely to care about what others thought of them than people generally, and they also scored more highly for proneness to unusual hallucination-like auditory experiences.

Both high levels of absorption and proneness to such auditory phenomena were linked to reports of more frequent clairaudient communications, according to the findings.
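For readers who want to see the shape of such an analysis, here is a minimal sketch on simulated data: a group comparison of absorption scores plus a correlation with experience frequency. All scores and effect sizes are invented; only the group sizes (65 mediums, 143 general-population respondents) match the study:

```python
# Minimal sketch, on simulated data, of the two analyses described above:
# (1) do mediums score higher on absorption than the general population, and
# (2) does absorption track the frequency of reported experiences?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mediums = rng.normal(24, 5, 65)     # hypothetical absorption scores
general = rng.normal(18, 5, 143)

t, p = stats.ttest_ind(mediums, general, equal_var=False)  # Welch's t-test
print(f"group difference: t = {t:.2f}, p = {p:.2g}")

# Invented per-medium frequency measure, loosely tied to absorption.
frequency = 0.3 * mediums + rng.normal(0, 2, 65)
r, p_r = stats.pearsonr(mediums, frequency)
print(f"absorption vs. frequency: r = {r:.2f}, p = {p_r:.2g}")
```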

For the general population, absorption was associated with levels of belief in the paranormal, but there was no significant corresponding link between belief and hallucination-proneness.

There was also no difference in levels of superstitious belief or proneness to visual hallucinations between spiritualist and non-spiritualist participants.

Spiritualists reported first experiencing clairaudience at an average age of 21.7 years. However, 18 per cent of spiritualists reported having clairaudient experiences 'for as long as they could remember' and 71 per cent had not encountered Spiritualism as a religious movement prior to their first experiences.

The researchers say their findings suggest that it is not giving in to social pressure, learning to have specific expectations, or a level of belief in the paranormal that leads to experiences of spirit communication.

Instead, it seems that some people are uniquely predisposed to absorption and are more likely to report unusual auditory experiences occurring early in life. For many of these individuals, spiritualist beliefs are embraced because they align meaningfully with those unique personal experiences.

Lead researcher Dr Adam Powell, in Durham University's Hearing the Voice project and Department of Theology and Religion, said: "Our findings say a lot about 'learning and yearning'. For our participants, the tenets of Spiritualism seem to make sense of both extraordinary childhood experiences as well as the frequent auditory phenomena they experience as practising mediums.

"But all of those experiences may result more from having certain tendencies or early abilities than from simply believing in the possibility of contacting the dead if one tries hard enough."

Dr Peter Moseley, co-author on the study at Northumbria University, commented: "Spiritualists tend to report unusual auditory experiences which are positive, start early in life and which they are often then able to control. Understanding how these develop is important because it could help us understand more about distressing or non-controllable experiences of hearing voices too."

Durham's researchers are now engaged in further investigation of clairaudience and mediumship, working with practitioners to gain a fuller picture of what it is like to be on the receiving end of such unusual and meaningful experiences.

Credit: 
Taylor & Francis Group

New study connects religiosity in US South Asians to cardiovascular disease

BOSTON - A cutting-edge proteomics analysis from the Study on Stress, Spirituality and Health (SSSH) suggests that religious beliefs modulate protein expression associated with cardiovascular disease in South Asians in the United States. The research, published by investigators from Massachusetts General Hospital (MGH), Beth Israel Deaconess Medical Center (BIDMC), and the University of California San Francisco (UCSF) in Scientific Reports, demonstrates that spiritual struggles, in particular, significantly modify the impact of unique proteins on the risk of developing cardiovascular disease (CVD) in U.S. South Asians, a community that has especially high rates of CVD.

This study represents the first proteomics analysis ever conducted on protein levels in relationship to CVD within a U.S. South Asian population and the first published study to analyze proteomics signatures in relationship to religion and spirituality in any population.

"Before we can develop the best interventions to reduce CVD disparities, we need to understand the biological pathways through which health disparities are produced," says the study's principal investigator and co-senior author Alexandra Shields, PhD, director of the Harvard/MGH Center on Genomics, Vulnerable Populations and Health Disparities at the MGH Mongan Institute and associate professor of Medicine at Harvard Medical School (HMS). "As this study shows, psychosocial factors - and religious or spiritual struggles in particular - can affect biological processes that lead to CVD in this high-risk population. Spirituality can also serve as a resource for resilience and have a protective effect. Given that many of the minority communities that experience higher levels of CVD also report higher levels of religiosity and spirituality, studies such as the SSSH may help identify new leverage points, such as spiritually focused psychotherapy for those in spiritual distress, that could reduce risk of CVD for such individuals."

Results of the study, which included 50 participants who developed CVD and 50 sex- and age-matched controls without CVD from the Mediators of Atherosclerosis in South Asians Living in America (MASALA) Study (100 participants), indicate that there may be unique protein expression profiles associated with CVD in U.S. South Asian populations, and that these associations may also be impacted by religious struggles, in which, for example, individuals experiencing adverse life events feel they are being punished or abandoned by their God, or have a crisis of faith. The MASALA study includes 1,164 South Asians who were recruited from the San Francisco Bay Area and the greater Chicago area and followed for approximately eight years with the goal of investigating factors that lead to heart disease among this high-risk ethnic group. MASALA is one of the original cohorts participating in SSSH, through which this research was conducted.
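As a hedged illustration of the statistical idea of "effect modification" described here, the sketch below fits a logistic model with a protein-by-struggle interaction term on simulated data. Everything in it - the variable names, effect sizes, and the unmatched design - is an invented simplification, not the study's actual matched case-control proteomics pipeline:

```python
# Simulated sketch of effect modification: does religious/spiritual struggle
# change the association between a protein level and CVD case status?
# All data and coefficients are invented; this is not the study's analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100                                  # 100 participants, in spirit
protein = rng.normal(size=n)             # standardized protein level
struggle = rng.integers(0, 2, size=n)    # spiritual-struggle indicator (0/1)
linpred = -0.2 + 0.1 * protein + 0.2 * struggle + 1.0 * protein * struggle
cvd = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

X = sm.add_constant(np.column_stack([protein, struggle, protein * struggle]))
result = sm.Logit(cvd, X).fit(disp=False)
# A large, significant interaction coefficient means the protein-CVD
# association differs between those with and without spiritual struggle.
print(result.params)  # [const, protein, struggle, protein x struggle]
```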

"Understanding the pathways of this mechanism at the molecular level using proteomics technology is crucial to developing potential interventions that can help reduce CVD incidence in this population," says Long H. Ngo, PhD, lead author and co-director of Biostatistics in the Division of General Medicine at BIDMC and associate professor of Medicine at HMS.

Co-senior author Towia Libermann, PhD, director of the Genomics, Proteomics, Bioinformatics and Systems Biology Center at BIDMC, adds: "The kinds of blood-based protein biomarkers used in this study are particularly effective in assessing CVD risk because they carry clinical information about risk of disease and are the most commonly used molecules for diagnostic applications."

Credit: 
Massachusetts General Hospital

Nanodiamonds feel the heat

image: (a) Illustration of the structure of a nanodiamond quantum sensor coated with a pyrogenic polymer, and how it operates as a hybrid nanoheater/thermometer. (b) Electron microscope image of hybrid sensors. (c) Working principle of the hybrid sensor for measuring nanometric thermal conductivity. In a medium with high thermal conductivity, the temperature increase of the diamond sensor is moderate, because heat readily diffuses away. In contrast, in a low thermal conductivity medium, the temperature rise is significantly larger. Intracellular thermal conductivity can be determined by measuring the temperature change of the hybrid sensors in cells.

Image: 
Osaka University

Osaka, Japan - A team of scientists from Osaka University, The University of Queensland, and the National University of Singapore's Faculty of Engineering used tiny nanodiamonds coated with a heat-releasing polymer to probe the thermal properties of cells. When irradiated with light from a laser, the sensors acted both as heaters and thermometers, allowing the thermal conductivity of the interior of a cell to be calculated. This work may lead to a new set of heat-based treatments for killing bacteria or cancer cells.

Even though the cell is the fundamental unit of all living organisms, some of its physical properties have remained difficult to study in vivo. For example, a cell's thermal conductivity - the rate at which heat flows through an object when one side is hot and the other is cold - remained mysterious. This gap in our knowledge is important for applications such as developing thermal therapies that target cancer cells, and for answering fundamental questions about cell operation.

Now, the team has developed a technique that can determine the thermal conductivity inside living cells with a spatial resolution of about 200 nm. They created tiny diamonds coated with a polymer, polydopamine, that emit both fluorescent light and heat when illuminated by a laser. Experiments showed that such particles are non-toxic and can be used in living cells. Inside a liquid or a cell, the heat raises the temperature of the nanodiamond. In media with high thermal conductivity, the nanodiamond did not get very hot because heat escaped quickly, but in an environment of low thermal conductivity, the nanodiamonds became hotter. Crucially, the properties of the emitted light depend on the temperature, so the research team could calculate the rate of heat flow from the sensor to the surroundings.
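The relationship the sensors exploit can be illustrated with the textbook steady-state model of a small spherical heater in an infinite medium, where the temperature rise is dT = P / (4*pi*kappa*r). This is a simplified stand-in for the authors' actual calibration, and every number below is a round, invented value:

```python
# Minimal sketch, assuming the textbook point-heater model dT = P/(4*pi*kappa*r):
# for a fixed absorbed laser power P, a medium with lower thermal conductivity
# kappa lets the nanodiamond get hotter. Values are illustrative only.
import math

def temperature_rise(power_w: float, kappa_w_per_mk: float, radius_m: float) -> float:
    """Steady-state temperature rise at the surface of a spherical heater."""
    return power_w / (4 * math.pi * kappa_w_per_mk * radius_m)

P = 1e-6    # ~1 microwatt of absorbed heating power (invented)
r = 100e-9  # ~100 nm heater radius, on the order of the stated resolution
for medium, kappa in [("water", 0.6), ("lower-conductivity cell interior", 0.2)]:
    print(f"{medium}: dT = {temperature_rise(P, kappa, r):.1f} K")
```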

Having good spatial resolution allowed measurements in different locations inside the cells. "We found that the rate of heat diffusion in cells, as measured by the hybrid nanosensors, was several times slower than in pure water, a fascinating result which still waits for a comprehensive theoretical explanation and was dependent on the location," senior author Taras Plakhotnik says.

"In addition to improving heat-based treatments for cancer, we think potential applications for this work will result in a better understanding of metabolic disorders, such as obesity," senior author Madoka Suzuki says. This tool may also be used for basic cell research, for example, to monitor biochemical reactions in real time.

Credit: 
Osaka University

New videos show RNA as it's never been seen

video: New videos show RNA folding as it's made by cellular machinery. Data -- collected from RNA experiments in the lab -- were inputted into computer models to generate accurate videos of the folding process.

Image: 
Julius Lucks/Northwestern University

A new Northwestern University-led study is unfolding the mystery of how RNA molecules fold themselves to fit inside cells and perform specific functions. The findings could potentially break down a barrier to understanding and developing treatments for RNA-related diseases, including spinal muscular atrophy and perhaps even the novel coronavirus.

"RNA folding is a dynamic process that is fundamental for life," said Northwestern's Julius B. Lucks, who led the study. "RNA is a really important piece of diagnostic and therapeutic design. The more we know about RNA folding and complexities, the better we can design treatments."

Using data from RNA-folding experiments, the researchers generated the first-ever data-driven movies of how RNA folds as it is made by cellular machinery. Watching the videos, the researchers discovered that RNA often folds in surprising, perhaps unintuitive ways, such as tying itself into knots -- and then immediately untying itself to reach its final structure.

"Folding takes place in your body more than 10 quadrillion times a second," Lucks said. "It happens every single time a gene is expressed in a cell, yet we know so little about it. Our movies allow us to finally watch folding happen for the first time."

The research will be published Jan. 15 in the journal Molecular Cell.

Lucks is an associate professor of chemical and biological engineering at Northwestern's McCormick School of Engineering and a member of Northwestern's Center for Synthetic Biology. He co-led the work with Alan Chen, an associate professor of chemistry at the University at Albany.

Although videos of RNA folding do exist, the computer models that generate them are full of approximations and assumptions. Lucks' team has developed a technology platform that captures data about RNA folding as the RNA is being made. His group then uses computational tools to mine and organize the data, revealing points where the RNA folds and what happens after it folds. Angela Yu, a former student of Lucks, inputted this data into computer models to generate accurate videos of the folding process.

"The information that we give the algorithms helps the computer models correct themselves," Lucks said. "The model makes accurate simulations that are consistent with the data."

Lucks and his collaborators used this strategy to model the folding of an RNA called SRP, an ancient RNA found in all kingdoms of life. The molecule is well-known for its signature hairpin shape. When watching the videos, the researchers discovered that the molecule ties itself into a knot and unties itself very quickly. Then it suddenly flips into the correct hairpin-like structure using an elegant folding pathway called toehold-mediated strand displacement.

"To the best of our knowledge, this has never been seen in nature," Lucks said. "We think the RNA has evolved to untie itself from knots because if knots persist, it can render the RNA nonfunctional. The structure is so essential to life that it had to evolve to find a way to get out of a knot."

Credit: 
Northwestern University

Study: X-Rays surrounding 'Magnificent 7' may be traces of sought-after particle

image: An artistic rendering of the XMM-Newton (X-ray multi-mirror mission) space telescope. A study of archival data from the XMM-Newton and the Chandra X-ray space telescopes found evidence of high levels of X-ray emission from the nearby Magnificent Seven neutron stars, which may arise from the hypothetical particles known as axions.

Image: 
D. Ducros; ESA/XMM-Newton, CC BY-SA 3.0 IGO

A new study, led by a theoretical physicist at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), suggests that never-before-observed particles called axions may be the source of unexplained, high-energy X-ray emissions surrounding a group of neutron stars.

First theorized in the 1970s as part of a solution to a fundamental particle physics problem, axions are expected to be produced at the core of stars, and to convert into particles of light, called photons, in the presence of a magnetic field.

Axions may also make up dark matter - the mysterious stuff that accounts for an estimated 85 percent of the total mass of the universe, yet we have so far only seen its gravitational effects on ordinary matter. Even if the X-ray excess turns out not to be axions or dark matter, it could still reveal new physics.

A collection of neutron stars, known as the Magnificent 7, provided an excellent test bed for the possible presence of axions, as these stars possess powerful magnetic fields, are relatively nearby - within hundreds of light-years - and were only expected to produce low-energy X-rays and ultraviolet light.

"They are known to be very 'boring,'" and in this case it's a good thing, said Benjamin Safdi, a Divisional Fellow in the Berkeley Lab Physics Division theory group who led a study, published Jan. 12 in the journal Physical Review Letters, detailing the axion explanation for the excess.

Christopher Dessert, a Berkeley Lab Physics Division affiliate, contributed heavily to the study, which also had participation by researchers at UC Berkeley, the University of Michigan, Princeton University, and the University of Minnesota.

If the neutron stars were of a type known as pulsars, they would have an active surface giving off radiation at different wavelengths. This radiation would show up across the electromagnetic spectrum, Safdi noted, and could drown out this X-ray signature that the researchers had found, or would produce radio-frequency signals. But the Magnificent 7 are not pulsars, and no such radio signal was detected. Other common astrophysical explanations don't seem to hold up to the observations either, Safdi said.

If the X-ray excess detected around the Magnificent 7 is generated from an object or objects hiding out behind the neutron stars, that likely would have shown up in the datasets that researchers are using from two space satellites: the European Space Agency's XMM-Newton and NASA's Chandra X-ray telescopes.

Safdi and collaborators say it's still quite possible that a new, non-axion explanation arises to account for the observed X-ray excess, though they remain hopeful that such an explanation will lie outside of the Standard Model of particle physics, and that new ground- and space-based experiments will confirm the origin of the high-energy X-ray signal.

"We are pretty confident this excess exists, and very confident there's something new among this excess," Safdi said. "If we were 100% sure that what we are seeing is a new particle, that would be huge. That would be revolutionary in physics." Even if the discovery turns out not to be associated with a new particle or dark matter, he said, "It would tell us so much more about our universe, and there would be a lot to learn."

Raymond Co, a University of Minnesota postdoctoral researcher who collaborated in the study, said, "We're not claiming that we've made the discovery of the axion yet, but we're saying that the extra X-ray photons can be explained by axions. It is an exciting discovery of the excess in the X-ray photons, and it's an exciting possibility that's already consistent with our interpretation of axions."

If axions exist, they would be expected to behave much like neutrinos in a star, as both would have very slight masses and interact only very rarely and weakly with other matter. They could be produced in abundance in the interior of stars. Uncharged particles called neutrons move around within neutron stars, occasionally interacting by scattering off of one another and releasing a neutrino or possibly an axion. The neutrino-emitting process is the dominant way that neutron stars cool over time.

Like neutrinos, the axions would be able to travel outside of the star. The incredibly strong magnetic field surrounding the Magnificent 7 stars - billions of times stronger than magnetic fields that can be produced on Earth - could cause exiting axions to convert into light.
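For orientation, the axion-to-light conversion described here has a standard textbook expression in the small-mixing, homogeneous-field limit. The study itself solves a far more detailed conversion calculation in the neutron-star magnetosphere, so the formula below is only a schematic reference point:

```latex
% Small-mixing limit for axion-photon conversion across a homogeneous
% magnetic field B acting over a path length L (natural units), with
% axion-photon coupling g_{a\gamma}; a schematic stand-in for the full
% magnetosphere calculation used in the study.
P_{a\to\gamma} \;\simeq\; \left(\frac{g_{a\gamma}\, B\, L}{2}\right)^{2},
\qquad P_{a\to\gamma} \ll 1 .
```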

Neutron stars are incredibly exotic objects, and Safdi noted that a lot of modeling, data analysis, and theoretical work went into the latest study. Researchers have heavily used a bank of supercomputers known as the Lawrencium Cluster at Berkeley Lab in the latest work.

Some of this work had been conducted at the University of Michigan, where Safdi previously worked. "Without the high-performance supercomputing work at Michigan and Berkeley, none of this would have been possible," he said.

"There is a lot of data processing and data analysis that went into this. You have to model the interior of a neutron star in order to predict how many axions should be produced inside of that star."

Safdi noted that as a next step in this research, white dwarf stars would be a prime place to search for axions because they also have very strong magnetic fields, and are expected to be "X-ray-free environments."

"This starts to be pretty compelling that this is something beyond the Standard Model if we see an X-ray excess there, too," he said.

Researchers could also enlist another X-ray space telescope, called NuStar, to help solve the X-ray excess mystery.

Safdi said he is also excited about ground-based experiments such as CAST at CERN, which operates as a solar telescope to detect axions converted into X-rays by a strong magnet, and ALPS II in Germany, which would use a powerful magnetic field to cause axions to transform into particles of light on one side of a barrier as laser light strikes the other side of the barrier.

Axions have received more attention as a succession of experiments has failed to turn up signs of the WIMP (weakly interacting massive particle), another promising dark matter candidate. And the axion picture is not so straightforward - it could actually be a family album.

There could be hundreds of axion-like particles, or ALPs, that make up dark matter, and string theory - a candidate theory for describing the forces of the universe - holds open the possible existence of many types of ALPs.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Simulating evolution to understand a hidden switch

image: Using computer simulations built on reasonable assumptions and conducted under careful control, computational bioscientists can mimic real biological conditions. Starting with the original founding population (ancient phase), they can evolve the population over several thousand generations to develop an intermediate phase, and then evolve it for another several thousand generations to develop a derived phase.

Image: 
© 2021 KAUST; Anastasia Serin

Computer simulations of cells evolving over tens of thousands of generations reveal why some organisms retain a disused switch mechanism that turns on under severe stress, changing some of their characteristics. Retaining this "hidden" switch is one way for organisms to maintain a high degree of gene expression stability under normal conditions.

Tomato hornworm larvae are green in warmer regions, making camouflage easier, but black in cooler temperatures so that they can absorb more sunlight. This phenomenon, found in some organisms, is called phenotypic switching. Normally hidden, this switching is activated in response to dangerous genetic or environmental changes.

Scientists have typically studied this process by investigating the changes undergone by organisms under different circumstances over many generations. Several years ago, for example, a team bred generations of tobacco hornworm larvae to observe and induce color changes similar to those that occurred in their tomato hornworm relatives.

"Computer simulations, when built on reasonable assumptions and conducted under careful control, are a very powerful tool to mimic the real situation," says KAUST computational bioscientist Xin Gao. "This helps scientists observe and understand principles that are otherwise very difficult, or impossible, to observe by wet-lab experiments."

Gao and KAUST research scientist Hiroyuki Kuwahara designed a computer simulation of the evolution of 1,000 asexual microorganisms. Each organism was given a gene circuit model for regulating the expression of a specific protein X.

The simulation evolved the population over 90,000 generations. The original founding population had identical nonswitching gene circuits and evolved over 30,000 generations, collectively called the ancient population, under stable conditions. The next 30,000 generations, called the intermediate population, were exposed to fluctuating environments that switched every 20 generations. The final 30,000 generations, the derived population, were exposed to a stable environment.

The individuals in the ancient and derived populations, who evolved in stable environments, both had gene expression levels that were optimized for stability. But they were different: the ancient population's stability did not involve phenotypic switching, while the derived population's did. The difference, explains Kuwahara, stems from the intermediate population, in which switching was favored in order to deal with the fluctuating conditions.

The simulations suggest that populations of organisms maintain their switching machinery over a long period of environmental stability by gradually turning low-threshold switches, which flip easily in fluctuating circumstances, into high-threshold switches when the environment is more stable.

This is easier, says Kuwahara, than reverting to a nonswitching state through small mutational shifts. "Instead, we end up with a type of 'hidden' phenotypic switching that acts like an evolutionary capacitor, storing genetic variations and releasing alternative phenotypes in the event of substantial perturbations," Kuwahara says.
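The logic of the three-phase design can be captured in a toy model, sketched below. It compresses each organism's gene circuit into a single evolvable switching threshold and shrinks the population and phase lengths drastically; every parameter is invented, so it illustrates the selection scheme rather than reproducing the study's results:

```python
# Toy version of the three-phase evolutionary simulation: each asexual
# individual is reduced to one evolvable number, the stress threshold at
# which its phenotypic switch fires. Parameters are invented and far smaller
# than the study's 1,000 organisms and 90,000 generations.
import random

POP, GENS, FLIP_EVERY = 200, 500, 20
rng = random.Random(42)

def fitness(threshold: float, env: int) -> float:
    stress = env + rng.gauss(0.0, 0.2)   # the stress cue tracks the environment
    switched = stress > threshold        # the switch fires above the threshold
    return 1.0 if switched == bool(env) else 0.6

def evolve(pop, env_at, gens):
    for g in range(gens):
        env = env_at(g)
        pop = rng.choices(pop, weights=[fitness(h, env) for h in pop], k=POP)
        pop = [max(0.0, h + rng.gauss(0.0, 0.05)) for h in pop]  # mutation
    return pop

pop = [rng.uniform(0.0, 2.0) for _ in range(POP)]
pop = evolve(pop, lambda g: 0, GENS)                      # "ancient": stable
pop = evolve(pop, lambda g: (g // FLIP_EVERY) % 2, GENS)  # "intermediate": fluctuating
print(f"mean threshold after fluctuating phase: {sum(pop)/POP:.2f}")  # low
pop = evolve(pop, lambda g: 0, GENS)                      # "derived": stable again
print(f"mean threshold after stable phase:      {sum(pop)/POP:.2f}")  # higher: hidden switch
```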

The team next plans to use computer simulations to study more complex biological systems while also interactively collaborating with researchers conducting wet-lab experiments. Their aim is to develop theoretical frameworks that can be experimentally validated.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Stuck in a rut: Ocean acidification locks algal communities in a simplified state

image: Image photo of algal communities

Image: 
University of Tsukuba

Tsukuba, Japan - Out with the old, in with the new, as the New Year's saying goes, but not where the marine environment is concerned. Researchers from Japan have discovered that ocean acidification keeps algal communities locked in a simplified state of low biodiversity.

In a study published on 11th January 2021 in Global Change Biology, researchers from the University of Tsukuba have revealed that as oceanic carbon dioxide levels rise, the biodiversity and ecological complexity of marine algal communities decline.

Ocean acidification is the continuing increase in the acidity of the Earth's oceans, caused by the absorption of atmospheric carbon dioxide (CO2). The largest contributor to this acidification is human-caused CO2 emissions from the burning of fossil fuels.

"Ocean acidification is harmful to a lot of different marine organisms," says lead author of the study Professor Ben P. Harvey. "This affects not only ecosystem functions, but the goods and services that people get from marine resources."

To examine the changes caused by CO2-enriched waters in algal communities, the researchers anchored tiles in the ocean for the algae to grow on. The tiles were placed in reference conditions (i.e., ones that represent the structure and function of biological communities subject to no/very minor human-caused disturbances) and acidified conditions. The team used a natural CO2 seep for the acidified conditions to represent the CO2 conditions at the end of this century, and compared differences between the cooler months (January to July) and warmer months (July to January).

"We found that the tiles ended up being taken over by turf algae in the acidified conditions, and the communities had lower diversity, complexity and biomass," explains Professor Harvey. "This pattern was consistent throughout the seasons, keeping these communities locked in simplified systems that had low biodiversity."

The team also transplanted established communities between the two conditions. The transplanted communities ultimately matched the other communities around them (i.e., high biodiversity, complexity and biomass in the reference conditions, and vice versa for the acidified conditions).

"By understanding the ecological processes that change community structure, we can better evaluate how ocean acidification is likely to alter communities in the future," says Professor Harvey.

The results of this study highlight that if atmospheric CO2 emissions are not reduced, we may see an increased loss of large algal habitats (such as kelp forests). But the study also shows that shallow-water marine communities can recover if significant reductions in CO2 emissions are achieved, such as those urged by the Paris Agreement.

Credit: 
University of Tsukuba

Increased risk of Parkinson's disease in patients with schizophrenia

A new study conducted at the University of Turku, Finland, shows that patients with a schizophrenia spectrum disorder have an increased risk of Parkinson's disease later in life. The increased risk may be due to alterations in the brain's dopamine system caused by dopamine receptor antagonists or by the neurobiological effects of schizophrenia itself.

The record-based case-control study was carried out at the University of Turku in collaboration with the University of Eastern Finland. The study examined the occurrences of previously diagnosed psychotic disorders and schizophrenia in over 25,000 Finnish Parkinson's disease (PD) patients treated in 1996-2019.

In the study, patients with Parkinson's disease were noted to have previously diagnosed psychotic disorders and schizophrenia more often than the control patients of the same age not diagnosed with PD.

- Previous studies have recognised several risk factors for PD, including age, male sex, exposure to insecticides, and head injuries. However, the current understanding is that the development of PD is due to a joint effect of different environmental, hereditary, and patient-specific factors. According to our results, a previously diagnosed psychotic disorder or schizophrenia may be one factor that increases the risk of PD later in life, says Doctoral Candidate Tomi Kuusimäki from the University of Turku, who was the main author of the research article.

Study changes conception of the association between Parkinson's disease and schizophrenia

PD is currently the most rapidly increasing neurological disorder in the world. It is a neurodegenerative disorder that is most common in patients over 60 years of age. The cardinal motor symptoms include shaking, stiffness and slowness of movement. In Finland, circa 15,000 patients are currently living with a PD diagnosis.

In Parkinson's disease, the neurons located in the substantia nigra in the midbrain slowly degenerate, which leads to deficiency in a neurotransmitter called dopamine. As for schizophrenia, the dopamine level increases in some parts of the brain. In addition, the pharmacotherapies used in the primary treatment of PD and schizophrenia appear to have contrasting mechanisms of action. PD symptoms can be alleviated with dopamine receptor agonists, whereas schizophrenia is commonly treated with dopamine receptor antagonists.

- The occurrence of Parkinson's disease and schizophrenia in the same person has been considered rare because these diseases are associated with opposite alterations in the brain's dopamine system. Our study changes this prevailing conception, says Kuusimäki.

Credit: 
University of Turku

Spreading the sound

Tsukuba, Japan - A team of researchers led by the University of Tsukuba has created a new theoretical model to understand the spread of vibrations through disordered materials, such as glass. They found that as the degree of disorder increased, sound waves traveled less and less like ballistic particles, and instead began diffusing incoherently. This work may lead to new heat- and shatter-resistant glass for smartphones and tablets.

Understanding the possible vibrational modes in a material is important for controlling its optical, thermal, and mechanical properties. Sound of a single frequency can propagate through an amorphous material in a unified way, as if it were a particle; scientists call these quasiparticles "phonons." However, this approximation can break down if the material is too disordered, which limits our ability to predict the strength of glass under a wide range of circumstances.

Now, a team of scientists led by the University of Tsukuba has developed a new theoretical framework that explains the observed vibrations in glass in better agreement with experimental data. They demonstrate that treating vibrations as individual phonons is justified only in the limit of long wavelengths; on shorter length scales, disorder leads to increased scattering and the sound waves lose coherence. "We call these excitations 'diffusons,' because they represent the incoherent diffusion of vibrations, as opposed to the directed motion of phonons," explains author Professor Tatsuya Mori.

In fact, at low frequencies the equations start to resemble those of hydrodynamics, which describe the behavior of fluids. The researchers compared the predictions of the model with data obtained from soda-lime glass and showed that it fit the measurements better than previously accepted equations.
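Schematically, and in standard notation rather than the paper's own equations, the distinction is between ballistic propagation and diffusive spreading of vibrational energy:

```latex
% Ballistic phonon vs. diffuson, in standard textbook notation; v is the
% sound velocity and D a vibrational energy diffusivity. A schematic
% contrast, not a formula taken from the paper.
\underbrace{\langle r(t)\rangle \;=\; v\,t}_{\text{phonon: ballistic}}
\qquad \text{vs.} \qquad
\underbrace{\langle r^{2}(t)\rangle \;\simeq\; D\,t}_{\text{diffuson: diffusive}}
```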

"Our research supports the view that this phenomenon is not unique to acoustic phonons, but rather represents a general phenomenon that can occur with other kinds of excitations within disordered materials," co-authors Professor Alessio Zaccone, University of Cambridge and Professor Matteo Baggioli, Instituto de Fisica Teorica UAM-CSIC say. Future work may involve utilizing the effects of disorder in order to improve the durability of glass for smart devices. The work is published in The Journal of Chemical Physics as "Physics of phonon-polaritons in amorphous materials" (DOI:10.1063/5.0033371).

Credit: 
University of Tsukuba

Principles of care established for young adults with substance use disorders

image: Grayken Center for Addiction logo

Image: 
BMC

Boston - A national group of pediatric addiction medicine experts has released newly established principles of care for young adults with substance use disorder. Led by the Grayken Center for Addiction at Boston Medical Center, the collection of peer-reviewed papers was developed to guide providers on how to treat young adults with substance use disorder given their age-specific needs, as well as to elevate national discussions on addressing these challenges more systematically.

Published in Pediatrics, the 11-paper supplement is the result of a convening of national experts in the treatment of young adults to determine the most important principles to address when caring for this unique population of patients with substance use disorder. Consensus was reached that each of the six principles of care "convey a commitment to compassion, therapeutic optimism and social justice."

"Our goal in publishing this supplement is to bring attention to the unique needs and challenges faced by this age group, and highlight the opportunities to best address these needs in order to lead to improved outcomes," said Michael Silverstein, MD, associate chief medical officer for research and population health at Boston Medical Center who served as first and co-author on several papers in the supplement and played a key role in the convening. "We hope that this will start the much-needed dialogue within the medical community about young adult addiction medicine and lead to the development of recommendations and treatment guidelines specific to the needs of these patients."

One of the age groups most heavily impacted by substance use disorders is young adults between the ages of 18 and 25. According to a 2016 report from the Substance Abuse and Mental Health Services Administration, 23 percent of young adults reported using illicit drugs, most commonly marijuana and prescription drugs, while two of three adults in treatment for opioid use disorder report that they first tried drugs before the age of 25. Yet studies have shown that very few young adults identified as needing treatment for a substance use disorder receive it - one in 13 - and those few who do enter treatment are less likely to remain engaged than older adults.

"There are significant cognitive and developmental changes taking place during young adulthood that need to be considered when determining how best to address substance use disorder in this unique patient population," said Scott Hadland, MD, MPH, MS, pediatric addiction specialist at the Grayken Center who served as first and co-author on several of the papers. "We must incorporate, at every opportunity, a way to reduce harm and consequences of use, and address any compounding health conditions that factor into their ability to realize recovery."

Below is the list of papers about the six principles of care:

1. Evidence-based Substance Use Treatment of Young Adults with Substance Use Disorders

2. Engaging the Family in the Care of Young Adults With Substance Use Disorders

3. Support Services for Young Adults With Substance Use Disorders

4. Principles of Care for Young Adults With Co-Occurring Psychiatric and Substance Use Disorders

5. Principles of Harm Reduction for Young People Who Use Drugs

6. The Justice System and Young Adults With Substance Use Disorders

The supplement also includes three perspectives about these principles of care, addressing issues such as racial trauma and screening and prevention, with insight from the authors about how best to address gaps in treatment and care at both the system and policy level.

Credit: 
Boston Medical Center

NIH scientists identify nutrient that helps prevent bacterial infection

image: Colorized scanning electron micrograph showing carbapenem-resistant Klebsiella pneumoniae interacting with a human neutrophil

Image: 
NIAID

WHAT:

Scientists studying the body's natural defenses against bacterial infection have identified a nutrient--taurine--that helps the gut recall prior infections and kill invading bacteria, such as Klebsiella pneumoniae (Kpn). The finding, published in the journal Cell by scientists from five institutes of the National Institutes of Health, could aid efforts seeking alternatives to antibiotics.

Scientists know that microbiota--the trillions of beneficial microbes living harmoniously inside our gut--can protect people from bacterial infections, but little is known about how they provide protection. Scientists are studying the microbiota with an eye to finding or enhancing natural treatments to replace antibiotics, which harm microbiota and become less effective as bacteria develop drug resistance.

The scientists observed that microbiota that had experienced prior infection, when transferred to germ-free mice, helped prevent infection with Kpn. They identified a class of bacteria--Deltaproteobacteria--involved in fighting these infections, and further analysis led them to identify taurine as the trigger for Deltaproteobacteria activity.

Taurine helps the body digest fats and oils and is found naturally in bile acids in the gut. The poisonous gas hydrogen sulfide is a byproduct of microbial taurine metabolism. The scientists believe that low levels of taurine allow pathogens to colonize the gut, but high levels produce enough hydrogen sulfide to prevent colonization. During the study, the researchers realized that a single mild infection is sufficient to prepare the microbiota to resist subsequent infection, and that the liver and gallbladder--which synthesize and store bile acids containing taurine--can develop long-term infection protection.

The study found that taurine given to mice as a supplement in drinking water also prepared the microbiota to prevent infection. However, when mice drank water containing bismuth subsalicylate--a common over-the-counter drug used to treat diarrhea and upset stomach--infection protection waned because bismuth inhibits hydrogen sulfide production.

Scientists from NIH's National Institute of Allergy and Infectious Diseases led the project in collaboration with researchers from the National Institute of General Medical Sciences; the National Cancer Institute; the National Institute of Diabetes and Digestive and Kidney Diseases; and the National Human Genome Research Institute.

Credit: 
NIH/National Institute of Allergy and Infectious Diseases