
Study evaluates the filtration efficacy of 227 commercially available face masks in Brazil

image: Digital microscope image of different face masks made of hybrid material.

Image: 
Fernando G. Morais

By Karina Toledo  |  Agência FAPESP – The novel coronavirus is transmitted mainly via inhalation of saliva droplets or respiratory secretions suspended in the air, so face coverings and social distancing are the most effective ways to prevent COVID-19 until enough vaccines are available for all. In Brazil, fabric masks are among the most widely used because they are cheap, reusable and available in many colors and designs. However, the capacity of this type of face covering to filter aerosol particles of a size equivalent to the novel coronavirus can vary between 15% and 70%, according to a study conducted in Brazil at the University of São Paulo (USP).

The study was supported by FAPESP, and the principal investigator was Paulo Artaxo, a professor in the university’s Physics Institute (IF-USP). It was part of an initiative called (respire!, which aims to ensure access to safe masks for the university community. The results are reported in an article in the journal Aerosol Science and Technology.

“We appraised the filtration efficacy of 227 models sold by drugstores and other common types of store in Brazil to see how much genuine protection they afford the general public,” Artaxo told Agência FAPESP.

The scientists tested each mask using a device containing a sodium chloride solution that emitted aerosol particles 100 nanometers in size; SARS-CoV-2 is about 120 nanometers in diameter. A burst of aerosol was triggered, and particle concentration was measured before and after it passed through the mask.
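The efficiency figures reported below follow directly from those two concentration measurements. As a hypothetical illustration (the function name, mask labels and particle counts are invented placeholders, not data from the study), the calculation can be sketched as:

```python
# Filtration efficiency: the fraction of aerosol particles a mask removes,
# computed from particle concentrations measured before and after the mask.
# All numbers below are invented placeholders, not data from the USP study.

def filtration_efficiency(conc_before: float, conc_after: float) -> float:
    """Return the percentage of particles removed by the mask."""
    if conc_before <= 0:
        raise ValueError("upstream concentration must be positive")
    return 100.0 * (1.0 - conc_after / conc_before)

# Hypothetical particle counts (particles per cm^3) upstream and downstream:
readings = {
    "surgical (example)": (10_000, 500),
    "TNT/polypropylene (example)": (10_000, 1_500),
    "single-layer cotton (example)": (10_000, 6_000),
}

for mask, (before, after) in readings.items():
    print(f"{mask}: {filtration_efficiency(before, after):.0f}% of particles filtered")
```

The same before/after ratio underlies all the percentages quoted in the article, whatever the mask material.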

As expected, surgical masks were most effective in the test, as were the FFP2 or N95 models certified for professional use, filtering 90%-98% of the particles. Next came masks made of non-woven fabric (TNT) or polypropylene and sold in many kinds of store, with an efficiency of 80%-90%, followed by those made of ordinary cotton, spandex or microfiber, which filtered 40% on average (15%-70%). 

Several factors were critical in enhancing or reducing the degree of protection. “Generally speaking, masks with a central seam protect less because the sewing machine makes holes that increase the passage of air. A tightly fitting top edge improves filtration significantly. Some masks made of fabric include fibers of nickel, copper or other metals that inactivate the virus and hence protect the wearer more effectively. There are even electrically charged models that retain more particles. In all cases, however, efficacy drops when the mask is washed because of wear and tear,” said Fernando Morais, first author of the article. Morais is a PhD candidate at IF-USP and a researcher at the Nuclear and Energy Research Institute (IPEN), an agency of the São Paulo State Government. 

Breathability

According to Artaxo, dual-layer cotton masks filtered considerably better than single-layer models, but efficacy was hardly altered by a third layer, which reduced breathability.

“The study innovated in several ways. One was its evaluation of breathability or resistance to air passage,” Artaxo said. “TNT and cotton masks were best in this regard. The FFP2 and N95 models were not as comfortable, but paper masks were the worst. This is important because if a person can’t bear wearing a mask even for five minutes, it’s useless.”

The authors of the article note that although mask efficacy varies, all types help reduce transmission of the virus, and mask-wearing in conjunction with social distancing is fundamental to controlling the pandemic. They advocate mass production of FFP2/N95 masks for distribution free of charge to the general public. This “should be considered in future pandemics”, according to Vanderley John, penultimate author of the article and coordinator of (respire!, which is organized by USP’s Innovation Agency.

“Transmission of the virus is demonstrably airborne and wearing a mask all the time is one of the best prevention strategies, as well as leaving doors and windows open to ventilate rooms as much as possible,” Artaxo said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Sinai Health scientists provide detailed map to understanding human cells

image: Co-first author Christopher Go surveyed the human cell landscape using 192 markers for proteins known to reside in specific organelles that can "tag" neighbouring proteins in the same compartment.

Image: 
Sinai Health

Researchers from Sinai Health have published a study providing an ultra-detailed look at the organization of a living human cell, providing a new tool that can help scientists around the world better understand what happens during disease.

The new study, out today in the journal Nature, was conducted in the laboratory of Dr. Anne-Claude Gingras, a senior investigator at the Lunenfeld-Tanenbaum Research Institute (LTRI) and professor in the Department of Molecular Genetics at the University of Toronto.

Co-first authors Christopher Go and Dr. James Knight surveyed the human cell landscape using 192 markers for proteins known to reside in specific organelles that can "tag" neighbouring proteins in the same compartment.

"The Human Cell Map was able to predict the localization of 4,000 proteins across all compartments in living human cells," said Go. "We sampled all the major organelles of the human cell and used innovative analysis to create the highest resolution map to date, with high accuracy in predicting novel localizations for many unmapped proteins."

The human body is composed of trillions of cells that are each further separated into different compartments with dedicated functions, much like a house has different rooms for sleeping or preparing food. These compartments, called organelles, each contain different proteins that perform specific activities associated with the compartment. The mitochondrion, the so-called powerhouse of the cell, is one example of an organelle.

The scientists said knowing which proteins reside in which organelles is an important first step toward understanding the role of each cellular protein. Earlier approaches often used methods that first kill the cells before trying to separate the organelles from one another.

"Previously, it was like taking the house apart to isolate each of the individual rooms," said Go. "These approaches tend to provide only crude views of the organization of a cell."

The Gingras lab develops tools to detect proteins using instruments known as mass spectrometers. In the new research, they purified the proteins that are "tagged" by organelle markers and identified each of them by mass spectrometry. The team then used computational tools to reconstruct the human cell.

"Through our research, we have shown that we can precisely localize thousands of proteins at a time with relatively little effort," said Dr. Knight, a bioinformatician in the Gingras Lab at the LTRI. "Previous methods for localizing a protein required each protein to be investigated individually, or required a limited focus."

Given the expansive nature of the Human Cell Map, the team also created an analysis portal to allow researchers around the world to delve deeper into the data. Users can scan each of the 192 markers in detail and compare their own data on protein localization to predictions made in the Human Cell Map.

Knight said while this work provides a greater understanding of the organization within the human cell, it also can be leveraged to better understand what happens during disease.

"Human diseases are typically characterized at the molecular level by proteins with aberrant behaviour that cause the cell to behave in pathological ways. In these situations, proteins will often change where they reside in the cell," said Dr. Knight. "Our research is a first step in addressing this challenge in normal cells and we can use it for comparisons against altered cell states, such as disease conditions, to identify proteins with unexpected localizations that may help us understand a diseased cell better."

The team said the map will now be used in a variety of projects to help shed additional light on protein localization in human cells. Future efforts will include using chemical, viral and disease conditions to better characterize how cells adapt structurally to these stressors. This can inform future research efforts towards a mechanistic understanding of diseased states and the development of future therapeutics.

Credit: 
Lunenfeld-Tanenbaum Research Institute

Pandemic shows essential role of ECT as treatment for severe depression

image: An ECT treatment room at Michigan Medicine, the University of Michigan's academic medical center

Image: 
University of Michigan

When the COVID-19 pandemic arrived in North America in March 2020, health care facilities stopped providing all but "essential" care, to reduce infection risks and preserve protective gear known as PPE.

That included changes at many centers that provide ECT (electroconvulsive therapy) for severe depression and other conditions, a new survey shows.

Because ECT involves anesthesia, so that patients are unconscious when carefully controlled pulses of electricity are delivered to key areas of the brain, it is considered an 'aerosol-generating' procedure. That means it poses special risks when a respiratory virus such as the novel coronavirus is in widespread circulation.

In a new commentary in the American Journal of Psychiatry, a team led by Daniel Maixner, M.D., of the University of Michigan Department of Psychiatry and U-M Eisenberg Family Depression Center, describes the experiences of ECT centers during spring of 2020. Maixner leads the ECT program at Michigan Medicine, U-M's academic medical center.

Some centers temporarily stopped accepting new patients for ECT, or prioritized ECT care for only the most severely ill and hospitalized patients. Many changed the schedule for the repeated treatments that a course of ECT entails. All followed newly developed protocols to reduce staff exposure to aerosols using patient screening, used advanced personal protective equipment, also called PPE, and collaborated extensively with anesthesia teams to safely administer treatments.

In all, 16 of the centers reduced their capacity to less than half of their usual patient volume between March and June, including 5 that treated less than a quarter of their usual number of patients. For new patients, and those who had completed their initial course of treatment but needed maintenance, 18 of the 20 centers reduced the frequency of treatments.

The changes came with a price. One center lost a patient to suicide. Three other centers had patients who made serious suicide attempts. Seventy percent of sites had patients return to inpatient psychiatric care after living in the community because they weren't able to receive ECT on the planned schedule, and 80% of the centers had patients who had to restart ECT care from the beginning to get back on track.

Because of this, the authors call for ECT to be seen as "essential" in future waves of COVID-19 activity, and in other crises and pandemics, so that care can continue.

With improved availability of COVID-19 testing and PPE, and with new treatment protocols in place, most of the academic ECT programs were able to react promptly and return to caring for most patients by mid-summer 2020, despite many challenges.

However, Maixner said, "risks are high for our patients during the time of COVID-19 and any other pandemic if access to ECT is curtailed. It is important for psychiatrists and patients to advocate for ECT to remain an essential treatment and not just be considered elective."

The survey was conducted by the ECT Task Group of the National Network of Depression Centers. In addition to the new commentary, Maixner presented additional data on the effects of the pandemic on ECT practice at a recent meeting of the International Society for ECT and Neurostimulation, which has also issued a position statement on the essential role of ECT.

Maixner and colleagues have also studied the efficacy and cost-effectiveness of ECT as a key option for patients who have not responded to other forms of treatment for depression and other mental health conditions.

Credit: 
Michigan Medicine - University of Michigan

The uneven benefits of CSR efforts

image: When reaping benefits from environmental and social activities, not all firms are created equal. Tangible asset-intensive industries do better than intangibles-heavy ones, SMU research has found.

Image: 
Singapore Management University

SMU Office of Research & Tech Transfer - Whether they are in the technology or oil sector, selling shoes or healthcare products, for many companies, green is the new black. While maximising profit might have been the sole priority for most businesses a decade ago, these days it is common for mission-oriented companies to pursue the 'triple bottom line' of people, planet and profit, particularly through corporate social responsibility (CSR) efforts.

While such efforts are commendable, some investors remain primarily concerned about whether firms can do well by doing good; in other words, whether CSR actually can increase a company's value. For instance, CSR activities could enhance brand image and improve customer loyalty, or even make it easier to attract and retain talent, leading to higher future stock returns. However, the wide-ranging and vague quality of these CSR efforts - which can encompass everything from donations to charities to promoting volunteerism among company staff - has typically presented a problem for academics trying to quantify their impact.

To determine the effect of sustainability-related activities on firm value, academics from SMU and INSEAD have embarked on a research project that effectively narrows the scope of CSR efforts to concrete and measurable environmental and social (E&S) activities; for example, redesigning factory processes to reuse water. By zooming in on observable improvements in future operating performance and stock returns, the researchers were able to quantify how E&S activities led to benefits for some - but not all - firms. Interestingly, the impact of E&S activities on future operating performance was largely dependent on company-specific factors such as the nature of assets owned by the business, SMU Assistant Professor of Accounting Grace Fan shared at the SMU/NUS/NTU Accounting Research Conference on April 17, 2021.

A tale of two firms

Crunching the data for more than 4,000 US public companies from 1995 to 2016, including corporate heavyweights such as Apple and Chevron, Professor Fan and team focused on E&S scores in five main categories: environment, community, diversity, employee relations and human rights. They found that in general, E&S activities are related to future improvements in operating performance, but only for firms in tangible asset-intensive industries such as manufacturing, utilities, energy and chemicals. For companies in sectors that are more intangible asset-intensive, which rely more heavily on assets such as intellectual property to derive profits, there was no such beneficial effect.

"For the tangible asset-intensive companies, they have a lot of fixed and heavy assets and processes. We imagine they would derive more benefits from improving their process efficiency and making their workers happier as a result of E&S activities," Professor Fan shared, citing case studies of such process improvements in China, India, the Czech Republic and more.

For instance, US-based conglomerate Honeywell redesigned its chemical cleaning process in a Czech Republic-based plant, which helped reduce its production of chemical waste and consumption of natural gas. Not only did worker safety improve due to reduced handling of toxic chemicals, but production time was also shortened, and the plant saved the company an additional $15,000 a year. Similarly in India, workers at a Honeywell plant implemented an energy conservation programme that allowed it to save an extra 5,000 kilowatt-hours of energy a month, and almost $900,000 a year, Professor Fan explained.

Tracing correlation to causation

Further tracking the impact of E&S activities on stock returns, Professor Fan and team found that E&S activities did in fact correlate with positive stock returns. However, this relationship again occurred mainly in tangible asset-intensive industries. The significant boost in stock returns also disappeared once the researchers controlled for improvements in operating performance, suggesting the positive stock returns were likely due to better internal processes and increased operational efficiencies in these tangible asset-heavy firms.

"It's possible that the stock market does not value the E&S ratings of firms in intangibles-intensive industries such as technology and consumer nondurables as much," Professor Fan said. "Or, the stock market has already incorporated the E&S activities of these firms when determining their value, since their E&S activities may be more easily observable through consumer branding and advertisements compared with tangibles-intensive firms who may embark more on internal process innovations which are more difficult to observe."

Wrapping up the virtual session, Professor Fan delved into the limitations of the study, including the difficulty of claiming causality between E&S ratings and firm value instead of mere correlation. To further investigate this issue, the team will work on collecting more specific data on firm operations, such as the level of carbon emissions and waste production, in addition to E&S ratings.

Credit: 
Singapore Management University

New study may help explain low oxygen levels in COVID-19 patients

image: Shokrollah Elahi led new research that sheds light on why many COVID-19 patients have low levels of oxygen in their blood, and why the anti-inflammatory drug dexamethasone works to treat the potentially dangerous condition.

Image: 
University of Alberta

A new study published in the journal Stem Cell Reports by University of Alberta researchers is shedding light on why many COVID-19 patients, even those not in hospital, are suffering from hypoxia--a potentially dangerous condition in which there is decreased oxygenation in the body's tissues. The study also shows why the anti-inflammatory drug dexamethasone has been an effective treatment for those with the virus.

"Low blood-oxygen levels have been a significant problem in COVID-19 patients," said study lead Shokrollah Elahi, associate professor in the Faculty of Medicine & Dentistry. "Because of that, we thought one potential mechanism might be that COVID-19 impacts red blood cell production."

In the study, Elahi and his team examined the blood of 128 patients with COVID-19. The patients included those who were critically ill and admitted to the ICU, those who had moderate symptoms and were admitted to hospital, and those who had a mild version of the disease and only spent a few hours in hospital. The researchers found that, as the disease became more severe, more immature red blood cells flooded into blood circulation, sometimes making up as much as 60 per cent of the total cells in the blood. By comparison, immature red blood cells make up less than one per cent, or none at all, in a healthy individual's blood.

"Immature red blood cells reside in the bone marrow and we do not normally see them in blood circulation," Elahi explained. "This indicates that the virus is impacting the source of these cells. As a result, and to compensate for the depletion of healthy immature red blood cells, the body is producing significantly more of them in order to provide enough oxygen for the body."

The problem is that immature red blood cells do not transport oxygen--only mature red blood cells do. The second issue is that immature red blood cells are highly susceptible to COVID-19 infection. As immature red blood cells are attacked and destroyed by the virus, the body is unable to replace mature red blood cells--which only live for about 120 days--and the ability to transport oxygen in the bloodstream is diminished.

The question was how the virus infects the immature red blood cells. Elahi, known for his prior work demonstrating that immature red blood cells made certain cells more susceptible to HIV, began by investigating whether the immature red blood cells have receptors for SARS-CoV-2. After a series of studies, Elahi's team was the first in the world to demonstrate that immature red blood cells expressed the receptor ACE2 and a co-receptor, TMPRSS2, which allowed SARS-CoV-2 to infect them.

Working in conjunction with the lab of virologist Lorne Tyrrell at the U of A's Li Ka Shing Institute of Virology, the team performed investigative infection testing with immature red blood cells from COVID-19 patients and showed that these cells became infected with the SARS-CoV-2 virus.

"These findings are exciting but also show two significant consequences," Elahi said. "First, immature red blood cells are the cells being infected by the virus, and when the virus kills them, it forces the body to try to meet the oxygen supply requirements by pumping more immature red blood cells out of the bone marrow. But that just creates more targets for the virus.

"Second, immature red blood cells are actually potent immunosuppressive cells; they suppress antibody production and they suppress T-cell immunity against the virus, making the entire situation worse. So in this study, we have demonstrated that more immature red blood cells means a weaker immune response against the virus."

Following the discovery that immature red blood cells have receptors that allow them to become infected by the coronavirus, Elahi's team then began testing various drugs to see whether they could reduce immature red blood cells' susceptibility to the virus.

"We tried the anti-inflammatory drug dexamethasone, which we knew helped to reduce mortality and the duration of the disease in COVID-19 patients, and we found a significant reduction in the infection of immature red blood cells," said Elahi.

When the team began exploring why dexamethasone had such an effect, they found two potential mechanisms. First, dexamethasone suppresses the response of the ACE2 and TMPRSS2 receptors to SARS-CoV-2 in immature red blood cells, reducing the opportunities for infection. Second, dexamethasone increases the rate at which the immature red blood cells mature, helping the cells shed their nuclei faster. Without the nuclei, the virus has nowhere to replicate.

Luckily, putting Elahi's findings into practice doesn't require significant changes in the way COVID-19 patients are being treated now.

"For the past year, dexamethasone has been widely used in COVID-19 treatment, but there wasn't a good understanding as to why or how it worked," Elahi said. "So we are not repurposing or introducing a new medication; we are providing a mechanism that explains why patients benefit from the drug."

Elahi noted that Wendy Sligl and Mohammed Osman had a crucial role in recruiting COVID-19 patients for the study. The research was supported by Fast Grants, the Canadian Institutes of Health Research and a grant from the Li Ka Shing Institute of Virology.

Credit: 
University of Alberta Faculty of Medicine & Dentistry

Major advance in fabrication of low-cost solar cells also locks up greenhouse gases

image: A team led by Prof. Andre Taylor at the NYU Tandon School of Engineering devised a fast, efficient means of p-doping key layers of perovskite-based solar cells through the use of CO2 bubbles.

Image: 
Andre Taylor

BROOKLYN, New York, Wednesday, June 2, 2021 - Perovskite solar cells have progressed in recent years with rapid increases in power conversion efficiency (from 3% in 2006 to 25.5% today), making them more competitive with silicon-based photovoltaic cells. However, a number of challenges remain before they can become a competitive commercial technology.

Now a team at the NYU Tandon School of Engineering has developed a process to solve one of them, a bottleneck in a critical step involving p-type doping of organic hole-transporting materials within the photovoltaic cells. The research, "CO2 doping of organic interlayers for perovskite solar cells," appears in Nature.

Currently, the p-doping process, achieved by the ingress and diffusion of oxygen into the hole transporting layer, is time intensive (several hours to a day), making commercial mass production of perovskite solar cells impractical.

The Tandon team, led by André D. Taylor, an associate professor, and Jaemin Kong, a post-doctoral associate, along with Miguel Modestino, assistant professor -- all in the Department of Chemical and Biomolecular Engineering -- discovered a method of vastly increasing the speed of this key step through the use of carbon dioxide (CO2) instead of oxygen.

In perovskite solar cells, doped organic semiconductors are normally required as charge-extraction interlayers situated between the photoactive perovskite layer and the electrodes. The conventional means of doping these interlayers involves the addition of lithium bis(trifluoromethane)sulfonimide (LiTFSI), a lithium salt, to spiro-OMeTAD, a π-conjugated organic semiconductor widely used for a hole-transporting material in perovskite solar cells. The doping process is then initiated by exposing spiro-OMeTAD:LiTFSI blend films to air and light.

Not only is this method time consuming, it largely depends on ambient conditions. By contrast, Taylor and his team reported a fast and reproducible doping method that involves bubbling a spiro-OMeTAD:LiTFSI solution with CO2 under ultraviolet light. They found that their process rapidly enhanced electrical conductivity of the interlayer by 100 times compared to that of a pristine blend film, which is also approximately 10 times higher than that obtained from an oxygen bubbling process. The CO2 treated film also resulted in stable, high-efficiency perovskite solar cells without any post-treatments.
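To put those multiples in context, here is a minimal sketch of how the reported enhancement factors relate the three films; the conductivity values are invented placeholders chosen only to mirror the stated ratios, not measurements from the paper.

```python
# Relative conductivity enhancement of a doped film over a reference film.
# Values are placeholders mirroring the reported ratios (~100x over a
# pristine blend film, ~10x over an oxygen-bubbled one), not real data.

def enhancement(sigma_doped: float, sigma_ref: float) -> float:
    """Return the dimensionless conductivity enhancement factor."""
    return sigma_doped / sigma_ref

sigma_pristine = 1e-6   # S/cm, placeholder baseline
sigma_o2 = 1e-5         # S/cm, placeholder (~10x pristine)
sigma_co2 = 1e-4        # S/cm, placeholder (~100x pristine)

print(f"CO2-bubbled vs pristine:   {enhancement(sigma_co2, sigma_pristine):.0f}x")
print(f"CO2-bubbled vs O2-bubbled: {enhancement(sigma_co2, sigma_o2):.0f}x")
```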

"Besides shortening the device fabrication and processing time, application of the pre-doped spiro-OMeTAD in perovskite solar cells makes the cells much more stable," explained Kong, the lead author. "That's partly because most of the detrimental lithium ions in the spiro-OMeTAD:LiTFSI solution were stabilized as lithium carbonates during the CO2 bubbling process."

He added that the lithium carbonates end up being filtered out when the investigators spincast the pre-doped solution onto the perovskite layer. "Thus, we can obtain fairly pure doped organic materials for efficient hole transporting layers."

The team, which included researchers from Samsung, Yale University, Korea Research Institute of Chemical Technology, The Graduate Center of the City University, Wonkwang University, and the Gwangju Institute of Science and Technology, also found that the CO2 doping method can be used for p-type doping of other π-conjugated polymers, such as PTAA, MEH-PPV, P3HT, and PBDB-T. According to Taylor, the researchers are looking to push the boundary beyond typical organic semiconductors used for solar cells.

"We believe that wide applicability of CO2 doping to various π-conjugated organic molecules stimulates research ranging from organic solar cells to organic light emitting diodes (OLEDs) and organic field effect transistors (OFETs) even to thermoelectric devices that all require controlled doping of organic semiconductors," Taylor explained, adding that since this process consumes quite a large amount of CO2 gas, it can also be considered for CO2 capture and sequestration studies in the future.

"At a time when governments and companies alike are now looking to reduce CO2 emissions if not de-carbonize, this research offers an avenue for reacting large amounts of CO2 in lithium carbonate to improve next generation solar cells, while removing this greenhouse gas from the atmosphere," he explained, adding that the idea for this novel approach was a counterintuitive insight from the team's battery research.

"From our long history of working with lithium oxygen/air batteries we know that lithium carbonate formation from exposure of oxygen electrodes to air is a big challenge because it depletes the battery of lithium ions, which destroys battery capacity. In this Spiro doping reaction, however, we are actually exploiting lithium carbonate formation, which binds lithium and prevents it from becoming mobile ions detrimental to the long term stability of the Perovskite solar cell. We are hoping that this CO2 doping technique could be a stepping stone for overcoming existing challenges in organic electronics and beyond."

Credit: 
NYU Tandon School of Engineering

How is the genome like an open book? New research shows cells' 'library system'

image: Nucleus of a stem cell (left) and its differentiated progeny (right) with liquid-like (green) and gel-like (magenta) parts of the genome.

Image: 
Courtesy of Alexandra Zidovska, NYU's Department of Physics

The organization of the human genome relies on physics of different states of matter - such as liquid and solid - a team of scientists has discovered. The findings, which reveal how the physical nature of the genome changes as cells transform to serve specific functions, point to new ways to potentially better understand disease and to create improved therapies for cancer and genetic disorders.

The genome is the library of genetic information essential for life. Each cell contains the entire library, yet it uses only part of this information. Special types of cells, such as a white blood cell or a neuron, have only certain "books" open - those containing information relevant for their function. Researchers have long sought to determine how the genome manages these enormous libraries and allows access to the "books" that are needed, while storing away the ones not in use.

In the newly published study, which appears in the journal Physical Review Letters, the researchers revealed how this happens within a cell.

"We found that the parts of the genome that are being used are liquid, while the unused parts form solid-like islands," explains Alexandra Zidovska, an assistant professor in New York University's Department of Physics and the senior author of the study. "These solid-like islands serve as library bookshelves storing the books with genes not currently in use, while the liquid genome part acts like an 'open book,' which is readily accessible and used for a cell's life and function."

The genome's genetic information is encoded in the DNA molecule. Proper reading and processing of this information is critical for human health and aging. In a human cell, the genome, which contains the genetic code, is housed in the cell nucleus. Barely 10 micrometers in size - or about 10 times smaller than the width of a strand of human hair - it stores about two meters of DNA.

Storing this vast amount of genetic information in such a small space requires packing it in such a way that each piece of DNA, and thus of the genetic code, is easily accessible when needed.

What has been less understood is how this information is stored and what role physics plays in it.

To explore this phenomenon, the researchers, who also included Iraj Eshghi and Jonah Eaton, NYU doctoral candidates, compared cells before and after they become specialized.

Specifically, the scientists mapped motions of the genome in nuclei of mouse stem cells - those that do not yet have a specialized function, but are poised to become any cell type, such as a neuron or a white blood cell - and then let these cells undergo a differentiation into neuronal cells before mapping the genomic motions again. In doing so, they generated the first-ever maps of a genome's motions before and after cell differentiation.

Here they found that stem cells keep their genome "open" - making it as accessible as an open book, with "genetic pages" being easily reachable.

However, the mapping also showed that once a stem cell becomes a specialized cell, e.g. a neuron, this specialized cell keeps readily accessible only parts of the genome that are needed for its specific function. It puts away the unused parts of the genome on "bookshelves." This leaves more space for information that is being actively read out and processed.

"These motions tell us exactly how accessible the genome is in a given place in the cell nucleus," explains Zidovska. "Moreover, these motions reveal the physical state of different parts of the genome, with liquid parts corresponding to loosely packed DNA, and solid-like parts corresponding to tightly packed DNA gels. The genome packing in these different states of matter directly impacts the genome's accessibility; the liquid parts are accessible, in contrast to the solid-like parts. The amazing thing is that this organization relies on physics of different states of matter, liquid and solid."

"Measuring motions of distinct parts of the genome allowed us to show these different physical properties of different parts of the genome, and thus understand the genome organization--the cell's 'library system,' " she adds.

A proper cellular filing system is vital for human health, the researchers note.

"Considering the vast number of cell types in the human body, if a book is missing or misplaced in this cellular library, it may lead to missing or unnecessary information, possibly leading to developmental and inherited disorders as well as afflictions such as cancer," explains Zidovska. "Therefore, revealing how the genome is organized inside the cell nucleus is critical to our understanding of these conditions and diseases. Moreover, such knowledge may help us in designing future therapies and diagnostics of such disorders."

Credit: 
New York University

New articles for Geosphere posted online in May

Boulder, Colo., USA: GSA's dynamic online journal, Geosphere, posts articles online regularly. Locations and topics studied this month include the Moine thrust zone in northwestern Scotland; the Eastern California shear zone; implementation of "OpenTopography"; the finite evolution of "mole tracks"; the southern central Andes; the work of International Ocean Discovery Program (IODP) Expedition 351; and the Fairweather fault, Alaska, USA. You can find these articles at https://geosphere.geoscienceworld.org/content/early/recent.

Detrital-zircon analyses, provenance, and late Paleozoic sediment dispersal in the context of tectonic evolution of the Ouachita orogen
William A. Thomas; George E. Gehrels; Kurt E. Sundell; Mariah C. Romero

Abstract: New analyses for U-Pb ages and εHft values, along with previously published U-Pb ages, from Mississippian-Permian sandstones in synorogenic clastic wedges of the Ouachita foreland and nearby intracratonic basins support new interpretations of provenance and sediment dispersal along the southern Midcontinent of North America. Recently published U-Pb and Hf data from the Marathon foreland confirm a provenance in the accreted Coahuila terrane, which has distinctive Amazonia/Gondwana characteristics. Data from Pennsylvanian-Permian sandstones in the Fort Worth basin, along the southern arm of the Ouachita thrust belt, are nearly identical to those from the Marathon foreland, strongly indicating the same or a similar provenance. The accreted Sabine terrane, which is documented by geophysical data, is in close proximity to the Coahuila terrane, suggesting the two are parts of an originally larger Gondwanan terrane. The available data suggest that the Sabine terrane is a Gondwanan terrane that was the provenance of the detritus in the Fort Worth basin. Detrital-zircon data from Permian sandstones in the intracratonic Anadarko basin are very similar to those from the Fort Worth basin and Marathon foreland, indicating sediment dispersal from the Coahuila and/or Sabine terranes within the Ouachita orogen cratonward from the immediate forelands onto the southern craton. Similar, previously published data from the Permian basin suggest widespread distribution from the Ouachita orogen. In contrast to the other basins along the Ouachita-Marathon foreland, the Mississippian-Pennsylvanian sandstones in the Arkoma basin contain a more diverse distribution of detrital-zircon ages, indicating mixed dispersal pathways of sediment from multiple provenances. Some of the Arkoma sandstones have U-Pb age distributions like those of the Fort Worth and Marathon forelands.
In contrast, other sandstones, especially those with paleocurrent and paleogeographic indicators of southward progradation of depositional systems onto the northern distal shelf of the Arkoma basin, have U-Pb age distributions and εHft values like those of the "Appalachian signature." The combined data suggest a mixture of detritus from the proximal Sabine terrane/Ouachita orogenic belt with detritus routed through the Appalachian basin via the southern Illinois basin to the distal Arkoma basin. The Arkoma basin evidently marks the southwestern extent of Appalachian-derived detritus along the Ouachita-Marathon foreland and the transition southwestward to overfilled basins that spread detritus onto the southern craton from the Ouachita-Marathon orogen, including accreted Gondwanan terranes.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02288.1/598714/Detrital-zircon-analyses-provenance-and-late

Structural, petrological, and tectonic constraints on the Loch Borralan and Loch Ailsh alkaline intrusions, Moine thrust zone, northwestern Scotland
Robert Fox; Michael P. Searle

Abstract: During the Caledonian orogeny, the Moine thrust zone in northwestern Scotland (UK) emplaced Neoproterozoic Moine Supergroup rocks, metamorphosed during the Ordovician (Grampian) and Silurian (Scandian) orogenic periods, westward over the Laurentian passive margin in the northern highlands of Scotland. The Laurentian margin comprises Archean-Paleoproterozoic granulite and amphibolite facies basement (Scourian and Laxfordian complexes, Lewisian gneiss), Proterozoic sedimentary rocks (Stoer and Torridon Groups), and Cambrian-Ordovician passive-margin sediments. Four major thrusts, the Moine, Ben More, Glencoul, and Sole thrusts, are well exposed in the Assynt window. Two highly alkaline syenite intrusions crop out within the Moine thrust zone in the southern Assynt window. The Loch Ailsh and Loch Borralan intrusions range from ultramafic melanite-biotite pyroxenite and pseudoleucite-bearing biotite nepheline syenite (borolanite) to alkali-feldspar-bearing and quartz-bearing syenites. Within the thrust zone, syenites intrude up to the Ordovician Durness Group limestones and dolomites, forming a high-temperature contact metamorphic aureole with diopside-forsterite-phlogopite-brucite marbles exposed at Ledbeg quarry. Controversy remains as to whether the Loch Ailsh and Loch Borralan syenites were intruded prior to thrusting or intruded syn- or post-thrusting. Borolanites contain large white leucite crystals pseudomorphed by alkali feldspar, muscovite, and nepheline (pseudoleucite) that have been flattened and elongated during ductile shearing. The minerals pseudomorphing leucites show signs of ductile deformation indicating that high-temperature (~500 °C) deformation acted upon pseudomorphed leucite crystals that had previously undergone subsolidus breakdown. 
New detailed field mapping and structural and petrological observations are used to constrain the geological evolution of both the Loch Ailsh and the Loch Borralan intrusions and the chronology of the Moine thrust zone. The data support the interpretation that both syenite bodies were intruded immediately prior to thrusting along the Moine, Ben More, and Borralan thrusts.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02330.1/598715/Structural-petrological-and-tectonic-constraints

Tectonostratigraphic record of late Miocene-early Pliocene transtensional faulting in the Eastern California shear zone, southwestern USA
Rebecca J. Dorsey; Brennan O'Connell; Kevin K. Gardner; Mindy B. Homan; Scott E.K. Bennett ...

Abstract: The Eastern California shear zone (ECSZ; southwestern USA) accommodates ~20%-25% of Pacific-North America relative plate motion east of the San Andreas fault, yet little is known about its early tectonic evolution. This paper presents a detailed stratigraphic and structural analysis of the uppermost Miocene to lower Pliocene Bouse Formation in the southern Blythe Basin, lower Colorado River valley, where gently dipping and faulted strata provide a record of deformation in the paleo-ECSZ. In the western Trigo Mountains, splaying strands of the Lost Trigo fault zone include a west-dipping normal fault that cuts the Bouse Formation and a steeply NE-dipping oblique dextral-normal fault where an anomalously thick (~140 m) section of Bouse Formation siliciclastic deposits filled a local fault-controlled depocenter. Systematic basinward thickening and stratal wedge geometries in the western Trigo and southeastern Palo Verde Mountains, on opposite sides of the Colorado River valley, record basinward tilting during deposition of the Bouse Formation. We conclude that the southern Blythe Basin formed as a broad transtensional sag basin in a diffuse releasing stepover between the dextral Laguna fault system in the south and the Cibola and Big Maria fault zones in the north. A palinspastic reconstruction at 5 Ma shows that the southern Blythe Basin was part of a diffuse regional network of linked right-stepping dextral, normal, and oblique-slip faults related to Pacific-North America plate boundary dextral shear. Diffuse transtensional strain linked northward to the Stateline fault system, eastern Garlock fault, and Walker Lane, and southward to the Gulf of California shear zone, which initiated ca. 7-9 Ma, implying a similar age of inception for the paleo-ECSZ.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02337.1/598716/Tectonostratigraphic-record-of-late-Miocene-early

Intra-oceanic submarine arc evolution recorded in an ~1-km-thick rear-arc succession of distal volcaniclastic lobe deposits
Kyle Johnson; Kathleen M. Marsaglia; Philipp A. Brandl; Andrew P. Barth; Ryan Waldman ...

Abstract: International Ocean Discovery Program (IODP) Expedition 351 drilled a rear-arc sedimentary succession ~50 km west of the Kyushu-Palau Ridge, an arc remnant formed by rifting during formation of the Shikoku Basin and the Izu-Bonin-Mariana arc. The ~1-km-thick Eocene to Oligocene deep-marine volcaniclastic succession recovered at Site U1438 provides a unique opportunity to study a nearly complete record of intra-oceanic arc development, from a rear-arc perspective on crust created during subduction initiation rather than supra-subduction seafloor spreading. Detailed facies analysis and definition of depositional units allow for broader stratigraphic analysis and definition of lobe elements. Patterns in gravity-flow deposit types and subunits appear to define a series of stacked lobe systems that accumulated in a rear-arc basin. The lobe subdivisions, in many cases, are a combination of a turbidite-dominated subunit and an overlying debris-flow subunit. Debris flow-rich lobe-channel sequences are grouped into four, 1.6-2 m.y. episodes, each roughly the age range of an arc volcano. Three of the episodes contain overlapping lobe facies that may have resulted from minor channel switching or input from a different source. The progressive up-section coarsening of episodes and the increasing channel-facies thicknesses within each episode suggest progressively prograding facies from a maturing magmatic arc. Submarine geomorphology of the modern Mariana arc and West Mariana Ridge provide present-day examples that can be used to interpret the morphology and evolution of the channel (or channels) that fed sediment to Site U1438, forming the sequences interpreted as depositional lobes. The abrupt change from very thick and massive debris flows to fine-grained turbidites at the unit III to unit II boundary reflects arc rifting and progressive waning of turbidity current and ash inputs. 
This interpretation is consistent with the geochemical record from melt inclusions and detrital zircons. Thus, Site U1438 provides a unique record of the life span of an intra-oceanic arc, from inception through maturation to its demise by intra-arc rifting and stranding of the remnant arc ridge.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02321.1/598717/Intra-oceanic-submarine-arc-evolution-recorded-in

Measuring change at Earth's surface: On-demand vertical and three-dimensional topographic differencing implemented in OpenTopography
Chelsea Scott; Minh Phan; Viswanath Nandigam; Christopher Crosby; J Ramon Arrowsmith

Abstract: Topographic differencing measures landscape change by comparing multitemporal high-resolution topography data sets. Here, we focused on two types of topographic differencing: (1) Vertical differencing is the subtraction of digital elevation models (DEMs) that span an event of interest. (2) Three-dimensional (3-D) differencing measures surface change by registering point clouds with a rigid deformation. We recently released topographic differencing in OpenTopography where users perform on-demand vertical and 3-D differencing via an online interface. OpenTopography is a U.S. National Science Foundation-funded facility that provides access to topographic data and processing tools. While topographic differencing has been applied in numerous research studies, the lack of standardization, particularly of 3-D differencing, requires the customization of processing for individual data sets and hinders the community's ability to efficiently perform differencing on the growing archive of topography data. Our paper focuses on streamlined techniques with which to efficiently difference data sets with varying spatial resolution and sensor type (i.e., optical vs. light detection and ranging [lidar]) and over variable landscapes. To optimize on-demand differencing, we considered algorithm choice and displacement resolution. The optimal resolution is controlled by point density, landscape characteristics (e.g., leaf-on vs. leaf-off), and data set quality. We provide processing options derived from metadata that allow users to produce optimal high-quality results, while experienced users can fine tune the parameters to suit their needs. We anticipate that the differencing tool will expand access to this state-of-the-art technology, will be a valuable educational tool, and will serve as a template for differencing the growing number of multitemporal topography data sets.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02259.1/598718/Measuring-change-at-Earth-s-surface-On-demand
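As the abstract notes, vertical differencing is cell-by-cell subtraction of DEMs that span an event. A minimal sketch of that operation follows; the arrays here are hypothetical stand-ins for real co-registered DEM rasters, and OpenTopography's actual pipeline additionally handles registration, resolution matching, and data quality:

```python
import numpy as np

# Hypothetical 3x3 elevation grids (meters) standing in for co-registered DEMs
# acquired before and after an event of interest.
dem_before = np.array([[100.0, 101.0, 102.0],
                       [100.5, 101.5, 102.5],
                       [101.0, 102.0, 103.0]])

dem_after = dem_before.copy()
dem_after[1, 1] += 0.75  # simulate 0.75 m of uplift at the center cell

# Vertical differencing: subtract the earlier DEM from the later one, cell by cell.
dz = dem_after - dem_before
print(dz[1, 1])  # 0.75 m of elevation change; every other cell is 0.0
```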

Coseismic deformation of the ground during large-slip strike-slip ruptures: Finite evolution of "mole tracks"
T.A. Little; P. Morris; M.P. Hill; J. Kearse; R.J. Van Dissen ...

Abstract: To evaluate ground deformation resulting from large (~10 m) coseismic strike-slip displacements, we focus on deformation of the Kekerengu fault during the November 2016 Mw 7.8 Kaikōura earthquake in New Zealand. Combining post-earthquake field observations with analysis of high-resolution aerial photography and topographic models, we describe the structural geology and geomorphology of the rupture zone. During the earthquake, fissured pressure bulges ("mole tracks") initiated at stepovers between synthetic Riedel (R) faults. As slip accumulated, near-surface "rafts" of cohesive clay-rich sediment, bounded by R faults and capped by grassy turf, rotated about a vertical axis and were internally shortened, thus amplifying the bulges. The bulges are flanked by low-angle contractional faults that emplace the shortened mass of detached sediment outward over less-deformed ground. As slip accrued, turf rafts fragmented into blocks bounded by short secondary fractures striking at a high angle to the main fault trace that we interpret to have originated as antithetic Riedel (R′) faults. Eventually these blocks were dispersed into strongly sheared earth and variably rotated. Along the fault, clockwise rotation of these turf rafts within the rupture zone averaged ~20°-30°, accommodating a finite shear strain of 1.0-1.5 and a distributed strike slip of ~3-4 m. On strike-slip parts of the fault, internal shortening of the rafts averaged 1-2 m parallel to the R faults and ~1 m perpendicular to the main fault trace. Driven by distortional rotation, this contraction of the rafts exceeds the magnitude of fault heave. Turf rafts on slightly transtensional segments of the fault were also bulged and shortened--relationships that can be explained by a kinematic model involving "deformable slats."
In a paleoseismic trench cut perpendicular to the fault, one would observe fissures, low-angle thrusts, and steeply dipping strike-slip faults--some cross-cutting one another--yet all may have formed during a single earthquake featuring a large strike-slip displacement.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02336.1/598719/Coseismic-deformation-of-the-ground-during-large

Detrital zircon record of Phanerozoic magmatism in the southern Central Andes
T.N. Capaldi; N.R. McKenzie; B.K. Horton; C. Mackaman-Lofland; C.L. Colleps ...

Abstract: The spatial and temporal distribution of arc magmatism and associated isotopic variations provide insights into the Phanerozoic history of the western margin of South America during major shifts in Andean and pre-Andean plate interactions. We integrated detrital zircon U-Th-Pb and Hf isotopic results across continental magmatic arc systems of Chile and western Argentina (28°S-33°S) with igneous bedrock geochronologic and zircon Hf isotope results to define isotopic signatures linked to changes in continental margin processes. Key tectonic phases included: Paleozoic terrane accretion and Carboniferous subduction initiation during Gondwanide orogenesis, Permian-Triassic extensional collapse, Jurassic-Paleogene continental arc magmatism, and Neogene flat slab subduction during Andean shortening. The ~550 m.y. record of magmatic activity records spatial trends in magma composition associated with terrane boundaries. East of 69°W, radiogenic isotopic signatures indicate reworked continental lithosphere with enriched (evolved) εHf values and low (0.7) zircon Th/U values consistent with increased asthenospheric contributions during lithospheric thinning. Spatial constraints on Mesozoic to Cenozoic arc width provide a rough approximation of relative subduction angle, such that an increase in arc width reflects shallower slab dip. Comparisons among slab dip calculations with time-averaged εHf and Th/U zircon results exhibit a clear trend of decreasing (enriched) magma compositions with increasing arc width and decreasing slab dip. Collectively, these data sets demonstrate the influence of subduction angle on the position of upper-plate magmatism (including inboard arc advance and outboard arc retreat), changes in isotopic signatures, and overall composition of crustal and mantle material along the western edge of South America.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02346.1/596772/Detrital-zircon-record-of-Phanerozoic-magmatism-in

Prehistoric earthquakes on the Banning strand of the San Andreas fault, North Palm Springs, California
Bryan A. Castillo; Sally F. McGill; Katherine M. Scharer; Doug Yule; Devin McPhillips ...

Abstract: We studied a paleoseismic trench excavated in 2017 across the Banning strand of the San Andreas fault and herein provide the first detailed record of ground-breaking earthquakes on this important fault in Southern California. The trench exposed an ~40-m-wide fault zone cutting through alluvial sand, gravel, silt, and clay deposits. We evaluated the paleoseismic record using a new metric that combines event indicator quality and stratigraphic uncertainty. The most recent paleoearthquake occurred between 950 and 730 calibrated years B.P. (cal yr B.P.), potentially contemporaneous with the last rupture of the San Gorgonio Pass fault zone. We interpret five surface-rupturing earthquakes since 3.3-2.5 ka and eight earthquakes since 7.1-5.7 ka. It is possible that additional events have occurred but were not recognized, especially in the deeper (older) section of the stratigraphy, which was not fully exposed across the fault zone. We calculated an average recurrence interval of 380-640 yr based on four complete earthquake cycles between earthquakes 1 and 5. The average recurrence interval is thus slightly less than the elapsed time since the most recent event on the Banning strand. The average recurrence interval on the Banning strand is thus intermediate between longer intervals published for the San Gorgonio Pass fault zone (~1600 yr) and shorter intervals on both the Mission Creek strand of the San Andreas fault (~215 yr) and the Coachella section (~125 yr) of the San Andreas fault.

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02237.1/596773/Prehistoric-earthquakes-on-the-Banning-strand-of

Geomorphic expression and slip rate of the Fairweather fault, southeast Alaska, and evidence for predecessors of the 1958 rupture
Robert C. Witter; Adrian M. Bender; Katherine M. Scharer; Christopher B. DuRoss; Peter J. Haeussler ...

Abstract: Active traces of the southern Fairweather fault were revealed by light detection and ranging (lidar) and show evidence for transpressional deformation between North America and the Yakutat block in southeast Alaska. We map the Holocene geomorphic expression of tectonic deformation along the southern 30 km of the Fairweather fault, which ruptured in the 1958 moment magnitude 7.8 earthquake. Digital maps of surficial geology, geomorphology, and active faults illustrate both strike-slip and dip-slip deformation styles within a 10°-30° double restraining bend where the southern Fairweather fault steps offshore to the Queen Charlotte fault. We measure offset landforms along the fault and calibrate legacy 14C data to reassess the rate of Holocene strike-slip motion (≥49 mm/yr), which corroborates published estimates that place most of the plate boundary motion on the Fairweather fault. Our slip-rate estimates allow a component of oblique-reverse motion to be accommodated by contractional structures west of the Fairweather fault consistent with geodetic block models. Stratigraphic and structural relations in hand-dug excavations across two active fault strands provide an incomplete paleoseismic record including evidence for up to six surface ruptures in the past 5600 years, and at least two to four events in the past 810 years. The incomplete record suggests an earthquake recurrence interval of ≥270 years--much longer than intervals

View article: https://pubs.geoscienceworld.org/gsa/geosphere/article-abstract/doi/10.1130/GES02299.1/596774/Geomorphic-expression-and-slip-rate-of-the

Credit: 
Geological Society of America

A new dimension in the quest to understand dark matter

image: Flip Tanedo is an assistant professor of physics and astronomy at UC Riverside.

Image: 
Thomas Wasper.

RIVERSIDE, Calif. -- As its name suggests, dark matter -- material which makes up about 85% of the mass in the universe -- emits no light, eluding easy detection. Its properties, too, remain fairly obscure.

Now, a theoretical particle physicist at the University of California, Riverside, and colleagues have published a research paper in the Journal of High Energy Physics that shows how theories positing the existence of a new type of force could help explain dark matter's properties.

"We live in an ocean of dark matter, yet we know very little about what it could be," said Flip Tanedo, an assistant professor of physics and astronomy and the paper's senior author. "It is one of the most vexing known unknowns in nature. We know it exists, but we do not know how to look for it or why it hasn't shown up where we expected it."

Physicists have used telescopes, gigantic underground experiments, and colliders to learn more about dark matter for the last 30 years, though no positive evidence has materialized. The negative evidence, however, has forced theoretical physicists like Tanedo to think more creatively about what dark matter could be.

The new research, which proposes the existence of an extra dimension in space-time to search for dark matter, is part of an ongoing research program at UC Riverside led by Tanedo. According to this theory, some of the dark matter particles don't behave like particles. In effect, invisible particles interact with even more invisible particles in such a way that the latter cease to behave like particles.

"The goal of my research program for the past two years is to extend the idea of dark matter 'talking' to dark forces," Tanedo said. "Over the past decade, physicists have come to appreciate that, in addition to dark matter, hidden dark forces may govern dark matter's interactions. These could completely rewrite the rules for how one ought to look for dark matter."

If two particles of dark matter are attracted to, or repelled by, each other, then dark forces are operating. Tanedo explained that dark forces are described mathematically by a theory with extra dimensions and appear as a continuum of particles that could address puzzles seen in small galaxies.

"Our ongoing research program at UCR is a further generalization of the dark force proposal," he said. "Our observed universe has three dimensions of space. We propose that there may be a fourth dimension that only the dark forces know about. The extra dimension can explain why dark matter has hidden so well from our attempts to study it in a lab."

Tanedo explained that although extra dimensions may sound like an exotic idea, they are actually a mathematical trick to describe "conformal field theories" -- ordinary three-dimensional theories that are highly quantum mechanical. These types of theories are mathematically rich, but do not contain conventional particles and so are typically not considered to be relevant for describing nature. The mathematical equivalence between these challenging three-dimensional theories and a more tractable extra dimensional theory is known as the holographic principle.

"Since these conformal field theories were both intractable and unusual, they hadn't really been systematically applied to dark matter," Tanedo added. "Instead of using that language, we work with the holographic extra-dimensional theory."

The key feature of the extra-dimensional theory is that the force between dark matter particles is described by an infinite number of different particles with different masses called a continuum. In contrast, ordinary forces are described by a single type of particle with a fixed mass. This class of continuum-dark sectors is exciting to Tanedo because it does something "fresh and different."

According to Tanedo, past work on dark sectors focuses primarily on theories that mimic the behavior of visible particles. His research program is exploring the more extreme types of theories that most particle physicists found less interesting, perhaps because no analogs exist in the real world.

In Tanedo's theory, the force between dark matter particles is surprisingly different from the forces felt by ordinary matter.

"For the gravitational force or electric force that I teach in my introductory physics course, when you double the distance between two particles you reduce the force by a factor of four. A continuum force, on the other hand, is reduced by a factor of up to eight."

What implications does this extra dimensional dark force have? Since ordinary matter may not interact with this dark force, Tanedo turned to the idea of self-interacting dark matter, an idea pioneered by Hai-Bo Yu, an associate professor of physics and astronomy at UCR who is not a coauthor on the paper. Yu showed that even in the absence of any interactions with normal matter, the effects of these dark forces could be observed indirectly in dwarf spheroidal galaxies. Tanedo's team found the continuum force can reproduce the observed stellar motions.

"Our model goes further and makes it easier than the self-interacting dark matter model to explain the cosmic origin of dark matter," Tanedo said.

Next, Tanedo's team will explore a continuum version of the "dark photon" model.

"It's a more realistic picture for a dark force," Tanedo said. "Dark photons have been studied in great detail, but our extra-dimensional framework has a few surprises. We will also look into the cosmology of dark forces and the physics of black holes."

Tanedo has been working diligently on identifying "blind spots" in his team's search for dark matter.

"My research program targets one of the assumptions we make about particle physics: that the interaction of particles is well-described by the exchange of more particles," he said. "While that is true for ordinary matter, there's no reason to assume that for dark matter. Their interactions could be described by a continuum of exchanged particles rather than just exchanging a single type of force particle."

Credit: 
University of California - Riverside

How HIV infection shrinks the brain's white matter

image: A confocal microscope image shows an oligodendrocyte in cell culture, labeled to show the cell nucleus in blue and myelin proteins in red, green, and yellow. Researchers from Penn and CHOP have shown that HIV infection prevents oligodendrocytes from maturing, leading to a reduction in white matter in the brain.

Image: 
Raj Putatunda

It's long been known that people living with HIV experience a loss of white matter in their brains. As opposed to "gray matter," which is composed of the cell bodies of neurons, white matter is made up of a fatty substance called myelin that coats neurons, offering protection and helping them transmit signals quickly and efficiently. A reduction in white matter is associated with motor and cognitive impairment.

Earlier work by a team from the University of Pennsylvania and Children's Hospital of Philadelphia (CHOP) found that antiretroviral therapy (ART)--the lifesaving suite of drugs that many people with HIV use daily--can reduce white matter, but it wasn't clear how the virus itself contributed to this loss.

In a new study using both human and rodent cells, the team has hammered out a detailed mechanism, revealing how HIV prevents the myelin-making brain cells called oligodendrocytes from maturing, thus putting a wrench in white matter production. When the researchers applied a compound blocking this process, the cells were once again able to mature.

The work is published in the journal Glia.

"Even when people with HIV have their disease well-controlled by antiretrovirals, they still have the virus present in their bodies, so this study came out of our interest in understanding how HIV infection itself affects white matter," says Kelly Jordan-Sciutto, a professor in Penn's School of Dental Medicine and co-senior author on the study. "By understanding those mechanisms, we can take the next step to protect people with HIV infection from these impacts."

"When people think about the brain, they think of neurons, but they often don't think about white matter, as important as it is," says Judith Grinspan, a research scientist at CHOP and the study's other co-senior author. "But it's clear that myelination is playing key roles in various stages of life: in infancy, in adolescence, and likely during learning in adulthood too. The more we find out about this biology, the more we can do to prevent white matter loss and the harms that can cause."

Jordan-Sciutto and Grinspan have been collaborating for several years to elucidate how ART and HIV affect the brain, and specifically oligodendrocytes, a focus of Grinspan's research. Their previous work on antiretrovirals had shown that commonly used drugs disrupted the function of oligodendrocytes, reducing myelin formation.

In the current study, they aimed to isolate the effect of HIV on this process. Led by Lindsay Roth, who recently earned her doctoral degree within the Biomedical Graduate Studies group at Penn and completed a postdoctoral fellowship working with Jordan-Sciutto and Grinspan, the investigation began by looking at human macrophages, one of the major cell types that HIV infects.

Scientists had hypothesized that HIV's impact on the brain arose indirectly through the activity of these immune cells since the virus doesn't infect neurons or oligodendrocytes. To learn more about how this might affect white matter specifically, the researchers took the fluid in which macrophages infected with HIV were growing and applied it to rat oligodendrocyte precursor cells, which mature into oligodendrocytes. While this treatment didn't kill the precursor cells, it did block them from maturing into oligodendrocytes. Myelin production was subsequently also reduced.

"Immune cells that are infected with the virus secrete harmful substances, which normally target invading organisms, but can can also kill nearby cells, such as neurons, or stop them from differentiating," Grinspan says. "So the next step was to figure out what was being secreted to cause this effect on the oligodendrocytes."

The researchers had a clue to go on: Glutamate, a neurotransmitter, is known to have neurotoxic effects when it reaches high levels. "If you have too much glutamate, you're in big trouble," says Grinspan. Sure enough, when the researchers applied a compound that blunts glutamate levels to HIV-infected macrophages before the transfer of the growth medium to oligodendrocyte precursors, the cells were able to mature into oligodendrocytes. The result suggests that glutamate secreted by the infected macrophages was the culprit behind the precursor cells getting "stuck" in their immature form.

There was another mechanism, however, that the researchers suspected might be involved: the integrated stress response. This response integrates signals from four different signaling pathways, resulting in changes in gene expression that serve to protect the cell from stress or to prompt the cell to die, if the stress is overwhelming. Earlier findings from Jordan-Sciutto's lab had found the integrated stress response was activated in other types of brain cells in patients who had cognitive impairment associated with HIV infection, so the team looked for its involvement in oligodendrocytes as well.

Indeed, they found evidence that the integrated stress response was activated in cultures of oligodendrocyte precursor cells.

Taking this information with what they had found out about glutamate, "Lindsay was able to tie these two things together," Jordan-Sciutto says. She demonstrated that HIV-infected macrophages secreted glutamate, which activated the integrated stress response by turning on a pathway governed by an enzyme called PERK. "If you blocked glutamate, you prevented the activation of the integrated stress response," Jordan-Sciutto says.

To take these findings further, and potentially test out new drug targets to address HIV-related cognitive impairments, the team hopes to use a well-characterized rat model of HIV infection.

"HIV is a human disease, so it's a hard one to model," says Grinspan. "We want to find out if this model recapitulates human disease more accurately than others we've used in the past."

By tracking white matter in this animal model and comparing it with imaging studies done on patients with HIV, they hope to gain a better understanding of the factors that shape white matter loss. They're particularly interested in looking at a cohort of adolescents being treated at CHOP, as teens are a group in whom HIV infection rates are climbing.

Ultimately, the researchers want to discern the effects of the virus from the drugs used to treat it in order to better evaluate the risks of each.

"When we put people on ART, especially kids or adolescents, it's important to understand the implications of doing that," says Jordan-Sciutto. "Antiretrovirals may prevent the establishment of a viral reservoir in the central nervous system, which would be wonderful, but we also know that the drugs can cause harm, particularly to white matter.

"And then of course we can't forget the 37 million HIV-infected individuals who live outside the United States and may not have access to antiretrovrials like the patients here," she says. "We want to know how we can help them too."

Credit: 
University of Pennsylvania

Kids who sleep with their pet still get a good night's rest: Concordia research

image: Professor of Psychology Jennifer McGrath.

Image: 
Concordia University

There is a long-held belief that having your pet sleep on the bed is a bad idea. Aside from taking up space, scratching noisily, or triggering allergies, the most common objection has been that your furry companion would disrupt your sleep.

A new study published in the journal Sleep Health tells a different story. Researchers at Concordia's Pediatric Public Health Psychology Lab (PPHP) found that the sleep quality of the surprisingly high number of children who share a bed with their pets is indistinguishable from that of children who sleep alone.

"Sleeping with your pet does not appear to be disruptive," says the paper's lead author, PhD student Hillary Rowe. "In fact, children who frequently slept with their pet endorsed having higher sleep quality."

Rowe co-wrote the paper with fellow PPHP researchers Denise Jarrin, Neressa Noel, Joanne Ramil and Jennifer McGrath, professor of psychology and the laboratory's director.

Serendipitous findings

The data the researchers used came from the larger Healthy Heart Project, a longitudinal study funded by the Canadian Institutes of Health Research that explores the links between childhood stress, sleep and circadian timing.

Children and parents answered questionnaires about bedtime routines and sleep hygiene: keeping a consistent bedtime, having a relaxing pre-sleep routine and sleeping in a quiet, comfortable space. For two weeks, children wore wrist actigraphy devices and filled out daily logs to track their sleep. Children were also fitted with a specialized home polysomnography device for one night to allow the researchers to record their brain waves (EEG signals) while they were sleeping.

"One of the sleep hygiene questions asked if they shared their bed with a pet," McGrath says. "We were startled to find that one in three children answered yes!"

Following this discovery, they looked to see what the existing literature said about the subject of bed-sharing with animals. They found a few studies with adults, but almost nothing with youth.

"Co-sleeping with a pet is something many children are doing, and we don't know how it influences their sleep," Rowe adds. "So, from a sleep science perspective, we felt this was something important we should look into."

Shining a better light on sleep measurement

The researchers categorized the children into one of three groups based on how often they slept with their pet: never, sometimes or frequent. They then compared the three groups across a diverse range of sleep variables to see if there were any significant differences between them.

"Given the larger goals of the Healthy Heart Project, we were able to not only look at bedtimes and amount of time sleeping (duration), but also how long it took to fall asleep (latency), nighttime awakenings (disruptions) and sleep quality," McGrath says. They found that the three groups were generally similar across all sleep dimensions.

"The findings suggest that the presence of a pet had no negative impact on sleep," Rowe notes. "Indeed, we found that children who slept with their pets most often reported higher perceived sleep quality, especially among adolescents."

She hypothesizes that the children are more likely to consider pets as their friends and derive comfort from sleeping with them.

"These findings also sharpen our thinking about how to improve technology to measure sleep," McGrath adds.

"Many wearables like Apple Watch and Fitbit or even smartphones themselves have accelerometers that detect movement to decode one's sleep. Given the number of people who share their bed with their partner, or their pet, it may be sensible to develop a setting for co-sleeping to tweak the algorithm used to define sleep intrusions or awakenings, which would make for a much more accurate sleep assessment."

Credit: 
Concordia University

R&D exploration or exploitation? How firms respond to import competition

Do firms respond to tougher competition by searching for completely new technological solutions (exploration), or do they work to defend their position by improving current technologies (exploitation)?

Competition from increased import penetration generally results in tight profit margins, low prices, and strong efficiency pressures, immediately affecting firms' bottom lines in the form of reduced profits and increased bankruptcy risk.

A firm's R&D strategy is one of the fundamental determinants of success or failure when responding to competitive threats. To ensure both short-term performance and long-term survival, firms have two basic R&D options: explore new knowledge or exploit existing knowledge bases.

A new study published in the Strategic Management Journal (SMJ) examines how firms change the knowledge sources used in their R&D efforts in response to substantial increases in import penetration in their domestic market. The study was conducted by Raffaele Morandi Stagni of Universidad Carlos III de Madrid, Spain; Andrea Fosfuri of Bocconi University, Milan, Italy; and Juan Santaló of IE University, Madrid, Spain. They studied a sample of U.S. manufacturing firms over the years 1989-2006.

"Our focus on technology reflects both its increasing importance for firm survival and competitive advantage," write the authors. "Specifically, we study competition created by import penetration, which has increased steadily in recent years to become a central concern for companies, for example, dealing with imports from China.

"We find that in the years that immediately follow an increase in import penetration, firms tend to rely more on familiar knowledge in the development of innovations and less on knowledge sources that were not previously used. This switch in R&D strategy also appears to be temporary (reversed in later years), and it is positively associated with an increased likelihood of survival."

The researchers argue that exploration is not only riskier and costlier than exploitation but also requires a longer time horizon to produce results, owing to its slower learning pattern.

They also tested the effects of import penetration according to the type of competition and the type of industry affected. They separated imports from low-technology countries from imports from high-technology countries.

"If technological competition has a different effect on search strategies than price-based competition, we might expect the results to differ," write the researchers. "Instead, the effects of the two types of import penetration are qualitatively similar.

"We also performed a sample split of industries in which the primary customers are other businesses (B2B) or those in which the primary customers are final consumers (B2C). Consistent with the intuition that import penetration issues a greater threat to firm survival in B2B industries, we find that the effect of import penetration on technological exploration and exploitation is stronger for that group than for B2C industries."

The final variable they researched was whether technology search strategies are moderated by factors that might alleviate or increase concerns about firm survival.

"The findings show that the negative relationship between competition and exploration is magnified for firms that are relatively more vulnerable, because they have greater degrees of operating leverage and lower degrees of product diversification," the researchers write.

Credit: 
Strategic Management Society

Partners play pivotal role in pregnant women's alcohol use and babies' development

A new study by a team of University of Rochester psychologists and other researchers in the Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CIFASD) finds that partners of mothers-to-be can directly influence a pregnant woman's likelihood of drinking alcohol and feeling depressed, which affects their babies' development.

The study, which appeared in Alcoholism: Clinical & Experimental Research, highlights the importance of engaging partners in intervention and prevention efforts to help pregnant women avoid drinking alcohol. A baby's prenatal alcohol exposure carries the risk of potential lifelong problems, including premature birth, delayed infant development, and fetal alcohol spectrum disorders (FASD).

"The findings emphasize how many factors influence alcohol use during pregnancy," says lead author Carson Kautz-Turnbull, a third-year graduate student in the Rochester Department of Psychology whose interests lie in FASD intervention work and reaching underserved populations, including racial minorities, rural populations, and low-income groups. "The more we learn about these factors, the more we can reduce stigma around drinking during pregnancy and help in a way that's empowering and meaningful," Kautz-Turnbull says.

The team followed 246 pregnant women at two sites in western Ukraine over time as part of CIFASD, an international research consortium funded by the NIH's National Institute on Alcohol Abuse and Alcoholism, of which researchers at the University's Mt. Hope Family Center are members.

The team found that higher use of alcohol and tobacco by partners as well as pregnant women's lower relationship satisfaction increased the likelihood of their babies' prenatal alcohol exposure. Conversely, women who felt supported by their partners reported lower rates of depressive symptoms and were less likely to drink during pregnancy.

All study participants had a partner; most were married. In their first trimesters, the women reported on their relationship satisfaction (including frequency of quarreling, happiness with the relationship, and ease of talking to their partners), on their partners' substance use, and on their socioeconomic status. In the third trimester, the participants were surveyed about their own drinking habits and depressive symptoms. Subsequently, the researchers assessed the infants' mental and psychomotor development around the age of six months.

According to the team's analysis, pregnant women's depressive symptoms and drinking were directly correlated with the quality of their relationships and with their partners' substance use. (The researchers asked about alcohol and tobacco use only.) Positive partner influences resulted in women's lower alcohol use in late pregnancy and fewer depressive symptoms. The findings applied even when socioeconomic status, which is generally linked to depression and drinking, was taken into account. Higher prenatal alcohol exposure resulted in poorer mental and psychomotor development in the infants, though a mother's prenatal depression did not affect babies the way drinking did.

That's why maternal health and pregnancy interventions are likely to be more effective when partners are included, with benefits for both mothers and babies, the team concludes. Interventions addressing the partners' substance use may help reduce pregnant women's substance use, too, while improving their relationship satisfaction, protecting against depression, and boosting infant development.

Besides Kautz-Turnbull, the study was coauthored by Rochester's Christie Petrenko and Elizabeth Handley, Emory University's Claire Coles and Julie Kable, University of South Alabama's Wladimir Wertelecki, Lyubov Yevtushok of Omni-Net Centers in Ukraine, Natalya Zymak-Zakutnya of the OMNI-Net for Children International Charitable Fund in Ukraine, Christina Chambers of the University of California, San Diego, and CIFASD.

Credit: 
University of Rochester

Study offers insights for communicating about wildlife, zoonotic disease amid COVID-19

A new study from North Carolina State University found that certain types of messages could influence how people perceive information about the spread of diseases from wildlife to humans.

The researchers say the findings, published in the journal Frontiers in Communication, could help scientists, policymakers and others more effectively communicate with diverse audiences about zoonotic diseases and the role of wildlife management in preventing them from spreading to people. Zoonotic diseases are diseases that originate in wildlife and become infectious to people.

"If we want to prevent and mitigate the next giant zoonotic disease, we need people to recognize these diseases can emerge from their interactions with wildlife," said study co-author Nils Peterson, professor of forestry and environmental resources at NC State. "We have to do better with how we interact with wildlife. We also have to do better in terms of our communication, so people recognize the root of the problem. We need to learn how to communicate with people about zoonotic diseases and wildlife trade across partisan divides."

In the study, researchers surveyed 1,554 people across the United States to test whether the way a message is framed influences acceptance of scientific information about zoonotic diseases - specifically in regard to the potential role of wildlife trade in the origin and spread of the virus that causes COVID-19. Scientists from the World Health Organization concluded in a report earlier this year that evidence points to a likely animal origin. One group of scientists has recently called for more clarity.

In the experiment, study participants were asked to read one of three articles. One article used a "technocratic" frame that emphasized the use of technology and human ingenuity to address diseases from wildlife, such as monitoring and culling diseased animals. This frame was designed to appeal to people with an "individualistic" worldview. A second article had a "regulatory" frame that emphasized using land conservation to create wildlife refuges as a solution. This frame was designed to appeal to people with a "communitarian" view. The third article was designed as a control, and was intended to be neutral.

Researchers then asked all of the participants to read part of an article that researchers wrote about COVID-19 and the potential role of wildlife trade in its origin and spread, and asked how valid they perceived the information to be. Researchers also surveyed participants about their trust in science overall, and belief in COVID-19's wildlife origin.

"Past research suggests people process and filter information through their cultural lens, or based on how they think the society should function," said the study's lead author Justin Beall, a graduate student in parks, recreation and tourism management at NC State. "We wanted to know, in the domain of zoonotic disease management, what are the solutions for managing diseases that might align with different cultural values in the United States? Would using those perspectives impact how people accepted scientific information about the wildlife origin of COVID-19?"

Researchers found that people who identified as liberal reported higher perceived risk on average from COVID-19. They were also more likely to accept evidence for the wildlife origin of COVID-19 and support restrictions on wildlife trade.

When researchers considered the link between message frames and participants' acceptance of the information about COVID-19 and the potential role of wildlife trade in its origin and spread, they found liberals who received the technocratic framing were significantly less likely to find the information valid, while conservatives were slightly more likely to find it valid. They didn't see any statistically significant relationship between the "regulatory" framing and participants' acceptance of the information.

"The findings show us that cultural views are relevant for communicating about wildlife disease," Beall said. "We found that the technocratic viewpoint might be more polarizing."

That suggests that for communicating to a diverse public audience about zoonotic disease and wildlife trade, communicators should avoid using the technocratic frame. However, when communicators are speaking to a conservative audience, they could consider using the technocratic frame to increase acceptance.

Researchers underscored the importance of the findings for conveying the idea that the health of humans, wildlife and the environment are connected.

"We all exist in this giant ecosystem, and disease is part of it," said study co-author Lincoln Larson, associate professor of parks, recreation and tourism management at NC State. "If we're talking about the health of humans, we're talking about the health of wildlife and ecosystems simultaneously. It's critical to develop effective communication strategies that resonate with ideologically diverse audiences and lead to bipartisan support and action."

"Improving communication and framing around zoonotic disease could help to prevent the next global pandemic, and that's a message everyone can get behind," he added.

Credit: 
North Carolina State University

Opioid Agonist Therapy reduces mortality risk among people with opioid dependence

A new global review has found that receiving Opioid Agonist Therapy (OAT) is associated with lower risk of multiple causes of death among people with opioid dependence.

The review found that while receiving OAT, people with opioid dependence were less likely to die from overdose, suicide, alcohol-related causes, cancer, or cardiovascular disease.

Researchers from the National Drug and Alcohol Research Centre (NDARC) at UNSW Sydney, the University of Bristol and several other global institutions reviewed the relationship between OAT and mortality across drug types, settings and participant groups, drawing on data from over 700,000 participants, six times as many as any previous review.

The review found that mortality risk was lower for those receiving either buprenorphine or methadone treatment, the two most common forms of OAT for people with opioid dependence.

Lead author, Thomas Santo Jr, PhD candidate at NDARC, said, "People with opioid dependence who receive OAT are not only at lower risk of overdose than those who do not, but also at lower risk of suicide and several other common causes of death."

"This review provides further justification for expanding access to OAT to help lower the risk of mortality among people with opioid dependence," said Mr Santo.

"Importantly, the benefits of OAT were consistent across region, age, sex, and comorbidity status. The few studies that examined the impact of OAT after release from prison, found that time in OAT lowered risk of mortality."

The review confirmed that there was a greater risk of death in the first month after OAT is stopped. For patients on methadone, there was a greater risk of mortality at the beginning of treatment which was not seen for patients on buprenorphine.

"The first four weeks that follow treatment cessation are associated with particularly high rates of suicide and overdose-related mortality," said Mr. Santo.

"These findings emphasise the importance of retention in treatment for those with opioid dependence who start treatment on OAT. There is also a need for more detailed investigation and intervention development to minimise mortality risk during induction onto OAT."

The review shows that randomised controlled trials (RCTs) of OAT are underpowered (do not have a large enough sample size) to examine mortality risk.
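To see why mortality is hard to study in a trial, consider how many participants are needed to detect a difference in a rare outcome. The sketch below applies the standard two-proportion sample-size formula with purely illustrative mortality rates (not figures reported by the review):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm needed to detect a difference
    between two event proportions with a two-sided z-test, using the
    normal-approximation formula."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative rates: 1% mortality without OAT vs 0.5% with OAT
print(n_per_group(0.01, 0.005))  # thousands of participants per arm
```

Because mortality rates this low demand trials with thousands of participants per arm, individual RCTs rarely reach that size, which is why the review pools cohort studies instead.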

"We looked at trial evidence but so few studies were powered to examine mortality, which is why we need to rely on cohort studies of people in treatment around the world," said Mr Santo.

Professor Matt Hickman, at the NIHR Health Protection Research Unit in Behavioural Science and Evaluation at University of Bristol, said, "The research evidence is clear - OAT reduces mortality risk - but the population benefits of OAT may not be realised if treatment periods in the community are too short and prisoners with opioid use disorders are not released on OAT after leaving prison. Countries - like the UK - with ongoing public health crises in drug related deaths - need to review both access to OAT and the way it is delivered to ensure the greatest number of deaths are averted.

"A clinical decision support system, stratifying clients' risk of dropout in real time, may facilitate the identification of those in need of service enhancements to increase engagement and prevent dropout.

"Work to scale up access and retention could have important population-level benefits."

Credit: 
University of Bristol