Culture

Nearly half of those convicted of sharing explicit images of partners online show remorse

In a new study, researchers found nearly half of those who share explicit images of others without permission feel remorse after the fact, while 24% try to deflect blame onto victims. Amy Hasinoff, a researcher at the University of Colorado Denver, joined Danish researcher Sidsel K. Harder to take a deeper dive into the issue of sexual abuse and image sharing.

Hasinoff and Harder looked at how people who shared explicit images online spoke to police officers about the harmful acts they committed. Reviewing cases where the image-sharer was caught and convicted, the researchers found that nearly half (44%) involved the guilty party acknowledging the sexual abuse, expressing shame, and telling redemption stories about making better choices in the future.

Deflecting Shame

In almost a quarter of all cases, people who committed image-based abuse assigned the primary blame to the victim, consistent with the neutralization technique of denial of the victim. Denial of the victim occurs when the offender claims that the victim deserved what happened to them; the offender may justify the wrongdoing by saying that the victim was a bad person or deserved the abuse in some way.

"Sometimes, people choose to share sexual images of others without their permission," said Hasinoff. "They do this specifically to cause harm, and sometimes they do it out of negligence and carelessness or because they think it doesn't matter or they feel entitled."

Narrating Redemption

While many of those convicted of non-consensual image sharing shift the blame to someone else, 44% show some sort of remorse and acknowledge that they committed harmful actions. These people all pleaded guilty and expressed that they were ashamed of what they had done, attributing their actions to anger with the victims, a need for respect from male peers, or carelessness.

"What is particularly striking about this study is that a group of people who've done something really abusive are able to manage their shame by transforming their negative emotions into redemption stories about their better future selves," said Hasinoff. "At the same time, telling a redemption story, apologizing, or expressing shame does not guarantee a victim's or a community's forgiveness."

A redemption story is one in which a person who has done something bad talks about how they plan never to repeat it; it can also include trying to make amends. In this case, making amends could include apologizing, paying for services to remove images from the internet, or other ways of making things right with the person they harmed.

Hasinoff points out that there are limitations to any study on shame and remorse, since researchers can never know whether it is sincere. At the same time, emotions are social, especially shame, meaning that we can only ever feel it in relation to other people, so it always depends on context. Even when we feel shame on our own, in our own minds, without others present, shame is typically still about what we think others think about us.

Why is this significant?

According to the researchers, rather than stigmatizing people who have shared images non-consensually, future interventions might focus on helping them accept guilt for their actions. Future research might investigate how to create the best conditions for someone who has shared images without consent to acknowledge the harmfulness of what they have done.

"Instead of seeing everyone who shares a sexual image without permission as an irredeemable 'bad person' and just punishing them, it might be better for the victim if the person who committed this kind of harm could get some guidance and help to understand the effects of what they did and to try to find meaningful ways to repair that harm," said Hasinoff. "This is a society-wide problem that is rooted in gender norms - like the way some men feel entitled to treat women as sexual objects - so part of the solution has to be helping people understand why and how to unlearn those ideas."

Credit: 
University of Colorado Denver

Doping by athletes could become tougher to hide with new detection method

WASHINGTON, April 5, 2021 -- As the world awaits the upcoming Olympic games, a new method for detecting doping compounds in urine samples could level the playing field for those trying to keep athletics clean. Today, scientists report an approach using ion mobility-mass spectrometry to help regulatory agencies detect existing dopants and future "designer" compounds.

The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30. The meeting features nearly 9,000 presentations on a wide range of science topics.

Each year, the World Anti-Doping Agency (WADA) publishes a list of substances, including steroids, that athletes are prohibited from using. However, it can be difficult to distinguish an athlete's natural or "endogenous" steroids from synthetic "exogenous" ones administered to boost performance.

And regulatory bodies face another challenge: "As quickly as we develop methods to look for performance-enhancing drugs, clandestine labs develop new substances that give athletes a competitive advantage," says Christopher Chouinard, Ph.D., the project's principal investigator. Those designer drugs evade detection if testing labs don't know to look for their specific chemical structures.

Chouinard's team at Florida Institute of Technology is trying to outsmart cheaters with an assay that can differentiate endogenous and exogenous steroids and can also anticipate the structure of new compounds that might show up in athletes' urine samples.

Currently, testing labs analyze samples using tandem mass spectrometry (MS) and gas or liquid chromatography. These approaches break up molecules in the sample and separate the fragments, yielding spectra that can reveal the identity of the original, intact compounds. But it can be tough to differentiate molecules with minor structural differences -- including isomers -- that distinguish endogenous steroids from exogenous ones, such as the synthetic anabolic steroids athletes take to build muscle.

To accentuate those differences, Chouinard pairs MS with ion mobility (IM) spectrometry, a separation technique he learned as a graduate student with Richard Yost, Ph.D., at the University of Florida. Yost's team and others found that the differences between isomers could be made even more apparent if the molecules in a sample were modified prior to IM-mass spec analysis by reacting them with other compounds. After Chouinard set up his own lab in 2018, he applied this technique by reacting steroid samples with ozone or acetone in the presence of ultraviolet light -- reactions already well-established among researchers who study lipid isomers, but new in the anti-doping arena.

Last year, Chouinard's team reported they had successfully used these reactions with IM-MS to improve isomer separation, identification and quantification for a few steroids in sample solutions. Now, the researchers report they have tested this technique in urine against nearly half the prohibited steroids on WADA's list and have shown it can successfully characterize and identify these compounds. They also showed the method can characterize and identify banned glucocorticoids, such as cortisone, that improve athletic performance by suppressing inflammation from injuries. Detection limits are below one nanogram per ml.

In addition to tracking down known dopants, the team wants to be able to find newly created illicit steroids not yet known to WADA. With Florida Institute of Technology collaborators including Roberto Peverati, Ph.D., they are developing computational modeling and machine learning techniques to try to predict the structure, spectra and other characteristics of these molecules. "If we can develop methods to identify any theoretical steroids in the future, we could dramatically reduce doping because we would be able to detect these new species immediately, without the lag time that's been associated with anti-doping testing over the last 40 years," Chouinard says.

Though the assays themselves are quick, simple and inexpensive, IM instruments are costly, with a price ranging up to roughly a million dollars, Chouinard notes. However, he adds, with the support of anti-doping funding organizations like the Partnership for Clean Competition (PCC), more labs might be willing to foot that bill, so long as the method offers a significant advantage in detection and deterrence.

Credit: 
American Chemical Society

See further: Scientists achieve single-photon imaging over 200 km

image: (a) Visible-band photograph of the mountains taken by a standard astronomical camera equipped with a telescope. The elevation is approximately 4500 m. (b) Schematic diagram of the experimental setup. (c) Photograph of the setup hardware, including the optical system (top and bottom left) and the electronic control system (bottom right). (d) View of the temporary laboratory where lidar was implemented at an altitude of 1770 m.

Image: 
LI Zhengping et al.

A research team led by Professor PAN Jianwei and Professor XU Feihu from the University of Science and Technology of China achieved single-photon 3D imaging over 200 km using high-efficiency optical devices and a new noise-suppression technique, an achievement one reviewer described as an almost "heroic" attempt at single-photon lidar imaging at very long distances.

Lidar imaging technology has enabled high-precision 3D imaging of target scenes in recent years. Single-photon imaging lidar, with its single-photon-level sensitivity and picosecond resolution, is an ideal technology for remote optical imaging, yet its imaging range is strictly limited by the quadratically decreasing count of photons that echo back.

The researchers first optimized the transceiver optics. The lidar system adopted a coaxial scanning design for the transmit and receive optical paths, which aligns the transmitting and receiving spots more precisely and achieves higher-resolution imaging than traditional methods.

To differentiate the weak echo signal from strong background noise, the team developed a single-photon avalanche diode (SPAD) detector with 19.3% detection efficiency and a low dark count rate (0.1 kHz). Further, the researchers coated the telescope optics for high transmission at 1550 nm. Together, these improvements yielded higher collection efficiency than previous systems.

The researchers also adopted an efficient temporal filtering technique for noise suppression, which reduces the total noise photon count rate to about 0.4 kHz, at least 50 times lower than in previous works.

Experiment results showed that the system can achieve accurate 3D imaging at up to 201.5 km with single-photon sensitivity.

This work could provide enhanced methods for low-power, single-photon lidar for high-resolution active imaging and sensing over long ranges, and opens a new path for applications in long-range target recognition and Earth observation.

Credit: 
University of Science and Technology of China

Scientists scour genes of 53,000+ people to better battle dangerous diseases

image: Stephen S. Rich, PhD, a genetics researcher at the University of Virginia School of Medicine, helped lead the massive effort to better understand heart, lung, blood and sleep disorders.

Image: 
UVA Health

A new analysis of the entire genetic makeup of more than 53,000 people offers a bonanza of valuable insights into heart, lung, blood and sleep disorders, paving the way for new and better ways to treat and prevent some of the most common causes of disability and death.

The analysis from the Trans-Omics for Precision Medicine (TOPMed) program examines the complete genomes of 53,831 people of diverse backgrounds on different continents. Most are from minority groups, which have been historically underrepresented in genetic studies. The increased representation should translate into better understanding of how heart, lung, blood and sleep disorders affect minorities and should help reduce longstanding health disparities.

"The Human Genome Project has generated a lot of promises and opportunities for applying genomics to precision medicine, and the TOPMed program is a major step in this direction," said Stephen S. Rich, PhD, a genetics researcher at the University of Virginia School of Medicine who helped lead the project. "An important feature of TOPMed is not only publishing the genomic data on 53,000 people with massive amounts of data related to heart, lung, blood and sleep disorders but also the great diversity of the participants who donated their blood and data."

Historic Genome Analysis

The groundbreaking work identified 400 million genetic variants, of which more than 78% had never been described. Nearly 97% were extremely rare, occurring in less than 1% of people. This sheds light on both how genes mutate and on human evolution itself, the researchers say.

Of the groups studied, people of African descent had the greatest genetic variability, the researchers found. The resulting data is the best ever produced on people of African ancestry, the scientists report in the prestigious journal Nature.

The work also offers important new insights into certain gene variants that can reduce people's ability to benefit from prescription drugs. This can vary by race and ethnic group.

"TOPMed is an important and historic effort to include under-represented minority participants in genetic studies," said Rich, who served on the project's Executive Committee and chaired the Steering Committee. "The work of TOPMed should translate not only into better scientific knowledge but increase diversity at all levels - scientists, trainees, participants - in work to extend personalized medicine for everyone."

Rich was joined in the effort by UVA's Ani Manichaikul, PhD; Joe Mychaleckyj, DPhil; and Aakrosh Ratan, PhD. All four are part of both the Center for Public Health Genomics and UVA's Department of Public Health Sciences.

Credit: 
University of Virginia Health System

Humans were apex predators for two million years

image: Human Brain

Image: 
Dr. Miki Ben Dor

Researchers at Tel Aviv University were able to reconstruct the nutrition of stone age humans. In a paper published in the Yearbook of the American Physical Anthropology Association, Dr. Miki Ben-Dor and Prof. Ran Barkai of the Jacob M. Alkov Department of Archaeology at Tel Aviv University, together with Raphael Sirtoli of Portugal, show that humans were an apex predator for about two million years. Only the extinction of larger animals (megafauna) in various parts of the world, and the decline of animal food sources toward the end of the stone age, led humans to gradually increase the vegetable element in their nutrition, until finally they had no choice but to domesticate both plants and animals - and became farmers.

"So far, attempts to reconstruct the diet of stone-age humans were mostly based on comparisons to 20th century hunter-gatherer societies," explains Dr. Ben-Dor. "This comparison is futile, however, because two million years ago hunter-gatherer societies could hunt and consume elephants and other large animals - while today's hunter gatherers do not have access to such bounty. The entire ecosystem has changed, and conditions cannot be compared. We decided to use other methods to reconstruct the diet of stone-age humans: to examine the memory preserved in our own bodies, our metabolism, genetics and physical build. Human behavior changes rapidly, but evolution is slow. The body remembers."

In a process unprecedented in its extent, Dr. Ben-Dor and his colleagues collected about 25 lines of evidence from about 400 scientific papers from different scientific disciplines, dealing with the focal question: Were stone-age humans specialized carnivores or were they generalist omnivores? Most evidence was found in research on current biology, namely genetics, metabolism, physiology and morphology.

"One prominent example is the acidity of the human stomach," says Dr. Ben-Dor. "The acidity in our stomach is high when compared to omnivores and even to other predators. Producing and maintaining strong acidity require large amounts of energy, and its existence is evidence for consuming animal products. Strong acidity provides protection from harmful bacteria found in meat, and prehistoric humans, hunting large animals whose meat sufficed for days or even weeks, often consumed old meat containing large quantities of bacteria, and thus needed to maintain a high level of acidity. Another indication of being predators is the structure of the fat cells in our bodies. In the bodies of omnivores, fat is stored in a relatively small number of large fat cells, while in predators, including humans, it's the other way around: we have a much larger number of smaller fat cells. Significant evidence for the evolution of humans as predators has also been found in our genome. For example, geneticists have concluded that "areas of the human genome were closed off to enable a fat-rich diet, while in chimpanzees, areas of the genome were opened to enable a sugar-rich diet."

Evidence from human biology was supplemented by archaeological evidence. For instance, research on stable isotopes in the bones of prehistoric humans, as well as hunting practices unique to humans, show that humans specialized in hunting large and medium-sized animals with high fat content. Comparing humans to large social predators of today, all of whom hunt large animals and obtain more than 70% of their energy from animal sources, reinforced the conclusion that humans specialized in hunting large animals and were in fact hypercarnivores.

"Hunting large animals is not an afternoon hobby," says Dr. Ben-Dor. "It requires a great deal of knowledge, and lions and hyenas attain these abilities after long years of learning. Clearly, the remains of large animals found in countless archaeological sites are the result of humans' high expertise as hunters of large animals. Many researchers who study the extinction of the large animals agree that hunting by humans played a major role in this extinction - and there is no better proof of humans' specialization in hunting large animals. Most probably, like in current-day predators, hunting itself was a focal human activity throughout most of human evolution. Other archaeological evidence - like the fact that specialized tools for obtaining and processing vegetable foods only appeared in the later stages of human evolution - also supports the centrality of large animals in the human diet, throughout most of human history."

The multidisciplinary reconstruction conducted by TAU researchers for almost a decade proposes a complete change of paradigm in the understanding of human evolution. Contrary to the widespread hypothesis that humans owe their evolution and survival to their dietary flexibility, which allowed them to combine the hunting of animals with vegetable foods, the picture emerging here is of humans evolving mostly as predators of large animals.

"Archaeological evidence does not overlook the fact that stone-age humans also consumed plants," adds Dr. Ben-Dor. "But according to the findings of this study plants only became a major component of the human diet toward the end of the era."

Evidence of genetic changes and the appearance of unique stone tools for processing plants led the researchers to conclude that, starting about 85,000 years ago in Africa, and about 40,000 years ago in Europe and Asia, a gradual rise occurred in the consumption of plant foods as well as dietary diversity - in accordance with varying ecological conditions. This rise was accompanied by an increase in the local uniqueness of the stone tool culture, which is similar to the diversity of material cultures in 20th-century hunter-gatherer societies. In contrast, during the two million years when, according to the researchers, humans were apex predators, long periods of similarity and continuity were observed in stone tools, regardless of local ecological conditions.

"Our study addresses a very great current controversy - both scientific and non-scientific," says Prof. Barkai. "For many people today, the Paleolithic diet is a critical issue, not only with regard to the past, but also concerning the present and future. It is hard to convince a devout vegetarian that his/her ancestors were not vegetarians, and people tend to confuse personal beliefs with scientific reality. Our study is both multidisciplinary and interdisciplinary. We propose a picture that is unprecedented in its inclusiveness and breadth, which clearly shows that humans were initially apex predators, who specialized in hunting large animals. As Darwin discovered, the adaptation of species to obtaining and digesting their food is the main source of evolutionary changes, and thus the claim that humans were apex predators throughout most of their development may provide a broad basis for fundamental insights on the biological and cultural evolution of humans."

Credit: 
Tel-Aviv University

Deep dive into key COVID-19 protein is a step toward new drugs, vaccines

image: The nucleocapsid phosphoprotein (blue) of SARS-CoV-2 (N) (grey) plays critical roles in multiple processes of the SARS-CoV-2 infection cycle, including replication and transcription, and packaging and protecting the genomic RNA (gRNA) (red). The N protein exists as a dimer in solution and interacts with gRNA predominantly through its structured N-terminal domain. N binds RNA multivalently, and as more N proteins become available, stabilizing interactions between RNA and proteins occur, resulting in an organized nucleocapsid. Fluorescence imaging of 1-1000 RNA with a Cy3 fluorescent tag demonstrates that, with the addition of FL-N, RNA-Cy3 becomes organized and condensed (red puncta background).

Image: 
OSU College of Science

CORVALLIS, Ore. - Researchers in the Oregon State University College of Science have taken a key step toward new drugs and vaccines for combating COVID-19 with a deep dive into one protein's interactions with SARS-CoV-2 genetic material.

The virus' nucleocapsid protein, or N protein, is a prime target for disease-fighting interventions because of the critical jobs it performs for the novel coronavirus' infection cycle and because it mutates at a comparatively slow pace. Drugs and vaccines built around the work of the N protein carry the potential to be highly effective and for longer periods of time - i.e., less susceptible to resistance.

Among the SARS-CoV-2 proteins, the N protein is the viral RNA's biggest partner. The RNA holds the genetic instructions the virus uses to get living cells, such as human cells, to make more of itself, and the N protein binds to the RNA and protects it.

Published in Biophysical Journal, the findings are an important jump-off point for additional studies of the N protein and its interactions with RNA as part of a thorough look at the mechanisms of SARS-CoV-2 infection, transmission and control.

Elisar Barbar, professor of biochemistry and biophysics at Oregon State, and Ph.D. candidate Heather Masson-Forsythe led the study with help from undergraduate students Joaquin Rodriguez and Seth Pinckney. The researchers used a range of biophysical techniques that measure changes in the size and shape of the N protein when bound to a fragment of genomic RNA - 1,000 nucleotides of the 30,000-nucleotide genome.

"The genome is rather large for a virus and requires many copies of the N protein to stick to the RNA to give the virus the spherical shape that is necessary for the virus to make more copies of itself," Barbar said. "Our study helps us quantify how many copies of N are needed and how close they are to each other when they stick to the RNA. "

Biophysical studies of N with large segments of RNA by nuclear magnetic resonance are rare, Barbar said, because of the difficulty of preparing the partially disordered N protein and long RNA segments, both prone to aggregation and degradation, but these kinds of studies are a specialty of the Barbar lab. Other researchers' studies generally have been limited to much smaller pieces of RNA and smaller pieces of the N protein.

Rather than just looking at the RNA-binding regions of the N protein on their own, the 1,000-nucleotide view allowed scientists to learn that the protein binds much more strongly when it's a full-length dimer - two copies attached to one another - and to identify regions of the protein that are essential for RNA binding.

"The full protein has structured parts but is actually really flexible, so we know that this flexibility is important for RNA binding," Masson-Forsythe said. "We also know that as N proteins start to bind to the longer RNA, the result is a diverse collection of bound protein/RNA complexes as opposed to one way of binding."

Drugs that thwart the N protein's flexibility would thus be one potential avenue for pharmaceutical researchers, she said. Another possibility would be drugs that disrupt any of those protein/RNA complexes that prove to be of special significance.

Credit: 
Oregon State University

New IT system may help film scriptwriters achieve box-office success

image: Pablo García-Sánchez, a researcher at the UGR's Department of Computer Architecture and Technology and lead author of this study

Image: 
University Of Granada

Could the next Hollywood blockbuster be written by a computer? Scientists from the University of Granada (UGR) and the University of Cádiz (UCA) have designed the world's first computer system based on artificial intelligence techniques that can help film scriptwriters create storylines with the best chance of box-office success.

The researchers focused their analysis on the "tropes" of existing films--that is, the commonplace, predictable, and even necessary clichés that repeatedly feature in film plots, based on rhetorical figures. These storytelling devices and conventions enable directors to readily convey scenarios that viewers find easy to recognise.

As tropes are ideas that are employed repeatedly throughout different films or series, it is often said that virtually all storylines have already appeared in the television series "The Simpsons". This reflection provided the inspiration for the title of the article about this new study, which has just been published in the prestigious journal PLOS ONE: "The Simpsons did it: Exploring the film trope space and its large scale structure". Its authors are Pablo García-Sánchez and Juan Julián Melero, from the UGR's Department of Computer Architecture and Technology, and Antonio Vélez and Manuel Jesús Cobo from the Department of Computer Engineering at the University of Cádiz.

Examples of tropes

García-Sánchez, the lead author of the original study, explains: "Some examples of tropes would be the inevitable villain that the heroes must take on in Marvel films; the detective who hands over his badge and gun; the protagonist's arrival in hell (used in dramas and thrillers such as "Cell 211" or "Below Zero"); the hero's journey (which dates back thousands of years, such as in Homer's Odyssey, but also features in films such as "Star Wars", "The Lord of the Rings", and "Harry Potter"); or the well-kept secret that is suddenly revealed, disrupting the entire plot of a psychological thriller."

The researchers devised a methodology to understand how tropes operate, to visually represent how they relate to one another and to different genres, and, above all, to infer which combinations might be the most successful in creative terms. In other words, using artificial intelligence, their study sought to predict which narrative devices or plot twists may or may not work well with audiences.

To achieve this objective, the researchers consulted an online database called TVTropes, which includes more than 25,000 tropes associated with 10,766 films. This platform is continually updated by fans, making it an ideal source of data for the researchers to analyse. Using the free TropeScraper software, developed at the UGR, they scraped or extracted a list of tropes used in the films and then conducted a mapping exercise based on the user rating and popularity (number of votes) for each film, according to the IMDb website.

The network analysis of these tropes (in what the UGR and UCA researchers have dubbed the "troposphere") was conducted using programmed algorithms to reveal the relationship between the films that share similar tropes and thus build a picture of the existing communities of tropes and communities of films. Using this method, they were able to measure the popularity of the tropes and determine whether they were transversal (general or basic) across all films or highly specific/specialised, and whether they were on the rise (emerging tropes) or, on the contrary, on the decline.
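To make this kind of analysis concrete, here is a minimal, hypothetical sketch of how a film-to-trope mapping (such as one extracted with TropeScraper) could be turned into simple popularity and co-occurrence measures. The `films` dictionary and all names in it are invented for illustration; this is not the authors' actual pipeline, which also incorporates IMDb ratings and community-detection algorithms.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: each film mapped to the set of tropes it uses,
# as might be extracted with a scraper such as TropeScraper.
films = {
    "Film A": {"the_heros_journey", "the_big_bad", "the_reveal"},
    "Film B": {"the_heros_journey", "turn_in_your_badge"},
    "Film C": {"the_big_bad", "the_reveal", "turn_in_your_badge"},
}

# Trope popularity: how many films use each trope.
# Tropes that appear across many films are "transversal"; those in few are specialised.
popularity = Counter(t for tropes in films.values() for t in tropes)

# Co-occurrence network (the "troposphere"): how often two tropes appear in the
# same film. This weighted edge list could be fed to a community-detection step.
cooccurrence = Counter()
for tropes in films.values():
    for a, b in combinations(sorted(tropes), 2):
        cooccurrence[(a, b)] += 1

print(popularity.most_common(3))
print(cooccurrence.most_common(3))
```

In a real analysis, the co-occurrence counts would be combined with per-film ratings and vote counts to estimate which trope combinations tend to accompany well-received films.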

Assisting the creative process

"This research can help film scriptwriters and directors during the creative process. While our system is not yet equipped to write automatically (although that's our next step), it does provide the resources with which to determine what combination of ideas (tropes) may work best," notes García-Sánchez.

The inspiration for the study came when the researchers began to question the over-simplicity of the "same old" characters used repeatedly in video games that quickly become boring. García-Sánchez continues: "We began to wonder how we could model the characters in such a way that they would provide a more interesting experience."

Thanks to the interrelationship between tropes, the films in which they appear, and the ratings given by users to each of these tropes, "depending on the combination and design of the actions based on the tropes, we can now broadly ascertain the level of interest each kind of storyline is likely to generate," explains García-Sánchez.

"For example, some tropes are thematic, so what if we mix tropes that, on the face of it, are very different? What if we were to mix those from science fiction with those that typically appear in musicals in the same film, would that work?" This is just one of the inquiries being conducted by the researchers, who also plan to analyse in more depth which areas of the troposphere receive more attention and are currently enjoying growing success or, conversely, are causing interest to decline.

Using this same procedure, it would also be interesting to study the evolution of tropes in a given genre, country, or decade, and to better understand audience consumption and the viewing public's interaction with audio-visual productions.

Credit: 
University of Granada

Study highlights benefits of tax planning for companies facing financial constraints

A recent study of more than 2,000 companies finds that corporations feeling the pinch of financial constraints can benefit significantly from taking a more aggressive stance in their tax planning strategies. One takeaway of the finding is that tax authorities should look closely at the activities of companies facing financial constraints to make sure their tax activities don't become too aggressive.

Financial constraints aren't unusual and occur when a company can't afford to fund a project that would increase its value. Sometimes the constraints are caused by an external event - like a pandemic - that leaves companies with less income than they were anticipating. Sometimes the factors causing a constraint are specific to a single company, such as corporate mismanagement.

"Business researchers are interested in how companies respond to sudden changes in their financial constraints," says Nathan Goldman, co-author of the study and an assistant professor of accounting in the Poole College of Management at North Carolina State University. "But it's difficult to separate various confounding variables from the financial constraints that companies are facing. However, my co-authors and I realized that the Pension Protection Act of 2006 (PPA) gave us a great opportunity to examine financial constraints."

The PPA increased pension funding requirements by almost 500% for all companies that had defined-benefit pension plans. For example, before the PPA, if a company had $400 million in pension obligations, and $100 million in pension assets, it would have 30 years to fund 90% of its obligations. In other words, it could set aside an additional $9 million per year for 30 years. But the PPA changed that, and the same company would have been required to fund 100% of its obligations within seven years - or an additional $43 million per year.
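The annual figures in that example follow directly from the numbers quoted; a quick back-of-the-envelope check (a sketch for illustration only, not part of the study's methodology) is shown below.

```python
# Figures from the example above: $400 million in obligations, $100 million in assets.
obligations = 400e6
assets = 100e6

# Pre-PPA rule in the example: fund 90% of obligations over 30 years.
pre_ppa_annual = (0.9 * obligations - assets) / 30   # about $8.7M/year, i.e. roughly $9M

# Post-PPA rule in the example: fund 100% of obligations within 7 years.
post_ppa_annual = (1.0 * obligations - assets) / 7   # about $42.9M/year, i.e. roughly $43M

print(f"Pre-PPA:  ${pre_ppa_annual / 1e6:.1f} million per year")
print(f"Post-PPA: ${post_ppa_annual / 1e6:.1f} million per year")
```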

"The PPA imposed new demands on the financial resources of these 'pension firms' - and that means many were suddenly facing financial constraints," Goldman says.

To assess the impact of these financial constraints, the researchers looked at data from 2,647 publicly traded companies: 730 pension firms and 1,907 companies that did not provide defined-benefit pension plans to their employees.

"We found that pension firms were able to recoup 19% of their investment shortfall by modifying their tax strategies," Goldman says. "That is a significant amount of money."

The benefits of the tax strategies can vary in terms of whether they are one-time benefits - such as R&D expenses - or recurring savings - such as relocating operations to locations with lower taxes.

Ultimately, the researchers found that there were three key takeaway messages for the business community.

"One: firms facing financing constraints should consider turning to tax planning as a way of generating capital without increasing debt or equity," Goldman says. "Two: tax planning alone cannot solve liquidity demands. However, the average firm generated a significant amount of cash to offset investment that would otherwise be lost. And, lastly, the taxing authority should more carefully scrutinize tax positions of financially constrained firms - since these firms are more likely to be increasing the aggressiveness of their deductions, and thus have a higher likelihood of being overturned."

Credit: 
North Carolina State University

Beef industry can cut emissions with land management, production efficiency

image: Researchers found the most potential for industry to reduce greenhouse gas emissions in the United States and Brazil.

Image: 
Kenton Rowe for The Nature Conservancy

A comprehensive assessment of 12 different strategies for reducing beef production emissions worldwide found that industry can reduce greenhouse gas (GHG) emissions by as much as 50% in certain regions, with the most potential in the United States and Brazil. The study, "Reducing Climate Impacts of Beef Production: A synthesis of life cycle assessments across management systems and global regions," is published April 5 in Global Change Biology.

A research team led by Colorado State University (CSU) found that widespread use of improved ranching management practices in two distinct areas of beef production would lead to substantial emissions reductions. This includes increased efficiency to produce more beef per unit of GHG emitted - growing bigger cows at a faster rate - and enhanced land management strategies to increase soil and plant carbon sequestration on grazed lands.

Globally, cattle produce about 78% of total livestock GHG emissions. Yet, there are many known management solutions that, if adopted broadly, can reduce, but not totally eliminate, the beef industry's climate change footprint, according to lead author Daniela Cusack, an assistant professor in the Department of Ecosystem Science and Sustainability at CSU.

Overall, the research team found a 46% reduction in net GHG emissions per unit of beef was achieved at sites using carbon sequestration management strategies on grazed lands, including using organic soil amendments and restoring trees and perennial vegetation to areas of degraded forests, woodlands and riverbanks. Additionally, researchers found an overall 8% reduction in net GHGs was achieved at sites using growth efficiency strategies. Net-zero emissions, however, were only achieved in 2% of studies.

"Our analysis shows that we can improve the efficiency and sustainability of beef production, which would significantly reduce the industry's climate impact," said Cusack, also a research associate at the Smithsonian Tropical Research Institute in Panama. "But at the same time, we will never reach net-zero emissions without further innovation and strategies beyond land management and increased growth efficiency. There's a lot of room, globally, for improvement."

Global analysis

Researchers analyzed 292 comparisons of "improved" versus "conventional" beef production systems across Asia, Australia, Brazil, Canada, Latin America and the U.S. The analysis revealed that Brazilian beef production holds the most potential for emissions reductions.

In the studies analyzed, researchers found a 57% GHG emission reduction through improved management strategies for both carbon sequestration and production efficiency in Brazil. Specific strategies include improved feed quality, better breed selections and enhanced fertilizer management.

The biggest impact was found in integrated field management, including intensive rotational grazing schemes, adding soil compost, reforestation of degraded areas and selectively planting forage plants bred for sequestering carbon in soils.

"My home country of Brazil has more than 52 million hectares of degraded pastureland - larger than the state of California," said Amanda Cordeiro, co-author and a graduate student at CSU. "If we can aim for a large-scale regeneration of degraded pastures, implementation of silvo-agro-forestry systems and adoption of other diversified local management strategies to cattle production, Brazil can drastically decrease carbon emissions."

In the U.S., researchers found that carbon sequestration strategies such as integrated field management and intensive rotational grazing reduced beef GHG emissions by more than 100% - or net-zero emissions - in a few grazing systems. But efficiency strategies were not as successful in the U.S. studies, possibly because of a high use of the strategies in the region already.

"Our research shows the important role that ranchers can play in combatting the global climate crisis, while ensuring their livelihoods and way of life," said Clare Kazanski, co-author and North America region scientist with The Nature Conservancy. "By analyzing management strategies in the U.S. and around the world, our research reinforces that ranchers are in a key position to reduce emissions in beef production through various management strategies tailored to their local conditions."

Darrell Wood, a northern California rancher, is an example of a producer leading the way on climate-friendly practices. Wood's family participates in the California Healthy Soils program, which incentivizes practices with a demonstrated climate benefit.

"As a sixth-generation cattle rancher, I see nothing but upside potential from using our cattle as a tool for reducing greenhouse gas emissions," Wood said. "Taking good care of our grasslands not only benefits climate, but also wildlife and the whole ecosystem that generates clean air and water. It'll help the next generation continue our business, too."

Next steps

Although the research shows a significant reduction in the GHG footprints of beef production using improved management strategies, scientists don't yet know the full potential of shifting to these emission-reducing practices because there are very few data on practice adoption levels around the world.

"Asia, for example, is one of the most rapidly growing beef markets, but there is an imbalance between the amount of research focus on improving beef production and the growing demand for beef," Cusack said. "We know with the right land management and efficiency strategies in place, it's possible to have large reductions in emissions across geographic regions, but we need to keep pushing for additional innovations to create a truly transformation shift in the way the global beef system operates to ensure a secure food supply and a healthy environment."

Credit: 
Colorado State University

EMS workers 3 times more likely to experience mental health issues

image: Data source: Estimates are from the published study. Data were collected in 2019.

Image: 
Syracuse University Lerner Center for Public Health Promotion.

Syracuse, N.Y. - Emergency medical service (EMS) workers face triple the risk for significant mental health problems such as depression and posttraumatic stress disorder compared to the general population, according to a recently published study by researchers from Syracuse University.

The study also showed that daily mental health symptoms for EMS workers can be reduced through recovery activities such as exercising, socializing with other people, and finding meaning in the day's challenges.

The study, "Dynamic psychosocial risk and protective factors associated with mental health in Emergency Medical Service (EMS) personnel," was published recently by the Journal of Affective Disorders. The study is also summarized in the Lerner Center for Public Health Promotion research brief "How Do Emergency Medical Service Workers Cope with Daily Stressors?"

The COVID-19 pandemic has underscored the significant mental health burden experienced by EMS workers. The researchers surveyed EMS workers at American Medical Response in Syracuse, N.Y., for eight consecutive days in 2019 to better understand their mental health symptoms related to daily occupational stressors. These stressors can take the form of routine work demands, critical incidents involving serious harm or death, and social conflicts.

"Together, these occupational stressors negatively impacted mental health each day that they occurred," said researcher Bryce Hruska. "Each additional work demand or critical event that an EMS worker encountered on a given workday was associated with a 5% increase in their PTSD symptom severity levels that day, while each social conflict was associated with a 12% increase in their depression symptom severity levels."

The research team was led by Hruska, a Lerner Faculty Affiliate and assistant professor of public health at Syracuse University's David B. Falk College of Sport and Human Dynamics. Here are the team's key findings:

EMS workers experience a diverse array of occupational stressors each day.

These stressors are associated with an elevation in mental health symptoms each day that they occur.

Recovery activities (like exercising or socializing with other people) and looking for meaning in the day's stressors may protect mental health.

The study found that on workdays, the EMS workers engage in approximately three recovery activities during non-work hours, mostly visiting with friends and family, eating a meal with others, and spending quiet time alone.

"These activities had a beneficial impact on mental health; each additional recovery activity in which a worker engaged was associated with a 5% decrease in their depression symptom severity levels that day," Hruska said. "The social nature of the reported recovery activities is notable, given that healthy relationships can alleviate the negative impact of stress on mental health by assisting with coping efforts and helping to reframe the day's stressors.

"Perhaps demonstrating this latter effect, we also found that EMS workers who looked for lessons to learn from the day's challenges experienced a 3% decrease in their daily depression symptoms," Hruska added.

The researchers identified several actionable strategies that build upon the protective behaviors in which the EMS workers naturally engaged and could make some work events less stressful. Here are some instances noted in the study:

Developing or refining communications strategies may be helpful for alleviating the stress associated with managing patients' family and friends and interacting with co-workers.

Recognizing conflicts as an opportunity for learning and growing may be a useful tactic for effectively resolving the situation with fewer negative mental health effects.

Taking time to recharge after a particularly demanding shift offers an opportunity to let emotions cool. For example, when EMS workers respond to a critical event, scheduled downtime may offer an opportunity for recovery and processing of the day's events.

Credit: 
Syracuse University

New paper shows benefits of Louisiana coastal restoration to soil carbon sequestration

image: Dr. Melissa Baustian, coastal ecologist with The Water Institute of the Gulf, collects field data as part of her work on "Long-term carbon sinks in marsh soils of coastal Louisiana are at risk to wetland loss."

Image: 
The Water Institute of the Gulf

BATON ROUGE, La. (March 2021) - Without restoration efforts in coastal Louisiana, marshes in the state could lose half of their current ability to store carbon in the soil over a period of 50 years, according to a new paper published in the American Geophysical Union's Journal of Geophysical Research: Biogeosciences.

"This reduction in capacity could significantly alter the global carbon budget, given that Louisiana's marsh soils account for between 5 and 21 percent of the global soil carbon storage in tidally influenced wetlands," said Melissa Baustian, lead author and coastal ecologist at The Water Institute of the Gulf.

The article, "Long-term carbon sinks in marsh soils of coastal Louisiana are at risk to wetland loss," examined 24 south Louisiana sites located within four marsh habitats defined by the amount of saltwater influence - fresh, intermediate, brackish, and saline. A carbon sink is a reservoir that stores more carbon than it releases.

By working with colleagues from the U.S. Geological Survey, Vernadero Group, Abt Associates, and Tulane University, the team used marsh habitat maps from 1949 to 2013, deep soil cores, soil carbon accumulation rates, and maps of future marsh area to confirm the importance of considering historical habitats when evaluating a coastal area's long-term ability to store carbon in the soil. Due to the evolving nature of coastal wetland habitats, simply looking at current conditions might not reflect how much carbon was buried historically or how much carbon can be buried in the future, especially in Louisiana where land loss is a continuing concern.

"Protection and restoration of these marshes is vital to help protect the pool of buried carbon in the soils, and to prevent release of carbon to the atmosphere from soil oxidation," Baustian said.

As Louisiana continues to build projects contained within its 50-year Coastal Master Plan, Gov. John Bel Edwards announced in August that the Institute, led by Baustian, will work with the state to quantify the carbon sink potential of coastal Louisiana with and without the restoration projects in the state's 2017 Coastal Master Plan, examining how these potential coastal carbon sinks could help reach the Governor's greenhouse gas emissions goals for 2025, 2030, and 2050.

Credit: 
The Water Institute of the Gulf

Less sugar, please! New studies show low glucose levels might assist muscle repair

image: Reducing glucose concentration enhances cell proliferation of muscle stem cells, suggesting that excess glucose impedes cell proliferation capacity.

Image: 
Tokyo Metropolitan University

Tokyo, Japan - Researchers from Tokyo Metropolitan University have shown that skeletal muscle satellite cells, key players in muscle repair, proliferate better in low glucose environments. This is contrary to conventional wisdom that says mammalian cells fare better when there is more sugar to fuel their activities. Because ultra-low glucose environments do not allow other cell types to proliferate, the team could produce pure cultures of satellite cells, potentially a significant boost for biomedical research.

Healthy muscles are an important part of a healthy life. With the wear and tear of everyday use, our muscles continuously repair themselves to stay in top condition. In recent years, scientists have begun to understand how muscle repair works at the cellular level. Skeletal muscle satellite cells, a special type of stem cell residing between the two layers of sheathing (the sarcolemma and basal lamina) that envelop myofiber cells in individual muscle fibers, have been found to be particularly important. When myofiber cells get damaged, the satellite cells go into overdrive, multiplying and finally fusing with myofiber cells. This not only helps repair damage, but also maintains muscle mass. To understand how we lose muscle due to illness, inactivity, or age, getting to grips with the specific mechanisms involved is a key challenge for medical science.

A team of scientists from Tokyo Metropolitan University led by Assistant Professor Yasuro Furuichi, Associate Professor Yasuko Manabe and Professor Nobuharu L Fujii have been studying how skeletal muscle satellite cells multiply outside the body. Looking at cells multiplying in petri dishes in a growth medium, they noticed that higher levels of glucose had an adverse effect on the rate at which they grew. This is counterintuitive; glucose is considered to be essential for cellular growth. It is converted into ATP, the fuel that drives a lot of cellular activity. Yet, the team confirmed that lower glucose media led to a larger number of cells, with all the biochemical markers expected for greater degrees of cell proliferation.

They also confirmed that this doesn't apply to all cells, something they successfully managed to use to their advantage. In experiments in high glucose media, cultures of satellite cells always ended up as a mixture, simply due to other cell types in the original sample also multiplying. By keeping the glucose levels low, they were able to create a situation where satellite cells could proliferate, but other cell types could not, giving a very pure culture of skeletal muscle satellite cells. This is a key prerequisite for studying these cells in a variety of settings, including regenerative medicine. So, was the amount of glucose in their original experiment somehow "just right"? The team added glucose oxidase, a glucose digesting enzyme, to get to even lower levels of glucose, and grew the satellite cells in this glucose-depleted medium. Shockingly, the cells seemed to fare just fine, and proliferated normally. The conclusion is that these particular stem cells seem to derive their energy from a completely different source. Work is ongoing to try to pin down what this is.

The team notes that the sugar levels used in previous experiments matched those found in diabetics. This might explain why loss of muscle mass is seen in diabetic patients, and may have significant implications for how we might keep our muscles healthier for longer.

Credit: 
Tokyo Metropolitan University

Tracking receptor proteins can unveil molecular basis of memory and learning

image: Confocal microscopic image of cortical neurons after two-step labeling. The neurons were fixed, permeabilized, and immunostained with an anti-MAP2 antibody for visualizing the dendrite. Green and red signals indicate labeled fluorescence and anti-MAP2 signals, respectively.

Image: 
Shigeki Kiyonaka

The neurons in our nervous system "talk" to each other by sending and receiving chemical messages called neurotransmitters. This communication is facilitated by cell membrane proteins called receptors, which pick up neurotransmitters and relay them across cells. In a recent study published in Nature Communications, scientists from Japan report their findings on the dynamics of receptors, which can enable understanding of the processes of memory formation and learning.

The regulation of receptor movement and localization within the neuron is important for synaptic plasticity, an important process in the central nervous system. A specific type of glutamate receptor, known as AMPA-type glutamate receptor (AMPAR), undergoes a constant cycle of "trafficking", being cycled in and out of the neuronal membrane. "A precise regulation of this 'trafficking' process is associated with learning, memory formation, and development in neural circuits," says Professor Shigeki Kiyonaka from Nagoya University, Japan, who led the aforementioned study.

While plenty of methods are available to analyze the trafficking of AMPARs, each has its limitations. Biochemical approaches include "tagging" a receptor protein with biotin (a B vitamin); however, this requires purification of the proteins after tagging, hindering quantitative analysis. Another method, which involves producing "fusion" receptor proteins labelled with a fluorescent protein, may interfere with the trafficking process itself. "In most cases, these methods largely rely on the overexpression of target subunits. However, the overexpression of a single receptor subunit may interfere with the localization and/or trafficking of native receptors in neurons," explains Prof. Kiyonaka.

To that end, researchers from Nagoya University, Kyoto University, and Keio University developed an AMPAR-selective reagent (a chemical agent that causes reactions) that allowed them to label AMPARs with chemical probes in cultured neurons in a two-step manner, combining affinity-based labeling with a biocompatible reaction. The new method, as anticipated by Prof. Kiyonaka, proved to be superior to the conventional ones: it allowed scientists to analyze receptor trafficking over both shorter as well as much longer periods (over 120 hours) and did not require extra purification steps after labeling.

The team's analyses showed a three-fold higher concentration of AMPARs at synapses compared with dendrites as well as a half-life of 33 hours in neurons. Additionally, scientists used this technique to label and analyze the trafficking of NMDA-type glutamate receptors (NMDARs), and obtained a half-life of 22 hours in neurons. Interestingly, both half-life values were significantly longer than those reported in HEK293T (a kidney cell line). The researchers attributed this to the formation of large glutamate receptor protein complexes and -- in the case of AMPARs -- a difference in phosphorylation levels.
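For readers unfamiliar with half-lives, the reported values can be translated into the fraction of labeled receptors expected to remain after a given time using the standard exponential-decay relation. The snippet below is a generic illustration based on the half-lives quoted above, assuming simple first-order turnover; it is not code from the study.

```python
def fraction_remaining(hours: float, half_life_hours: float) -> float:
    """Fraction of an initially labeled receptor pool remaining after `hours`,
    assuming simple first-order (exponential) turnover."""
    return 0.5 ** (hours / half_life_hours)

# Half-lives reported in the study: ~33 h for AMPARs, ~22 h for NMDARs in neurons.
for receptor, t_half in [("AMPAR", 33.0), ("NMDAR", 22.0)]:
    frac = fraction_remaining(24, t_half)
    print(f"{receptor}: ~{frac:.0%} of labeled receptors remain after 24 h")
```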

The team is excited by potential implications of their findings. "Our method can contribute to our understanding of the physiological and pathophysiological roles of glutamate receptor trafficking in neurons. This, in turn, can help us understand the molecular mechanism underlying memory formation and the process of learning," says Prof. Kiyonaka.

The study provides a closer look at -- and brings us a step closer to deciphering -- the processes of memory and learning at the molecular level.

Credit: 
Nagoya University

Dual-bed catalyst enables high conversion of syngas to gasoline-range liquid hydrocarbons

image: Schematic diagram for the conversion of syngas to gasoline-range liquid hydrocarbons over a dual-bed catalyst (CZA+Al2O3)/N-ZSM-5(97) and results of the stability test.

Image: 
DICP

Gasoline, the primary transportation fuel, contains hydrocarbons with 5-11 carbons (C5-11) and at present is derived almost entirely from petroleum.

Gasoline can also be produced from non-petroleum syngas. Nonetheless, achieving high conversions of syngas to C5-11 with excellent selectivity and stability remains a challenge.

A research group led by Prof. LIU Zhongmin and Prof. ZHU Wenliang from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences realized highly efficient and selective conversion of syngas to gasoline-range liquid hydrocarbons over a dual-bed catalyst.

The study was published in Chem Catalysis on April 2.

This dual-bed catalyst, (CZA+Al2O3)/N-ZSM-5(97), consists of the conventional syngas-to-dimethyl ether catalyst (CZA+Al2O3) in the upper bed and a dimethyl ether-to-gasoline catalyst, N-ZSM-5(97), in the lower bed.

The selectivity of C5-11 and C3-11 in the hydrocarbon products reached 80.6% and 98.2%, respectively, along with 86.3% CO conversion.

The catalyst exhibited excellent stability, and the iso/n-paraffin ratio in the C5-11 products was up to 18. The nano-sized structure of N-ZSM-5(97) was beneficial for reducing coke and prolonging the lifetime; meanwhile, the low acid content of N-ZSM-5(97) was advantageous for increasing the C5-11 selectivity.

Compared with the Fischer-Tropsch synthesis process, this dual-bed syngas-to-gasoline (STG) process was more suitable for producing high-quality gasoline, along with the co-production of aromatic hydrocarbons.

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy Sciences

From stardust to pale blue dot: Carbon's interstellar journey to Earth

ANN ARBOR--We are made of stardust, the saying goes, and a pair of studies including University of Michigan research finds that may be more true than we previously thought.

The first study, led by U-M researcher Jie (Jackie) Li and published in Science Advances, finds that most of the carbon on Earth was likely delivered from the interstellar medium, the material that exists in space between stars in a galaxy. This likely happened well after the protoplanetary disk, the cloud of dust and gas that circled our young sun and contained the building blocks of the planets, formed and warmed up.

Carbon was also likely sequestered into solids within one million years of the sun's birth--which means that carbon, the backbone of life on earth, survived an interstellar journey to our planet.

Previously, researchers thought carbon in the Earth came from molecules that were initially present in nebular gas, which then accreted into a rocky planet when the gases were cool enough for the molecules to precipitate. Li and her team, which includes U-M astronomer Edwin Bergin, Geoffrey Blake of the California Institute of Technology, Fred Ciesla of the University of Chicago and Marc Hirschmann of the University of Minnesota, point out in this study that the gas molecules that carry carbon wouldn't be available to build the Earth because once carbon vaporizes, it does not condense back into a solid.

"The condensation model has been widely used for decades. It assumes that during the formation of the sun, all of the planet's elements got vaporized, and as the disk cooled, some of these gases condensed and supplied chemical ingredients to solid bodies. But that doesn't work for carbon," said Li, a professor in the U-M Department of Earth and Environmental Sciences.

Much of the carbon was delivered to the disk in the form of organic molecules. However, when carbon is vaporized, it produces much more volatile species that require very low temperatures to form solids. More importantly, carbon does not condense back into an organic form. Because of this, Li and her team inferred that most of Earth's carbon was likely inherited directly from the interstellar medium, avoiding vaporization entirely.

To better understand how Earth acquired its carbon, Li estimated the maximum amount of carbon Earth could contain. To do this, she compared how quickly a seismic wave travels through the core to the known sound velocities of the core. This told the researchers that carbon likely makes up less than half a percent of Earth's mass. Understanding the upper bounds of how much carbon the Earth might contain tells the researchers information about when the carbon might have been delivered here.

"We asked a different question: We asked how much carbon could you stuff in the Earth's core and still be consistent with all the constraints," Bergin said, professor and chair of the U-M Department of Astronomy. "There's uncertainty here. Let's embrace the uncertainty to ask what are the true upper bounds for how much carbon is very deep in the Earth, and that will tell us the true landscape we're within."

A planet's carbon must exist in the right proportion to support life as we know it. Too much carbon, and the Earth's atmosphere would be like Venus, trapping heat from the sun and maintaining a temperature of about 880 degrees Fahrenheit. Too little carbon, and Earth would resemble Mars: an inhospitable place unable to support water-based life, with temperatures around minus 60.

In a second study by the same group of authors, but led by Hirschmann of the University of Minnesota, the researchers looked at how carbon is processed when the small precursors of planets, known as planetesimals, retain carbon during their early formation. By examining the metallic cores of these bodies, now preserved as iron meteorites, they found that during this key step of planetary origin, much of the carbon must be lost as the planetesimals melt, form cores and lose gas. This upends previous thinking, Hirschmann says.

"Most models have the carbon and other life-essential materials such as water and nitrogen going from the nebula into primitive rocky bodies, and these are then delivered to growing planets such as Earth or Mars," said Hirschmann, professor of earth and environmental sciences. "But this skips a key step, in which the planetesimals lose much of their carbon before they accrete to the planets."

Hirschmann's study was recently published in Proceedings of the National Academy of Sciences.

"The planet needs carbon to regulate its climate and allow life to exist, but it's a very delicate thing," Bergin said. "You don't want to have too little, but you don't want to have too much."

Bergin says the two studies both describe two different aspects of carbon loss--and suggest that carbon loss appears to be a central aspect in constructing the Earth as a habitable planet.

"Answering whether or not Earth-like planets exist elsewhere can only be achieved by working at the intersection of disciplines like astronomy and geochemistry," said Ciesla, a U. of C. professor of geophysical sciences. "While approaches and the specific questions that researchers work to answer differ across the fields, building a coherent story requires identifying topics of mutual interest and finding ways to bridge the intellectual gaps between them. Doing so is challenging, but the effort is both stimulating and rewarding."

Blake, a co-author on both studies and a Caltech professor of cosmochemistry and planetary science, and of chemistry, says this kind of interdisciplinary work is critical.

"Over the history of our galaxy alone, rocky planets like the Earth or a bit larger have been assembled hundreds of millions of times around stars like the Sun," he said. "Can we extend this work to examine carbon loss in planetary systems more broadly? Such research will take a diverse community of scholars."

Credit: 
University of Michigan