
Clinical relevance of patient-reported outcomes: new threshold proven feasible in practice

In recent years, the manufacturer dossiers submitted in early benefit assessments of new drugs have increasingly contained responder analyses for patient-relevant outcomes in order to demonstrate the clinical relevance of a difference between two treatment alternatives. Such analyses investigate whether the proportion of patients experiencing a noticeable change in the respective outcome differs between the two treatment groups in a study. The outcomes in question concern health-related quality of life or individual symptoms such as pain or itching, which patients record using scales in questionnaires.

But what difference makes a change relevant for the individual? That is, at what threshold can a response to an intervention be derived for the patient, so that, for example, the difference in the response rates of two groups can be used as an effect measure for early benefit assessments?

When is a difference relevant?

To answer this question, the methodological discussion has long revolved around so-called minimally important differences (MID). This approach is based on the idea that thresholds can be identified for the respective questionnaires that represent the smallest relevant change for patients. However, the methodological problems of this approach have recently become apparent. For instance, it has been shown that an MID is not a fixed value for a questionnaire, but is variable. It depends, for example, on the type and severity of a disease, the direction of a change (improvement or deterioration) or the methods used to determine it.

Furthermore, it has become clear that many scientific studies on the determination of an MID no longer meet today's methodological standards or that the methodology used is inadequately described in the scientific publications. In this situation, IQWiG has developed a new approach that makes it possible to easily determine thresholds delimiting a relevant range with sufficient certainty. The aim of this specification was also to create clarity for manufacturers and to make arbitrary responder analyses based on incomprehensible responder definitions unattractive.

Concerns were unfounded

Based on an evaluation of recent scientific reviews on the topic, IQWiG last year identified a value of 15% of the range of the respective scale as a plausible threshold for a relatively small but sufficiently certain noticeable change. In its General Methods 6.0 published in November 2020, the Institute then specifically stated that in future assessments, it would use responder analyses from a threshold of at least 15% of the scale range of the measurement instrument used.
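
To make the arithmetic concrete: for a questionnaire scored from 0 to 100, the 15% criterion means a patient counts as a responder only if their score changes by at least 15 points. The short Python sketch below illustrates this kind of responder analysis; the scale bounds and patient scores in it are invented for illustration and are not taken from any IQWiG assessment.

# Hedged sketch of a responder analysis using a 15%-of-scale-range threshold.
# Scale bounds and patient data below are illustrative, not from the assessments.

def responder_rate(changes, scale_min, scale_max, fraction=0.15):
    """Share of patients whose improvement is at least `fraction` of the scale range."""
    threshold = fraction * (scale_max - scale_min)
    responders = [c >= threshold for c in changes]
    return sum(responders) / len(responders)

# Hypothetical symptom-score improvements on a 0-100 scale (higher = more improvement).
new_drug   = [22, 5, 18, 30, 0, 16, 25, 12]
comparator = [10, 3, 14, 8, 0, 17, 6, 2]

rate_new = responder_rate(new_drug, 0, 100)      # threshold = 15 points
rate_cmp = responder_rate(comparator, 0, 100)
print(f"Response rates: {rate_new:.0%} vs. {rate_cmp:.0%}, "
      f"difference {rate_new - rate_cmp:+.0%}")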

The strict stipulation of the 15% threshold for the acceptance of responder analyses in IQWiG assessments triggered discussions in the run-up to the General Methods 6.0. Among other things, some manufacturers feared that this approach would not be applicable to all scales and that proving the added benefit of a new drug would thus be made more difficult. On the occasion of the addenda published today on early benefit assessments of the drugs secukinumab (for psoriatic arthritis), ivacaftor (for cystic fibrosis) and alpelisib (for breast cancer), Katrin Nink from IQWiG's Drug Assessment Department emphasizes: "The concerns were unfounded. We see that the new threshold is suitable for use in practice." In the assessments in question, the manufacturers subsequently submitted the requested data so that the Institute could use the responder analyses for its assessments.

This applies to several questionnaires on both the burden of disease symptoms and health-related quality of life in quite different indications such as cystic fibrosis in children, psoriatic arthritis or breast cancer. Nink further explains: "Based on analyses with the new response thresholds, we were able to show the advantages and disadvantages of treatments, such as the advantage of ivacaftor for the quality of life of children with cystic fibrosis. So the new generic criterion works."

Credit: 
Institute for Quality and Efficiency in Health Care

Do known drugs help against SARS-coronavirus-2?

image: (A) SARS-CoV-2 attaches to human cells through binding of its spike protein to the cellular protein ACE2. Next, SARS-CoV-2 exploits human enzymes like the protease TMPRSS2 for the activation of its spike protein, which subsequently fuses the viral membrane with a target cell membrane and thus allows the delivery of the viral genetic information into the cell.
(B) Apart from TMPRSS2, SARS-CoV-2 can utilize other proteases for spike protein activation. Among them are TMPRSS11D and TMPRSS13, which are mainly expressed in the upper respiratory tract. The anti-pancreatitis drugs Camostat (as well as its metabolite GBPA) and Nafamostat not only block TMPRSS2 but also TMPRSS11D and TMPRSS13. As a consequence, switching from TMPRSS2 to TMPRSS11D or TMPRSS13 as activators does not allow SARS-CoV-2 to escape Camostat's/Nafamostat's antiviral activity.

Image: 
Markus Hoffmann

No therapeutics developed specifically for the treatment of COVID-19 are yet available. Repurposing already available medication for COVID-19 therapy is an attractive option to shorten the road to treatment development. The drug Camostat could be suitable. Camostat exerts antiviral activity by blocking the protease TMPRSS2, which SARS-CoV-2 uses to enter cells. However, it was previously unknown whether SARS-CoV-2 can use TMPRSS2-related proteases for cell entry and whether these proteases can be blocked by Camostat. Moreover, it was unclear whether metabolization of Camostat interferes with its antiviral activity. An international team of researchers led by Markus Hoffmann and Stefan Pöhlmann from the German Primate Center (DPZ) - Leibniz Institute for Primate Research has now shown that SARS-CoV-2 can use several TMPRSS2-related proteases for its activation. These proteases are expressed in the upper respiratory tract and are blocked by Camostat. In addition, the researchers found that Camostat and its major metabolite GBPA inhibit SARS-CoV-2 infection of primary human lung tissue. These findings support the further development of Camostat and related compounds for COVID-19 therapy (EBioMedicine).

SARS-CoV-2 depends on activation by the cellular protease TMPRSS2 for infection of lung cells. The researchers of the Infection Biology Unit of DPZ previously documented that the drugs Camostat and Nafamostat, which are used in Japan to treat inflammation of the pancreas, block SARS-CoV-2 infection by inhibiting TMPRSS2. However, it was unknown whether Camostat metabolites also block SARS-CoV-2 and whether the virus can use Camostat-insensitive TMPRSS2-related proteases for infection.

An international team of researchers led by Markus Hoffmann and Stefan Pöhlmann from the Infection Biology Unit of the DPZ has now shown that SARS-CoV-2 can use several TMPRSS2-related proteases for infection, among them TMPRSS11D and TMPRSS13. These proteases may support viral spread in the upper respiratory tract and are blocked by Camostat. This finding indicates that switching to activators other than TMPRSS2 might not allow the virus to replicate in the presence of Camostat.

The researchers could show that not only Camostat but also a major Camostat metabolite, GBPA, blocks TMPRSS2 and SARS-CoV-2 infection. "In the human body Camostat is rapidly converted to GBPA. Therefore, it was crucial to demonstrate that not only Camostat but also GBPA exerts antiviral activity", says Stefan Pöhlmann, the head of the Infection Biology Unit of DPZ. Markus Hoffmann, the first author of the study, adds: "Our results suggest that Camostat/GBPA may unfold antiviral activity in patients. However, for effective treatment of COVID-19, a higher Camostat dose might be required as compared to pancreatitis treatment."

Inhibition of SARS-CoV-2 by Camostat was initially shown using the lung cell line Calu-3. The participation of Armin Braun, Fraunhofer ITEM, Hannover, and Danny Jonigk, Institute of Pathology at the MHH, in the consortium allowed analysis of Camostat's antiviral activity in primary human lung tissue ex vivo. Camostat and GBPA blocked SARS-CoV-2 infection of lung tissue, and Nafamostat showed even greater antiviral activity. Therefore, the team of the Infection Biology Unit and the laboratory of Armin Braun are investigating how Nafamostat can be delivered directly into the human lung for increased antiviral activity. This project receives financial support from the Bundesministerium für Bildung und Forschung (BMBF) (project RENACO, Repurposing of Nafamostatmesylat for treatment of COVID-19).

"Our results on the antiviral activity of Camostat and GBPA are relevant beyond the treatment of COVID-19. TMPRSS2 also plays an important role in other respiratory infections. Thus, Camostat could also be successfully used to treat influenza," says Markus Hoffmann.

Credit: 
Deutsches Primatenzentrum (DPZ)/German Primate Center

March science snapshots

image: This scanning electron microscope image shows SARS-CoV-2 emerging from the surface of cells (blue/pink) cultured in the lab.

Image: 
National Institute of Allergy and Infectious Diseases-Rocky Mountain Laboratories, NIH

Solving a Genetic Mystery at the Heart of the COVID-19 Pandemic

As the COVID-19 pandemic enters its second year, scientists are still working to understand how the new strain of coronavirus evolved, and how it became so much more dangerous than other coronaviruses, which humans have been living alongside for millennia.

Virologists and epidemiologists worldwide have speculated for months that a protein called ORF8 likely holds the answer, and a recent study by Berkeley Lab scientists has helped confirm this hypothesis.

In a paper published in mBio, lead author Russell Neches and his colleagues show that ORF8 evolved from another coronavirus protein called ORF7a, and that both proteins have folds similar to that of a human antibody. This finding helps to explain how the virus avoids immune detection and is able to escalate into a severe infection in some hosts.

"By exploring the structural and functional characteristics of ORF8, and using supercomputers to look at the genomes of over 200,000 viruses, we discovered a striking and highly unusual evolutionary strategy," said co-author Nikos Kyrpides, a computational biologist at the DOE Joint Genome Institute (JGI). "Amazingly, it seems that within the SARS clade, the gene encoding ORF7a is used as a 'template' gene, remaining stable, with a duplicate copy of this gene evolving to a point almost beyond recognition." SARS-CoV-2 arose and exploded into a pandemic when a SARS strain's duplicate ORF7a gene happened to mutate leading to a new protein (which we now call ORF8) that gave it the ability to interfere with immune cells.

According to the team, a similar event occurred in the SARS-CoV strain that caused the SARS epidemic in the early 2000s. In that instance, a copy of the ORF7a gene split into two, resulting in ORF8a and ORF8b proteins.

Christos Ouzounis, senior author of the study and a JGI affiliate scientist, noted that the connection between ORF8 and ORF7a was initially quite difficult to make, due to how little was known about this set of genes and their encoded proteins compared with the existing knowledge about surface proteins (such as the infamous spike protein), and because ORF8 and ORF7a currently seem wildly different. ORF7a is a highly stable, mutation-resistant protein that interacts with very few mammalian host proteins, whereas ORF8 is encoded by the most mutation-prone gene in the viral genome and is now known to be involved in dozens of interactions in the human body.

"Our findings - and their confirmation by parallel sequence and structure studies - reveal ORF8 to be an evolutionary hotspot in the SARS lineage. The lack of knowledge about the role of these genes has diverted attention to the more well-understood genes, but we now know more about this gene and hopefully it will receive more attention from the community," said Kyrpides.

This work was supported by the ExaBiome Project, a Berkeley Lab-led collaboration which develops supercomputing tools for microbiome analysis. JGI is an Office of Science user facility.

Assessing the Costs of Major Power Outages

Little is known about the full impact of widespread, long duration power interruptions, especially the indirect costs and related economy-wide impacts of these events. As a result, the costs of such power interruptions are generally not considered, or considered only incompletely, in utility planning activities.

A new Berkeley Lab report titled "A Hybrid Approach to Estimating the Economic Value of Enhanced Power System Resilience" describes a new approach for estimating the economic costs of widespread, long duration power interruptions, such as the one that occurred recently in Texas. This hybrid method involves using survey responses from utility customers to calibrate a regional economic model that is able to estimate both the direct and indirect costs of these events.

"We believe that this paper is an important breakthrough for researchers interested in estimating the full economic impact of widespread, long duration power disruptions," said Berkeley Lab researcher Peter Larsen.

A second report, titled "Case Studies of the Economic Impacts of Power Interruptions and Damage to Electricity System Infrastructure from Extreme Events," analyzes in detail the economics of power interruptions caused by extreme weather. The researchers examined the effects of Hurricane Harvey in Texas, wildfires in California, and four other extreme events; they found that even years after the events, utility companies had a clear picture of the costs of physical repairs but had not tallied the human and societal costs of the outages. (Researchers from the University of Texas at Austin were co-authors of this report; read their news release about the report here.)

"Our research shows that utilities, regulators, and other stakeholders rarely, if ever, account for the direct and indirect costs of power disruptions in their decision-making," Larsen said.

Location, Location, Location: Regional Tau Deposits in Healthy Elders Predict Alzheimer Disease

Subtle memory deficits are common in normal aging as well as Alzheimer disease (AD), the leading cause of dementia in older adults. This makes AD difficult to diagnose in its early stages. As there is currently no effective treatment to slow or stop the progression of AD, it is important to identify early pathological brain changes, which can start decades before people show symptoms, and then trace the effects that lead to cognitive decline.

Xi Chen and her colleagues in Bill Jagust's research group at Berkeley Lab recently published a study in the Journal of Neuroscience that provides some clarification of the differences between normal aging and AD brains, and elucidates the transition from the former to the latter.

Using positron emission tomography (PET) imaging, they measured levels of tau and beta amyloid (Aβ) - two critical biomarkers of AD - in cognitively normal older adults, and then followed them for several years for prospective cognition assessment. Their findings revealed new insight into tau, a protein that helps stabilize the internal skeleton of neurons in its normal form, but becomes unstable and can interfere with neuron functioning when aggregated. Aggregated tau deposits in the entorhinal cortex, a major part of the memory system, likely reflect normal, age-related memory impairment. However, tau deposits in anterior temporal regions of the brain - responsible for our knowledge of objects, people, words, and facts - were most predictive of AD-related impairment. The finding supports a model of early AD pathology proposed by the researchers whereby tau spreads from the entorhinal cortex to anterior temporal regions facilitated by amyloid.

The team suggests that the presence of tau aggregates in anterior temporal regions could be used as a marker of early disease progression.

New Optical Antennas Could Overcome Data Limits

Researchers at Berkeley Lab and UC Berkeley have found a new way to harness properties of lightwaves that can radically increase the amount of data they carry. They demonstrated the emission of discrete twisting laser beams from antennas made up of concentric rings roughly equal to the diameter of a human hair, small enough to be placed on computer chips.

The new work, reported in a paper published Feb. 25 in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted lightwaves that could be directly multiplexed.

"It's the first time that lasers producing twisted light have been directly multiplexed," said senior author Boubacar Kanté, a faculty scientist in Berkeley Lab's Materials Sciences Division and the Chenming Hu Associate Professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences. "We've been experiencing an explosion of data in our world, and the communication channels we have now will soon be insufficient for what we need. The technology we are reporting overcomes current data capacity limits through a characteristic of light called the orbital angular momentum. It is a game-changer with applications in biological imaging, quantum cryptography, high-capacity communications and sensors."

Read the full UC Berkeley release here.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Cutting off stealthy interlopers: a framework for secure cyber-physical systems

image: The trio of authors from the Department of Information and Communication Engineering at DGIST: Professor Kyung-Joon Park (right), Professor Yongsoon Eun (left), and Sangjun Kim (middle; integrated Master's and PhD student)

Image: 
dgist

In 2015, hackers infiltrated the corporate network of Ukraine's power grid and injected malicious software, which caused a massive power outage. Such cyberattacks, along with the dangers to society that they represent, could become more common as the number of cyber-physical systems (CPS) increases.

A CPS is any system controlled by a network involving physical elements that tangibly interact with the material world. CPSs are incredibly common in industry, especially where robotics or similar automated machinery is integrated into the production line. However, as CPSs make their way into societal infrastructures such as public transport and energy management, it becomes even more important to be able to efficiently fend off various types of cyberattacks.

In a recent study published in IEEE Transactions on Industrial Informatics, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, have developed a framework for CPSs that is resilient against a sophisticated kind of cyberattack: the pole-dynamics attack (PDA). In a PDA, the hacker connects to a node in the network of the CPS and injects false sensor data. Without proper readings from the sensors of the physical elements of the system, the control signals sent by the control algorithm to the physical actuators are incorrect, causing them to malfunction and behave in unexpected, potentially dangerous ways.

To address PDAs, the researchers adopted a technique known as software-defined networking (SDN), whereby the network of the CPS is made more dynamic by distributing the relaying of signals through controllable SDN switches. In addition, the proposed approach relies on a novel attack-detection algorithm embedded in the SDN switches, which can raise an alarm to the centralized network manager if false sensor data are being injected.

Once the network manager is notified, it not only cuts the cyberattacker off by pruning the compromised nodes but also establishes a new safe path for the sensor data. "Existing studies have only focused on attack detection, but they fail to consider the implications of detection and recovery in real time," explains Professor Kyung-Joon Park, who led the study. "In our study, we simultaneously considered these factors to understand their effects on real-time performance and guarantee stable CPS operation."
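
The release does not spell out the detection algorithm or the controller interface, so the Python sketch below is only a schematic of the loop it describes, and all names and the residual-based check are assumptions: each SDN switch compares incoming sensor values with what a simple model of the plant predicts, and the network manager prunes a flagged node and installs an alternative path.

# Minimal sketch of detection-and-recovery for a pole-dynamics attack (PDA).
# All names and the residual-threshold test are illustrative assumptions,
# not the algorithm published by the DGIST team.

class SdnSwitch:
    def __init__(self, name, predict, threshold=5.0):
        self.name = name
        self.predict = predict        # model-based prediction of the next sensor value
        self.threshold = threshold    # residual above this raises an alarm

    def check(self, measured, state):
        residual = abs(measured - self.predict(state))
        return residual > self.threshold   # True -> possible false-data injection

class NetworkManager:
    def __init__(self, paths):
        self.paths = paths            # candidate sensor-to-controller paths

    def recover(self, compromised_node):
        # Prune the compromised node and pick the first remaining safe path.
        safe = [p for p in self.paths if compromised_node not in p]
        return safe[0] if safe else None

# Hypothetical usage: a switch flags injected data, the manager reroutes.
switch = SdnSwitch("sw3", predict=lambda s: 2.0 * s)   # toy plant model
manager = NetworkManager(paths=[["sw1", "sw3", "ctrl"], ["sw1", "sw2", "ctrl"]])

if switch.check(measured=42.0, state=3.0):             # prediction 6.0, residual 36
    print("alarm from", switch.name, "-> new path:", manager.recover("sw3"))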

The new framework was validated experimentally in a dedicated testbed, showing promising results. Excited about the outcomes of the study, Park remarks, "Considering CPSs are a key technology of smart cities and unmanned transport systems, we expect our research will be crucial to provide reliability and resiliency to CPSs in various application domains." Having a system that is robust against cyberattacks means that economic losses and personal injuries can be minimized. Therefore, this study paves the way to a more secure future for both CPSs and ourselves.

Credit: 
DGIST (Daegu Gyeongbuk Institute of Science and Technology)

Large number of COVID-19 survivors will experience cognitive complications

A research review led by Oxford Brookes University has found a large proportion of COVID-19 survivors will be affected by neuropsychiatric and cognitive complications.

Psychologists at Oxford Brookes University and a psychiatrist from Oxford Health NHS Foundation Trust evaluated published research papers in order to understand more about the possible effects of SARS-CoV-2 infection on the brain, and the extent to which people can expect to experience short- and long-term mental health issues.

Patients experienced a range of psychiatric problems

The study found that in the short term, a wide range of neuropsychiatric problems were reported. In one examined study, 95% of clinically stable COVID-19 patients had post-traumatic stress disorder (PTSD), and other studies found that between 17% and 42% of patients experienced affective disorders, such as depression.

The main short-term cognitive problems were found to be impaired attention (reported by 45% of patients) and impaired memory (reported by 13% to 28% of patients).

In the long term, neuropsychiatric problems were mostly affective disorders and fatigue, as well as impaired attention (reported by 44% of patients) and impaired memory (reported by 28% to 50% of patients).

Mental health disorders could have significant impact on NHS

Dr Sanjay Kumar, Senior Lecturer in Psychology at Oxford Brookes University said: "Understanding the neuropsychiatric and cognitive consequences of COVID-19 is important as millions of people have been affected by the virus, and many cases go undetected. These conditions affect people's capacity to work effectively, drive, manage finances, make informed decisions and participate in daily family activities.

"If even just a fraction of patients experience neuropsychiatric complications, the impact on public health services could be significant."

Academics say that there is likely to be an increase in patients with psychiatric and cognitive problems who were otherwise healthy prior to COVID-19 infection.

"Detailed cognitive evaluation and robust monitoring of patients should be considered in order to detect new neurological cases," continues Dr Kumar.

"This will also enable health care providers to plan adequate health care and resources, and improve the quality of life for many COVID-19 survivors.

"These are emerging findings though, and we will learn much more as the research in the field progresses."

Co-author Dr Tina Malhotra, Consultant Psychiatrist working in Oxford Health NHS Foundation Trust said: "We are already seeing an impact of COVID-19 on mental health. Patients are presenting with long COVID syndrome which includes fatigue, cognitive problems and a range of psychiatric problems.

"It is estimated that these problems are experienced by 1 in 5 people who have had COVID. Management of such patients in long-covid clinics should involve a multidisciplinary team including psychiatrists.

"NHS England has set out a recovery plan which includes setting up long covid clinics."

Credit: 
Oxford Brookes University

Magnetic whirls in confined spaces

image: Stable states with three, six, and ten skyrmions enclosed in a triangle. The plot shows time-averaged skyrmion positions from experiment (top row) and corresponding computer simulations (bottom row).

Image: 
ill./©: Jan Rothörl and Chengkun Song

In a close collaboration between experimental and theoretical physicists at Johannes Gutenberg University Mainz (JGU), the research groups of Professor Mathias Kläui and Dr. Peter Virnau investigated the behavior of magnetic whirls within nanoscale geometric structures. In their work published in Advanced Functional Materials, the researchers confined small magnetic whirls, so-called skyrmions, in geometric structures. Skyrmions can be created in thin metal films and have particle-like properties: They exhibit high stability and are repelled from each other and from specially prepared walls. Experiments and accompanying computer simulations showed that the mobility of skyrmions within these geometric structures depends massively on their arrangement. In triangles, for example, three, six, or ten skyrmions arranged like bowling pins are particularly stable.

"These studies lay the foundation for the development of novel non-conventional computing and storage media based on the movement of magnetic vortices through microscopic corridors and chambers," explained Professor Mathias Kläui. The research was funded by the Dynamics and Topology (TopDyn) Top-level Research Area, which was founded in 2019 as a collaboration between Johannes Gutenberg University Mainz, TU Kaiserslautern, and the Max Planck Institute for Polymer Research in Mainz. "This work is an excellent example for the interdisciplinary cooperation between simulation and experiment, which was only made possible by TopDyn's funding," emphasized Dr. Peter Virnau.

Credit: 
Johannes Gutenberg Universitaet Mainz

Determination of glycine transporter structure opens new avenues in development of psychiatric drugs

image: Unraveling the three-dimensional structure of the glycine transporter, researchers have now come a big step closer to understanding the regulation of glycine in the brain. These results open up opportunities to find effective drugs that inhibit GlyT1 function, with major implications for the treatment of schizophrenia and other mental disorders.

Image: 
Azadeh Shahsavar

Glycine can stimulate or inhibit neurons in the brain, thereby controlling complex functions. Unraveling the three-dimensional structure of the glycine transporter, researchers have now come a big step closer to understanding the regulation of glycine in the brain. These results, which have been published in Nature, open up opportunities to find effective drugs that inhibit GlyT1 function, with major implications for the treatment of schizophrenia and other mental disorders.

Glycine is the smallest amino acid and a building block of proteins, and also a critical neurotransmitter that can either stimulate or inhibit neurons in the brain and thereby control complex brain functions. Termination of a glycine signal is mediated by glycine transporters that reuptake and clear glycine from the synapses between neurons. The glycine transporter GlyT1 is the main regulator of neurotransmitter glycine levels in the brain, and it is also important elsewhere, e.g. in blood cells, where glycine is required for the synthesis of heme.

The N-methyl-D-aspartate (NMDA) receptor is activated by glycine, and its poor performance is implicated in schizophrenia. Over the past twenty years, many pharmaceutical companies and academic research laboratories have therefore focused on influencing glycinergic signaling and delaying glycine reuptake as a way of activating the NMDA receptor in the search for a cure for schizophrenia and other psychiatric disorders. Indeed, several potent and selective GlyT1 inhibitors achieve antipsychotic and pro-cognitive effects alleviating many symptoms of schizophrenia, and have advanced into clinical trials. However, a successful drug candidate has yet to emerge, and GlyT1 inhibition in blood cells is a concern for side effects. Structural insight into how inhibitors bind GlyT1 would help in finding new strategies for drug design.

To gain better knowledge about the three-dimensional structure and inhibition mechanisms of the GlyT1 transporter, researchers from the companies Roche and Linkster, and from the European Molecular Biology Laboratory (EMBL) Hamburg, the University of Zurich and Aarhus University, have therefore collaborated on investigating one of the most advanced GlyT1 inhibitors. Using a synthetic single-domain antibody (Linkster Therapeutics' sybody®) against GlyT1, the research team managed to grow microcrystals of the inhibited GlyT1 complex. By employing a Serial Synchrotron Crystallography (SSX) approach, the team led by Assistant Professor Azadeh Shahsavar and Professor Poul Nissen from the Department of Molecular Biology and Genetics/DANDRITE, Aarhus University, determined the structure of human GlyT1 using X-ray diffraction data from hundreds of microcrystals. The SSX method is particularly well suited to new, powerful X-ray sources and opens up new approaches to, among other things, the development of drugs for various purposes.

The structure is reported in the leading scientific journal Nature and also unveils a new mechanism of inhibition in neurotransmitter transporters in general. Mechanisms have previously been uncovered for, for example, the inhibition of the serotonin transporter (which has many similarities to GlyT1) by antidepressant drugs, but the inhibition mechanism now found for GlyT1 is quite different. It provides background knowledge for the further development of small molecules and antibodies as selective inhibitors targeted at GlyT1, and possibly also new ideas for the development of inhibitors of other neurotransmitter transporters that could be used to treat other mental disorders. Azadeh Shahsavar's team is continuing the studies of GlyT1 and will be investigating further aspects of its function and inhibition and the effect of GlyT1 inhibitors in the body.

Credit: 
Aarhus University

SARS-CoV-2 mutations can complicate immune surveillance of human T-killer cells

image: Venugopal Gudipati, Johannes B. Huppa, Judith H. Aberle, Maximilian Koblischke, Andreas Bergthaler, Benedikt Agerer.

Image: 
© Laura Alvarez / CeMM

The body's immune response plays a crucial role in the course of a SARS-CoV-2 infection. In addition to antibodies, the so-called T-killer cells are also responsible for detecting viruses in the body and eliminating them. Scientists from the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences and the Medical University of Vienna have now shown that SARS-CoV-2 can make itself unrecognizable to the immune response by T-killer cells through mutations. The findings of the research groups of Andreas Bergthaler, Judith Aberle and Johannes Huppa provide important clues for the further development of vaccines and were published in the journal Science Immunology.

After a year of the pandemic, an increasingly clear picture is emerging for science and medicine of how the immune response protects people from SARS-CoV-2. Two protagonists play central roles: antibodies and T-killer cells (also called cytotoxic CD8 T cells). While antibodies dock directly onto viruses to render them harmless, T-killer cells recognize viral protein fragments on infected cells and subsequently kill them to stop virus production. More and more studies show that SARS-CoV-2 can evade the antibody immune response through mutations and thus also impair the effectiveness of vaccines. Whether such mutations also affect T-killer cells in their function had not been clarified so far. Benedikt Agerer in the laboratory of Andreas Bergthaler (CeMM), Maximilian Koblischke and Venugopal Gudipati in the research groups of Judith Aberle and Johannes Huppa (both Medical University of Vienna) have now worked together closely to investigate the effect of viral mutations in so-called T cell epitopes, i.e., in regions recognized by T-killer cells. For this purpose, they sequenced 750 SARS-CoV-2 viral genomes from infected individuals and analyzed mutations for their potential to alter T cell epitopes. "Our results show that many mutations in SARS-CoV-2 are indeed capable of doing this. With the help of bioinformatic and biochemical investigations as well as laboratory experiments with blood cells from COVID-19 patients, we were able to show that mutated viruses can no longer be recognized by T-killer cells in these regions," says Andreas Bergthaler.
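
The computational part of such an analysis can be pictured, in very simplified form, as checking whether each observed mutation falls within an annotated T cell epitope. The Python sketch below illustrates that step only; the epitope names, coordinates and mutation positions are hypothetical and do not come from the study.

# Illustrative sketch: flag viral mutations that fall inside annotated T cell epitopes.
# Epitope coordinates and mutation positions are hypothetical examples only.

epitopes = {                     # epitope name -> (start, end) in genome coordinates
    "NUC_322-330": (28595, 28621),
    "ORF3a_207-215": (26011, 26037),
}

observed_mutations = [25912, 26020, 28600, 29100]   # positions seen in sequenced genomes

for pos in observed_mutations:
    hits = [name for name, (start, end) in epitopes.items() if start <= pos <= end]
    if hits:
        print(f"mutation at {pos} alters epitope(s): {', '.join(hits)}")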

Focus on spike protein might be too narrow

In most natural infections, several epitopes are available for recognition by T-killer cells. If the virus mutates in one place, it is likely that other epitopes indicate the presence of the virus.

Most of the current vaccines against SARS-CoV-2 are directed exclusively against the so-called spike protein, which is one of 26 virus proteins. This also reduces the number of epitopes that are available for recognition by T-killer cells. "The spike protein has, on average, one to six of these T cell epitopes in an infected person. If the virus mutates in one of these regions, the risk that the infected cells will not be recognized by the T-killer cells increases," explains Johannes Huppa. Judith Aberle emphasizes: "Especially for the further development of vaccines, we therefore have to keep a close eye on how the virus mutates and which mutations prevail globally. Currently, we see few indications that mutations in T killer cell epitopes are increasingly spreading."

The study authors see no reason in their data to believe that SARS-CoV-2 can completely evade the human immune response. However, these results provide important insights into how SARS-CoV-2 interacts with the immune system. "Furthermore, this knowledge helps to develop more effective vaccines with the potential to activate as many T-killer cells as possible via a variety of epitopes. The goal is vaccines that trigger neutralizing antibody and T-killer cell responses for the broadest possible protection," the study authors say.

Credit: 
CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences

Volcanoes might light up the night sky of this planet

image: This artist's illustration represents the possible interior dynamics of the super-Earth exoplanet LHS 3844b. The planet's interior properties and the strong stellar irradiation might lead to a hemispheric tectonic regime.

Image: 
© Universität Bern / University of Bern, Illustration: Thibaut Roger

On Earth, plate tectonics is not only responsible for the rise of mountains and earthquakes. It is also an essential part of the cycle that brings material from the planet's interior to the surface and the atmosphere, and then transports it back beneath the Earth's crust. Tectonics thus has a vital influence on the conditions that ultimately make Earth habitable.

Until now, researchers have found no evidence of global tectonic activity on planets outside our solar system. A team of researchers led by Tobias Meier from the Center for Space and Habitability (CSH) at the University of Bern and with the participation of ETH Zurich, the University of Oxford and the National Center of Competence in Research NCCR PlanetS has now found evidence of the flow patterns inside a planet, located 45 light-years from Earth: LHS 3844b. Their results were published in The Astrophysical Journal Letters.

An extreme contrast and no atmosphere

"Observing signs of tectonic activity is very difficult, because they are usually hidden beneath an atmosphere", Meier explains. However, recent results suggested that LHS 3844b probably does not have an atmosphere. Slightly larger than Earth and likely similarly rocky, it orbits around its star so closely that one side of the planet is in constant daylight and the other in permanent night - just like the same side of the Moon always faces the Earth. With no atmosphere shielding it from the intense radiation, the surface gets blisteringly hot: it can reach up to 800°C on the dayside. The night side, on the other hand, is freezing. Temperatures there might fall below minus 250°C. "We thought that this severe temperature contrast might affect material flow in the planet's interior", Meier recalls.

To test their theory, the team ran computer simulations with different strengths of material and internal heating sources, such as heat from the planet's core and the decay of radioactive elements. The simulations included the large temperature contrast on the surface imposed by the host star.

Flow inside the planet from one hemisphere to the other

"Most simulations showed that there was only upwards flow on one side of the planet and downwards flow on the other. Material therefore flowed from one hemisphere to the other", Meier reports. Surprisingly, the direction was not always the same. "Based on what we are used to from Earth, you would expect the material on the hot dayside to be lighter and therefore flow upwards and vice versa", co-author Dan Bower at the University of Bern and the NCCR PlanetS explains. Yet, some of the teams' simulations also showed the opposite flow direction. "This initially counter-intuitive result is due to the change in viscosity with temperature: cold material is stiffer and therefore doesn't want to bend, break or subduct into the interior. Warm material, however, is less viscous - so even solid rock becomes more mobile when heated - and can readily flow towards the planet's interior", Bower elaborates. Either way, these results show how a planetary surface and interior can exchange material under conditions very different from those on Earth.

A volcanic hemisphere

Such material flow could have bizarre consequences. "On whichever side of the planet the material flows upwards, one would expect a large amount of volcanism on that particular side", Bower points out. He continues: "Similar deep upwelling flows on Earth drive volcanic activity at Hawaii and Iceland". One could therefore imagine a hemisphere with countless volcanoes - a volcanic hemisphere, so to speak - and one with almost none.

"Our simulations show how such patterns could manifest, but it would require more detailed observations to verify. For example, with a higher-resolution map of surface temperature that could point to enhanced outgassing from volcanism, or detection of volcanic gases. This is something we hope future research will help us to understand", Meier concludes.

Bernese space exploration: With the world's elite since the first moon landing

When the second man on the Moon, "Buzz" Aldrin, stepped out of the lunar module on July 21, 1969, his first task was to set up the Bernese Solar Wind Composition experiment (SWC), also known as the "solar wind sail", by planting it in the lunar soil, even before the American flag. This experiment, which was planned by Prof. Dr. Johannes Geiss and his team from the Physics Institute of the University of Bern, who also analysed its results, was the first great highlight in the history of Bernese space exploration.

Ever since, Bernese space exploration has been among the world's elite. The numbers are impressive: instruments were flown into the upper atmosphere and ionosphere on rockets 25 times (1967-1993) and into the stratosphere on balloon flights 9 times (1991-2008), over 30 instruments have flown on space probes, and with CHEOPS the University of Bern shares responsibility with ESA for an entire mission.

The successful work of the Department of Space Research and Planetary Sciences (WP) from the Physics Institute of the University of Bern was consolidated by the foundation of a university competence center, the Center for Space and Habitability (CSH). The Swiss National Science Foundation also awarded the University of Bern the National Center of Competence in Research (NCCR) PlanetS, which it manages together with the University of Geneva.

Credit: 
University of Bern

Want to cut emissions that cause climate change? Tax carbon

COLUMBUS, Ohio - Putting a price on producing carbon is the cheapest, most efficient policy change legislators can make to reduce emissions that cause climate change, new research suggests.

The case study, published recently in the journal Current Sustainable/Renewable Energy Reports, analyzed the costs and effects that a variety of policy changes would have on reducing carbon dioxide emissions from electricity generation in Texas and found that adding a price, based on the cost of climate change, to carbon was the most effective.

"If the goal is reducing carbon dioxide in the atmosphere, what we found is that putting a price on carbon and then letting suppliers and consumers make their production and consumption choices accordingly is much more effective than other policies," said Ramteen Sioshansi, senior author of the study and an integrated systems engineering professor at The Ohio State University.

The study did not examine how policy changes might affect the reliability of the Texas power system - an issue that became acute and painful for Texas residents last month when a winter storm caused the state's power grid to go down.

But it did evaluate other policies, including mandates that a certain amount of energy in a region's energy portfolio come from renewable sources, and found that they were either more expensive or not as effective as carbon taxes at reducing the amount of carbon dioxide in the air. Subsidies for renewable energy sources were also not as effective at reducing carbon dioxide, the study found.

The researchers modeled what might happen if the government used these various methods to cut carbon emissions to 80% below the 2010 level by the end of 2040.

They found that carbon taxes on coal and natural-gas-fired producing units could achieve those cuts at about half the cost of tax credits for renewable energy sources.

The study was led by Yixian Liu, a former graduate student in Sioshansi's lab, who is now a research scientist at Amazon. It modeled the expenses and carbon reductions possible from five generation technologies - wind, solar, nuclear, natural gas and coal-fired units - along with the costs and carbon reductions associated with storing energy. Storing energy is crucial, because it allows energy systems to manage renewable energy resources as sources shift from climate-change-causing fossil fuels - natural gas and coal - to cleaner sources like wind and solar.
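
Conceptually, a carbon price enters such a model as an extra term added to each generator's marginal cost, which reorders the dispatch "merit order". The Python sketch below shows this mechanism with invented round numbers for fuel costs and emission rates; it illustrates the principle, not the Texas model used in the study.

# Toy illustration of how a carbon price changes the merit order of generators.
# Fuel costs and emission rates are invented round numbers, not study inputs.

generators = {         # name: (fuel cost $/MWh, emissions tCO2/MWh)
    "coal":  (20.0, 1.0),
    "gas":   (30.0, 0.4),
    "wind":  (0.0,  0.0),
    "solar": (0.0,  0.0),
}

def merit_order(carbon_price):
    """Sort generators by fuel cost plus carbon cost per MWh."""
    cost = {name: fuel + carbon_price * co2 for name, (fuel, co2) in generators.items()}
    return sorted(cost, key=cost.get), cost

for price in (0, 50):   # carbon price in $/tCO2
    order, cost = merit_order(price)
    print(f"carbon price ${price}/t:", order, {k: round(v, 1) for k, v in cost.items()})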

Sioshansi said the results of the study were not surprising, given that a similar program has been in use to reduce levels of sulfur dioxide, one of the chemicals that causes acid rain.

"We have known for the last 40 or more years that market-based solutions can work on issues like this," Sioshansi said.

Although subsidies for renewable sources would work to decrease carbon emissions, the costs of those subsidies would be an issue, the study found.

"If no one had to pay for the subsidies and they were truly free, that would be a great option," Sioshansi said. "Unfortunately, that is not how they work."

Credit: 
Ohio State University

Researchers successfully determine annual changes in genetic ancestry within Finland

Commercially available gene tests that shed light on an individual's origins are popular. They provide an estimate of the geographic regions where one's ancestors come from. To arrive at such an estimate, the genetic information of an individual is compared to information pertaining to reference groups collected from around the world.

The findings now made by researchers from the University of Helsinki, Aalto University and the Finnish Institute for Health and Welfare make it possible, for the first time, to make similar comparisons within Finland.

A research group at the University of Helsinki, headed by Associate Professor Matti Pirinen, who directed the study, has previously produced very detailed information on the genetic structure of Finland. In the recently published study, reference groups were compiled from Finns with similar genetic ancestry and then applied to track the effects of 20th-century migration at the population level.

The results have been published in the PLoS Genetics journal. In addition, the researchers have launched a website for the project, where anyone can browse the results using an interactive map.

The work is based on samples collected in the FINRISK study by the Finnish Institute for Health and Welfare. Using the reference datasets compiled from the samples, Finns can be classified on three different levels according to their geographical origin with fairly high precision. The first level separates Finns on the east-west axis, while the 10 different reference groups on the most detailed level partially match the former provinces of Finland.

The model used in the comparison works best when most of the individual's ancestors are from the same geographic region. The greater the dispersion of the ancestors, the less accurate the estimate provided by the model is.
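
The release does not detail the estimation method, but decomposing an individual's genome into reference-group proportions is often framed as a constrained least-squares problem. The NumPy sketch below, with invented allele frequencies for three hypothetical reference groups, illustrates that general idea rather than the model actually used in the study.

# Rough sketch of ancestry-proportion estimation against reference groups.
# Allele frequencies and the individual's genotype are invented toy numbers.
import numpy as np

# Rows: genetic variants; columns: hypothetical reference groups (East, West, Karelia).
ref_freq = np.array([
    [0.10, 0.80, 0.40],
    [0.70, 0.20, 0.50],
    [0.30, 0.30, 0.90],
    [0.60, 0.10, 0.20],
])

# Individual's observed allele dosages scaled to 0..1 at the same variants.
individual = np.array([0.45, 0.45, 0.60, 0.35])

# Unconstrained least squares, then clipped and renormalized to sum to one.
# (Real methods enforce the constraints properly, e.g. via quadratic programming.)
weights, *_ = np.linalg.lstsq(ref_freq, individual, rcond=None)
weights = np.clip(weights, 0, None)
weights = weights / weights.sum()

for group, w in zip(["East", "West", "Karelia"], weights):
    print(f"{group}: {w:.0%}")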

"Determining genetic ancestry on nearly the provincial level in Finland would be exciting for anyone interested in recent history," says Sini Kerminen, who recently published a doctoral thesis on the topic at the University of Helsinki's Institute for Molecular Medicine Finland (FIMM).

Furthermore, the study observed the effect of recent genetic mixing in different areas in Finland on the basis of people's year and place of birth. A study of this temporal and geographical precision has never before been conducted anywhere in the world.

According to the findings, the genetic impact of the evacuation of the Karelian region ceded to the Soviet Union has been greater than that of the post-war urbanisation. Moreover, clear differences were seen within Finland with regard to how much population mixing had taken place in the period covered by the study, from the 1920s to the 1980s.

The most notable change was seen in the Uusimaa, Varsinais-Suomi and Häme regions, where the share of southwestern ancestry had declined by more than 20 percentage points. Changes of such magnitude and speed illustrate migration within the country and indicate that more people have migrated in the 20th century from east to west than in the opposite direction in Finland. The least change was observed in the Ostrobothnia region.

"By monitoring the share of the Karelian ancestry in the newborn population of the regions studied we can trace the movement of evacuees at almost an annual level. For example, Karelian ancestry in Ostrobothnia appears to have increased substantially in wartime. However, already in the 1950s it had largely disappeared from the region, unlike in the Uusimaa region and southwest Finland where the share of Karelian ancestry seems to have stabilised at the wartime level," says Associate Professor Pirinen.

According to the research group, detailed understanding of the fine-scale genetic structure of the population is also important for medical projects that utilise genomic information.

"Finally, we want to emphasise that genetic background is not the same as national or cultural identity, which means that our results cannot be used to determine who is a Finn," Pirinen states.

Credit: 
University of Helsinki

Thin explosive films provide snapshot of how detonations start

video: Videos of test detonations at Sandia National Laboratories of thin explosive films, about as thick as a few pieces of notebook paper.

Image: 
Sandia National Laboratories

ALBUQUERQUE, N.M. -- Using thin films -- no more than a few pieces of notebook paper thick -- of a common explosive chemical, researchers from Sandia National Laboratories studied how small-scale explosions start and grow. Sandia is the only lab in the U.S. that can make such detonatable thin films.

These experiments advanced fundamental knowledge of detonations. The data were also used to improve a Sandia-developed computer-modeling program used by universities, private companies and the Department of Defense to simulate how large-scale detonations initiate and propagate.

"It's neat, we're really pushing the limits on the scale at which you can detonate and what you can do with explosives in terms of changing various properties," said Eric Forrest, the lead researcher on the project. "Traditional explosives theory says that you shouldn't be able to detonate at these length scales, but we've been able to demonstrate that, in fact, you can."

Forrest and the rest of the research team shared their work studying the characteristics of these thin films and the explosions they produce in two recently published papers in ACS Applied Materials and Interfaces and Propellants, Explosives, Pyrotechnics.

For their studies, the team used PETN, also known as pentaerythritol tetranitrate, which is a bit more powerful than TNT, pound for pound. It is commonly used by the mining industry and by the military.

Typically, PETN is pressed into cylinders or pellets for use. The research team instead used a method called physical vapor deposition -- also used to make second-generation solar panels and to coat some jewelry -- to "grow" thin films of PETN.

Sandia is the only lab in the U.S. that has the skills and equipment to use this technique to make thin explosive films that can detonate, said Rob Knepper, a Sandia explosives expert involved in the project.

Growing and studying thin explosive films

Starting in late 2015, the team grew thin films of PETN on different types of surfaces to determine how that would affect the films' characteristics. They started with pieces of silicon about the size of a pinkie nail and grew films that were about one-tenth the thickness of a piece of paper, too thin to explode. Some of the silicon pieces were very clean, some were moderately clean, and some were straight-out-of-the-box and thus had a very thin layer of dirt -- 50,000 times thinner than a sheet of paper.

On the very clean silicon surfaces, the PETN films formed what appeared to be smooth plates by scanning-electron microscopy, yet had tiny cracks in between plates, somewhat like cracked mud on a dry lakebed. On the dirty silicon surfaces, the surface of the PETN films appeared more like even hills.

Using an X-ray-based technique, the researchers determined this is because the PETN molecules orient themselves differently on dirty surfaces compared to very clean surfaces, and thus the film grows differently, Forrest said.

"This study in particular has shown that we can get not just novel, but very useful forms of traditional explosives that you would never be able to achieve via traditional means," Forrest said. "Finely controlling the film properties enables us to investigate theories to better understand explosive initiation, which will allow us to better predict reliability, performance and safety of explosive systems through improved models."

Knepper, who served as Forrest's mentor on the project, agreed. "Developing a way that we can reproducibly control the microstructure of the films, just through the surface manipulation, is important. Right now, our focus is on using these films to further our understanding of explosive properties at small scales, such as the initiation and failure of explosives."

Small-scale tests to improve computer models

Once the characteristics and properties of the thin films were better understood, the research team grew thicker films -- this time about the thickness of two sheets of notebook paper -- on very clean pieces of plastic about the size of a pinkie finger.

Then, with a bang, they detonated the explosive films inside a specially designed safety enclosure called a "boombox," which was engineered to prevent a detonation from starting while the enclosure was open and contain any debris from the detonation. Using an ultra-high-speed camera that can take up to a billion frames a second, they watched the shock wave rise up as the explosion raced across the thin film.

In collaboration with New Mexico Institute of Mining and Technology in Socorro, the research team developed a specialized setup to see the shock wave despite the smoke and debris from the test explosions using schlieren imaging, a technique that can detect differences in air density similar to the shimmering over a hot highway.

A mechanical engineering master's student from New Mexico Tech, Julio Peguero, used the data from these experiments to refine Sandia's explosives computer-modeling program. The program, called CTH, can be used for applications such as determining how best to shape explosive charges while drilling for oil, Knepper said.

Peguero plotted the velocity of the shock waves above the films with and without gaps and adapted the computer program to better match the experimental results on very thin films. The team engineered thin films with cracks of various sizes in the middle -- ranging from one-third the width of a human hair to 1 1/3 times the width of a hair -- to better understand the reliability of thin films and how detonations can fail. The team found that gaps around the size of a hair could stop a detonation from continuing.

Forrest was particularly interested in the gap studies because the first study found thin cracks between the very smooth plates of some of the films. Although these cracks were far smaller than even one-tenth a hair's width, the data from the gap study provided insights into how these films would perform.

Peguero, who is now a Sandia employee, started working on the project in January 2018, first as a student and later as a Sandia intern. "In addition to the excitement of doing explosives research, I gained an appreciation for measurement uncertainty and risks," Peguero said. "That is especially important for national security work to ensure that our confidence in our measurements is well-understood."

Knepper agreed about the importance of the project. He said, "When you have experimental data at small scales, especially those that are relevant for the border between what can detonate and what can't, those data can be really helpful in calibrating computer models. Also, being able to have good characterization of the explosive microstructure to go into the models helps with having parameters that can successfully predict performance over a wider range of explosive behaviors."

Credit: 
DOE/Sandia National Laboratories

NASA's ICESat-2 satellite reveals shape, depth of Antarctic ice shelf fractures

image: Satellite imagery of the Amery Ice Shelf in East Antarctica. The blue lines represent the movement of the ice as it flows from the continent to the edge of the ice shelf, where it calves, or breaks off into the ocean. Satellite data can now help researchers determine where these calving events will occur.

Image: 
Shujie Wang, Penn State

When a block of ice the size of Houston, Texas, broke off from East Antarctica's Amery Ice Shelf in 2019, scientists had anticipated the calving event, but not exactly where it would happen. Now, satellite data can help scientists measure the depth and shape of ice shelf fractures to better predict when and where calving events will occur, according to researchers.

Ice shelves make up nearly 75% of Antarctica's coastline and buttress -- or hold back -- the larger glaciers on land, said Shujie Wang, assistant professor of geography at Penn State. If the ice shelves were to collapse and Antarctica's glaciers fell or melted into the ocean, sea levels would rise by up to 200 feet.

"When we try to predict the future contribution of Antarctica to sea-level rise, the biggest uncertainty is ice shelf stability," said Wang, who also holds an appointment in the Earth and Environmental Systems Institute. "There's no easy way to map the depth of fractures in the field over a regional scale. We found that satellite data can capture the depth and surface morphology of ice shelf fractures and thereby allow us to consistently monitor this information over a large range."

Wang and her colleagues examined high-resolution data collected by the Ice, Cloud and Land Elevation Satellite (ICESat-2) over the Amery Ice Shelf, which is about the size of West Virginia, between October 2018 and November 2019. The satellite shoots green laser pulses to the land surface and uses reflected photons to determine surface height. Whereas other satellites have a resolution of several thousands of feet, ICESat-2 has a resolution of approximately 56 feet, enabling it to see smaller fractures and the fracture morphology.

The researchers then ran the ICESat-2 data through an algorithm that identifies surface depression features to locate and characterize fractures in the ice. They reported their results in the journal Remote Sensing of Environment.
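The article does not describe the algorithm itself, but the basic idea of flagging surface depressions along an elevation profile can be sketched as follows; this is a minimal illustration, not the authors' code, and the window size, depth threshold and synthetic profile are invented for the example.

```python
import numpy as np

def find_depressions(elevation_m, min_depth_m=2.0, window=25):
    """Flag points that dip below a local baseline of the surface.

    A crude sketch: estimate a local baseline with a running median and mark
    points sitting more than `min_depth_m` below it. Real fracture mapping
    would handle noise, data gaps and track geometry far more carefully.
    """
    elevation_m = np.asarray(elevation_m, dtype=float)
    half = window // 2
    baseline = np.array([
        np.median(elevation_m[max(0, i - half):i + half + 1])
        for i in range(len(elevation_m))
    ])
    depth_below = baseline - elevation_m
    return np.flatnonzero(depth_below > min_depth_m), depth_below

# Tiny synthetic example: a flat shelf surface with one trough carved into it.
surface = np.full(100, 60.0)      # along-track surface height (m)
surface[45:53] -= 10.0            # an artificial 10 m deep depression
idx, depth = find_depressions(surface)
print(f"depression points flagged: {len(idx)}, max depth ~{depth.max():.1f} m")
```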

The researchers identified three types of fractures -- U-shaped, parabolic-shaped and V-shaped fractures -- up to 164 feet deep in the ice shelf. They also realized that this surface information provides insights into what is happening hundreds of feet below the surface of the ice.

Basal fracture morphology -- the shape and size of fractures at the base of the ice shelf -- is proportional to the surface depressions, according to Wang. As the glacier that the ice shelf is buttressing accumulates more snow and ice, the parabolic-shaped fractures flow toward the edges of the ice shelf. Once they cross a certain boundary, those surface fractures have a greater potential to penetrate deeper into the ice as the basal fractures extend upward. These fractures can then become V-shaped, potentially signaling that a rift -- a fracture that penetrates the entire thickness of the ice shelf -- has formed. These rifts are more likely to cause calving events.

"Incorporating satellite-based vertical information can improve future ice shelf models," Wang said. "It can help us actually predict calving fronts and where an ice shelf is vulnerable to these events."

Credit: 
Penn State

Rapid new automated genomics screening stamps out crop disease

Researchers at the Earlham Institute (EI) have created a new automated workflow that uses liquid-handling robots to identify the genetic basis of plant pathogen inhibition, and that can be applied on a much larger and more rapid scale than current methods.

The new EI Biofoundry automated workflow gives scientists an enhanced visual check of genetic mutations linked to the control of crop disease, cutting analysis to a fraction of the time required by current methods -- from months to weeks -- and accelerating the development of novel crop-protection products for the agricultural industry.

Biosynthesis is the formation of chemical compounds by a living organism, or a biosynthetic process modeled on these reactions in living organisms.

The EI Biofoundry, alongside the Truman Group at the John Innes Centre, used this workflow to study the control of the common potato pathogen Streptomyces scabies -- which causes a devastating disease known as 'potato scab' -- by a Pseudomonas sp. (a bacterium).

The team screened 2,880 mutants of a Pseudomonas sp. isolated from a potato field against the plant pathogen in just 11 hours, and within two weeks had correlated pathogen growth inhibition with a biosynthetic gene cluster -- indicating which genes were responsible for suppressing the pathogen.
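The publication details are not reproduced in the article, but the core screening logic -- scoring each mutant for loss of pathogen inhibition and asking whether the "hit" mutants cluster in one genomic region -- can be sketched roughly like this; the column names, data and threshold are invented for illustration and are not the study's values.

```python
import pandas as pd

# Hypothetical screening table: one row per Pseudomonas transposon mutant,
# with the disrupted gene, its genomic position and a measured growth score
# for the pathogen (higher = the pathogen grew more, i.e. inhibition was lost).
screen = pd.DataFrame({
    "mutant": ["m001", "m002", "m003", "m004", "m005", "m006"],
    "disrupted_gene": ["clusterA_1", "clusterA_2", "clusterA_3",
                       "geneX", "geneY", "geneZ"],
    "position_kb": [1200, 1204, 1209, 2500, 3100, 4800],
    "pathogen_growth": [0.92, 0.88, 0.95, 0.11, 0.09, 0.14],  # wild type ~0.10
})

LOSS_OF_INHIBITION = 0.5  # invented cut-off relative to the wild-type control

hits = screen[screen["pathogen_growth"] > LOSS_OF_INHIBITION]
print("mutants that no longer inhibit the pathogen:")
print(hits[["mutant", "disrupted_gene", "position_kb"]])

# If the hits sit close together on the genome, they likely disrupt a single
# biosynthetic gene cluster rather than unrelated genes.
span_kb = hits["position_kb"].max() - hits["position_kb"].min()
print(f"hits span ~{span_kb} kb of the genome")
```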

This approach will pinpoint genes involved in pathogen growth inhibition by bacteria, rather than by the plants themselves; either a bacterial strain or a molecule produced by a bacterial strain would end up as the crop protection product.

The new EI automated workflow will allow scientists to scale up the process of identifying Pseudomonas gene clusters responsible for restricting pathogen growth, avoiding human error and increasing reproducibility and accuracy. The engineering biology workflow can also be applied to similar bacterial genome analyses.

Biofoundries integrate high-throughput software and hardware platforms with synthetic biology approaches to enable the design, execution and analyses of large-scale experiments. The unique and powerful combination of EI's Biofoundry infrastructure, expertise in molecular biology, and automation programming provide flexible resources for a wide range of workflows and research areas.

Co-Corresponding author of the study and Earlham BIO Foundry Manager Dr Jose A. Carrasco Lopez, said: "We demonstrate the applicability of biofoundries to molecular microbiology by using automated workflows to identify the genetic basis of growth inhibition of the plant pathogen Streptomyces scabies by a Pseudomonas strain isolated from a potato field.

"The EI Biofoundry generated workflow resulted in the identification of a gene cluster linked to the inhibitory effect on the potato pathogen which made the process much easier. By identifying the new genetic determinants, it opens the door to finding the metabolites involved in pathogen inhibition."

The new workflow will help scientists understand how the genes are involved in the synthesis of the inhibitory metabolites, examine the species' inhibitory range and how these genes can be used for biocontrol, and investigate how Pseudomonas resists the very inhibitory metabolite it produces.

"Manual screenings are usually performed with pin replicators in the same plate, where many mutants are together in the presence of the pathogen," adds Dr Carrasco Lopez. "This means that a mutant in a non-related gene could mask the lack of inhibitory effect of a real-hit mutant by proximity."

"We solved this by creating individual assays for each mutant - this impacts the scientific community significantly, and provides enhanced control of crop pathogens based on biological processes which will translate into better crop yields."

Although these microbiological methods have been used before to screen well-known mutant libraries, this innovative study is the first to use an automated screening process to reduce the required time, completing in weeks a process that could take months when done manually.

Collaborator and first author Alaster Moffat, PhD student in the Truman Lab, who approached EI about the possibility of automating the biosynthetic screening, said: "We had previously been unable to identify genes important for inhibiting the growth of the pathogen using bioinformatics approaches, but this workflow allowed us to rapidly probe the effects of almost every accessory gene in the Pseudomonas isolate's genome directly, and find a novel biosynthetic gene cluster within a very short timeframe."

The EI Biofoundry plans to progress the study by creating workflow modifications to adapt to new projects and pathogens. "Once we have identified the gene cluster involved in the synthesis of this new metabolite," says Dr Carrasco Lopez, "scientists could mutate every single gene to identify functions and essential genes for metabolite synthesis. This can be applied to control plant pathogens and further improve the potato crop's yield, to test the species' inhibitory range of these metabolites and to identify new genes from other bacterial species linked to the impairment of other crop pathogens."

Credit: 
Earlham Institute

New study examines importance and unique characteristics of US female farmers

UNIVERSITY PARK, Pa. -- While women can be drawn into farming for many reasons, researchers in Penn State's College of Agricultural Sciences have found that female-owned farms in the U.S. are more common in areas that are closer to urban markets, that engage in agritourism activity, and that offer greater access to childcare.

The number of farms operated by women has risen over the past two decades, said Claudia Schmidt, assistant professor of marketing and local/regional food systems.

The U.S. Department of Agriculture changed the way it counts the operators of farms in its most recent Census of Agriculture, allowing for up to four principal operators per farm. This has inflated the number of female operators somewhat, but female participation in agriculture is nonetheless at an all-time high, said Schmidt.
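As a simple illustration of how that counting change can inflate operator numbers (the farm data below are invented, not Census figures), crediting up to four principal operators per farm counts more women than the older one-operator-per-farm approach:

```python
# Invented example farms; each lists its principal operators and their sex.
# The first-listed operator stands in for the old single-operator count.
farms = [
    [("Alice", "F"), ("Bob", "M")],
    [("Carol", "F")],
    [("Dan", "M"), ("Eve", "F"), ("Frank", "M")],
]

old_count = sum(1 for operators in farms if operators[0][1] == "F")
new_count = sum(1 for operators in farms
                for _, sex in operators[:4] if sex == "F")

print(f"female principal operators, old counting: {old_count}")  # 2
print(f"female principal operators, new counting: {new_count}")  # 3
```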

"This type of research is needed not just for reasons of equity, but also to support a more diverse and resilient agricultural sector in general," said Schmidt. "Without knowing more about female farm-operators' decision making, agricultural service providers have had to make assumptions about the type of information and products that are useful to them. Our analysis shows some of the ways in which female-owned farms are unique and it can offer important insights into how best to serve this population."

Using data from the U.S. Census of Agriculture from 2002 to 2017, Schmidt and her colleagues developed a statistical model to examine the relationship between a county's share of female-operated farms and the conditions in the county. Their goal was to shed light on aspects of the local economic and agricultural ecosystems that are most strongly associated with female-owned farms.

The researchers identified 10 economic variables hypothesized to matter, including unemployment, non-farm wages, availability of childcare, and the rate of female participation in the labor force. They also examined the total number of farms, average farm size and annual sales, average farmer age, and the types of farm activities carried out. They looked at each variable in isolation to determine which variables are independently and most strongly associated with the share of female-operated farms.
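The article does not reproduce the paper's specification, but the general shape of such a county-level analysis -- relating the share of female-operated farms to one candidate variable at a time -- can be sketched as follows; the variable names and synthetic data are illustrative only, not the authors' model or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_counties = 500

# Synthetic stand-ins for two of the county-level variables discussed above.
childcare_availability = rng.uniform(0, 1, n_counties)
avg_farm_size_acres = rng.uniform(10, 400, n_counties)

# Synthetic outcome: share of a county's farms operated by women.
share_female_farms = (0.15 + 0.10 * childcare_availability
                      - 0.0002 * avg_farm_size_acres
                      + rng.normal(0, 0.03, n_counties))

# Look at each candidate variable in isolation, as described in the article.
for name, x in [("childcare availability", childcare_availability),
                ("average farm size (acres)", avg_farm_size_acres)]:
    fit = sm.OLS(share_female_farms, sm.add_constant(x)).fit()
    print(f"{name}: slope = {fit.params[1]:+.4f}, p = {fit.pvalues[1]:.2g}")
```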

"We wanted to understand why women are drawn to farming," said Stephan Goetz, professor of agricultural and regional economics and director of the Northeast Regional Center for Rural Development (NERCRD). "Is it because they want to engage in this kind of work, or is it because they are pushed into farming due to a lack of other economic opportunities locally? We also wanted to examine how local agricultural conditions -- what farming generally looks like in a given place -- relate to women's participation in agriculture."

The analysis, which was recently published in Food Policy, shows that more female-owned farms are found where average farm size is below 50 acres, where annual farm sales average less than $10,000 per farm, where more farms specialize in grazing sheep or goats, and where agritourism activities -- which attract visitors to farms -- are more common.

The researchers also found that direct-to-consumer sales are more prevalent in counties with more female-owned farms. It is therefore not surprising that urban areas with high population densities have more female-owned farms than more rural areas do, said Goetz.

"Our findings suggest that females are more likely to engage in the type of farming that benefits from being in or near urban locations," said Goetz. "In addition to offering more opportunities to market directly to consumers, urban and suburban locations also offer greater access to childcare than rural areas, and our research showed the availability of childcare is correlated with the number of female-owned farms in a county."

The researchers also noted that the share of farms with female operators is higher in counties with a greater total number of farms, which could reflect increased opportunities for networking and learning through knowledge-sharing networks.

"Our research suggests that female-owned farms are more common in certain economic and agricultural ecosystems," Schmidt said. "Therefore, they likely have different needs in terms of education and support, and this research is an important step in identifying these differences."

Among other questions, future research will look at the impact of female-owned farms on local economic and agricultural conditions.

Credit: 
Penn State