
Treatments for a leading cause of blindness generate $0.9 to $3 billion in societal value

Wet age-related macular degeneration (wAMD) is one of the leading causes of blindness in the U.S. Breakthrough treatments come with a steep price tag and a significant treatment burden for patients, but a new study suggests their benefits to patient health and society run into the billions of dollars, and could be even greater if adherence improved.

"This study shows the importance of targeted antibody therapy in transforming the treatment for a leading cause of blindness, namely wet macular degeneration," said Dr. Mark Humayun, director of the USC Ginsburg Institute for Biomedical Therapeutics and co-director of the USC Roski Eye Institute. "Understanding the economics underpinning this therapy is also important because it helps us in the health care system quantify how much impact a therapy delivers relative to its cost."

Age-related macular degeneration (AMD) affects approximately 11 million individuals in the United States. The wet form of the disease (wAMD), which is caused by the abnormal growth of blood vessels under the retina, progresses rapidly. Symptoms include blurred vision and blind spots, which often lead to legal blindness. While only 10 percent of people with macular degeneration develop the wet form, the symptoms for these patients are much worse -- in the past, wAMD has caused 90 percent of the blindness associated with macular degeneration.

New treatments for wAMD not only prevent further vision loss but have also been shown to improve vision, with benefits lasting for multiple years, according to clinical trials. However, administering these treatments is burdensome: patients must receive injections in the eye as frequently as every four to eight weeks. Studies have shown more than half of Medicare patients discontinue treatment within the first year due to cost, the inability to get transportation to and from their retina specialists, and fear of or discomfort from the injections.

A new economic study, published in JAMA Ophthalmology and conducted by USC researchers at the Schaeffer Center for Health Policy & Economics, the Ginsburg Institute for Biomedical Therapeutics, and the Roski Eye Institute, quantifies the benefits of treatment for wAMD. They found improvements in vision from innovative treatments generated $5.1 to $8.2 billion in patient benefits. This translates to $0.9 to $3.0 billion in societal value (patient benefits minus treatment costs) over three years. Future innovative treatments that lead to improved adherence would generate an additional $7.3 to $15.0 billion in patient benefits, they estimate.

Weighing Benefits and Burdens of Treatment

Treatments for wAMD first came on the market around 2006. Called anti-vascular endothelial growth factor (anti-VEGF) therapies, these treatments target the abnormal growth of blood vessels and have been shown to restore patients' eyesight for many years.

In clinical trials, these treatments were administered to patients monthly via an injection in the eye. However, in practice adherence tends to decline because of the discomfort associated with the eye injections, difficulty in getting to retina specialists for timely care, and cost. To address this burden to the patient, some doctors have modified treatment plans that allow for lower injection frequency, taking into account the patient's documented vision improvements, cost, and burden of administering the treatment.

Taking into account the costs associated with treatment, the researchers modeled treatment scenarios to provide practitioners, patients, and payers with information about the value of anti-VEGF therapy. Their findings quantified the benefits derived from the therapy to individual patients and society.

"We already know these drugs are effective based on clinical data. Our economic model helps translate clinical outcomes into dollar terms so payers and practitioners can quantify the value of treatment to patients rather than focusing on the treatment cost alone," said Karen Mulligan, lead author on the study and a professor at the USC Price School of Public Policy and the USC Schaeffer Center.

Current Treatments Return Value in the Billions, Future Therapies that Improve Adherence May Add Economic Benefit

To build the economic model, Mulligan and her team collected data from published literature on wAMD patients treated with anti-VEGF therapy, translating average documented changes in visual acuity and treatment usage to quality-adjusted life years (or QALYs), a standard measurement in assessments of the value of new health technologies. Based on well-established metrics, the researchers used a model in which a person with perfect vision has a quality of life that is valued at $150,000 for a single year.
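As a rough illustration of how such a valuation works (not the authors' actual model), the sketch below converts a hypothetical quality-of-life gain into dollar terms using the $150,000-per-year benchmark mentioned above; the utility weights, cohort size and treatment costs are made-up placeholders.

```python
# Toy illustration of QALY-based valuation (hypothetical numbers throughout).
# Only the $150,000 value of one year in perfect health comes from the article;
# utility weights, cohort size and treatment costs are placeholders.

VALUE_PER_QALY = 150_000  # dollars per quality-adjusted life year

def patient_benefit(utility_treated, utility_untreated, years, n_patients):
    """Dollar value of the quality-of-life gain from treatment."""
    qaly_gain_per_patient = (utility_treated - utility_untreated) * years
    return qaly_gain_per_patient * VALUE_PER_QALY * n_patients

def societal_value(total_patient_benefit, treatment_cost_per_patient, n_patients):
    """Patient benefit minus total treatment costs."""
    return total_patient_benefit - treatment_cost_per_patient * n_patients

# Hypothetical example: 100,000 treated patients over 3 years
benefit = patient_benefit(utility_treated=0.80, utility_untreated=0.72,
                          years=3, n_patients=100_000)
net = societal_value(benefit, treatment_cost_per_patient=20_000, n_patients=100_000)
print(f"Patient benefit: ${benefit/1e9:.1f}B, societal value: ${net/1e9:.1f}B")
```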

The researchers modeled multiple treatment scenarios:

No injections scenario

Less frequent injections scenario (patients received approximately eight injections per year)

More frequent injections scenario (patients received an average of 10.5 injections per year)

Improved adherence scenario (85 percent of patients initiate therapy and adherence improves)

Innovation scenarios (based on clinical trial data representing best case scenarios with patients receiving either more or less frequent injections)

The researchers found that even under current treatment conditions of less frequent injections, treatment generates over $1 billion in patient benefits for the full population with wAMD in year one and $5.1 billion by year three. With improved adherence, benefits to the patient population were estimated to reach $7.3 to $11.4 billion in year three.

This translates to a benefit to society (patient benefit minus treatment costs) of $0.9 to $3.0 billion across three years in the current treatment scenarios and upwards of $4 billion in the innovative treatment scenario (i.e., when a drug that leads to better adherence is discovered).

The researchers find that innovations improving treatment adherence could generate an additional $1.2 to $3.7 billion in patient benefit and $59 million to $1.3 billion in societal value compared with current treatment scenarios, underscoring that when patients follow through with needed treatment, both individuals and society reap the rewards.

"Our model scenarios suggest that even though anti-VEGFs provide substantial benefits, a lot is left on the table due to low adherence," said Mulligan. "One limitation in these types of modeling exercises is we often have access to vision data or treatment patterns, but not both (for the same set of patients). Generating more comprehensive and long-run real world data on vision outcomes and treatment patterns would improve our ability to understand the impact of things like adherence on value."

Credit: 
University of Southern California

Blocking a survival mechanism could tackle melanoma treatment resistance

image: Adding an inhibitor compound to cancer cells for analysis of how it affects cell signalling, growth and survival.

Image: 
the Babraham Institute

The effectiveness of current treatments for melanoma, the deadliest form of skin cancer, could be improved by using approaches that wipe out the 'survival system' of cancer cells, according to a study published in Nature Communications today.

Researchers from the Babraham Institute, AstraZeneca and the Cancer Research UK Cambridge Centre have demonstrated an approach, used in parallel with existing treatments, which knocks out one of melanoma cells' survival pathways and is effective at triggering tumour cell death and delaying treatment resistance.

The researchers suggest this approach may also help to tackle late stage cancers even after they have become resistant to existing treatments.

There are around 16,000 new melanoma skin cancer cases in the UK every year. Although survival has doubled in the UK in the past 40 years, late-stage melanoma is aggressive and difficult to treat. Around 55% of people with the latest-stage melanoma survive their disease for one year or more, compared with nearly 100% of those diagnosed at the earliest stage. These late-stage cancers evolve rapidly to resist treatment.

Cancer cells can rely on various 'survival proteins' to stay alive despite the effect of treatment. But so far, researchers have been unable to pinpoint which of these survival proteins are used by melanoma cells.

Researchers from the Babraham Institute and Cancer Research UK Cambridge Centre have now discovered that melanoma cells rely on a protein called MCL1, which is critical for the cells to survive when they are exposed to standard MEK and BRAF inhibitor drugs, such as trametinib or vemurafenib.

The researchers then studied an investigational compound from AstraZeneca, an MCL1 antagonist called AZD5991, and used it in the lab against models of melanoma.

They showed that by blocking MCL1, AZD5991 inactivated the backup survival system within melanoma cells. Combining AZD5991 with a treatment like vemurafenib had a 'double whammy' effect against cancer cells, eliminating them more effectively.

This drug combination also worked in late-stage melanoma tumours, derived from patients and grown in mice. In these mice, combinations of vemurafenib and AZD5991 reduced the size of tumours, sometimes almost completely, and slowed their growth compared to standard treatment alone. However, used alone, AZD5991 had no effect in these models.

Patients with these aggressive tumours may be given a different type of drug called an ERK inhibitor; although these drugs are still undergoing clinical trials and not widely available yet, it already seems that melanoma could evolve rapidly to resist them. Future clinical trials could look at whether blocking MCL1 at the same time as giving an ERK inhibitor, could halt the evolution of these late-stage tumours from becoming resistant.

Lead researcher Dr Mathew Sale, from the Babraham Institute, said: "This study has demonstrated that melanoma cells are addicted to the MCL1 protein for survival, but only when they are treated with the existing melanoma drugs.

"By targeting both vulnerabilities at the same time we can kill melanoma cells, causing greater inhibition of tumour growth over a longer time period."

Dr Simon Cook, group leader at the Babraham Institute, said: "This study stems from 15 years of basic research in which we have sought to understand the normal signals that control whether a cell lives or dies.

"However, we became increasingly aware that these same pathways were not functioning correctly in cancer. Thanks to a long-standing partnership with AstraZeneca and the Cancer Research UK Cambridge Centre we were able to translate this basic research to understand and potentially better treat melanoma."

Professor Duncan Jodrell from the Cancer Research UK Cambridge Centre, who contributed to the research, said: "This work highlights the importance of performing collaborative research like this, as it could lead to new ways to tackle cancers, particularly those that are hard to treat. Our work also shows the value of scientists in basic science labs working closely with drug development specialists and industry scientists, which is fundamental if we want to find better treatments for people affected by cancer."

Credit: 
Cancer Research UK

The invisible US Hispanic/Latino HIV crisis: Addressing gaps in the national response

"In his February 5,2019, State of the Union Address, President Trump promised to reinforce national efforts to end the US HIV/AIDS epidemic by 2030. However, the national public health agenda has neglected the accelerating HIV/AIDS crisis in Hispanic/Latino communities. Progress in the fight against HIV is re?ected in aggregate data for the United States, but data released by the Centers for Disease Control and Prevention (CDC) raise alarming concerns about widening, yet largely unrecognized, HIV infection disparities among Hispanics/Latinos."

So begins a peer-reviewed commentary published today (Nov. 14) in the American Journal of Public Health and principally authored by Professor Vincent Guilamo-Ramos of the NYU Silver School of Social Work.

Appearing on the approach to World AIDS Day on Dec. 1, the article notes that the federal government is seeking to put an end to HIV transmission in the US in little more than a decade. But, it states, while the number of estimated annual new HIV infections in the United States has declined overall by 6% since 2010, it has increased among Hispanic/Latino populations by 14% or more.

The alarming trend is best understood by considering the specific Hispanic/Latino populations most heavily affected by HIV/AIDS, such as Hispanic/Latino gay and bisexual men 25 to 34 years old, who experienced the largest increase in estimated annual new HIV infections of all groups reflected in Centers for Disease Control and Prevention surveillance data.

In this piece, Dr. Guilamo-Ramos draws from his research in Latinx communities to identify underlying drivers of increasing new HIV infections among Hispanics/Latinos most at risk, discusses current national efforts to fight HIV across the demographic, and underscores gaps in the national response.

Consideration of these underlying drivers of increased HIV incidence among Hispanics/Latinos is warranted to achieve the administration's 2030 HIV/AIDS goals, writes Guilamo-Ramos - with specifically focused investment in: (1) HIV stigma reduction in Hispanic/Latino communities, (2) the availability and accessibility of HIV treatment for HIV-positive Hispanics/Latinos, (3) the development of behavioral interventions tailored to Hispanic/Latino populations, and (4) the engagement of Hispanic/Latino community leaders.

Credit: 
New York University

'Are we alone?' Study refines which exoplanets are potentially habitable

image: An artist's conception shows a hypothetical planet with two moons orbiting within the habitable zone of a red dwarf star.

Image: 
NASA/Harvard-Smithsonian Center for Astrophysics/D. Aguilar

Study is the first to include 3D chemistry to understand how a star's radiation heats or cools a rocky planet's atmosphere

Information will help astronomers know where to search for life elsewhere

Researchers find that only planets orbiting active stars lose water to vaporization

Some planets, previously believed to be habitable, receive too much UV radiation to sustain life

EVANSTON, Ill. -- In order to search for life in outer space, astronomers first need to know where to look. A new Northwestern University study will help astronomers narrow down the search.

The research team is the first to combine 3D climate modeling with atmospheric chemistry to explore the habitability of planets around M dwarf stars, which comprise about 70% of the total galactic population. Using this tool, the researchers have redefined the conditions that make a planet habitable by taking the star's radiation and the planet's rotation rate into account.

Among its findings, the Northwestern team, in collaboration with researchers at the University of Colorado Boulder, NASA's Virtual Planet Laboratory and the Massachusetts Institute of Technology, discovered that only planets orbiting active stars -- those that emit a lot of ultraviolet (UV) radiation -- lose significant water to vaporization. Planets around inactive, or quiet, stars are more likely to maintain life-sustaining liquid water.

The researchers also found that planets with thin ozone layers, which otherwise have habitable surface temperatures, receive dangerous doses of UV radiation, making them hazardous for complex surface life.

"For most of human history, the question of whether or not life exists elsewhere has belonged only within the philosophical realm," said Northwestern's Howard Chen, the study's first author. "It's only in recent years that we have had the modeling tools and observational technology to address this question."

"Still, there are a lot of stars and planets out there, which means there are a lot of targets," added Daniel Horton, senior author of the study. "Our study can help limit the number of places we have to point our telescopes."

The research will be published online Nov. 14 in the Astrophysical Journal.

Horton is an assistant professor of Earth and planetary sciences in Northwestern's Weinberg College of Arts and Sciences. Chen is a Ph.D. candidate in Northwestern's Climate Change Research Group and a NASA future investigator.

The 'Goldilocks zone'

To sustain complex life, planets need to be able to maintain liquid water. If a planet is too close to its star, then water will vaporize completely. If a planet is too far from its star, then water will freeze, and the greenhouse effect will be unable to keep the surface warm enough for life. This Goldilocks area is called the "circumstellar habitable zone," a term coined by Professor James Kasting of Penn State University.
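A back-of-the-envelope way to see how the habitable-zone distance scales with a star's brightness is the standard relation d = sqrt(L / S_eff), where L is the stellar luminosity in solar units and S_eff is the effective stellar flux at the zone boundary. The sketch below uses rough, commonly quoted flux thresholds as assumptions; these values are not taken from this study.

```python
import math

def habitable_zone_edge(luminosity_solar, s_eff):
    """Distance (in AU) at which a planet receives the effective flux s_eff
    (in units of the flux Earth receives from the Sun)."""
    return math.sqrt(luminosity_solar / s_eff)

# Rough, commonly quoted flux thresholds (assumptions, not from this paper):
S_EFF_INNER = 1.1   # roughly where runaway water loss sets in
S_EFF_OUTER = 0.35  # roughly where greenhouse warming can no longer keep water liquid

# A dim M dwarf at about 2% of the Sun's luminosity
L = 0.02
print(f"inner edge ~ {habitable_zone_edge(L, S_EFF_INNER):.2f} AU")
print(f"outer edge ~ {habitable_zone_edge(L, S_EFF_OUTER):.2f} AU")
```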

Researchers have been working to figure out how close is too close -- and how far is too far -- for a planet to sustain liquid water. In other words, they are looking for the habitable zone's "inner edge."

"The inner edge of our solar system is between Venus and Earth," Chen explained. "Venus is not habitable; Earth is."

Horton and Chen are looking beyond our solar system to pinpoint the habitable zones within M dwarf stellar systems. Because they are numerous and easier to find and investigate, M dwarf planets have emerged as frontrunners in the search for habitable planets. They get their name from the small, cool, dim stars around which they orbit, called M dwarfs or "red dwarfs".

Crucial chemistry

Other researchers have characterized the atmospheres of M dwarf planets by using both 1D and 3D global climate models. These models also are used on Earth to better understand climate and climate change. Previous 3D studies of rocky exoplanets, however, have missed something important: chemistry.

By coupling 3D climate modeling with photochemistry and atmospheric chemistry, Horton and Chen constructed a more complete picture of how a star's UV radiation interacts with gases, including water vapor and ozone, in the planet's atmosphere.

In their simulations, Horton and Chen found that a star's radiation is a deciding factor in whether or not a planet is habitable. Specifically, they discovered that planets orbiting active stars are vulnerable to losing significant amounts of water to vaporization. This stands in stark contrast to previous research using climate models without active photochemistry.

The team also found that many planets in the circumstellar habitable zone could not sustain life due to their thin ozone layers. Despite having otherwise habitable surface temperatures, these planets' ozone layers allow too much UV radiation to pass through and penetrate to the ground. The level of radiation would be hazardous for surface life.

"3D photochemistry plays a huge role because it provides heating or cooling, which can affect the thermodynamics and perhaps the atmospheric composition of a planetary system," Chen said. "These kinds of models have not really been used at all in the exoplanet literature studying rocky planets because they are so computationally expensive. Other photochemical models studying much larger planets, such as gas giants and hot Jupiters, already show that one cannot neglect chemistry when investigating climate."

"It has also been difficult to adapt these models because they were originally designed for Earth-based conditions," Horton said. "To modify the boundary conditions and still have the models run successfully has been challenging."

'Are we alone?'

Horton and Chen believe this information will help observational astronomers in the hunt for life elsewhere. Instruments, such as the Hubble Space Telescope and James Webb Space Telescope, have the capability to detect water vapor and ozone on exoplanets. They just need to know where to look.

"'Are we alone?' is one of the biggest unanswered questions," Chen said. "If we can predict which planets are most likely to host life, then we might get that much closer to answering it within our lifetimes."

Credit: 
Northwestern University

Here's how you help kids crack the reading code

To help children learn to read earlier, one thing appears to be key: Learn the letters and sounds associated with the letters as early as possible. This may sound obvious, but another theory has suggested that children should first learn to read the letters in the context of words instead.

Charting each child's letter-sound knowledge can be helpful in supporting them further in the learning process as they begin school, says Professor Hermundur Sigmundsson at NTNU's Department of Psychology.

Sigmundsson, Greta Storm Ofteland, Trygve Solstad and Monika Haga collaborated on a recently published article in New Ideas in Psychology. Sigmundsson says the research team are among the first to clearly show the connection between learning the letters and sound correspondences and breaking the reading code.

"Since reading is the very foundation for acquiring other skills, it should be prioritized for the first few years of school," says Professor Sigmundsson.

Clear link

The connection between letter-sound knowledge and literacy is clear, and letter knowledge is a good indicator of reading ability. On average, the children had to know 19 letters to crack the reading code, that is, to begin reading.

But it's not a given that you'll be able to read even if you know your letters. Reading or writing single letters is something completely different from putting those letters together into words that make sense. The individual letter variations can be huge.

Granted, the letters in Norwegian are pronounced quite consistently - especially compared to English - but they vary enough that children need time. The words "cough" or "light," for example, aren't necessarily pronounced the way you would think by just looking at the letters individually.

Children who have already cracked the reading code should have appropriate challenges to further develop their reading skills. These should be in the form of books that pique their interest. At the same time, youngsters who still haven't cracked the code should learn enough letters and letter sounds to start practicing putting words together.

Read to kids early - practice makes perfect

The research team studied 356 children aged 5 to 6 years for one year. Eleven per cent of the children could already read when they started school. By the end of the first school year, 27 per cent had not yet learned to read. Most of this group were boys, who also knew fewer letters when they started school.

"If you take out the 5 to 10 per cent who have dyslexia, the numbers could indicate that around one in five children gets too little practice or lacks motivation in their first school year," Sigmundsson says.

Girls are better at reading than boys from the outset. This difference continues throughout school, but it's important to remember that this is an average, and that parents of both boys and girls can do things to help their children.

Previous research from NTNU and elsewhere shows that you need to practice exactly what you want to be good at. Therefore, it is important that children are encouraged to become independent readers early. Parents should read to children to arouse their interest whenever possible.

What you read hardly matters, as long as the child finds it time well spent. As a bonus, children and parents enjoy a cosy time together.

Credit: 
Norwegian University of Science and Technology

Tailor-made carbon helps pinpoint hereditary diseases and correct medication dosage

image: The new methodology allows the experimental spectrum produced by X-ray spectroscopy to be separated into atomic-level data.

Image: 
Anja Aarva / Aalto University

Sensors manufactured with carbon-based materials can provide uniquely accurate and real-time information on hereditary diseases or the concentrations of drugs in the body. In addition to medicine, carbonaceous materials are used in batteries, solar cells and water purification.

Other elements, such as hydrogen and oxygen, are almost always present in carbon-based materials, which alters the materials' properties. Therefore, modifying materials for desired applications requires atomic-level knowledge on carbon surface structures and their chemistry. Researchers at Aalto University, the University of Cambridge, the University of Oxford and Stanford University have now taken a significant new step forward in describing the atomic nature of carbonaceous materials.

Detailed information on carbon surfaces can be obtained by X-ray spectroscopy, but the spectrum it produces is challenging to interpret because it summarises information from several local chemical environments of the surface. The researchers have developed a new systematic analysis method that uses machine learning to integrate the computational model (density functional theory) with the experimental results of the carbon sample. The new methodology allows the experimental spectrum produced by X-ray spectroscopy to be separated into atomic-level data.
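In spirit, this kind of analysis treats the measured spectrum as a mixture of computed reference spectra and asks how much each local carbon environment contributes. The sketch below illustrates that generic idea with non-negative least squares; it is an illustrative stand-in rather than the machine-learning workflow the Aalto team used, and the "reference spectra" here are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative decomposition of an experimental spectrum into computed
# reference spectra (placeholder data, not the authors' actual method).
rng = np.random.default_rng(0)
n_energy_points, n_references = 200, 5

# Columns = computed reference spectra for different local carbon environments
references = np.abs(rng.normal(size=(n_energy_points, n_references)))

# Synthetic "experimental" spectrum: a known mixture plus noise
true_weights = np.array([0.4, 0.0, 0.3, 0.2, 0.1])
experimental = references @ true_weights + 0.01 * rng.normal(size=n_energy_points)

# Non-negative least squares: estimate how much each environment contributes
weights, residual = nnls(references, experimental)
print("estimated contributions:", np.round(weights / weights.sum(), 2))
```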

'In the past, experimental results have been interpreted differently, based on varying literature references, but now we were able to analyse the results using only computational references. The new method gives us a much better understanding of carbon surface chemistry without human-induced bias,' says Anja Aarva, a doctoral student at Aalto University.

The new method expands knowledge of carbon-based materials

In a two-part study, the researchers first examined qualitatively how differently bonded carbon affects the shape of the experimental spectrum. They then combined the measured spectrum with computational reference spectra to obtain a quantitative estimate of what the experimental spectrum consists of, which helped them determine the atomic-level nature of the carbon sample. The new methodology is suitable for analysing the surface chemistry of various forms of carbon, such as graphene, diamond and amorphous carbon.

The study is a continuation of the work of Aalto University postdoctoral researcher Miguel Caro and professor Volker Deringer from Oxford University, which extensively mapped the structure and reactivity of amorphous carbon. The study utilises machine learning methods developed by professor Volker Deringer and professor Gabor Csányi from Cambridge University. Experimental measurements were carried out by Sami Sainio, an Aalto based postdoctoral researcher at Stanford University.

'Next, we intend to use the methodology we have developed to predict, for example, what kind of carbon surface would be best for electrochemical identification of certain neurotransmitters, and then try to produce the desired surface. In this way, computational work would guide experimental work and not vice versa, as has typically been the case in the past,' Tomi Laurila, professor at Aalto University said.

Credit: 
Aalto University

Two cosmic peacocks show violent history of the Magellanic Clouds

video: A number of filamentary structures are formed at the same time after the collision. This simulation was performed by the supercomputer "ATERUI" operated by the National Astronomical Observatory of Japan.

Image: 
NAOJ/Inoue et al.

Two peacock-shaped gaseous clouds were revealed in the Large Magellanic Cloud (LMC) by observations with the Atacama Large Millimeter/submillimeter Array (ALMA). A team of astronomers found several massive baby stars in the complex filamentary clouds, which agrees well with computer simulations of giant collisions of gaseous clouds. The researchers interpret this to mean that the filaments and young stars are telltale evidence of violent interactions between the LMC and the Small Magellanic Cloud (SMC) 200 million years ago.

Astronomers know that stars are formed in collapsing clouds in space. However, the formation processes of giant stars, 10 times or more massive than the Sun, are not well understood because it is difficult to pack such a large amount of material into a small region. Some researchers suggest that interactions between galaxies provide a perfect environment for massive star formation. Due to the colossal gravity, clouds in the galaxies are stirred, stretched, and often collide with each other. A huge amount of gas is compressed in an unusually small area, which could form the seeds of massive stars.

A research team used ALMA to study the structure of dense gas in N159, a bustling star formation region in the LMC. Thanks to ALMA's high resolution, the team obtained a detailed map of the clouds in two sub-regions, N159E-Papillon Nebula and N159W South.

Interestingly, the cloud structures in the two regions look very similar: fan-shaped filaments of gas extending to the north with the pivots in the southernmost points. The ALMA observations also found several massive baby stars in the filaments in the two regions.

"It is unnatural that in two regions separated by 150 light-years, clouds with such similar shapes were formed and that the ages of the baby stars are similar," says Kazuki Tokuda, a researcher at Osaka Prefecture University and the National Astronomical Observatory of Japan. "There must be a common cause of these features. Interaction between the LMC and SMC is a good candidate."

In 2017, Yasuo Fukui, a professor at Nagoya University and his team revealed the motion of hydrogen gas in the LMC and found that a gaseous component right next to N159 has a different velocity than the rest of the clouds. They suggested a hypothesis that the starburst is caused by a massive flow of gas from the SMC to the LMC, and that this flow originated from a close encounter between the two galaxies 200 million years ago.

The pair of peacock-shaped clouds in the two regions revealed by ALMA fits nicely with this hypothesis. Computer simulations show that many filamentary structures are formed in a short time after a collision of two clouds, which also backs this idea.

"For the first time, we uncovered a link between massive star formation and galaxy interactions in very sharp detail," says Fukui, the lead author of one of the research papers. "This is an important step in understanding the formation process of massive star clusters in which galaxy interactions have a big impact."

Credit: 
National Institutes of Natural Sciences

Architecture of a bacterial power plant deciphered

image: Structure of the cytochrome bd oxidase. The experimental data are shown in gray and the derived molecular model is colored. The enlarged inset shows the area in which the three cytochromes are bound.

Image: 
Rudolf-Virchow-Zentrum / University of Würzburg

Both humans and many other creatures need oxygen to survive. During the conversion of nutrients into energy, oxygen is converted to water by an enzyme called an oxidase, which carries out the last step of the so-called respiratory chain.

While humans have only one type of these oxidases, the bacterial model organism Escherichia coli (E. coli) has three alternative enzymes available. In order to better understand why E. coli and other bacteria need multiple oxidases, Prof. Bettina Böttcher from the Rudolf Virchow Center, in collaboration with Prof. Thorsten Friedrich (University of Freiburg), has determined the molecular structure of the cytochrome bd oxidase from E. coli. This type of oxidase is found only in bacteria and archaea.

Bacteria have other types of oxidase

The eponymous cytochromes, two of type b and one of type d, are the key iron-containing groups that enable the oxidase to function. At cytochrome d, oxygen is bound and converted to water. The structure determination revealed that the architecture of the cytochrome bd oxidase from E. coli is very similar to that of the enzyme from another bacterium, Geobacillus thermodenitrificans. "However, to our great surprise, we discovered that a cytochrome b and cytochrome d have changed positions and thus the site of oxygen conversion within the enzyme," reports Prof. Thorsten Friedrich.

The reason for this change could be that the cytochrome bd oxidase fulfils a second function: in addition to energy production, it can protect against oxidative and nitrosative stress. Particularly pathogenic bacterial strains show high cytochrome bd oxidase activity. Since humans do not have this type of oxidase, these results might also provide important leads for the development of new antimicrobials that target the cytochrome bd oxidase of pathogens such as Mycobacteria.

Important for this success was the new high-performance electron microscope, which has been operated since 2018 under the direction of Prof. Böttcher at the Rudolf Virchow Center. "Cytochrome bd oxidase was a challenging sample for cryo-electron microscopy because it is one of the smallest membrane proteins whose structure has been determined with this technique," explains Prof. Bettina Böttcher.

Special features of this technique are extremely low temperatures, down to minus 180 degrees Celsius, and a resolution on the order of individual atoms. It makes it possible to study biological molecules and complexes in solution that have been snap-frozen and to reconstruct their three-dimensional structure. With a voltage of 300,000 volts, the microscope accelerates the electrons with which it "scans" the samples.

Credit: 
University of Würzburg

Alpine rock axeheads became social and economic exchange fetishes in the Neolithic

image: Alpine rock axehead found at Harras, Thuringia, from the Michelsberg Culture (c. 4300-2800 BCE).

Image: 
Juraj Lipták, State Office for Heritage Management and Archaeology Saxony-Anhalt.

Axeheads made out of Alpine rocks had strong social and economic symbolic meaning in the Neolithic, given their production and use value. Their resistance to friction and breakage, which permitted intense polishing and re-working of the rocks, gave these artefacts an elevated exchange value, key to the formation of long-distance exchange networks among communities of Western Europe - communities that had already begun to set the exchange value of a product according to the time and effort invested in producing it.

This is what a study led by a research group at the Universitat Autònoma de Barcelona (UAB) indicates in regards to the mechanical and physical parameters characterising the production, circulation and use of a series of rock types used in the manufacturing of sharp-edged polished artefacts in Europe during the Neolithic (5600-2200 BCE).

The objective of the study was to answer a long debated topic: the criteria by which Alpine rocks formed part of an unprecedented pan-European phenomenon made up of long-distance exchange networks, while others were only used locally. Was the choice based on economic, functional or perhaps subjective criteria? Stone axeheads were crucial to the survival and economic reproduction of societies in the Neolithic. Some of the rocks used travelled over 1000 kilometres from their Alpine regions to northern Europe, Andalusia in southern Spain and the Balkans.

This is the first study in the specialised literature to include comparative data obtained by testing the rocks' resistance to friction and breakage. These mechanical parameters have led to the definition of production and use values, which were then correlated with the distances and volumes of the rocks exchanged in order to obtain their exchange value. The results help explain the basic principles underlying the supply and distribution system of stone materials during the Neolithic in Western Europe, as well as its related economic logic.
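As a loose illustration of that kind of correlation exercise (not the study's actual value indices), the toy calculation below relates a hypothetical mechanical-resistance index for a few rock types to an invented maximum circulation distance:

```python
# Toy illustration (invented numbers): relate a rock type's resistance to
# friction/breakage to the maximum distance over which it circulated.
from statistics import correlation  # Pearson correlation, Python 3.10+

rock_types   = ["jadeitite", "omphacitite", "local flint", "basalt"]
resistance   = [9.1, 8.7, 4.2, 5.0]     # hypothetical mechanical index
max_distance = [1500, 1200, 180, 350]   # hypothetical circulation range, km

r = correlation(resistance, max_distance)
print(f"correlation between resistance and circulation distance: {r:.2f}")
```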

"The reasons favouring the integration of specific rock types into these long-distance networks depended on a complex pattern of technological and functional criteria. This pattern was not solely based on economic aspects, their use value, but rather on the mechanical capacity to resist successive transformation processes, i.e. their production value, and remain unaltered throughout time", explains Selina Delgado-Raack, researcher at the Department of Prehistory, UAB, and first author of the article.

Supply System and Economic Logic

The study points to the diverging economic conception between the manufacturing of tools using other rocks and Alpine rock axeheads. Neolithic communities selected the most suitable raw materials available from all the resources in their region and knew each of their mechanical and physical characteristics. These tools normally travelled in a radius of 200 kilometres from where they originated and rarely went farther than 400-500 kilometres. Only Alpine rocks travelled further than those regional and economic limits.

"The circulation of these rocks at larger distances did not respond to a functional and cost-efficient logic, in which each agent takes into account the costs of manufacturing and transport when selecting the different rock types, all of them viable in being converted into fully functioning tools", indicates Roberto Risch, also researcher at the Department of Prehistory, UAB, and coordinator of the research. "It rather obeys the emergence of a very different economic reasoning, based on the ability to transform one material through ever greater amounts of work, something which many centuries later Adam Smith used to define the British economy of the 18th century. In the case of Alpine axeheads, their exceptional exchange value was due to the increase in manufacturing costs, a result of the intense polishing of these stones as they passed from one community to another".

A Primitive Form of Currency?

For the research team, the fact that Alpine axeheads are the most commonly crafted and re-worked artefacts across different periods and regions of the Neolithic rules out their role as symbols of power or ceremonial elements. "The economic pattern points towards more of a fetish object used in social and economic interactions among European communities of highly different socio-political productions and orientations", Selina Delgado-Raack states.

The exceptional exchange value reached by some rock types, such as the omphacitites and jadeitites, leads the team to think that they may have been used as a primitive form of currency, although they admit that there is a need for more studies before this topic can be clarified.

Credit: 
Universitat Autonoma de Barcelona

Efficient, but not without help

HSE University economists analyzed which banks performed best on the Russian market from 2004 to 2015 - state-owned, private, or foreign-owned. They found that during stable economic and political periods, foreign-owned banks tended to take the lead, while during crisis periods, such as 2008 to 2013, state-owned banks outperformed them.

Empirical studies have shown that efficiency in the banking sector generally correlates with a country's economic growth. A bank's efficiency is affected by many factors, including its asset and liability structure, as well as its specialization (household deposits, securities, or investment banking). The type of ownership - public or private - also plays a role. Furthermore, depending on the overall macroeconomic environment, private or state-owned banks may perform better. This is the first time such an analysis has been carried out for Russia.

For most banks, the focus is profit efficiency. State-owned banks, in addition to operational goals, also carry out social objectives, such as offering reduced-rate mortgages to military personnel or other eligible beneficiaries.

As for foreign privately owned banks, they generally enter new markets with only one goal - profit generation. Russia used to be quite attractive in this regard, at least from 2004 to 2009, when the interest margin was almost four times higher than in Europe (8.02% vs. 2.24%).

The proportions of state-owned and foreign-owned banks in the sector's total assets changed over the 11 years from 2004 to 2015. According to the Bank of Russia's data, the state-owned share increased rapidly from 40% in 2004 to 52% in 2008 and 61.5% in 2015. The share of foreign-owned banks started out at 7.3% in 2004 and climbed to 18.7% in 2008. However, the global financial meltdown hit them harder than others, and by 2015 their share of assets had fallen to 13%. At the same time, the share of privately owned Russian banks also decreased significantly: standing above 50% in 2004, it fell below 30% by 2015.

Research Methods

Economists at HSE University analyzed reported data from 240 banks, which, in aggregate, represent 91% of total banking assets in Russia. They aimed to identify the factors explaining why the return on assets (ROA) of certain banks fell short of the optimal level.

The 'suspects' included the banks' ownership, as well as those indicators characterizing assets and liability structures and the risk profiles of loan portfolios. The researchers analyzed the share of household deposits in total deposits, the share of household loans in banks' loan portfolios, and the share of loan loss provisions to total loans. These particular indicators can characterize a bank's focus and credit risk level.

A bank's profit efficiency demonstrates how it manages its profits. As such, a lack of efficiency may show up in banking services, which might be too expensive or not popular among the bank's clients.

'To model profit function, we relied on three factors: labour, capital, and deposits. We believed that for banks, deposits are the main resource,' explained Veronika Belousova, one of the study's authors, adding: 'The profit function included the cost of these resources (cost of labour = personnel expenses/total bank assets; cost of capital = operational costs/fixed assets; cost of deposits = interest paid on deposits/volume of deposits). Also, loans and securities were investigated as banking products. ROA was a dependent variable in regards to this function'.
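As a minimal sketch of the input prices described in the quote above (using made-up balance-sheet figures, not data from the study), one could compute them as simple ratios:

```python
# Input prices for the profit function, as described in the quote above.
# Figures are hypothetical placeholders, not data from the study.

def input_prices(personnel_expenses, total_assets,
                 operational_costs, fixed_assets,
                 interest_paid, deposits):
    return {
        "cost_of_labour":   personnel_expenses / total_assets,
        "cost_of_capital":  operational_costs / fixed_assets,
        "cost_of_deposits": interest_paid / deposits,
    }

example_bank = input_prices(personnel_expenses=1.2, total_assets=100.0,
                            operational_costs=0.9, fixed_assets=5.0,
                            interest_paid=3.5, deposits=70.0)
print(example_bank)  # e.g. {'cost_of_labour': 0.012, ...}
```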

In addition to these indicators, the researchers considered several additional factors, such as currency exchange rates, banks' capital adequacy (as risk buffer), their location and specialization, and the size of their branch networks.

The profit function was modeled for different time periods, taking into account crises experienced by the Russian economy: i.e., the 2004 liquidity crisis, the 2008-2010 economic meltdown, and the geopolitical crisis that started in spring 2014.

Outcomes

This empirical study indicates that the profit efficiency of state-owned banks during crisis periods was better than that of privately owned banks. The researchers note that this, in conjunction with the fact that state-owned banks have more resources, helps them outperform competitors in the volume of household loans, in attracting payroll projects (e.g., from state-owned firms), and in offering payment services to their corporate clients. In addition, state-owned banks are able to reduce the share of less profitable interbank loans.

The introduction of Russia's deposit insurance system and the 2004 crisis forced many firms to transfer their accounts from private to state-owned banks. As a result, the former were unable to compete with the latter in pricing their services, which led to a decline in long-term, stable funding sources and, consequently, higher liquidity risks.

Today, state-owned banks lead in household (68.4%) and corporate (72.8%) loans. Furthermore, interbank loans have started to take a larger share in the loan portfolios of privately owned banks.

During the more 'peaceful' years (from January 2004 to June 2008, and from January 2014 to October 2015), foreign-owned banks turned out to be more profit-efficient (i.e., their ROAs were closer to optimal) on the Russian market. The researchers believe that one of their key advantages is better management practices, instilled by their parent institutions. This concerned virtually all areas of activity, from routine standardization of business processes and faster decision-making to the introduction of new technologies and unified risk assessment instruments.

However, during the crisis, from July 2008 to December 2013, state-owned banks performed better in terms of profit efficiency. According to the researchers, this happened due to their larger capital stocks and ability to replenish them. 'This situation is characteristic of countries where there are more monopolies and state banks in the economy,' the authors concluded.

Credit: 
National Research University Higher School of Economics

A step closer to cancer precision medicine

image: Figure 1. Left panel: ER-positive breast cancer patients with AGR2 up-regulation showed lower survival rates in the METABRIC study; Right panel: Acute myeloid leukaemia patients with SRGN up-regulation showed lower survival rates in the BeatAML study.

Image: 
University of Helsinki

Researchers from the Faculty of Medicine and the Institute for Molecular Medicine Finland (FIMM) at the University of Helsinki have developed a computational model, Combined Essentiality Scoring (CES), that enables accurate identification of essential genes in cancer cells for the development of anti-cancer drugs.

Why are the essential genes important in cancer?

Cancer is a leading cause of death worldwide. Cancer cells usually grow faster due to the activation of certain genes. Targeted therapies aim to inhibit genes that are activated only in cancer cells, thus minimizing side effects to normal cells.

High-throughput genetic screening has been established for evaluating the importance of individual genes for the survival of cancer cells. Such an approach allows researchers to determine the so-called gene essentiality scores for nearly all genes across a large variety of cancer cell lines.

However, challenges with replicability of the estimated gene essentiality have hindered its use for drug target discovery.

"shRNA and CRISPR-Cas9 are the two common techniques used to perform high-throughput genetic screening. Despite improved quality control, the gene essentiality scores from these two techniques differ from each other on the same cancer cell lines," explains Wenyu Wang, first author of the study.

How can we do better?

To harmonize genetic screening data, the researchers proposed a novel computational method called Combined Essentiality Scoring (CES), which predicts cancer essential genes using information from shRNA and CRISPR-Cas9 screens plus molecular features of cancer cells. The team demonstrated that CES detects essential genes with higher accuracy than existing computational methods. Furthermore, the team showed that two predicted essential genes were indeed correlated with poor prognosis in breast cancer and leukaemia patients, respectively, suggesting their potential as drug targets (Figure 1).
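To give a rough sense of what combining two screening modalities can look like (this is only a simplified stand-in, not the published CES algorithm, and the scores are placeholders), one might standardize and average per-gene essentiality scores from the two screens:

```python
import numpy as np

# Simplified illustration of combining essentiality scores from two screens.
# This is NOT the published CES algorithm; gene names and values are placeholders.
genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]
shrna_scores  = np.array([-2.1, -0.3, -1.5, 0.2])   # more negative = more essential
crispr_scores = np.array([-1.8, -0.1, -2.0, 0.4])

def zscore(x):
    """Standardize scores so the two screens are on a comparable scale."""
    return (x - x.mean()) / x.std()

combined = 0.5 * zscore(shrna_scores) + 0.5 * zscore(crispr_scores)
for gene, score in sorted(zip(genes, combined), key=lambda g: g[1]):
    print(f"{gene}: combined essentiality {score:+.2f}")
```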

"Improving gene essentiality scoring is just a beginning. Our next aim is to predict drug-target interactions by integrating drug sensitivity and gene essentiality profiles. Given the ever-increasing volumes of functional screening datasets, we hope to extend our knowledge of drug target profiles that will eventually benefit drug discovery in personalized medicine," says Assistant Professor Jing Tang, corresponding author of the study.

Credit: 
University of Helsinki

Sociable crows are healthier -- new research

image: Carrion crows

Image: 
Photo by Dr Claudia Wascher, Anglia Ruskin University (ARU)

A new study has found that crows living in large social groups are healthier than crows that have fewer social interactions.

The research, led by Dr Claudia Wascher of Anglia Ruskin University (ARU), has been published this week in the journal Animal Behaviour.

Dr Wascher and her colleagues studied a population of captive carrion crows over a six-year period. They monitored the behaviour of the crows in different sized groups and measured friendship by ranking the birds using a sociality index.

At the same time, they studied the crows' droppings to measure the presence of coccidian oocysts, the transmissible stage of a gastrointestinal parasite that can represent an important health threat for birds.

Increased exposure to parasites and disease transmission is considered one of the major disadvantages of group living. This new study, however, shows the opposite effect.

The researchers found that crows with strong social bonds, living with more relatives, and in larger groups, excreted a significantly smaller proportion of droppings containing parasites than less sociable crows.

The study did not find a connection between health and the crow's dominance within the group, but found that male crows (33%) were slightly more likely to carry the parasite than females (28%).

Dr Wascher, Senior Lecturer in Biology at Anglia Ruskin University (ARU), said: "Crows are a highly social bird and we found that crows with the strongest social bonds excreted fewer samples containing coccidian oocyst, which is a common parasite in birds.

"It is a commonly-held belief that animals in larger groups are less healthy, as illness spreads from individual to individual more easily. We also know from previous studies that aggressive social interactions can be stressful for birds and that over time chronic activation of the physiological stress response can dampen the immune system, which can make individuals more susceptible to parasites.

"Therefore the results from our six-year study, showing a correlation between sociability and health, are significant. It could be that having close social bonds reduces stress levels in crows, which in turn makes them less susceptible to parasites.

"It could also be that healthier crows are more sociable. However, as many of the birds we studied were socialising within captive family groups, dictated by the number of crows within that family, we believe that social bonds in general affect the health of crows, and not vice versa."

Credit: 
Anglia Ruskin University

Researchers find new role for dopamine in gene transcription and cell proliferation

WASHINGTON (Nov. 14, 2019) - The dopamine D2 receptor has a previously unobserved role in modulating Wnt expression and control of cell proliferation, according to a new study from the George Washington University (GW) and the University of Pittsburgh. The research, published in Scientific Reports, could have implications for the development of new therapeutics across multiple disciplines including nephrology, endocrinology, and psychiatry.

Dopamine is traditionally studied in the central nervous system, however, it is increasingly implicated in regulating functions of various other organs. This new study identifies a new role for dopamine signaling via the D2 receptor outside the brain - in controlling signaling through the Wnt/β-catenin pathway, in part, through its effects on expression of Wnt3a, a key Wnt receptor ligand.

Both dopamine and the Wnt/β-catenin signaling pathways are ubiquitous across organ systems and species. Wnt signaling is essential for development and cell proliferation, and is associated with a number of diseases from cancer to schizophrenia. However, little is known about the underlying mechanism regulating expression of Wnt3a, or the modulation of its activity.

"In our research, we found that the dopamine D2 receptor is a transcriptional regulator of Wnt signaling and this ability to modulate Wnt signaling is important for better understanding development of hypertension," said Prasad Konkalmatt, PhD, assistant research professor of medicine at the GW School of Medicine and Health Sciences and a first author on the study.

The research team focused the study on signaling in the kidneys and in the pancreas. More broadly, the study shows that dopamine receptors can act as regulators of gene transcription and that this signaling is important in controlling cell proliferation under healthy and disease conditions.

These results were unexpected, surprising the investigators by how well conserved this dopamine regulation was across species and organs. This work also showed for the first time that lithium, one of the most commonly used psychiatric medications today, strongly increases the expression of D2 receptors, providing a new mechanism of action for this drug.

"Our work opens the door to a new way of thinking about dopamine signaling and its regulation," said Zachary Freyberg, MD, PhD, assistant professor of psychiatry and cell biology at the University of Pittsburgh School of Medicine and a senior author on the study. "By providing a new mechanism for the actions of lithium, we can better understand how this medication works and make better medications in the future to treat bipolar disorder and to improve the lives of the millions of people living with this illness."

The investigators also discovered that a number of common gene polymorphisms associated with hypertension and renal injury control D2 receptor expression in renal cells. This discovery provides new mechanisms and drug discovery targets for hypertension and renal injury.

"Our findings have broad implications in terms of how we think about dopamine receptor signaling, especially given that the receptors are targets for diabetes and potentially for hypertension and renal injury," explained Ines Armando, PhD, associate research professor of medicine at the GW School of Medicine and Health Sciences and a senior author on the study. "Expanding our understanding of this unique signaling specific to individual patients offers the promise of more effective precision medicine."

Credit: 
George Washington University

MicroRNA comprehensively analyzed

image: Messenger RNA transmits genetic information to the proteins, and microRNA plays a key role in the regulation of gene expression. Scientists from the Moscow Institute of Physics and Technology and the Research Centre for Medical Genetics have described the complex interactions between these two and other kinds of human RNA.

Image: 
@tsarcyanide/MIPT Press Office

Messenger RNA transmits genetic information to the proteins, and microRNA plays a key role in the regulation of gene expression. Scientists from the Moscow Institute of Physics and Technology and the Research Centre for Medical Genetics have described the complex interactions between these two and other kinds of human RNA. The paper was published in Frontiers in Genetics.

What are microRNA and Argonaute proteins?

Ribonucleic acid, or RNA, is an essential molecule encoding genetic information in cells. Its three main types are called transfer, ribosomal, and messenger RNA. The latter is also known as mRNA and serves as an intermediary between DNA -- the gene repository -- and the protein molecules resulting from gene expression. mRNA is synthesized in the nucleus based on the DNA sequence. It is then transported out into the cytoplasm, where it becomes a template for protein synthesis. However, only about 2% of the RNA molecules produced by a cell serve as protein templates. Among the remaining ones is the so-called microRNA, which is 18 to 25 nucleotides long and plays an entirely different role.

MicroRNAs are a group of about 2,500 molecules known in humans so far, which bind to proteins of the Argonaute (AGO) family and function together with them. The small microRNA-AGO complex binds to particular mRNA sites, and it is always the microRNA component of the complex that determines which region of which mRNA to bind to. The Argonaute protein either blocks protein production from the mRNA or eliminates the mRNA by "cleaving" it. Therefore, if a microRNA-AGO complex engages a certain mRNA, the corresponding protein will no longer be produced. In this way, genes are effectively silenced by microRNA "capturing" mRNA and affecting gene expression.

While the interaction occurs between microRNA and mRNA, it is often seen as that between microRNA and the gene encoding the mRNA. This silencing is one of the numerous mechanisms for regulating gene expression. Cells employ them to control gene productivity by enabling or disabling genes to a varying degree. Misregulation of gene expression due to microRNA "malfunctions" can cause cancer and other pathologies.

Geneticists still do not fully understand the interactions between the two types of RNA. There are about 20,000 known human mRNAs and 2,500 microRNAs, but it remains unclear which of them actually bind to each other. The researchers previously showed that the computer programs predicting interactions between microRNA and mRNA do not work properly.

In their new research, the scientists combined experimental data on the amounts of mRNA and microRNA formed in a cell with data on the interactions between these two kinds of RNA for two types of human cells. The team explored the connection between the amount of a given microRNA in a cell and its binding activity. One would expect the two quantities to be in direct proportion to each other, but that proved not to be the case. The researchers also looked at how many pairs a given mRNA formed, and whether it formed them with the same or with different microRNAs. In scientific terms, the geneticists explored the relation between expression level and binding activity for mRNA and microRNA, and how the behavior of these pairs depends on cell type.
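
To make that comparison concrete, here is a minimal Python sketch of the kind of check described above. It is not the authors' pipeline; the microRNA names, expression values and interaction pairs are invented for illustration, and a real analysis would use the full experimental tables.

from scipy.stats import spearmanr  # rank correlation between expression and binding activity

# Hypothetical inputs: expression level of each microRNA in a given cell line,
# and experimentally observed (microRNA, mRNA) interaction pairs.
mirna_expression = {"miR-21": 5200.0, "let-7a": 2800.0, "miR-155": 310.0,
                    "miR-7": 45.0, "miR-128": 12.0}
interactions = [("miR-21", "GeneA"), ("miR-21", "GeneB"), ("let-7a", "GeneC"),
                ("miR-155", "GeneA"), ("miR-7", "GeneD"), ("miR-7", "GeneE"),
                ("miR-128", "GeneA"), ("miR-128", "GeneB"), ("miR-128", "GeneF")]

# Count how many distinct mRNAs each microRNA binds (its "binding activity").
targets = {}
for mirna, mrna in interactions:
    targets.setdefault(mirna, set()).add(mrna)

names = sorted(set(mirna_expression) & set(targets))
expression = [mirna_expression[n] for n in names]
activity = [len(targets[n]) for n in names]

# If expression and binding activity were directly proportional, the rank
# correlation would be close to 1; the study found that it is not.
rho, p_value = spearmanr(expression, activity)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")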

"Our research explores the interaction between microRNA and genes," said Olga Plotnikova, a PhD student at MIPT and one of the authors of the research. "A microRNA is a small noncoding RNA molecule that regulates gene expression. We published an article showing the imperfections of the computer programs used to predict the interactions between microRNA and genes. That's why we wanted to get the full picture of microRNA interactions: what binds with what and how.

"We analyzed the only two available papers on this subject that report the experimental data on the entire range of interactions between microRNA and genes, for two different human cell lines. Then we correlated the data with the results of other experiments, determining the expression level of mRNA and microRNA in the same cell lines," Plotnikova went on. "We showed that microRNA does not strongly regulate all genes and its regulation potential does not directly depend on its expression level. We also identified the differences between the microRNA interactions in the two cell lines."

Methods

The main problem with empirical research into microRNA interactions is the limitations of the available methods. One set of methods, known as reporter gene assays, can test only one particular interaction per experiment. Another, called cross-linking with immunoprecipitation (CLIP), lets researchers identify binding sites but not the specific microRNAs associated with them. Cross-linking is typically used to pinpoint where proteins directly contact nucleic acids; it allows a specific protein-RNA complex to be purified so that the vast majority of contaminating RNA can be removed. It is thus possible to identify all the microRNA-mRNA binding sites without knowing which of the thousands of known microRNAs was involved in each interaction.

Two similar techniques, CLASH and CLEAR-CLIP, have recently been developed; both build on CLIP. The problem with them is that they are very intricate and have so far been applied to only two cancerous human cell lines: a kidney cell line and a liver cell line. The team also used the available data on the amounts of mRNA and microRNA produced in each of these cell lines (gene expression data). In addition, the scientists used data from 79 CLIP experiments to identify the mRNA regions that interact with microRNAs; these experiments establish that the interactions exist, but they do not reveal which microRNAs are involved.

Research results

The researchers provided an in silico demonstration that the data from the improved CLIP experiments, covering the entire range of interactions between microRNAs and genes in two different human cell lines, are similar and can be compared. They showed that most mRNA-microRNA complexes are formed by a small number of RNAs: for example, only 1%-2% of protein-coding genes form more than 10 different interactions. They also found intriguing mRNAs with "sponge-like" properties, which bound a large number of microRNAs (more than 50) at different sites. What is more, the researchers identified a group of microRNAs with two key features: they are weakly expressed yet form many interactions. This runs contrary to the expectation that the more a given microRNA is expressed, the more it should interact with various mRNAs.
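
A minimal sketch of the bookkeeping behind such counts, assuming a hypothetical list of interaction pairs (this is not the authors' code, and the thresholds simply echo the figures quoted above), could look like this:

from collections import defaultdict

# Hypothetical (microRNA, mRNA) pairs from a CLASH/CLEAR-CLIP-style dataset.
pairs = [("miR-21", "GeneA"), ("miR-155", "GeneA"), ("miR-16", "GeneA"),
         ("let-7a", "GeneB"), ("miR-29b", "GeneC"), ("miR-21", "GeneB")]

partners = defaultdict(set)  # mRNA -> set of distinct microRNA partners
for mirna, mrna in pairs:
    partners[mrna].add(mirna)

MANY_INTERACTIONS = 10   # only 1-2% of protein-coding genes exceed this, per the study
SPONGE_THRESHOLD = 50    # "sponge-like" mRNAs bound more than 50 microRNAs

highly_connected = [g for g, s in partners.items() if len(s) > MANY_INTERACTIONS]
sponge_candidates = [g for g, s in partners.items() if len(s) > SPONGE_THRESHOLD]

print("mRNAs with more than 10 distinct microRNA partners:", highly_connected)
print("candidate sponge-like mRNAs (more than 50 partners):", sponge_candidates)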

The study also produced a list of reliable microRNA-binding regions -- the places where mRNA and microRNA interact with each other. This led the team to create online software that determines whether a given position in the human genome falls within a microRNA binding site. The tool helps identify disruptions of microRNA binding, and therefore of gene regulation, which can explain some genetically inherited diseases, and it could be used to analyze patients' genomes. Mapping all the interactions between microRNAs and human genes helps reveal the molecular basis of congenital and acquired disorders.
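
The lookup such a tool performs can be pictured with a short sketch. This is not the MIPT group's software; the binding regions below are hypothetical, and coordinates are treated as 1-based and inclusive.

import bisect

# Hypothetical microRNA-binding regions per chromosome, stored as sorted,
# non-overlapping (start, end, microRNA) tuples.
binding_sites = {
    "chr1": [(1_005_200, 1_005_221, "miR-21"), (2_340_100, 2_340_118, "let-7a")],
}

def site_at(chrom, pos):
    """Return the (start, end, microRNA) record covering pos, or None."""
    sites = binding_sites.get(chrom, [])
    starts = [start for start, _end, _mir in sites]
    i = bisect.bisect_right(starts, pos) - 1   # last site starting at or before pos
    if i >= 0 and sites[i][0] <= pos <= sites[i][1]:
        return sites[i]
    return None

print(site_at("chr1", 1_005_210))  # falls inside the miR-21 site
print(site_at("chr1", 1_900_000))  # no binding site -> None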

Credit: 
Moscow Institute of Physics and Technology

Sexual minorities continue to face discrimination, despite increasing support

UNIVERSITY PARK, Pa. -- Despite increasing support for the rights of people in the LGBTQ+ community, discrimination remains a critical and ongoing issue for this population, according to researchers.

In a recent study, researchers found that adults who identified as gay, lesbian or bisexual -- as well as people who reported same-sex attraction or same-sex sexual partners, referred to as sexual minorities -- experienced discrimination and victimization at different rates across age.

Cara Exten, assistant professor of nursing at Penn State, said the findings are a reminder that discrimination is still a significant issue for sexual minorities, which is key for policy, prevention and intervention.

"We conducted this study because we wanted to better understand discrimination experiences affecting sexual minority populations," Exten said. "We wanted to examine whether there were adults at particular ages who were more likely to have experienced discrimination in the past year -- and if so, what types of discrimination. We aimed to call attention to the continued high rates of discrimination that LGBTQ+ individuals are experiencing -- because we know that these experiences affect their health."

Collaborator Stephanie Lanza, professor of biobehavioral health and director of the Edna Bennett Pierce Prevention Research Center, noted that "a better understanding of recent experiences of discrimination among adults across a wide range of ages is necessary so that we can add to the national discourse on LGBTQ+ disparities in physical and mental health. Importantly, examining specific types of discrimination experienced by sexual minorities across age can indicate where there is greatest need for intervention -- both to support individuals and to address stigma more broadly."

According to the researchers, previous work has found that sexual minorities tend to experience poorer health than people who are not sexual minorities. Exten said that while sexual minorities are not inherently more vulnerable to health concerns, their experiences with anti-LGB stress, stigma and discrimination across the life course may lead to poorer and more complicated health patterns.

"Research has linked discrimination and poor health outcomes among minorities, but we didn't have a clear picture of whether sexual minorities may be more or less vulnerable to experiencing discrimination at certain points during their life," Exten said. "We might, for example, find that older adults are more likely to experience discrimination in health care settings as they age, given that older adults are more likely to need medical care."

The researchers used data on 2,993 sexual minority adults between the ages of 18 and 65, drawn from a nationally representative study of U.S. citizens. Participants answered a questionnaire about how often they had experienced discrimination in the previous year because they were perceived as gay, lesbian or bisexual.

The survey asked whether participants had experienced six different forms of discrimination, which the researchers grouped into three categories: general discrimination, such as in public places like shops or restaurants; victimization, such as being called names, pushed or threatened; and healthcare discrimination, such as trouble obtaining healthcare because of sexual orientation or discrimination while receiving treatment.

After analyzing the data, the researchers found that 17% of participants had experienced some form of discrimination in the previous year. In total, 13% reported general discrimination, 12% reported victimization and 7% reported healthcare discrimination.
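
As an illustration only (this is not the authors' analysis code, and the respondents and item names are invented), the sketch below shows how answers to the six items can be rolled up into the three overlapping categories and an overall "any discrimination" rate; because the categories overlap, their percentages need not sum to the overall figure.

# Each record marks, with 1, the forms of discrimination a respondent reported
# experiencing in the past year. Items and responses here are invented.
respondents = [
    {"public_places": 1, "services": 0, "called_names": 1, "threatened": 0,
     "care_access": 0, "care_treatment": 0},
    {"public_places": 0, "services": 0, "called_names": 0, "threatened": 0,
     "care_access": 0, "care_treatment": 0},
    {"public_places": 0, "services": 1, "called_names": 0, "threatened": 0,
     "care_access": 1, "care_treatment": 0},
]

categories = {
    "general": ["public_places", "services"],
    "victimization": ["called_names", "threatened"],
    "healthcare": ["care_access", "care_treatment"],
}

n = len(respondents)
for name, items in categories.items():
    hits = sum(any(r[item] for item in items) for r in respondents)
    print(f"{name}: {100 * hits / n:.0f}%")

all_items = [item for items in categories.values() for item in items]
any_hits = sum(any(r[item] for item in all_items) for r in respondents)
print(f"any discrimination: {100 * any_hits / n:.0f}%")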

The researchers also broke down the data by age, gender, and sexual identity. In general, discrimination experiences were most common in early adulthood, with another increase in middle adulthood. Males were generally more likely to report having experienced anti-LGB discrimination and victimization in the last year. Healthcare discrimination peaked among individuals in their early 50s.

"The overall rates were quite high," Exten said. "This was particularly true in some subgroups of the community. Among 18-year-olds, one in five males experienced victimization in the past year. Experiencing victimization can be quite traumatic, and certainly acts as a stressor for these individuals. We hope these findings will be a call to action."

Exten said the findings -- recently published in the Journal of Homosexuality -- suggest the need for continued work in reducing discrimination.

"Reducing discrimination in the United States will require broad approaches within our communities, schools, workplaces, healthcare facilities, and families," Exten said. "It is critical that we continue to recognize that discrimination is happening and that we continue to work to develop more inclusive policies and spaces in our communities.

Credit: 
Penn State