
Researchers develop mathematical model predicting disease spread patterns

image: Multifractality of COVID-19 Spread

Image: 
NJIT

Early on in the COVID-19 pandemic, health officials seized on contact tracing as the most effective way to anticipate the virus's migration from the initial, densely populated hot spots and try to curb its spread. Months later, infections were nonetheless recorded in similar patterns in nearly every region of the country, both urban and rural.

A team of environmental engineers, struck by the unusual wealth of data published regularly by county health agencies throughout the pandemic, began researching new methods to describe what was happening on the ground in a way that does not require information on individuals' movements or contacts. Funding for their effort came through a National Science Foundation RAPID research grant (CBET 2028271).

In a paper published May 6 in the Proceedings of the National Academy of Sciences, they presented their results: a model that predicts where the disease will spread from an outbreak, in what patterns and how quickly.

"Our model should be helpful to policymakers because it predicts disease spread without getting into granular details, such as personal travel information, which can be tricky to obtain from a privacy standpoint and difficult to gather in terms of resources," explained Xiaolong Geng, a research assistant professor of environmental engineering at NJIT who built the model and is one of the paper's authors.

"We did not think a high level of intrusion would work in the United States so we sought an alternative way to map the spread," noted Gabriel Katul, the Theodore S. Coile Distinguished Professor of Hydrology and Micrometeorology at Duke University and a co-author.

Their numerical scheme mapped the classic SIR epidemic model (computations based on a division of the population into groups of susceptible, infectious and recovered people) onto the population agglomeration template. Their calculations closely approximated the multiphase COVID-19 epidemics recorded in each U.S. state.
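For readers unfamiliar with the SIR framework, the snippet below is a minimal, single-population sketch of those susceptible-infectious-recovered dynamics; the transmission rate, recovery rate, population size and time step are illustrative assumptions, and the sketch does not reproduce the paper's spatial, agglomeration-based scheme.

```python
# Minimal SIR sketch: susceptible-infectious-recovered dynamics for one
# well-mixed population. Parameter values are illustrative assumptions,
# not those fitted in the PNAS study.
import numpy as np

def simulate_sir(beta=0.3, gamma=0.1, n=1_000_000, i0=100, days=180, dt=0.1):
    s, i, r = n - i0, float(i0), 0.0
    history = []
    for step in range(int(days / dt)):
        new_infections = beta * s * i / n * dt   # S -> I
        new_recoveries = gamma * i * dt          # I -> R
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((step * dt, s, i, r))
    return np.array(history)

if __name__ == "__main__":
    traj = simulate_sir()
    peak_day = traj[np.argmax(traj[:, 2]), 0]
    print(f"Peak infections around day {peak_day:.0f}")
```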

"Ultimately, we'd like to come up with a predictive model that would let us determine what the likely outcome would be if a state took a (particular) action," Katul added.

By plotting on a map all of the data published weekly by county health agencies, the researchers also discovered that the disease spread out across the country in similar patterns, from the largest cities to the tiniest hamlets.

"High infection 'hotspots' interspersed within regions where infections remained sporadic were ubiquitous early in the outbreak, but the spatial signature of the infection evolved to affect most regions equally, albeit with distinct temporal patterns," the authors wrote.

"We wondered whether the spread - in space and time - was predetermined, or if there were spatial variability related to local policy," said Elie Bou-Zeid, a professor of Civil and Environmental Engineering at Princeton University and a co-author. "We found that the distribution tended to be proportional to the population."

When the alarm went out in March of last year, it was already too late, the authors noted.

"The virus could not be isolated. While the superhighways of contagion - air flights - were curtailed, the disease spread at the local level from city to city," said Michel Boufadel, professor of environmental engineering, director of the Center for Natural Resources at NJIT, and the corresponding author of the study. "Using the standard precautions of masking and distancing is not enough if there are a lot of people out there. There will still be superspreader events, resulting from 5-10 people gathering."

Their model, he said, allows them to separately examine the two key driving mechanisms of the pandemic: individuals taking preventive measures such as social distancing and wearing masks, and local and state policies on shutting down or reopening public spaces.

"In this case, states did not much learn from each other and that made it difficult to have a different trend," Bou-Zeid noted.

How do environmental engineers become pandemic experts?

"There are a lot of similarities between the spikes in the number cases and the bursts of energy found in turbulent mixing," said Boufadel.

Credit: 
New Jersey Institute of Technology

Why do some neurons degenerate and die in Alzheimer's disease, but not others?

image: Gladstone scientists uncover evidence that neurons are more sensitive to degeneration when they contain high levels of the protein apoE, which is associated with a higher risk of developing Alzheimer's disease. Shown here in the lab are Nicole Koutsodendris (left) and Yadong Huang (right), authors of a new study.

Image: 
Photo: Michael Short/Gladstone Institutes

SAN FRANCISCO, CA--May 6, 2021--In the brain of a person with Alzheimer's disease, neurons degenerate and die, slowly eliminating memories and cognitive skills. However, not all neurons are impacted equally. Some types of neurons in certain brain regions are more susceptible, and even among those subtypes--mysteriously--some perish and some do not.

Researchers at Gladstone Institutes have uncovered molecular clues that help explain what makes some neurons more susceptible than others in Alzheimer's disease. In a study published in the journal Nature Neuroscience, the scientists present evidence that neurons with high levels of the protein apolipoprotein E (apoE) are more sensitive to degeneration, and that this susceptibility is linked to apoE's regulation of immune-response molecules within neurons.

"This is the first time such a link has been established, which is quite exciting and could open new paths to developing treatments for Alzheimer's disease," says Gladstone Senior Investigator Yadong Huang, MD, PhD, senior author of the study.

Finding Clues by Comparing Individual Neurons

ApoE has long been a focus of Alzheimer's disease research because people carrying a gene that produces a particular form of apoE (called apoE4) have a higher risk of developing the disease. For this study, Huang and his team harnessed recent advancements in single-cell analysis to study the potential role of apoE in the variable susceptibility of neurons in Alzheimer's disease.

Specifically, they applied a technique known as single-nucleus RNA sequencing, which reveals the extent to which the different genes in any given cell are expressed and converted into RNA, the intermediate between genes and proteins. This approach allowed them to compare individual cells within a cell type, as well as across different cell types.

The researchers used this technique to study brain tissue from both healthy mice and mouse models of Alzheimer's disease. They also analyzed publicly available data for human brain tissue--some from healthy brains and some with varying degrees of Alzheimer's disease or mild cognitive impairment.

In both mice and humans, the analysis showed that neurons varied greatly in their extent of apoE expression, even within the same subtype. In addition, the amount of apoE expression was strongly linked to the expression of immune-response genes, which also varied significantly among neurons.

Digging deeper, the researchers examined the connection between apoE and the immune response genes. They found that, in both mouse and human neurons, high levels of apoE turned on genes in the major histocompatibility complex class-I (MHC-I). MHC-I is part of a pathway involved in eliminating excess synapses (connections between neurons) during brain development, and may also alert the immune system to damaged neurons and synapses in the adult brain.
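As a rough illustration of how such a per-cell link can be quantified, the sketch below correlates apoE expression with summed MHC-I gene expression across single-cell profiles. The expression matrix is synthetic, the gene list is a placeholder, and the use of a Spearman correlation is an assumption for demonstration, not the authors' pipeline.

```python
# Minimal sketch: correlate per-cell apoE expression with the summed
# expression of a set of MHC-I genes. The matrix and gene lists are
# hypothetical placeholders, not the study's data or pipeline.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
genes = ["Apoe", "H2-K1", "H2-D1", "B2m", "Gapdh"]
expr = rng.poisson(lam=2.0, size=(5000, len(genes))).astype(float)  # cells x genes

apoe = expr[:, genes.index("Apoe")]
mhc1 = expr[:, [genes.index(g) for g in ("H2-K1", "H2-D1", "B2m")]].sum(axis=1)

rho, pval = spearmanr(apoe, mhc1)
print(f"Spearman rho = {rho:.2f}, p = {pval:.2g}")
```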

"This was an intriguing clue that by controlling the expression of MHC-I in neurons, apoE might help determine which neuron should be recognized and cleared by the immune system," says Kelly Zalocusky, PhD, first author of the study and a scientist in Huang's lab at Gladstone.

A Process Gone Awry Leads to the Progressive Destruction of Neurons

The team found that, in brain tissue, the proportion of neurons expressing high levels of apoE and MHC-I genes fluctuates in a way that closely matches neurodegeneration and the progression of Alzheimer's disease.

They observed this relationship both in mouse models of Alzheimer's disease and in human brain tissue at different stages of neurodegeneration. Their work also revealed a causal link between the expression of MHC-I induced by apoE and the increase in tangled aggregates of a protein called tau, which is a hallmark of Alzheimer's disease and is a good predictor of neurodegeneration.

So, taken together, what do all these findings mean?

"We think that, normally, apoE turns on the expression of MHC-I in a small number of damaged neurons to produce 'eat me' signals that mark the neurons for destruction by the immune cells," says Huang, who is also director of the Center for Translational Advancement at Gladstone, as well as professor of neurology and pathology at UC San Francisco. "You don't want to keep damaged neurons around because they could malfunction and cause problems."

But in Alzheimer's disease, the scientists think this normal process for clearing out damaged neurons may become overactivated in a larger number of cells, leading to the progressive loss of neurons.

In other words, aging brains may face stressors that boost the amount of apoE in some neurons past healthy levels. The study shows that neurons carrying the form of apoE associated with increased risk of Alzheimer's disease, apoE4, are particularly susceptible to these stressors. This excess apoE turns on the expression of MHC-I, marking these neurons for destruction. Meanwhile, neurons with lower levels of apoE remain unharmed. Consequently, this process results in selective neurodegeneration within a given neuron type in Alzheimer's disease, guided by the level of apoE.

Further research could help clarify how apoE and MHC-I determine which neurons die and which survive in Alzheimer's disease.

"Additional studies could reveal potential new targets for treatments that may be able to disrupt this destructive process in Alzheimer's disease and potentially in other neurodegenerative disorders as well," says Huang.

Credit: 
Gladstone Institutes

Molecular analysis identifies key differences in lungs of cystic fibrosis patients

image: Healthy airways (left) show well-defined layers of ciliated cells (green) and basal stem cells (red). In airways affected by cystic fibrosis (right), the layers are disrupted, and scientists identified a transitioning cell type that combines properties of both stem cells and ciliated cells (red and green in the same cell).

Image: 
Image courtesy of Nature Medicine

LOS ANGELES (May 6, 2021) -- A team of researchers from UCLA, Cedars-Sinai and the Cystic Fibrosis Foundation has developed a first-of-its-kind molecular catalog of cells in healthy lungs and the lungs of people with cystic fibrosis.

The catalog, described today in the journal Nature Medicine, reveals new subtypes of cells and illustrates how the disease changes the cellular makeup of the airways. The findings could help scientists in their search for specific cell types that represent prime targets for genetic and cell therapies for cystic fibrosis.

"This new research has provided us with valuable insights into the cellular makeup of both healthy and diseased airways," said Dr. Brigitte Gomperts, a co-senior author of the study and a member of the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA. "If you can understand how things work in a state of health, it becomes easier to see what cellular and molecular changes occur in a disease state."

A progressive genetic disorder that affects more than 70,000 people worldwide, cystic fibrosis results from mutations to the CFTR gene. Cells that contain the defective protein encoded by the gene produce unusually thick and sticky mucus that builds up in the lungs and other organs. This mucus clogs the airways, trapping germs and bacteria that can cause life-threatening infections and irreversible lung damage.

While several new therapies can partially restore the function of these damaged proteins, easing symptoms of the disease, slowing its progression and improving quality of life, these treatments only work in people with some of the many CFTR mutations that can cause cystic fibrosis.

"We have made tremendous progress in the development of treatments for the underlying cause of cystic fibrosis, but many people cannot benefit from these medicines," said John (Jed) Mahoney, a senior co-author of the paper and head of the stem cell biology team at the Cystic Fibrosis Foundation Therapeutics Lab. "This research provides critical insight into how the disease alters the cellular makeup of the airways, which will enable scientists to better target the next generation of transformative therapies for all people with cystic fibrosis."

The research team was assembled through the Cystic Fibrosis Foundation's Epithelial Stem Cell Consortium, which brings together scientists from across the U.S. in an ongoing effort to identify different subtypes of stem cells in the lungs and study their function. Because these cells can self-renew and produce a continuous supply of specialized cells that maintain, repair and regenerate the airways, therapies aimed at correcting CFTR mutations in stem cells hold the best hope for a one-time, universal treatment for the disease.

"This catalog represents an important step toward identifying those rare stem cells that regenerate the airways over a person's lifetime," said Gomperts, who is also a professor of pediatrics and pulmonary medicine at the David Geffen School of Medicine at UCLA. "We suspect these cells could be targets for future therapies that may provide a long-term correction to the mutation that causes this disease."

'Un-blending' a smoothie: How the catalog was created

For the study, Gomperts, Mahoney and their co-senior authors, Barry Stripp of Cedars-Sinai and Kathrin Plath of UCLA, compared tissue samples taken from lungs removed from 19 people with cystic fibrosis, all of whom had received lung transplants, with samples taken from healthy lungs donated by 19 individuals who had died from other causes.

Researchers at the three institutions employed similar but distinct methods to break these tissues down and examine them using a technology called single-cell RNA sequencing, which allowed them to analyze thousands of cells in tandem and classify them into subtypes based on their gene expression patterns - that is, which genes are turned on and off.

"The process is analogous to taking a smoothie and 'un-blending' it to discover all the ingredients it contains and measure how much of each ingredient was used," said Plath, a professor of biological chemistry and a member of the UCLA Broad Stem Cell Research Center.

Using a novel computer-based bioinformatics approach to compare the gene expression patterns of the various cells, the team was able to create a catalog of the cell types and subtypes present in healthy airways and those affected by cystic fibrosis, including some previously unknown subtypes that illuminate how the disease alters the cellular landscape of the airways.
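As a rough sketch of how cells can be grouped by expression pattern, the example below normalizes a cell-by-gene count matrix, reduces it with PCA and clusters the cells with k-means. The synthetic matrix, the number of principal components and the choice of k-means are all illustrative assumptions; the study used its own bioinformatics approach.

```python
# Minimal sketch of expression-based cell grouping: log-normalize a
# cell-by-gene count matrix, reduce dimensions with PCA, then cluster.
# Synthetic data and k=8 clusters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
counts = rng.poisson(lam=1.0, size=(3000, 2000)).astype(float)  # cells x genes

# Normalize each cell to the same total count, then log-transform.
norm = counts / counts.sum(axis=1, keepdims=True) * 1e4
logged = np.log1p(norm)

embedding = PCA(n_components=30, random_state=0).fit_transform(logged)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embedding)

print("cells per cluster:", np.bincount(labels))
```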

"We were surprised to find that the airways of people with cystic fibrosis showed differences in the types and proportions of basal cells, a cell category that includes stem cells responsible for repairing and regenerating upper airway tissue, compared with airways of people without this disease," said Stripp, a professor of medicine and director of the lung stem cell program at the Cedars-Sinai Board of Governors Regenerative Medicine Institute.

Specifically, the researchers discovered that among the basal cell populations, there was a relative overabundance of cells that appear to be transitioning from basal stem cells into specialized ciliated cells, which use their finger-like projections to clear mucus out of the lungs.

"We suspect that changes the basal cells undergo to replenish ciliated cells represent an unsuccessful attempt to clear mucus that typically accumulates in airways of patients with cystic fibrosis," Stripp said.

The increase in transitioning cells provides further evidence suggesting that expression of the mutated CFTR gene disrupts normal function of the airways, leading to changes in the way that basal stem cells produce specialized cells.

Moving forward, the researchers will continue to study the new data for insights into how cystic fibrosis alters the cellular landscape of the airways - including why this unsuccessful transition occurs - and to identify the long-lived basal stem cells that would be the ideal target for a CFTR gene correction.

Credit: 
Cedars-Sinai Medical Center

Penn study reveals how opioid supply shortages shape emergency department prescribing behaviors

PHILADELPHIA-- When evaluating the opioid crisis, research reveals that external factors - such as the volume of pre-filled syringes, or a default number of opioid tablets that could easily be ordered at discharge for the patient - can shift prescribing and compel emergency department (ED) physicians to administer or prescribe greater quantities of opioids. A new study published in the Journal of Medical Toxicology reveals that opioid prescribing can also be reduced by external factors, such as a supply shortage.

Led by the Perelman School of Medicine at the University of Pennsylvania, researchers evaluated pharmacy data from the electronic medical records (EMR) collected before, during, and after a period of parenteral opioid shortage across two large urban academic emergency departments - the Hospital of the University of Pennsylvania and University Hospital in Newark, New Jersey. In this case, the shortage was of parenteral morphine and hydromorphone, a result of supply chain disruptions caused by Hurricane Maria in 2017.

Researchers found that the percentage of patients who received an opioid among all ED visits during the 2018 shortage fell significantly from 11.5% pre-shortage to 8.5% during, and did not return to baseline once the shortage abated. Further, the total number of oral or IV opioid doses administered during the shortage also decreased and remained lower than pre-shortage levels once supply chains were restored. However, the study also found that while fewer opioid doses were administered to fewer patients, there was no change in net Morphine Milligram Equivalent (MME) per patient receiving opioids.
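For readers unfamiliar with the MME metric, the sketch below shows how individual doses can be rolled up into morphine milligram equivalents per patient; the drug list, conversion factors and doses are illustrative placeholders, not the study's methods or data.

```python
# Minimal sketch: sum morphine milligram equivalents (MME) per patient.
# Conversion factors and doses below are illustrative placeholders only;
# real analyses use standardized, route-specific conversion tables.
MME_PER_MG = {            # hypothetical example factors (oral-equivalent)
    "morphine_po": 1.0,
    "oxycodone_po": 1.5,
    "hydromorphone_po": 4.0,
}

def total_mme(doses_mg):
    """doses_mg: list of (drug, milligrams) tuples for one ED visit."""
    return sum(MME_PER_MG[drug] * mg for drug, mg in doses_mg)

visit = [("morphine_po", 15), ("oxycodone_po", 10)]
print(f"Total MME for visit: {total_mme(visit):.1f}")
```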

"Although the percentage of patients who received non-opioid analgesics did not rise during the shortage, it was significantly higher in the post-shortage period," said lead investigator Amanda Deutsch, MD, an Emergency Medicine resident at Penn. "This suggests that a subset of patients were transitioned to non-opioids, and this prescribing practice was a sustained change after the resolution of the shortage."

The study also found that because the shortages were specifically of parenteral morphine and hydromorphone, those drugs were replaced by parenteral fentanyl, but the overall total use of opioids administered, in MME, remained the same. The use of oral morphine also appeared to increase during the shortage period. These findings suggest that the shortage may have led to a shift from IV opioid use to oral opioid formulations.

Researchers noted, however, that the shift in prescribing patterns during the shortage offers an opportunity for more research into how external factors can influence ED prescribing practices. "Changing clinician prescribing behavior is challenging," said author Jeanmarie Perrone, MD, a professor of Emergency Medicine at Penn. "This study shows encouraging data to support that there are environmental modifications that can nudge providers toward more judicious opioid use."

Credit: 
University of Pennsylvania School of Medicine

Scrap for cash before coins

image: Map showing the spread of weighing technology in Bronze Age Europe (c. 2300-800 BC)

Image: 
N Ialongo

How did people living in the Bronze Age manage their finances before money became widespread? Researchers from the Universities of Göttingen and Rome have discovered that bronze scrap found in hoards in Europe circulated as a currency. These pieces of scrap - which might include swords, axes, and jewellery broken into pieces - were used as cash in the late Bronze Age (1350-800 BC), and in fact complied with a weight system used across Europe. This research suggests that something very similar to our 'global market' evolved across Western Eurasia from the everyday use of scrap for cash by ordinary people some 1000 years before the beginning of classical civilizations. The results were published in the Journal of Archaeological Science.

This study analysed around 2,500 metal objects and fragments from among the thousands of hoards of fragments from the late Bronze Age that, over time, have been unearthed in Central Europe and Italy. The researchers used a statistical technique that can determine whether a sample of measurements follows an underlying system - for instance, whether the analysed objects are multiples of a weight unit. The analysis yielded statistically significant results for the fragments and scraps, indicating that these metal objects were intentionally fragmented to meet predetermined weights. The analyses confirm that the weight unit that regulated the mass of metals was the same unit represented in European balance weights of the same period. The researchers conclude that these scraps were being used as money, and that the fragmentation of bronze objects was aimed at obtaining 'small change' or cash.
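A standard way to test whether a set of masses clusters around multiples of a shared unit is Kendall's cosine quantogram, which scores each candidate quantum by how closely the measurements fall on its multiples. The sketch below illustrates the idea on synthetic masses with an assumed unit of about 9.3 g; it is a generic illustration, not the authors' implementation.

```python
# Cosine quantogram (Kendall 1974) sketch: phi(q) peaks when the masses
# cluster near integer multiples of a candidate unit q.
# The synthetic masses and the ~9.3 g unit are illustrative assumptions.
import numpy as np

def cosine_quantogram(masses, quanta):
    masses = np.asarray(masses, dtype=float)
    phis = []
    for q in quanta:
        eps = masses % q                      # deviation from a multiple of q
        phi = np.sqrt(2.0 / len(masses)) * np.cos(2 * np.pi * eps / q).sum()
        phis.append(phi)
    return np.array(phis)

rng = np.random.default_rng(2)
unit = 9.3                                     # hypothetical weight unit in grams
masses = unit * rng.integers(1, 20, size=300) + rng.normal(0, 0.3, size=300)

quanta = np.arange(5.0, 15.0, 0.05)
scores = cosine_quantogram(masses, quanta)
print(f"best candidate unit ~ {quanta[np.argmax(scores)]:.2f} g")
```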

Trade in prehistory is commonly imagined as a primitive system based on barter and on the exchange of gifts, with money appearing as some kind of evolutionary milestone somewhere during the making of Western state-societies. The study challenges this notion by introducing the concept that money was a bottom-up convention rather than a top-down regulation. Bronze Age money in Western Eurasia emerges in a socio-political context in which public institutions either did not exist (as was the case in Europe) or were uninterested in enforcing any kind of monetary policy (as in Mesopotamia). In fact, money was widespread and used on a daily basis at all levels of the population.

The spread of the use of metallic scraps for cash happened against the background of the formation of a global market in Western Eurasia. "There was nothing 'primitive' about pre-coinage money, as money before coins performed exactly the same functions that modern money does now," explains Dr Nicola Ialongo at the University of Göttingen's Institute for Prehistory and Early History. Ialongo adds, "Using these metallic scraps was not an unexpected development, as it is likely that perishable goods were used as currency long before the discovery of metallurgy, but the real turning point was the invention of weighing technology in the Near East around 3000 BC. This provided, for the first time in human history, the objective means to quantify the economic value of things and services, or, in other words, to assign them a price."

Credit: 
University of Göttingen

Small apoptotic bodies: Nirvana, birth and death

Scientists from Nanjing University and University of Macau have discovered nano-scaled apoptotic bodies (ABs) as a new brain-targeting drug carrier, bringing new promise for the treatment of Parkinson's disease as well as other brain diseases.

The blood-brain barrier (BBB) is the most restrictive barrier in the body, keeping most biomolecules and drugs out of the brain and complicating the treatment of cerebrovascular diseases. As populations age, the treatment of brain diseases faces tough challenges, and efficient brain drug delivery strategies are urgently needed.

Apoptotic bodies (ABs), secreted from dying cells, were discovered half a century ago, but their roles have been underestimated. ABs are rarely considered for drug delivery because of their uneven size distribution (varying from hundreds to thousands of nanometres) and complex composition (especially large chromosomal DNA fragments and various kinds of cytoplasmic proteins). However, with their natural bioactive lipids and abundant proteins, ABs should be more functional than a mere in vivo recycling unit.

In this paper, the team separated small apoptotic bodies (sABs) and revealed their potential advantages as a brain-targeting delivery system. First, compared with micron-sized apoptotic bodies, sABs have a more uniform size, with few DNA fragments and abundant RNAs. Second, sABs are stable in serum and have a long circulating time in vivo, as they are not easily engulfed by phagocytes. Third, the drug loading efficiency of sABs is high, and the loading process is productive and controllable. Additionally, sABs are vesicles shed from the cell membrane, so molecules in the cell membrane are preserved on the vesicles, providing a way to incorporate targeting ligands. With these advantages, sABs are a promising new candidate for drug delivery.

The paper, 'Delivering Antisense Oligonucleotides across the Blood-Brain Barrier by Tumor Cell-Derived Small Apoptotic Bodies', was recently published in Advanced Science. Professor Lei Dong of Nanjing University, the leading author of this work, believes that sABs could overcome the bottlenecks of exosome-based therapeutics and become a new class of drug delivery carriers. The team successfully loaded a TNF-α antisense oligonucleotide (ASO) into sABs secreted by melanoma cells with high brain-metastatic potential. The drug-loaded sABs could penetrate the BBB, delivering the anti-inflammatory ASO to microglia and showing remarkable efficacy in alleviating the development of Parkinson's disease in mice.

With the outstanding encapsulation and delivery efficiency to the microglia within the brain, the cancer cell-derived sABs might revolutionize the diagnosis, prevention and treatment of many brain diseases.

Credit: 
Nanjing University School of Life Sciences

WHO 'needs to act' on suicides caused by pesticides

image: A Sri Lankan worker using pesticides without any protection.

Image: 
Photo by Knut-Erik Helle

Scientists are calling for more stringent pesticide bans to lower deaths caused by deliberately ingesting toxic agricultural chemicals, which account for one fifth of global suicides.

An NHMRC-funded study, in which the University of South Australia analysed patient plasma pesticide concentrations, has identified discrepancies in World Health Organization (WHO) classifications of pesticide hazards, which are based on animal doses rather than human data.

As a result, up to five potentially lethal pesticides are still being used in developing countries in the Asia Pacific, where self-poisonings account for up to two thirds of suicides.

In a paper published in The Lancet Global Health, UniSA Research Chair of Therapeutics and Pharmaceutical Science, Professor Michael Roberts, says that while deaths from pesticide poisoning have fallen dramatically in recent years, it remains a common cause of suicide globally.

Each year, more than 150,000 people die from deliberately ingesting pesticides, although this number was much higher - averaging 260,000 a year - before specific chemicals were banned.

Prof Roberts was a member of a team of international researchers who used Sri Lanka as a case study, in which they compared the number of deaths from pesticides in a rural area between 2002 and 2019.

Of 35,000 people (mainly men aged in their late 20s) hospitalised for pesticide self-poisoning, 6.6 per cent died.

The three most toxic agents were paraquat, dimethoate and fenthion, which were all banned by 2011, reflected in much lower fatalities after that date. Since then, five other common pesticides still allowed by the WHO were disproportionately responsible for 24 per cent of fatalities, the team found.

"If human data for acute toxicity of pesticides was used for hazard classification and regulation worldwide, it would prevent many deaths and have a substantial impact on global suicide rates," Prof Roberts says. "Australia is fortunate in that its regulations properly balance good pesticide use versus public health concern."

He predicts that fatal poisonings across Asia could fall by more than 50 per cent, and total suicides in the region could fall by at least a third.

"Instead, the WHO classifications are largely based on animal median lethal doses. This method ignores the differences between species and how they respond to treatment, and the formulations used."

Fatalities from pesticide poisonings also differed greatly from those predicted by the WHO classification.

The researchers have called for the WHO to eliminate all pesticides with fatality rates above five per cent.

"Setting a global benchmark and relying on human data is critical to reducing suicides, particularly in countries such as Sri Lanka, South Korea, Bangladesh and China, where agriculture is still a dominant industry."

"We also found evidence that some banned agents were still in circulation due to illegal importation, so there is still a need to tighten existing regulations," Prof Roberts says.

Credit: 
University of South Australia

Greater access to birth control leads to higher graduation rates

When access to free and low-cost birth control goes up, the percentage of young women who leave high school before graduating goes down by double-digits, according to a new CU Boulder-led study published May 5 in the journal Science Advances.

The study, which followed more than 170,000 women for up to seven years, provides some of the strongest evidence yet that access to contraception yields long-term socioeconomic benefits for women. It comes at a time when public funding for birth control is undergoing heated debate, and some states are considering banning certain forms.

"One of the foundational claims among people who support greater access to contraception is that it improves women's ability to complete their education and, in turn, improves their lives," said lead author and Assistant Professor of Sociology Amanda Stevenson, noting that those claims have been based largely on anecdotal evidence. "This study is the first to provide rigorous, quantitative, contemporary evidence that it's true."

The study centers on the Colorado Family Planning Initiative (CFPI), a 2009 program that widely expanded access to more forms of contraception in the state.

Funded by a $27 million grant from a private donor, the program augmented funding for clinics supported by Title X, a federal grant program formed in the 1970s to help provide low-income women with reproductive services.

The grant enabled Title X clinics to provide not only inexpensive forms of birth control, like condoms and oral contraceptives, but also more costly long-acting reversible contraception (LARC), including intrauterine devices (IUDs) and implants.

Over the span of the program from 2009 to 2015, birth and abortion rates in Colorado both declined by half among teens aged 15 to 19 and by 20% among women aged 20 to 24, according to the Colorado Department of Public Health and Environment.

But how else did this greater access to contraception benefit women?

With help from a five-year, $2.5 million grant from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, Stevenson and collaborators at CU Denver sought to find out.

"This was the biggest policy experiment ever conducted in contraception in the U.S. and its impacts had not been fully assessed," said Stevenson.

Using anonymized data and detailed surveys from the U.S. Census, the team examined the educational attainment of 5,050 Colorado women and compared those whose high school career occurred before and after the policy change.

To parse out what differences in their lives were due to the family planning initiative vs. other factors, the researchers also looked at the same changes in the outcomes of women of similar age in 17 other states.

They found that overall high school graduation rates in Colorado increased from 88% before CFPI was implemented to 92% after, and about half of that gain was due to the program.

Improvements in graduation rates among Hispanic women, specifically, were even greater, with graduation rates rising from 77% to 87%, about 5 percentage points of that increase attributed to CFPI.

In all, the program decreased the percentage of young women in Colorado who left school before graduating by 14%.

Put another way, an additional 3,800 Colorado women born between 1994 and 1996 received a high school diploma by age 20 to 22 as a result of CFPI.

"Supporting access to contraception does not eliminate disparities in high school graduation, but we find that it can contribute significantly to narrowing them," said Stevenson, who believes the Colorado results translate to other states.

Co-author Sara Yeatman, an associate professor of health and behavioral sciences at CU Denver, said that while the program did help some young women avoid unwanted pregnancy and abortion, the data suggest that is not the only mechanism by which increased access to contraception promotes higher graduation rates.

"We think there is also an indirect effect," she said, noting that previous studies dating back to the introduction of the birth control pill in the 1960s suggest contraception access is empowering. "The confidence that you can control your own fertility can contribute to a young woman investing in her education and in her future."

The research team is now looking to see whether increased access to birth control may influence women's futures in other ways, such as boosting their chance of going to or graduating from college, improving their income long-term and reducing their chances of living in poverty.

To Stevenson, these are stronger, more ethically sound reasons for supporting public funding of contraception.

The researchers hope their findings inform the conversation as lawmakers around the country consider proposals to boost Title X funding, lift restrictions requiring that teens get parental consent for birth control and increase access in other ways.

"When these arguments are based solely on the effect of contraception on fertility, it implies that poor women shouldn't have children," Stevenson said. "Our findings suggest that better access to contraception improves women's lives."

Credit: 
University of Colorado at Boulder

UBCO cardiovascular researcher urges women to listen to their hearts

image: Atrial fibrillation is the most commonly diagnosed arrhythmia in the world. Despite that, many women do not understand the pre-diagnosis symptoms and tend to ignore them.

Image: 
UBCO

A UBC Okanagan researcher is urging people to learn and then heed the symptoms of Atrial Fibrillation (AF). Especially women.

Dr. Ryan Wilson, a post-doctoral fellow in the School of Nursing, says AF is the most commonly diagnosed arrhythmia (irregular heartbeat) in the world. Despite that, he says many people do not understand the pre-diagnosis symptoms and tend to ignore them. In fact, 77 per cent of the women in his most recent study had experienced symptoms for more than a year before receiving a diagnosis.

While working in a hospital emergency department (ED), Dr. Wilson noted that many patients came in with AF symptoms that included, but were not limited to, shortness of breath, feeling of butterflies (fluttering) in the chest, dizziness or general fatigue. Many women also experienced gastrointestinal distress or diarrhea. When diagnosed they admitted complete surprise -- even though they had been experiencing the symptoms for a considerable time.

One in four strokes is AF-related, he says. However, when people with AF suffer a stroke, their outcomes are generally worse than those of people whose strokes have other causes.

"I would see so many patients in the ED who had just suffered a stroke but they had never been diagnosed with AF. I wanted to get a sense of their experience before diagnoses: what did they do before they were diagnosed, how they made their decisions, how they perceived their symptoms and ultimately, how they responded."

Even though his study group was small, what he learned was distressing.

"Ten women, in comparison to only three men, experienced symptoms greater than one year," says Dr. Wilson. "What's really alarming is they also had more significant severity and frequency of their symptoms than men--yet they experience the longest amount of time between onset of symptoms and diagnosis."

What really troubles Dr. Wilson are the reasons a diagnosis is delayed in women.

Many doubted their symptoms were serious, he says. They discounted them because they were tired, stressed, thought they related to other existing medical conditions, or even to something they had eaten. Most women also had caregiving responsibilities that took precedence over their own health, and they chose to self-manage their symptoms by sitting, lying down, or breathing deeply until the symptoms stopped.

What's more alarming, however, is that if women mentioned their symptoms to their family doctor, many said they simply felt dismissed.

"There was a lot more anger among several of the women because they had been told nothing was wrong by their health-care provider," says Dr. Wilson. "To be repeatedly told there is nothing wrong, and then later find yourself in the emergency room with AF, was incredibly frustrating for these women. More needs to be done to support gender-sensitive ways to promote an early diagnosis regardless of the gender."

Dr. Wilson reports that none of the men in his study were upset about their interactions with their health-care providers, mostly because they were immediately sent for diagnostic tests.

"But a delay in diagnosis is not just in this study," he cautions. "Women generally wait longer than men for diagnosis with many ailments. Sadly, with AF and other critical illnesses, the longer a person waits, the shorter time there is to receive treatments. Statistically, women end up with a worse quality of life."

Dr. Wilson, who is currently working on specific strategies to help people manage AF, admits the condition is often hard to diagnose because some of the symptoms are vague. Ideally, he would like people to be as knowledgeable about AF as they are about the symptoms and risks of stroke and heart attacks. As the population is living longer, the number of people with AF continues to increase. In fact, about 15 percent of people over the age of 80 will be diagnosed with the condition.

"People know what to do for other cardiovascular diseases, it's not the same with AF," he adds. "And while the timeline may not be as essential as a stroke for diagnosis and care, there is still a substantial risk of life-limiting effects such as stroke, heart failure and dementia. Reason enough, I hope, for people to seek out that diagnosis."

Credit: 
University of British Columbia Okanagan campus

Thin, large-area device converts infrared light into images

video: The infrared imager provides a clear picture of blood vessels in a person's hand and sees through opaque objects like silicon wafers.

Image: 
Ning Li

Seeing through smog and fog. Mapping out a person's blood vessels while monitoring heart rate at the same time--without touching the person's skin. Seeing through silicon wafers to inspect the quality and composition of electronic boards. These are just some of the capabilities of a new infrared imager developed by a team of researchers led by electrical engineers at the University of California San Diego.

The imager detects a part of the infrared spectrum called shortwave infrared light (wavelengths from 1000 to 1400 nanometers), which is right outside of the visible spectrum (400 to 700 nanometers). Shortwave infrared imaging is not to be confused with thermal imaging, which detects much longer infrared wavelengths given off by the body.

The imager works by shining shortwave infrared light on an object or area of interest, and then converting the low energy infrared light that's reflected back to the device into shorter, higher-energy wavelengths that the human eye can see.
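To put rough numbers on that conversion, photon energy scales inversely with wavelength (E = hc/λ); the 1300 nm and 530 nm wavelengths below are representative values chosen for illustration, not figures from the paper.

\[
E_{\mathrm{IR}} = \frac{hc}{\lambda_{\mathrm{IR}}} \approx \frac{1240\ \mathrm{eV\,nm}}{1300\ \mathrm{nm}} \approx 0.95\ \mathrm{eV},
\qquad
E_{\mathrm{vis}} \approx \frac{1240\ \mathrm{eV\,nm}}{530\ \mathrm{nm}} \approx 2.3\ \mathrm{eV}.
\]

The visible photon carries more than twice the energy of the infrared one, and that difference has to come from the device itself - in this design, from the electric current driving the display layer.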

"It makes invisible light visible," said Tina Ng, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering.

While infrared imaging technology has been around for decades, most systems are expensive, bulky and complex, often requiring a separate camera and display. They are also typically made using inorganic semiconductors, which are costly, rigid and consist of toxic elements such as arsenic and lead.

The infrared imager that Ng's team developed overcomes these issues. It combines the sensors and the display into one thin device, making it compact and simple. It is built using organic semiconductors, so it is low cost, flexible and safe to use in biomedical applications. It also provides better image resolution than some of its inorganic counterparts.

The new imager, published recently in Advanced Functional Materials, offers additional advantages. It sees more of the shortwave infrared spectrum, from 1000 to 1400 nanometers--existing similar systems often only see below 1200 nanometers. It also has one of the largest display sizes of infrared imagers to date: 2 square centimeters in area. And because the imager is fabricated using thin film processes, it is easy and inexpensive to scale up to make even larger displays.

Energizing infrared photons to visible photons

The imager is made up of multiple semiconducting layers, each hundreds of nanometers thin, stacked on top of one another. Three of these layers, each made of a different organic polymer, are the imager's key players: a photodetector layer, an organic light-emitting diode (OLED) display layer, and an electron-blocking layer in between.

The photodetector layer absorbs shortwave infrared light (low energy photons) and then generates an electric current. This current flows to the OLED display layer, where it gets converted into a visible image (high energy photons). An intermediate layer, called the electron-blocking layer, keeps the OLED display layer from losing any current. This is what enables the device to produce a clearer image.

This process of converting low energy photons to higher energy photons is known as upconversion. What's special here is that the upconversion process is electronic. "The advantage of this is it allows direct infrared-to-visible conversion in one thin and compact system," said first author Ning Li, a postdoctoral researcher in Ng's lab. "In a typical IR imaging system where upconversion is not electronic, you need a detector array to collect data, a computer to process that data, and a separate screen to display that data. This is why most existing systems are bulky and expensive."

Another special feature is that the imager is efficient at providing both optical and electronic readouts. "This makes it multifunctional," said Li. For example, when the researchers shined infrared light on the back of a subject's hand, the imager provided a picture of the subject's blood vessels while recording the subject's heart rate.

The researchers also used their infrared imager to see through smog and a silicon wafer. In one demonstration, they placed a photomask patterned with "EXIT" in a small chamber filled with smog. In another, they placed a photomask patterned with "UCSD" behind a silicon wafer. Infrared light penetrates through both smog and silicon, making it possible for the imager to see the letters in these demonstrations. This would be useful for applications such as helping autonomous cars see in bad weather and inspecting silicon chips for defects.

The researchers are now working on improving the imager's efficiency.

Credit: 
University of California - San Diego

New bonobo genome fine-tunes great ape evolution studies

image: Mhudiblu, a female bonobo, holds her daughter Akema. Mhudiblu's DNA was sequenced to help construct a new, high-quality bonobo genome assembly for great ape and other hominid evolution and genetic research

Image: 
Claudia Philipp/Wuppertal Zoo Germany

Chimpanzees and bonobos diverged comparatively recently in great ape evolutionary history. They split into different species about 1.7 million years ago. Some of the distinctions between chimpanzee (Pan troglodytes) and bonobo (Pan paniscus) lineages have been made clearer by a recent achievement in hominid genomics.

A new bonobo genome assembly has been constructed with a multiplatform approach and without relying on reference genomes. According to the researchers on this project, more than 98% of the genes are now completely annotated and 99% of the gaps are closed.

The high quality of this assembly is allowing scientists to more accurately compare the bonobo genome to that of other great apes - the gorilla, orangutan, chimpanzee - as well as to the modern human. All these species, as well as extinct, ancient, human-like beings, are referred to as hominids.

Because chimpanzees and bonobos are also the closest living species to modern humans, comparing higher-quality genomes could help uncover genetic changes that set the human species apart.

In a May 5 Nature paper, researchers explain how they developed and analyzed the new bonobo assembly, and what juxtaposing it to other great ape genomes is revealing.

The multi-institutional project was led by Yafei Mao, of the Department of Genome Sciences at the University of Washington School of Medicine in Seattle, and Claudia R. Catacchio, of the Department of Biology at the University of Bari, Italy. The senior scientists were Evan Eichler, professor of genome sciences at the UW School of Medicine, and Mario Ventura of the University of Bari. Eichler is also a Howard Hughes Medical Institute investigator.

By comparing the bonobo genome to that of other great apes, the researchers found more than 5,571 structural variants that distinguished the bonobo and chimpanzee lineages.

The researchers explained in the paper, "We focused on genes that have been lost, changed in structure, or expanded in the last few million years of bonobo evolution."

The great ape genome comparisons are also enabling researchers to gain new insights on what happened to the various ape genomes during and after the divergence or splitting apart into different species from a common ancestor.

They were particularly interested in what is called incomplete lineage sorting. This is the less-than-perfect passing along of alleles into the separating populations as species diverge, as well as the loss of alleles or their genetic drift. Analyses of incomplete lineage sorting can help clarify gene evolution and the genetic relationships among present-day hominids.

The higher-quality bonobo genome assembly enabled the researchers to generate a higher resolution map comparing incomplete lineage sorting in hominids. They identified regions that are inconsistent with the species tree. In addition, they estimate that 2.52% of the human genome is more closely related to the bonobo genome than the chimpanzee genome, and 2.55% of the human genome is more closely related to the chimpanzee genome than the bonobo genome.

The total proportion of the human genome affected by this incomplete lineage sorting (2.52% + 2.55% = 5.07%) is almost double earlier estimates (3.1%).

"We predict a greater fraction of the human genome is genetically closer to chimpanzees and bonobos compared to previous studies," the researchers note.

The researchers took their incomplete lineage sorting analysis back 15 million years to include genome data from orangutan and gorilla. This increased the incomplete lineage sorting estimates for the hominid genomes to more than 36.5%, which is only slightly more than earlier predictions.

Surprisingly, more than a quarter of these regions are distributed non-randomly, have elevated rates of amino acid replacement, and are enriched for particular genes with related functions such as immunity. This suggests that incomplete lineage sorting might work to increase diversity for specific regions.

The new bonobo genome assembly is named for the female great ape whose DNA was sequenced, Mhudiblu, a current resident of the Wuppertal Zoo in Germany. The researchers estimate that sequence accuracy of the new assembly is about 99.97% to 99.99%, and closes about 99.5% of the 108,390 gaps in the previous bonobo assembly.

The bonobo is one of the last great ape genomes to be sequenced with more advanced long-read genome sequence technologies, the researchers noted.

"Its sequence will facilitate more systematic comparisons between human, chimpanzee, gorilla and orangutan without the limitations of technological differences in sequencing and assembly of the original reference," according to the researchers.

Credit: 
University of Washington School of Medicine/UW Medicine

Revealing the impact of 70 years of pesticide use on European soils

Pesticides have been used in European agriculture for more than 70 years, so monitoring their presence, levels and effects on the quality and services of European soils is needed to establish protocols for the use and approval of new plant protection products.

In an attempt to deal with this issue, a team led by Prof. Dr. Violette Geissen from Wageningen University (Netherlands) analysed 340 soil samples from three European countries to compare the content and distribution of pesticide cocktails in soils under organic farming practices and soils under conventional practices. This study was a combined effort of three EC-funded projects addressing soil quality: RECARE (http://www.recare-project.eu/), iSQAPER (http://www.isqaper-project.eu) and DIVERFARMING.

The soil samples were obtained from two case study sites in Spain, one in Portugal, and one in the Netherlands, which together covered four of the main European crops: horticultural products and oranges (in Spain), grapes (in Portugal), and potatoes (in the Netherlands). Chemical analyses revealed that the total content of pesticides in conventional soils was between 70% and 90% higher than in organic soils, although the latter also contained pesticide residues.

In 70% of the conventional soils, mixtures of up to 16 residues were detected per sample, whereas a maximum of five different residues was found in the organic soils. The residues found most frequently, and in the greatest quantities, were the herbicides glyphosate and pendimethalin. The samples were collected between 2015 and 2018; as no major changes in management have occurred since then, they are indicative of the current situation, and likely of other EU agricultural areas.

Now that the presence of these pesticide cocktails in European agricultural soils has been revealed, it becomes necessary to better understand the effects that these complex and cumulative mixtures have on soil health, an area in which there is currently a major lack of information.

The research team emphasises the need to define and introduce regulations and reference points for pesticide cocktails in soils in order to protect soil biodiversity and the quality of crop production. Additionally, given the persistence of residues in organic soils, it is necessary to reconsider the time required for the transition from conventional to organic agriculture, making it dependent on the mix of residues in the soil at the starting point and the time they take to degrade.

Credit: 
University of Córdoba

Tracking down the tiniest of forces: How T cells detect invaders

image: The T cell (yellow) touches the antigen-presenting cell. Tiny forces are applied to the surface; eventually the connection breaks.

Image: 
TU Wien / MedUni Wien

T cells play a central role in our immune system: by means of their so-called T-cell receptors (TCRs), they detect dangerous invaders or cancer cells in the body and then trigger an immune reaction. On a molecular level, this recognition process is still not sufficiently understood.

Intriguing observations have now been made by an interdisciplinary Viennese team of immunologists, biochemists and biophysicists. In a joint project funded by the Vienna Science and Technology Fund and the FWF, they investigated which mechanical processes take place when an antigen is recognized: as T cells move, their TCRs pull on the antigen with a tiny force - about five pico-newtons (5 x 10^-12, or 0.000000000005 newtons). This is not only sufficient to break the bonds between the TCRs and the antigen, it also helps T cells find out whether they are indeed interacting with the antigen they are looking for. These results have now been published in the scientific journal "Nature Communications".

Tailor-made for a specific antigen

"Each T cell recognizes one specific antigen particularly well," explains Johannes Huppa, biochemist and immunology professor at MedUni Vienna.

"To do so, it features around 100,000 TCRs of the same kind on its surface."

When viruses attack our body, infected cells present various fragments of viral proteins on their surface. T cells examine such cells for the presence of such antigens. "This works according to the lock-and-key principle," explains Johannes Huppa. "For each antigen, the body must produce T cells with matching TCRs. Put simply, each T-cell recognizes only one specific antigen to then subsequently trigger an immune response."

That particular antigen, or more precisely, any antigenic protein fragment presented that exactly matches the T cell's TCR, can form a somewhat stable bond. The question that needs to be answered by the T cell is: how stable is the binding between antigen and receptor?

Like a finger on a sticky surface

"Let's say we wish to find out whether a surface is sticky - we then test how stable the bond is between the surface and our finger," says Gerhard Schütz, Professor of Biophysics at TU Wien. "We touch the surface and pull the finger away until it comes off. That's a good strategy because this pull-away behavior quickly and easily provides us information about the attractive force between the finger and the surface."

In principle, T-cells do exactly the same. T cells are not static, they deform continuously and their cell membrane is in constant motion. When a TCR binds to an antigen, the cell exerts a steadily increasing pulling force until the binding eventually breaks. This can provide information about whether it is the antigen that the cell is looking for.

A nano-spring for force measurement

"This process can actually be measured, even at the level of individual molecules," says Dr. Janett Göhring, who was active as coordinator and first author of the study at both MedUni Vienna and TU Vienna. "A special protein was used for this, which behaves almost like a perfect nano-spring, explain the two other first authors Florian Kellner and Dr. Lukas Schrangl from MedUni Vienna and TU Vienna respectively: "The more traction is exerted on the protein, the longer it becomes. With special fluorescent marker molecules, you can measure how much the length of the protein has changed, and that provides information about the forces that occur". In this way, the group was able to show that T cells typically exert a force of up to 5 pico-newtons - a tiny force that can nevertheless separate the receptor from the antigen. By comparison, one would have to pull on more than 100 million such springs simultaneously to feel stickiness with a finger.

"Understanding the behavior of T cells at the molecular level would be a huge leap forward for medicine. We are still leagues away from that goal," says Johannes Huppa. "But", adds Gerhard Schütz, "we were able to show that not only chemical but also mechanical effects play a role. They have to be considered together."

Credit: 
Vienna University of Technology

Mysterious hydrogen-free supernova sheds light on stars' violent death throes

image: Artist's impression of a yellow supergiant in a close binary with a blue, main sequence companion star, similar to the properties derived for the 2019yvr progenitor system in Kilpatrick et al. (2021). If the progenitor system to 2019yvr was in such a binary, it must have had a very close interaction that stripped a large amount of hydrogen from the yellow supergiant in the last 100 years before it exploded as a supernova.

Image: 
Kavli IPMU / Aya Tsuboi

A curiously yellow pre-supernova star has caused astrophysicists to re-evaluate what's possible at the deaths of our Universe's most massive stars. The team describe the peculiar star and its resulting supernova in a new study published today in Monthly Notices of the Royal Astronomical Society.

At the end of their lives, cool, yellow stars are typically shrouded in hydrogen, which conceals the star's hot, blue interior. But this yellow star, located 35 million light years from Earth in the Virgo galaxy cluster, was mysteriously lacking this crucial hydrogen layer at the time of its explosion.

"We haven't seen this scenario before," said Charles Kilpatrick, postdoctoral fellow at Northwestern University's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), who led the study. "If a star explodes without hydrogen, it should be extremely blue -- really, really hot. It's almost impossible for a star to be this cool without having hydrogen in its outer layer. We looked at every single stellar model that could explain a star like this, and every single model requires that the star had hydrogen, which, from its supernova, we know it did not. It stretches what's physically possible."

Kilpatrick is also a member of the Young Supernova Experiment, which uses the Pan-STARRS telescope at Haleakalā, Hawaii to catch supernovae right after they explode. After the Young Supernova Experiment spotted supernova 2019yvr in the relatively nearby spiral galaxy NGC 4666, the team used deep space images captured by NASA's Hubble Space Telescope, which fortunately already observed this section of the sky two and a half years before the star exploded.

"What massive stars do right before they explode is a big unsolved mystery," Kilpatrick said. "It's rare to see this kind of star right before it explodes into a supernova."

The Hubble images show the source of the supernova, a massive star imaged just a couple of years before the explosion. Several months after the explosion, however, Kilpatrick and his team discovered that the material ejected in the star's final explosion seemed to collide with a large mass of hydrogen. This led the team to hypothesize that the progenitor star might have expelled the hydrogen within a few years before its death.

"Astronomers have suspected that stars undergo violent eruptions or death throes in the years before we see supernovae," Kilpatrick said. "This star's discovery provides some of the most direct evidence ever found that stars experience catastrophic eruptions, which cause them to lose mass before an explosion. If the star was having these eruptions, then it likely expelled its hydrogen several decades before it exploded."

In the new study, Kilpatrick's team also presents another possibility: a less massive companion star might have stripped away hydrogen from the supernova's progenitor star. However, the team will not be able to search for the companion star until after the supernova's brightness fades, which could take up to a decade.

"Unlike its normal behaviour right after it exploded, the hydrogen interaction revealed it's kind of this oddball supernova," Kilpatrick said. "But it's exceptional that we were able to find its progenitor star in Hubble data. In four or five years, I think we will be able to learn more about what happened."

Credit: 
Royal Astronomical Society

Bees thrive where it's hot and dry: A unique biodiversity hotspot located in North America

image: One of the spring-active desert bees, female Centris caesalpiniae on flower of Krameria

Image: 
Bruce D. Taubert

The United States-Mexico border traverses large expanses of unspoiled land in North America, including a newly discovered worldwide hotspot of bee diversity. Concentrated in 16 km2 of protected Chihuahuan Desert are more than 470 bee species, a remarkable 14% of the known United States bee fauna.

This globally unmatched concentration of bee species is reported by Dr. Robert Minckley of the University of Rochester and William Radke of the United States Fish and Wildlife Service in the open-access, peer-reviewed Journal of Hymenoptera Research.

Scientists studying native U.S. bees have long recognized that the Sonoran and Chihuahuan deserts of North America, home to species with interesting life histories, have high bee biodiversity. Exactly how many species there are has largely remained speculation. Together with students from Mexico, Guatemala and the United States, the authors made repeated collections over multiple years, identifying more than 70,000 specimens.

Without such intensive collecting, a full picture of the bee diversity would not have been possible. Most of these bee species are solitary, without a queen or workers; they visit flowers over a 2-4 week lifespan and specialize on pollen and nectar from one to a few plants. Furthermore, these desert species experience periodic drought, which the immature stages survive by going into dormancy for years, much like the seeds of the desert plants they pollinate.

Additionally, bee diversity is notoriously difficult to estimate and compare among studies, because of differences in the collecting techniques and the size of the studied area. An unexpected benefit of the regular and intensive sampling for this study was the opportunity to test if the observed bee diversity approached the true bee diversity in this region, or if many more species were yet to be found. In this case, the larger San Bernardino Valley area is home to 500 bee species, only slightly above the number of species recovered along the border - an unusually robust confirmation of the researchers' estimate.
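One common way to ask whether observed diversity approaches true diversity is to compare the observed species count with a nonparametric richness estimator such as Chao1, which extrapolates from the numbers of species seen exactly once and twice. The sketch below illustrates that calculation on made-up abundance counts; the paper does not state that this particular estimator was used, so treat it as a generic example rather than the authors' method.

```python
# Chao1 richness sketch: S_chao1 = S_obs + f1^2 / (2 * f2), where f1 and
# f2 are the numbers of species observed exactly once and twice.
# The abundance vector below is a made-up example, not the survey data.
import numpy as np

def chao1(abundances):
    abundances = np.asarray(abundances)
    s_obs = int((abundances > 0).sum())
    f1 = int((abundances == 1).sum())        # singletons
    f2 = int((abundances == 2).sum())        # doubletons
    if f2 == 0:                              # bias-corrected form when f2 = 0
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

rng = np.random.default_rng(3)
counts = rng.poisson(lam=3.0, size=470)      # hypothetical per-species counts
print(f"observed: {(counts > 0).sum()}  estimated (Chao1): {chao1(counts):.0f}")
```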

What we know about the decline of bees and other pollinators due to human activity is based primarily on diversity data from human-modified habitats. Baseline information on native bees from pristine areas is needed to help assess the magnitude of human impacts on bee faunas and understand how they occur. This study from the Chihuahuan Desert is therefore an important contribution towards filling that knowledge gap from one of the bee biodiversity hotspots in the world.

Credit: 
Pensoft Publishers