
All ears: Genetic bases of mammalian inner ear evolution

Mammals have adapted to live everywhere from the darkest caves to the deepest oceans, and from the highest mountains to the open plains. Along the way, mammals have also evolved a remarkable capacity in their sense of hearing, from the high-frequency echolocation calls of bats to low-frequency whale songs. Even dogs, our best-friend companion animals, have developed a hearing range twice as wide as that of humans.

Assuming that these adaptations have a root genetic cause, a team of scientists led by Lucia Franchini of the National Council of Scientific and Technological Research (CONICET) in Buenos Aires, Argentina, has made it its goal to identify the genetic bases underlying the evolution of the inner ear in mammals. Their latest findings, which identified two new genes involved in hearing, underscore the promise of this approach. The study was published in the advance online edition of Molecular Biology and Evolution.

"This paper builds on the premise that the evolution of mammalian inner ear hearing-related novelties should leave a discoverable trace of adaptive molecular signature," said Franchini. "This work highlights the usefulness of evolutionary studies to pinpoint novel key functional genes."

The basic process of hearing is the same across mammalian species. The auditory system of mammals is characterized by a middle ear composed of three ossicles (the malleus, or hammer; the incus, or anvil; and the stapes, or stirrup), which funnels sound to the inner ear.

Franchini's group focused on the inner ear, which turns changes in sound intensity into electrical signals that the brain can process. Within the inner ear is the snail-shaped cochlea, which transforms sound waves into nerve impulses; its auditory organ of Corti possesses two types of specialized sensory hair cells (HCs): inner hair cells (IHCs) and outer hair cells (OHCs).

"In the mammalian cochlea, IHCs and OHCs display a clear division of labor," explains Franchini. "The IHCs receive and relay sound information behaving as the true sensory cells, while OHCs amplify sound information. Thus, IHCs which are the primary transducers, release glutamate to excite the sensory fibers of the cochlear nerve and OHCs act as biological motors to amplify the motion of the sensory epithelium."

In their study, they used a two-pronged approach, complementing in silico gene comparisons with follow-up experimental studies, to gain a more complete understanding of the genetic circuitry behind mammalian inner ear adaptations.

"These functional and morphological innovations in the mammalian inner ear contribute to its unique hearing capacities," said lead author Lucia Franchini. "However, the genetic bases underlying the evolution of this mammalian landmark are poorly understood. We propose that the emergence of morphological and functional innovations in the mammalian inner ear could have been driven by adaptive molecular evolution."

First, they took advantage of extensive gene expression databases to perform software-based, or in silico, comparative studies of 1,300 genes to identify genes that may have been positively selected to help mammals adapt over evolutionary time. In total, they found 165 inner ear genes (13 percent) that may have been selected for adaptation.
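Screens of this kind commonly flag a gene as positively selected when its ratio of nonsynonymous to synonymous substitution rates (dN/dS, often written ω) exceeds the neutral expectation of 1. A minimal sketch of that filtering step, with gene names and rate estimates purely hypothetical:

```python
def positively_selected(genes, threshold=1.0):
    """Return names of genes whose dN/dS ratio (omega) exceeds the
    neutral expectation, a classic signature of positive selection."""
    return [name for name, dn, ds in genes
            if ds > 0 and dn / ds > threshold]

# Hypothetical per-gene substitution-rate estimates: (name, dN, dS)
candidates = [
    ("Strip2", 0.42, 0.21),  # omega = 2.0, candidate for selection
    ("Ablim2", 0.30, 0.15),  # omega = 2.0, candidate for selection
    ("GeneX",  0.05, 0.50),  # omega = 0.1, purifying selection
]
print(positively_selected(candidates))  # ['Strip2', 'Ablim2']
```

In practice, studies like this one rely on likelihood-based branch-site tests rather than a simple ω cutoff, but the underlying comparison of substitution rates is the same.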

"This analysis indicated that both IHCs and OHCs went through similar levels of adaptive gene evolution, probably underlying the morphological and functional remodelling that both cell types underwent in the mammalian lineage," said Franchini.

"Notably, when analysing functional categories of positively selected genes, we found that the most enriched functional terms were 'cytoskeletal protein binding' and 'structural constituent of the cytoskeleton'. These findings indicate that the OHC genes that underwent positive selection could have contributed to the acquisition of the highly specialized cytoskeleton present in these cells, which underlies their distinctive functional properties, including somatic electromotility."

Next, they experimentally tested hearing gene functions in a series of mouse studies. Among these, they focused on two genes not previously linked to the inner ear: STRIP2 (Striatin-Interacting Protein 2) and ABLIM2 (Actin-Binding LIM domain 2), which they functionally characterized by generating novel strains of mutant mice with CRISPR/Cas9 technology. In each case, they used CRISPR to turn off part of the normal gene function to see how it affected the hearing genetic circuitry.

"We performed auditory functional studies of Strip2 and Ablim2 newly generated mutant mice by means of two complementary techniques that allow differential diagnosis of OHC versus IHC/neuronal dysfunction throughout the cochlea," said Franchini. "To evaluate the integrity of the hearing system we recorded ABRs (Auditory Brainstem Responses) that are sound-evoked potentials generated by neuronal circuits in the ascending auditory pathways. We also evaluated the OHCs function through distortion product otoacoustic emissions (DPOAE) testing."

They discovered that Strip2 likely plays a functional role in the first synapse between IHCs and nerve fibers. Moreover, when they examined the cochlear sensory epithelium, they found a significant reduction in auditory-nerve synapses. In contrast, the mutant studies of Ablim2 suggest that the absence of Ablim2 affects neither cochlear amplification nor auditory nerve function.

"In summary, through this evolutionary approach we discovered that STRIP2 underwent strong positive selection in the mammalian lineage and plays an important role in the physiology of the inner ear," said Franchini. "Moreover, our combined evolutionary and functional studies allow us to speculate that the extensive evolutionary remodeling that this gene underwent in the mammalian lineage provided an adaptive value. Thus, our study is a proof of concept that evolutionary approaches paired with functional studies could be a useful tool to uncover new key players in the function of organs and tissues."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Factors associated with elephant poaching

image: The African elephant poaching rates have fallen since 2011.

Image: 
Photo: Colin Beale/University of York

Elephants are essential to savannah and forest ecosystems and play an important role in ecotourism in Africa - yet poaching has contributed to a rapid decline in elephant populations in recent decades. An international research team has now released a study presenting a more positive perspective: Severin Hauenstein and Prof. Dr. Carsten Dormann from the Department of Biometry and Environmental Systems Analysis at the University of Freiburg, together with Dr. Colin Beale from the University of York in England as well as Dr. Mrigesh Kshatriya and Dr. Julian Blanc from the elephant monitoring programme MIKE in Kenya, used a statistical approach to show that African elephant poaching rates have fallen since 2011. In a study published in the current issue of the journal Nature Communications, the researchers linked illegal elephant hunting rates to local poverty, national corruption, and global ivory demand.

While almost all elephant populations have experienced drastic declines since 2000, some populations have been stable or even increasing in recent years, such as that in the Kruger National Park in South Africa. The analysis shows that the number of elephants killed by poachers has fallen from an estimated peak of more than ten percent of the African elephant population in 2011 to less than four percent in 2017. "This is a positive trend, but we should not see this as an end to the poaching crisis," cautions Hauenstein. "After some changes in the political environment, the total number of illegally killed elephants in Africa seems to be falling, but to assess possible protection measures, we need to understand the local and global processes driving illegal elephant hunting."

The results indicate that in a regional comparison, corruption and poverty among the local population are the main factors that drive poaching rates. The researchers show that efforts to curb the demand for ivory in Asian markets and reduce local corruption and poverty could be more successful in the fight against poaching than solely focusing on law enforcement: the recorded annual poaching rates correlate strongly with proxies of ivory demand in China, the traditional market for ivory. In addition, the variation of illegal killing rates among the 29 African countries was primarily explained by the degree of corruption and poverty in the respective country.

In the CITES programme "Monitoring the Illegal Killing of Elephants" (MIKE), which is co-financed by the European Union, wildlife law enforcement patrols annually record the elephant carcasses at 53 monitoring sites in 29 African countries and identify the cause of death. Between 2002 and 2017, the programme documented 18,007 carcasses, of which 8,860 were identified as illegal killings. MIKE was established by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) to inform decision-making by the Parties regarding trade in elephant specimens and to build capacity in elephant range States toward the overall goal of better elephant management and enhanced enforcement efforts.
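MIKE's headline statistic, the Proportion of Illegally Killed Elephants (PIKE), is simply the fraction of recorded carcasses whose deaths were attributed to poaching. Using the programme-wide totals above as a rough illustration (the function name is ours, not MIKE's):

```python
def pike(illegal_carcasses, total_carcasses):
    """Proportion of Illegally Killed Elephants (PIKE): the fraction
    of recorded carcasses attributed to illegal killing."""
    return illegal_carcasses / total_carcasses

# Programme-wide totals, 2002-2017: 8,860 illegal kills among
# 18,007 recorded carcasses
overall_pike = pike(8_860, 18_007)
print(round(overall_pike, 2))  # 0.49
```

The study's models work with site-by-site, year-by-year PIKE values rather than this pooled figure, so the aggregate here is only illustrative.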

Credit: 
University of Freiburg

Chloropicrin application increases production and profit potential for potato growers

image: Chloropicrin was first used on potato in 1940 as a wireworm suppressant and in 1965 as a verticillium suppressant; after decades of disuse, it has seen a resurgence in popularity over the last decade.

Image: 
Hutchinson C.M.

St. Paul, MN (May 2019)--The chemical compound chloropicrin was first synthesized in 1848 by Scottish chemist John Stenhouse and first applied to agriculture in 1920, when it was used to cure tomato "soil sickness." Over the next decade, it was used to restore pineapple productivity in Hawaii and to address soil fungal problems in California. Over time, it began to be widely used as a fungicide, herbicide, insecticide, and nematicide.

Chloropicrin was first used on potato in 1940 as a wireworm suppressant and then in 1965 as a verticillium suppressant. Farmers stopped using it on potato for many years, but over the last decade, it has seen a resurgence in popularity--and for good reason, according to Chad Hutchinson, director of research at TriEst Ag Group, Inc., in his webcast "Chloropicrin Soil Fumigation in Potato Production Systems."

Used as a preplant soil treatment measure, chloropicrin suppresses soilborne pathogenic fungi and some nematodes and insects. With a half-life of hours to days, it is completely digested by soil organisms before the crop is planted, making it safe and efficient. Contrary to popular belief, chloropicrin does not sterilize soil and does not deplete the ozone layer, as the compound is destroyed by sunlight. Additionally, chloropicrin has never been found in groundwater, due to its low solubility.

According to Hutchinson, chloropicrin-treated soil has a healthier root system, improved water use, and more efficient fertilizer use. Applying chloropicrin to soil also results in greater crop yield and health. Hutchinson also comments on the compound's ability to suppress many common pathogens, including the pathogen that causes common scab and species of Verticillium, Fusarium, and Phytophthora.

Hutchinson concludes that the use of chloropicrin not only increases production efficiency and profit potential for potato farmers, but it can also improve soil health, "the foundation of a positive crop production system." His presentation "Chloropicrin Soil Fumigation in Potato Production Systems" is fully open access and available online.

This webcast, sponsored by TriEst, is part of the "Focus on Potato" series on the Plant Management Network (PMN). PMN is a cooperative, not-for-profit resource for the applied agricultural and horticultural sciences. Together with more than 80 partners, which include land-grant universities, scientific societies, and agribusinesses, PMN publishes quality, applied, and science-based information for practitioners.

Credit: 
American Phytopathological Society

Reading clinician visit notes can improve patients' adherence to medications

BOSTON--A new study of patients reading the visit notes their clinicians write reports positive effects on their use of prescription medications. The study, "Patients Managing Medications and Reading Their Visit Notes: A Survey of OpenNotes Participants," published today in the Annals of Internal Medicine, shows that when patients read their notes, they report significant benefits, including feeling more comfortable with and in control of their medications, a greater understanding of medications' side effects, and being more likely to take medications as prescribed.

The study of approximately 20,000 adult patients at Beth Israel Deaconess Medical Center (BIDMC) in Boston, at University of Washington Medicine (UW) in Seattle, and at Geisinger, a health system in rural Pennsylvania, was conducted online between June and October of 2017. The three health systems have been sharing visit notes written by primary care doctors, medical and surgical specialists, and other clinicians for several years.

"Sharing clinical notes with patients is a relatively low-cost, low-touch intervention," said study lead Catherine DesRoches, DrPH, Executive Director of OpenNotes, and also of the Division of General Medicine at BIDMC. "While note sharing requires a culture shift in medicine, it is not technically difficult with most Electronic Health Record Systems (EHRs), and could have an enormous payoff, given that we know poor adherence to medications costs the health care system about $300 billion per year. Anything that we can do to improve adherence to medications has significant value."

Patients reported that they gained important benefits from reading their notes: 64 percent reported increased understanding of why a medication was prescribed; 62 percent felt more in control of their medications; 57 percent found answers to questions about medications; and 61 percent felt more comfortable with medications. Fourteen percent of patients at BIDMC and Geisinger said that they were more likely to take their medications as prescribed after reading their notes, while 33 percent of patients at UW rated notes as very important in helping them with their medications. The study also showed that patients speaking primary languages other than English and those with lower levels of formal education were more likely to report benefits.

"This kind of transparent communication presents a big change in long-standing practice, and it's not easy," said study co-author and OpenNotes co-founder Tom Delbanco, MD, MACP, John F. Keane & Family Professor of Medicine at Harvard Medical School and BIDMC. "Doctors contemplating it for the first time are nervous. They worry about many things, including potential effects on their workflow, and scaring their patients. But once they start, we know of few doctors who decide to stop, and patients overwhelmingly love it. The promise it holds for medication adherence is enormous, and we are really excited by these findings."

Study participants were aged 18 years or older, had logged into the secure patient portal at least once in the previous 12 months, had at least one ambulatory visit note available and had been prescribed or were taking medications in the previous 12 months. The survey respondents represented urban and rural settings, varied levels of education, and broad age and racial distributions. The main outcome measures included patient-reported behaviors and their perceptions concerning benefits versus risks.

In an accompanying editorial, David Blumenthal, MD and Melinda K. Abrams, MS of the Commonwealth Fund write: "Transparency is no longer the distant, radical vision it was when the pioneering OpenNotes team began their work. Rather, it is a fact of clinical life, mandated by federal law and policy...Our challenge now is to make the best and most of shared health care information as a tool for clinical management and health improvement."

Credit: 
Beth Israel Deaconess Medical Center

As plaque deposits increase in the aging brain, money management falters

image: Scans of two study participants show the brain of a cognitively healthy 74-year-old (top row) who demonstrated average financial skills compared to an 86-year-old with mild Alzheimer's disease (bottom row) who demonstrated impaired financial skills. The bottom scan is positive for amyloid plaques, highlighted in yellow and orange throughout the brain and extending to its edges.

Image: 
Duke Health

DURHAM, N.C. - Aging adults often show signs of slowing when it comes to managing their finances, such as calculating their change when paying cash or balancing an account ledger.

These changes happen even in adults who are cognitively healthy. But trouble managing money can also be a harbinger of dementia and, according to new Duke research in The Journal of Prevention of Alzheimer's Disease, could be correlated with the amount of protein deposits built up in the brain.

"There has been a misperception that financial difficulty may occur only in the late stages of dementia, but this can happen early and the changes can be subtle," said P. Murali Doraiswamy, MBBS, a professor of psychiatry and geriatrics at Duke and senior author of the paper. "The more we can understand adults' financial decision-making capacity and how that may change with aging, the better we can inform society about those issues."

The findings are based on 243 adults ages 55 to 90 participating in a longitudinal study called the Alzheimer's Disease Neuroimaging Initiative, which included tests of financial skills and brain scans to reveal protein buildup of beta-amyloid plaques.

The study included cognitively healthy adults, adults with mild memory impairment (sometimes an Alzheimer's precursor) and adults with an Alzheimer's diagnosis.

Testing revealed that specific financial skills declined with age and at the earliest stages of mild memory impairment. The decline was similar in men and women. After controlling for a person's education and other demographics, the scientists found that the more extensive the amyloid plaques, the worse that person's ability to understand and apply basic financial concepts or to complete tasks such as calculating an account balance.

"Older adults hold a disproportionate share of wealth in most countries and an estimated $18 trillion in the U.S. alone," Doraiswamy said. "Little is known about which brain circuits underlie the loss of financial skills in dementia. Given the rise in dementia cases over the coming decades and their vulnerability to financial scams, this is an area of high priority for research."

Even cognitively healthy people can develop protein plaques as they age, but the plaques may appear years earlier and be more widespread in those at risk for Alzheimer's disease due to a family history or mild memory impairment, Doraiswamy said.

Most testing for early dementia and Alzheimer's disease focuses on memory, said Duke researcher Sierra Tolbert, the study's lead author. A financial capacity assessment, such as the 20-minute Financial Capacity Instrument-Short Form used in the Duke study, could also be a tool for doctors to track a person's cognitive function over time and is sensitive enough to detect even subtle changes, she said.

"Doctors could consider proactively counseling their patients using this scale, but it's not widely in use," Tolbert said. "If someone's scores are declining, that could be a warning sign. We're hoping with this research more doctors will become aware there are tools that can measure subtle changes over time and possibly help patients and families protect their loved ones and their finances."

In addition to Doraiswamy and Tolbert, study authors include Yuhan Liu, Caroline Hellegers, Jeffrey R. Petrella, Michael W. Weiner and Terence Z. Wong.

This research used data from the Alzheimer's Disease Neuroimaging Initiative, which is funded by the National Institutes of Health (U01 AG024904) and the U.S. Department of Defense (W81XWH-12-2-0012), as well as the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through contributions from numerous other organizations. A full list of contributors and financial disclosures is available with the manuscript.

Credit: 
Duke University Medical Center

UTSA study shows vaping is linked to adolescents' propensity for crime

(San Antonio, May 28, 2019) -- UTSA criminal justice professor Dylan Jackson recently published one of the first studies to explore emerging drug use in the form of adolescent vaping and its association with delinquency among 8th and 10th grade students across the nation.

The Centers for Disease Control and Prevention estimate that 4.9 million middle and high school students used some type of tobacco product in 2018, up from 3.6 million in 2017. Moreover, the percentage of high school-aged children who report using e-cigarettes increased by more than 75 percent between 2017 and 2018.

New legislation is targeting this dangerous trend. Earlier this year, the FDA introduced new policies to prevent adolescents from accessing flavored tobacco products, including e-cigarettes. U.S. Senators Mitch McConnell and Tim Kaine have also introduced a bipartisan bill to raise the federal smoking age to 21. The proposed bill includes the use of e-cigarettes, citing it as an "epidemic" among adolescents that has been largely overlooked.

Using a nationally representative sample of 8th and 10th graders in 2017, Jackson found that adolescents who vape are at an elevated risk of engaging in criminal activities such as violence and property theft. He also found that teens who vape marijuana are at a significantly higher risk of violent and property offenses than youth who ingest marijuana through traditional means.

He believes that these findings might be explained by vaping's ability to conceal an illegal substance, which can reduce the likelihood of detection and apprehension among youth who vape illicit substances and thereby embolden them to engage in other delinquent behaviors.

Ultimately, he argues that youth who vape illicit substances such as marijuana may easily go unnoticed and/or unchallenged due to the ambiguity surrounding the substance they are vaping and the ease of concealability of vaping devices, which can look like a flash drive.

These behaviors include four categories of delinquency:

violent delinquency, including fighting at school, engaging in a gang fight, causing injury to another, or carrying a weapon to school

property delinquency, such as stealing an item or damaging school property

"other" types of delinquency, such as trespassing or running away from home

some combination of the behaviors mentioned above

Jackson also discussed other factors related to vaping, such as youth perceptions of media messaging by product manufacturers that vaping is acceptable because it is a "healthier" option than traditional forms of smoking nicotine or marijuana. "Our hope is that this research will lead to the recognition among policymakers, practitioners, and parents that the growing trend of adolescent vaping is not simply 'unhealthy' - or worse, an innocuous pastime - but that it may in fact be a red flag or an early marker of risk pertaining to violence, property offending, and other acts of misconduct."

Credit: 
University of Texas at San Antonio

AccessLab: New workshops to broaden access to scientific research

image: The trust scale at an AccessLab workshop -- how much do you trust the sources of information that you use?

Image: 
Amber G.F. Griffiths, amber@fo.am

A team from the transdisciplinary laboratory FoAM Kernow and the British Science Association detail how to run an innovative approach to understanding evidence called AccessLab in a paper published on May 28 in the open-access journal PLOS Biology. The AccessLab project enables a broader range of people to access and use scientific research in their work and everyday lives.

Five trial AccessLabs have taken place for policy makers, media and journalists, marine sector participants, community groups, and artists. Through direct citizen-scientist pairings, AccessLab encourages people to come with their own science-related questions and work one-to-one with a science researcher to find and access trustworthy information together. Those who have benefited from the AccessLab approach include a town councillor researching the impacts of building developments on the environment, a GP researching nutrition to advise patients with specific diseases, and a dancer and choreographer researching physiology and injuries.

The act of pairing science academics with local community members from other backgrounds helps build understanding and trust between groups, at a time when this relationship is under increasing threat from political and economic currents in society. The process also exposes science researchers to the difficulties others face in accessing their work, and to the importance of publishing research findings in a more inclusive way.

"AccessLab is a powerful example of researchers using their expertise to unlock skills in their local communities," the authors say in the paper. "The workshops focus on transferring research skills rather than subject-specific knowledge, highlighting that not having a science background doesn't need to be a barrier to understanding and using scientific knowledge."

Credit: 
PLOS

Computer-assisted diagnostic procedure enables earlier detection of brain tumor growth

image: A computer-assisted diagnostic procedure helps physicians detect the growth of low-grade brain tumors earlier and at smaller volumes than visual comparison alone, according to a study published May 28 in the open-access journal PLOS Medicine.

Image: 
geralt, Pixabay

A computer-assisted diagnostic procedure helps physicians detect the growth of low-grade brain tumors earlier and at smaller volumes than visual comparison alone, according to a study published May 28 in the open-access journal PLOS Medicine by Hassan Fathallah-Shaykh of the University of Alabama at Birmingham, and colleagues. However, additional clinical studies are needed to determine whether early therapeutic interventions enabled by early tumor growth detection prolong survival times and improve quality of life.

Low-grade gliomas constitute 15% of all adult brain tumors and cause significant neurological problems. There is no universally accepted objective technique available for detecting the enlargement of low-grade gliomas in the clinical setting. The current gold standard is subjective evaluation through visual comparison of 2D images from longitudinal radiological studies. A computer-assisted diagnostic procedure that digitizes the tumor and uses imaging scans to segment the tumor and generate volumetric measures could aid in the objective detection of tumor growth by directing the attention of the physician to changes in volume. This is important because smaller tumor sizes are associated with longer survival times and less neurological morbidity. In the new study, the authors evaluated 63 patients--56 diagnosed with grade 2 gliomas and 7 followed for an imaging abnormality without pathological diagnosis--for a median follow-up period of 150 months, and compared tumor growth detection by seven physicians aided by a computer-assisted diagnostic procedure versus retrospective clinical reports.

The computer-assisted diagnostic procedure involved digitizing magnetic resonance imaging scans of the tumors, including 34 grade 2 gliomas with radiological progression and 22 radiologically stable grade 2 gliomas. Physicians aided by the computer-assisted method diagnosed tumor growth in 13 of 22 glioma patients labeled as clinically stable by the radiological reports, but did not detect growth in the imaging-abnormality group. In 29 of the 34 patients with progression, the median time-to-growth detection was 14 months for the computer-assisted method compared to 44 months for current standard-of-care radiological evaluation. Using the computer-assisted method, accurate detection of tumor enlargement was possible with a median of only 57% change in tumor volume compared to a median of 174% change in volume required using standard-of-care clinical methods. According to the authors, the findings suggest that current clinical practice is associated with significant delays in detecting the growth of low-grade gliomas, and computer-assisted methods could reduce these delays.
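The growth thresholds reported above boil down to a percent change in segmented tumor volume between scans. A minimal sketch of that comparison (the function name and volumes are illustrative, not taken from the study):

```python
def percent_volume_change(baseline_ml, followup_ml):
    """Percent change in segmented tumor volume relative to baseline."""
    return 100.0 * (followup_ml - baseline_ml) / baseline_ml

# A tumor segmented at 10.0 mL that grows to 15.7 mL has changed by
# about 57%, roughly the median change at which the computer-assisted
# method detected growth; standard-of-care review needed a median
# change closer to 174%.
print(round(percent_volume_change(10.0, 15.7)))  # 57
```

The study's pipeline derives these volumes automatically from MRI segmentation; the point of the calculation is simply that smaller relative changes become detectable once volumes are measured rather than eyeballed.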

Credit: 
PLOS

Study finds how prostate cancer cells mimic bone when they metastasize

DURHAM, N.C. -- Prostate cancer often becomes lethal as it spreads to the bones, and the process behind this deadly feature could potentially be turned against the cancer as a target for bone-directed radiation and potential new therapies.

In a study published online Tuesday in the journal PLOS ONE, Duke Cancer Institute researchers describe how prostate cancer cells develop the ability to mimic bone-forming cells called osteoblasts, enabling them to proliferate in the bone microenvironment.

Attacking these cells with radium-223, a radioactive isotope that selectively targets cells in these bone metastases, has been shown to prolong patients' lives. But a better understanding of how radium works in the bone was needed.

The mapping of this mimicking process could lead to more effective use of radium-223 and to the development of new therapies to treat or prevent the spread of prostate cancer to bone.

"Given that most men who die of prostate cancer have bone metastases, this work is critical to helping understand this process," said lead author Andrew Armstrong, M.D., director of research at the Duke Cancer Institute Center for Prostate and Urologic Cancers.

Armstrong and colleagues enrolled a small study group of 20 men with symptomatic bone-metastatic prostate cancer. When analyzing the circulating tumor cells from study participants, they found that bone-forming enzymes appeared to be expressed commonly, and that genetic alterations in bone forming pathways were also common in these prostate cancer cells.

They validated these new genetic findings in a separate multicenter trial involving a larger group of more than 40 men with prostate cancer and bone metastases.

Following treatment with radium-223, the researchers found that the radioactive isotope was concentrated in bone metastases, but tumor cells still circulated and cancer progressed within six months of therapy. The researchers found a range of complex genetic alterations in these tumor cells that likely enabled them to persist and develop resistance to the radiation over time.

"Osteomimicry may contribute in part to how prostate cancer spreads to bone, but also to the uptake of radium-223 within bone metastases and may thereby enhance the therapeutic benefit of this bone targeting radiotherapy," Armstrong said. He said by mapping this lethal pathway of prostate cancer bone metastasis, the study points to new targets and thus critical areas of research into designing better tumor-targeting therapies.

Credit: 
Duke University Medical Center

New genetic engineering strategy makes human-made DNA invisible

image: This new genetic engineering tool opens up the possibilities for research on bacteria that haven't been well studied before.

Image: 
Image courtesy of Peter Hoey.

Bacteria are everywhere. They live in the soil and water, on our skin and in our bodies. Some are pathogenic, meaning they cause disease or infection. To design effective treatments against pathogens, researchers need to know which specific genes are to blame for pathogenicity.

Scientists can identify pathogenic genes through genetic engineering. This involves adding human-made DNA into a bacterial cell. However, the problem is that bacteria have evolved complex defense systems to protect against foreign intruders--especially foreign DNA. Current genetic engineering approaches often disguise the human-made DNA as bacterial DNA to thwart these defenses, but the process requires highly specific modifications and is expensive and time-consuming.

In a paper published recently in the journal Proceedings of the National Academy of Sciences, Dr. Christopher Johnston and his colleagues at the Forsyth Institute describe a new technique to genetically engineer bacteria by making human-made DNA invisible to a bacterium's defenses. In theory, the method can be applied to almost any type of bacteria.

Johnston is a researcher in the Vaccine and Infectious Disease Division at the Fred Hutchinson Cancer Research Center and lead author of the paper. He said that when a bacterial cell detects it has been penetrated by foreign DNA, it quickly destroys the trespasser. Bacteria live under constant threat of attack by viruses, so they have developed incredibly effective defenses against those threats.

The problem, Johnston explained, is that when scientists want to place human-made DNA into bacteria, they confront the exact same defense systems that protect bacteria against viruses.

To get past this barrier, scientists add specific modifications to disguise the human-made DNA and trick the bacterium into thinking the intruder is a part of its own DNA. This approach sometimes works but can take considerable time and resources.

Johnston's strategy is different. Instead of adding a disguise to the human-made DNA, he removes a specific component of its genetic sequence called a motif. The bacterial defense system needs this motif to be present to recognize foreign DNA and mount an effective counter-attack. By removing the motif, the human-made DNA becomes essentially invisible to the bacterium's defense system.
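As a rough sketch of the idea, and not the paper's actual pipeline, motif removal can be illustrated in code: scan a coding sequence for a recognition motif and swap one overlapping codon for a synonymous one, so the motif disappears while the encoded protein stays the same. The motif, sequence, and codon table below are invented for illustration.

```python
# Sketch of the motif-removal idea: "silently" recode codons that overlap a
# defense-system recognition motif so the motif disappears while the protein
# sequence is unchanged. The motif, sequence, and codon table are invented
# for illustration; real motifs and codon choices depend on the bacterium.

SYNONYMS = {            # tiny illustrative subset of the genetic code
    "GGA": ["GGT", "GGC", "GGG"],   # glycine
    "CTG": ["CTT", "CTC", "CTA"],   # leucine
    "GAA": ["GAG"],                 # glutamate
}

def remove_motif(seq, motif):
    """Recode one codon overlapping each motif hit until no hits remain."""
    i = seq.find(motif)
    while i != -1:
        start = (i // 3) * 3        # first codon overlapping the hit
        fixed = False
        for c in range(start, i + len(motif), 3):
            for alt in SYNONYMS.get(seq[c:c + 3], []):
                candidate = seq[:c] + alt + seq[c + 3:]
                if candidate[i:i + len(motif)] != motif:  # this hit is gone
                    seq, fixed = candidate, True
                    break
            if fixed:
                break
        if not fixed:
            raise ValueError("no synonymous swap removes the hit at %d" % i)
        i = seq.find(motif)         # a swap could create a new hit elsewhere
    return seq

# Example: the (invented) motif GACT spans the first two codons GGA|CTG;
# swapping GGA -> GGT destroys it without changing the encoded protein.
print(remove_motif("GGACTGGAA", "GACT"))  # -> GGTCTGGAA
```

The loop re-scans after every swap because a substitution can, in principle, create a new hit elsewhere; a production tool would also have to respect codon-usage preferences of the host.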

"Imagine a bacterium like an enemy submarine in a dry-dock, and a human-made genetic tool as your soldier that needs to get inside the submarine to carry out a specific task. The current approaches would be like disguising the spy as an enemy soldier, having them walk up to each gate, allowing the guards to check their credentials, and if all goes well, they're in," Johnston said. "Our approach is to make that soldier invisible and have them sneak straight through the gates, evading the guards entirely."

This new method requires less time and fewer resources than current techniques. In the study, Johnston used Staphylococcus aureus bacteria as a model, but the underlying strategy he developed can be used to sneak past the major defense systems found in 80 to 90 percent of known bacteria.

This new genetic engineering tool opens up possibilities for research on bacteria that haven't been well studied before. Since scientists have limited time and resources, they tend to work with bacteria that have already been broken into, Johnston explained. With this new tool, a major barrier to breaking into bacterial DNA has been removed, and researchers can use the method to engineer more clinically relevant bacteria.

"Bacteria are the drivers of our planet," said Dr. Gary Borisy, a Senior Investigator at the Forsyth Institute and co-author of the paper. "The capacity to engineer bacteria has profound implications for medicine, for agriculture, for the chemical industry, and for the environment."

Credit: 
Forsyth Institute

Synthetic version of CBD treats seizures in rats

image: CBD from extracts of cannabis or hemp plants could be used to treat epilepsy and other conditions. UC Davis chemists have come up with a way to make a synthetic version of CBD and showed that it is as effective as herbal CBD in treating seizures in rats. Left to right: chemical structures of THC and CBD from plants, and of synthetic H2CBD.

Image: 
Mascal laboratory, UC Davis

A synthetic, non-intoxicating analogue of cannabidiol (CBD) is effective in treating seizures in rats, according to research by chemists at the University of California, Davis.

The synthetic CBD alternative is easier to purify than a plant extract, eliminates the need to use agricultural land for hemp cultivation, and could avoid legal complications with cannabis-related products. The work was recently published in the journal Scientific Reports.

"It's a much safer drug than CBD, with no abuse potential, and it doesn't require the cultivation of hemp," said Mark Mascal, professor in the UC Davis Department of Chemistry. Mascal's laboratory at UC Davis carried out the work in collaboration with researchers at the University of Reading, U.K.

Products containing CBD have recently become popular for their supposed health effects and because the compound does not cause a high. CBD is also being investigated as a pharmaceutical compound for conditions including anxiety, epilepsy, glaucoma and arthritis. But because it comes from extracts of cannabis or hemp plants, CBD poses legal problems in some states and under federal law. It is also possible to chemically convert CBD to tetrahydrocannabinol (THC), the intoxicating compound in marijuana.

8,9-Dihydrocannabidiol (H2CBD) is a synthetic molecule with a similar structure to CBD. Mascal's laboratory developed a simple method to inexpensively synthesize H2CBD from commercially available chemicals. "Unlike CBD, there is no way to convert H2CBD to intoxicating THC," he said.

One important medical use of cannabis and CBD is in the treatment of epilepsy. The U.S. Food and Drug Administration has approved an extract of herbal CBD for treating some seizure conditions, and there is also strong supporting evidence from animal studies.

The researchers tested synthetic H2CBD against herbal CBD in rats with induced seizures. H2CBD and CBD were found to be equally effective for the reduction of both the frequency and severity of seizures.

Mascal is working with colleagues at the UC Davis School of Medicine to carry out more studies in animals with a goal of moving into clinical trials soon. UC Davis has applied for a provisional patent on antiseizure use of H2CBD and its analogues, and Mascal has founded a company, Syncanica, to continue development.

Credit: 
University of California - Davis

Replacing diesel with liquefied natural gas leads to fuel cost savings of up to 60% in São Paulo

Substituting liquefied natural gas (LNG) for diesel oil in cargo transportation would significantly reduce fuel costs and emissions of greenhouse gases (GHGs) and other pollutants in São Paulo State, Brazil, according to a study by the Research Centre for Gas Innovation (RCGI), supported by the São Paulo Research Foundation (FAPESP) and Shell.

Hosted at the Engineering School of the University of São Paulo (Poli-USP), the RCGI is one of the Engineering Research Centers (ERCs) financed by FAPESP in partnership with large companies.

"The biggest benefits, both in terms of pollution reductions and in prices of the fuels being discussed herein, are perceived in São Paulo and Campinas, which are regions with greater potential for substituting diesel oil with LNG and where diesel oil is more expensive than it is in the rest of the State. Our results show that in São Paulo, LNG can be up to 60% cheaper than diesel oil," said Dominique Mouette, Professor in the School of Arts, Sciences, and Humanities at the University of São Paulo (EACH-USP), in an RCGI press communiqué. Mouette is principal author of the article and leader of the RCGI project focusing on the viability of a Blue Corridor in São Paulo State.

The objective of the study, which resulted in an article published in Science of the Total Environment, was to evaluate the economic and environmental benefits of substituting diesel oil with LNG for the purpose of establishing a Blue Corridor in the state. The concept originated in Russia and designates routes on which trucks run on LNG instead of diesel oil.

LNG is obtained by cooling natural gas to minus 163 °C. Condensation reduces the gas's volume by up to 600 times, making it possible to transport it in cryogenic trucks to places located far from gas pipelines.
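The roughly 600-fold volume reduction follows from the density change on liquefaction, as a quick back-of-the-envelope check shows (the densities below are typical handbook values, not figures from the study):

```python
# Back-of-the-envelope check of the ~600x volume reduction on liquefaction.
# Densities are typical handbook values (assumptions, not from the study).
rho_gas = 0.74    # kg/m^3, natural gas at roughly 1 atm and 15 degrees C
rho_lng = 450.0   # kg/m^3, LNG at about minus 163 degrees C

# The same mass occupies (rho_lng / rho_gas) times less volume as a liquid.
reduction = rho_lng / rho_gas
print("volume reduction factor: ~%.0fx" % reduction)  # ~608x
```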

To analyze the substitution of diesel with LNG, the investigation considered four scenarios. "Within the best scenario, the use of LNG would reduce fuel costs by up to 40%; equivalent CO2 emissions [a measure used to compare the potential heating effect among several greenhouse gases (GHGs), also known as CO2-eq] by 5.2%; particulate matter by 88%; nitrogen oxides (NOx) by 75%; and would eliminate hydrocarbon emissions," states Pedro Gerber Machado, a researcher at the University of São Paulo's Institute of Energy and Environment and coauthor of the article.

"The methodology initially considered two contexts: one for the geographical regions served by gas pipelines, called the Restricted Scenario (RS), and another covering the 16 administrative regions of the state, called the State Scenario (SS). Both scenarios had different versions of the Blue Corridor, with 3,100 and 8,900 kilometers of roads, respectively," Machado explained.

According to Machado, two forms of LNG distribution were considered for each scenario. The first assumed centralized liquefaction with road distribution and generated two subscenarios: a State Scenario with Centralized Liquefaction (SSCL) and a Restricted Scenario with Centralized Liquefaction (RSCL). The second assumed liquefaction performed locally, in the region where the LNG would be used, which would eliminate the need to distribute LNG on highways. From this option, two more subscenarios were derived: the State Scenario with Hybrid Local and Central Liquefaction (SSHL) and the Restricted Scenario with Local Liquefaction (RSLL).

Cost comparison

"The RSLL scenario presents the lowest average price difference for the consumer between LNG and diesel, which means that, in this case, the delivery process of LNG is more expensive, as influenced by the scale factor and greater operating costs," Machado explains.

He continues, "The RSCL scenario offers the lowest gas price for the consumer, that is, 12 dollars per MMBTU (million British thermal units), whereas diesel, in this same scenario, would cost 22.01 dollars per MMBTU. The difference in price between LNG and diesel, in this scenario, is also the largest: 10.01 dollars per MMBTU."
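The quoted gap is simple arithmetic on the figures above, and it can also be read as a percentage saving (a derived figure, not one stated in the article):

```python
# The RSCL price gap, recomputed from the figures quoted in the article.
diesel_price = 22.01   # USD per MMBTU
lng_price = 12.00      # USD per MMBTU

gap = diesel_price - lng_price
print("LNG advantage: %.2f USD/MMBTU" % gap)          # 10.01
# Expressed as a share of the diesel price (a derived figure):
print("relative saving: %.0f%%" % (100 * gap / diesel_price))  # 45%
```

Note that this 45% saving is specific to the RSCL scenario; the up-to-60% figure in the headline applies to the most favorable regions of the state.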

However, the RSLL scenario was designed within the context of a shorter corridor, where the investment would be US$ 243.40 per meter. This contrasts with the SSHL scenario, which has the lowest investment per meter of the four scenarios (US$ 122.10 per meter).

Emissions avoided

Machado explains that to calculate the GHG and pollutant emissions, only the two macroscenarios were considered: SS and RS. "When using LNG, the GHG emissions are different from diesel oil emissions due to CH4 and N2O, which are greenhouse gases with potential for global warming. If the fuel used is diesel, CO2 is responsible for 99% of the emissions of CO2-eq, and if the fuel used is LNG, it represents 82% of the CO2-eq emissions, while CH4 is responsible for 10% and N2O for 8%," he states.
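Shares like these come from weighting each gas's mass by its global warming potential (GWP); the sketch below uses standard IPCC 100-year GWP values and invented emission masses, purely to illustrate how CO2-eq percentages are computed:

```python
# CO2-equivalent accounting: each gas is weighted by its 100-year global
# warming potential (GWP, standard IPCC AR5 values). The emission masses
# below are invented purely to illustrate the calculation.
GWP100 = {"CO2": 1, "CH4": 28, "N2O": 265}

def co2_eq_shares(emissions_kg):
    """Return each gas's share of the total CO2-eq."""
    weighted = {gas: kg * GWP100[gas] for gas, kg in emissions_kg.items()}
    total = sum(weighted.values())
    return {gas: w / total for gas, w in weighted.items()}

# Illustrative LNG-truck exhaust mix (kg): mostly CO2, some CH4 slip, trace N2O.
for gas, share in co2_eq_shares({"CO2": 1000.0, "CH4": 4.0, "N2O": 0.3}).items():
    print("%s: %.0f%% of CO2-eq" % (gas, 100 * share))
```

Even small methane slip carries weight in CO2-eq terms because methane's GWP is 28 times that of CO2, which is why CH4 and N2O account for a noticeable share of LNG's total despite their small masses.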

Regarding the GHG emissions generated by the logistics of transporting LNG, the worst case is the SSCL scenario, in which logistics correspond to 1% of the total CO2-eq emitted by the trucks. In the SSHL, logistics represent 0.34% of emissions, and in the RSCL, 0.28%.

As for pollutants, in the RS scenario, emissions of 119,129 tons of particulate matter (PM), 7.3 million tons of NOx, and 209,230 tons of hydrocarbons (HC) would be avoided. In the SS scenario, the benefits are even greater, with reductions of 163,000 tons of PM, 10 million tons of NOx, and 286,000 tons of HC.

When the burning of natural gas is compared with that of diesel oil, the 5.2% reduction in GHG emissions observed in the State Scenario may not seem large, but the reductions in local pollutants are considerable: NOx, PM, and HC fall by 75%, 88%, and 100%, respectively.

However, despite the economic and environmental advantages presented, LNG still faces regulatory barriers to general use in the transportation sector. "LNG is not regulated for use as a vehicle fuel in Brazil. Most of the natural gas used in vehicles here is compressed natural gas (CNG)," states Professor Mouette.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Study of northern Alaska could rewrite Arctic history

image: A view of the northeast Brooks Range in Alaska.

Image: 
Justin V. Strauss

HANOVER, N.H. - May 28, 2019 - Parts of Alaska's mountainous Brooks Range were likely transported from Greenland and a stretch of the Canadian Arctic much farther to the east, according to a series of Dartmouth-led studies detailing over 300 million years of Arctic geologic history.

The finding updates the geological evolution of the Arctic Ocean and could help revise predictions about the Arctic's oil, gas and mineral wealth.

By explaining the formation of the Arctic Ocean in the Western Hemisphere - known as the Amerasian Basin - the research provides more clues into the geological history of the rapidly changing region.

"This is arguably the most important place for the United States from the perspective of Arctic economic development," said Justin Strauss, an assistant professor of earth science at Dartmouth. "The geology of this region, which is directly connected to its ancient history, will help revise our knowledge about natural resources in the Arctic."

The existing model for the formation of the Arctic Ocean along the U.S. and Canada border region details how seismic activity, known as faulting, caused Alaska to rotate away from a western band of islands in the Canadian Arctic starting approximately 125 million years ago.

Under this original "rotation" scenario, parts of the Brooks Range should match perfectly with Canada's Banks Island and Victoria Island, about 450 miles away.

But after close to ten years studying exposed rocks in the region, the Dartmouth studies show that the area actually contains rocks with origins as far away as 1,200 miles to the east. The results were recently published in a Special Papers series by the Geological Society of America.

"The geology of the northeastern Brooks Range does not match anything that we've studied in the neighboring region of North America," said Strauss, the research lead of the study. "This complicates previous models for how you open this major ocean basin."

Further confirming the findings, researchers in the study area saw signs of mountain-building processes that are not known to have taken place near the present position of the Brooks Range. This collision of ancient land masses, dating from 400-450 million years ago, is more closely associated with tectonic activity in the eastern Arctic.

The team believes that the area was formed by a combination of activities including the action of a major strike-slip fault system - similar to the San Andreas Fault of California - that transported part of what is now the Brooks Range from Greenland to the western Canadian Arctic Islands.

A smaller-scale rotation of the land mass farther to the west, which has been previously documented, may complete the explanation of how northern Alaska was transported to its current location.

"Relationships on the northwestern margin of North America have long been poorly understood and poorly documented," said Bill McClelland, a professor of earth and environmental studies at the University of Iowa and co-investigator of the study. "The results of these studies have significantly enhanced our understanding of the tectonic processes that formed the Arctic margin of North America and will be instrumental in pushing forward on new research frontiers."

"Because of its remoteness, the North Slope of Alaska and Yukon has seen limited studies to date," said co-investigator Maurice Colpron, a scientist with the Yukon Geological Survey. "Understanding the area with complete certainty will still take many years of hard work, but these research findings greatly advance our knowledge of the region."

As the Arctic continues to open for the development of oil, gas and mineral resources, this new understanding of the region's history could change predictions of how much resource wealth lies in the area.

The United States Geological Survey currently estimates that about 6 percent of the world's oil and 25 percent of the world's natural gas are in the Arctic. The region studied by the team lies within the Arctic National Wildlife Refuge, an area that some would like to open for oil drilling.

In addition to changing the outlook over how much resource wealth exists in the Arctic and where it sits, the research could impact how countries lay claim to those resources. The United States, Russia, Canada and other Arctic countries are all jostling for extended footholds in the region.

"If countries are going to make legal claims based on geology or geophysics, they should consider these much older boundaries that we are highlighting. Governments will need to confront the complexities of geology meeting politics," said Strauss.

While magnetic surveys allow researchers to understand how the Arctic's Eurasian Basin, above Europe and the western parts of Asia, was formed, the same data are not as easy to interpret for the Amerasian Basin above North America.

The research, funded by the National Science Foundation, is based on the existence of a major fault system in the northern part of North America that has yet to be completely mapped.

"This is the area of the Arctic Ocean that still baffles researchers. It's one of the last major ocean basins on the planet that we just do not understand," said Strauss.

In future studies, the research team will focus on Canada's Yukon and Ellesmere Island to look for major fault systems. The research will retrace hundreds of millions of years of geological evolution to further explore where this part of Alaska came from and why it landed where it did.

Credit: 
Dartmouth College

First-of-its-kind study in endothelial stem cells finds exposure to flavored e-cigarette liquids and e-cigarette use exacerbates cell dysfunction

image: Mechanistic overview by which e-cigarette use might cause acute endothelial dysfunction. Exposure of endothelial cells to e-cigarette flavorings or serum of e-cigarette users leads to endothelial dysfunction associated with increased apoptosis, reactive oxygen species, and inflammation. ICAM = intracellular adhesion molecule; IL = interleukin; MCP = monocyte chemoattractant protein; MCSF = macrophage colony-stimulating factor; ROS = reactive oxygen species.

Image: 
Lee, W.H. et al. J Am Coll Cardiol. 2019;73(21):2722-37.

There has been a rapid rise in e-cigarette use, but the health effects of e-cigarettes have not been well studied, and their effect on vascular health remains unknown. A first-of-its-kind study in endothelial stem cells, published in the Journal of the American College of Cardiology, found that acute exposure to flavored e-liquids or e-cigarette use exacerbates endothelial cell dysfunction, which often precedes heart disease.

Endothelial cells are the main type of cell found in the inside lining of blood vessels, lymph vessels and the heart.

The researchers used induced pluripotent stem cell-derived endothelial cells (iPSC-ECs) from three healthy individuals and a subject population consisting of five healthy non-smokers, five active cigarette smokers, two dual users of e-cigarettes and cigarettes, and two sole users of e-cigarettes. All subjects were healthy individuals free of other major cardiovascular risk factors. The researchers examined the effects of e-liquids on endothelial cell viability by treating the iPSC-ECs with dilutions of six commercially available e-liquids at varying nicotine concentrations. They found that all six flavored e-liquids had varying effects on cell survival, and they observed the presence of pro-inflammatory markers that are known to play a critical role in the development of vascular disease.

The researchers observed that the fruit-flavored, the sweet tobacco-flavored (with undertones of caramel and vanilla), the tobacco-flavored Red Oak Tennessee Cured, and the sweet-flavored Butter Scotch e-liquids all had moderate toxic effects on the cells, with the strongest toxic effect coming from the cinnamon-flavored Marcado. They also found that the menthol tobacco-flavored Tundra had a strong toxic effect at a 1 percent concentration, with or without nicotine.

The researchers performed other tests to examine endothelial function in iPSC-ECs after the addition of e-liquids and serum from e-cigarette users, and the effect of acute e-cigarette use and cigarette smoking on serum nicotine levels, among others.

"Although limited by a small sample size, our data suggest that e-cigarette use can lead to acute endothelial dysfunction, which we validated by in vitro exposure to either e-liquid or serum derived from patients using e-cigarettes," said Joseph C. Wu, MD, PhD, professor and director of the Stanford Cardiovascular Institute at the Stanford School of Medicine and the study's senior author. "E-cigarette use in the U.S. and worldwide is rapidly increasing with growing concerns from the scientific, public health and policy making communities. Our findings are an important first step in filling this gap by providing mechanistic insights on how e-cigarettes cause endothelial dysfunction, which is an important risk factor for the development of heart disease."

A study presented at the American College of Cardiology's 68th Annual Scientific Session earlier this year found that adults who report puffing e-cigarettes, or vaping, are significantly more likely to have a heart attack, coronary artery disease and depression compared with those who do not use them or any tobacco products.

Credit: 
American College of Cardiology

De-TOXing exhausted T cells may bolster CAR T immunotherapy against solid tumors

image: Chimeric antigen receptor therapy. CAR molecules (light blue) bind to CD19 molecules on a cancer cell leading to segregation of granzyme vesicles (yellow) that activate apoptosis.

Image: 
La Jolla Institute for Immunology

LA JOLLA, CA--A decade ago researchers announced development of a cancer immunotherapy called CAR (for chimeric antigen receptor)-T, in which a patient is re-infused with their own genetically modified T cells equipped to mount a potent anti-tumor attack. Since then CAR T approaches (one of several strategies collectively known as "adoptive T cell transfer") have made headlines as a novel cellular immunotherapy tool, most successfully against so-called "liquid cancers" like leukemias and lymphomas.

Sarcomas and carcinomas have proven more resistant to these approaches, in part because engineered T cells progressively lose tumor-fighting capacity once they infiltrate a tumor. Immunologists call this cellular fatigue T cell "exhaustion" or "dysfunction."

In efforts to understand why, La Jolla Institute for Immunology (LJI) investigators Anjana Rao, Ph.D., and Patrick Hogan, Ph.D., have published a series of papers over the past several years reporting that a transcription factor that regulates gene expression, called NFAT, switches on "downstream" genes that weaken T cell responses to tumors and thus perpetuates T cell exhaustion. One set of these downstream genes encodes transcription factors known as NR4A, and a former graduate student, Joyce Chen, showed that genetic elimination of NR4A proteins in tumor-infiltrating CAR T cells improved tumor rejection. However, the identity of additional players cooperating with NFAT and NR4A in that pathway has remained unknown.

Now a paper published in this week's online edition of the Proceedings of the National Academy of Sciences (PNAS) from the Rao and Hogan labs provides a more complete list of participants in an extensive gene expression network that establishes and maintains T cell exhaustion. The study employs a mouse model to show that genetically eliminating two new factors, TOX and TOX2, also improves eradication of "solid" melanoma tumors in the CAR T model. This work suggests that comparable interventions to target NR4A and TOX factors in patients may extend the use of CAR T-based immunotherapy to solid tumors.

The group began by comparing gene expression profiles in samples of normal versus "exhausted" T cells, searching for factors upregulated in parallel with NR4A as co-conspirators in T cell dysfunction. "We found that two DNA binding proteins called TOX and TOX2 were consistently highly expressed along with NR4A transcription factors," says Hyungseok Seo, Ph.D., a postdoctoral fellow in the Rao lab and the study's first author. "This discovery suggested that factors like NFAT or NR4A may control expression of TOX."
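Conceptually, such a screen amounts to ranking genes by how tightly their expression tracks NR4A across samples; the toy example below, with invented expression values, illustrates the idea (it is not the study's actual analysis):

```python
# Toy co-expression screen: rank genes by Pearson correlation of their
# expression with a reference gene (here "Nr4a1") across samples.
# All expression values are invented for illustration.
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

expr = {  # gene -> expression across 5 samples (arbitrary units)
    "Nr4a1": [1.0, 3.0, 5.0, 7.0, 9.0],
    "Tox":   [1.1, 2.9, 5.2, 6.8, 9.1],   # tracks Nr4a1 closely
    "Tox2":  [0.9, 3.2, 4.8, 7.1, 8.9],   # also tracks Nr4a1 closely
    "Gapdh": [5.0, 5.1, 4.9, 5.0, 5.2],   # housekeeping, uncorrelated
}

ref = expr["Nr4a1"]
ranked = sorted(
    ((gene, pearson(vals, ref)) for gene, vals in expr.items() if gene != "Nr4a1"),
    key=lambda pair: -pair[1],
)
for gene, r in ranked:
    print("%s: r = %.2f" % (gene, r))  # Tox and Tox2 rank at the top
```

In a real analysis, RNA-sequencing counts would first be normalized, and statistical significance would be assessed across many more samples, but the ranking principle is the same.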

The group then recapitulated a CAR T protocol in mice by first inoculating animals with melanoma tumor cells to establish a tumor and then, a week later, infusing the mice with one of two collections of T cells: a "control" sample from a normal mouse, or a sample derived from mice genetically engineered to lack TOX and TOX2 expression in T cells.

Remarkably, mice infused with TOX-deficient CAR T cells showed more robust regression of melanoma tumors than did mice infused with normal cells. Moreover, mice treated with TOX-deficient CAR T cells exhibited dramatically increased survival, suggesting that loss of TOX factors combats T cell exhaustion and allows T cells to destroy tumor cells more effectively.

Additional analysis led the investigators down a pathway ending with a well-known immune adversary. The researchers showed that TOX factors join forces with both NFAT and NR4A to promote expression of an inhibitory receptor called PD-1, which decorates the surface of exhausted T cells and sends immunosuppressive signals.

PD-1 is blocked by numerous monoclonal antibodies called checkpoint inhibitors, which combat immunosuppression and restore anti-cancer immune responses. The convergence of TOX, NFAT, and NR4A on PD-1 makes molecular and immunological sense and places PD-1 at the intersection of cellular and antibody-based immunotherapy approaches.

"Currently, CAR T cell therapy shows amazing effects in patients with "liquid tumors" such as leukemia and lymphoma," says Seo. "But they still do not work well in patients with solid tumors due to T cell exhaustion. If we could inhibit TOX or NR4A by treating CAR T cells with a small molecule, this strategy might show a strong therapeutic effect against solid cancers such as melanomas."

Credit: 
La Jolla Institute for Immunology