
Penn researchers uncover defective sperm epigenome that leads to male infertility

PHILADELPHIA -- One out of eight couples has trouble conceiving, with nearly a quarter of those cases caused by unexplained male infertility. For the past decade, research has linked that infertility to defective sperm that fail to "evict" proteins called histones from DNA during development. However, the mechanisms behind that eviction, and where in the sperm DNA it happens, have remained both controversial and unclear.

Now, researchers at Penn Medicine have used newer genome-wide DNA sequencing tools to show the precise genetic locations of those retained histones, as well as a key gene regulating the process. The findings were published in Developmental Cell.

Taking it a step further, the researchers created a new mouse model with a mutated version of the gene, Gcn5, which allows investigators to closely track the defects in sperm from the early stages of sperm development through fertilization and on. This is an important step forward as it could lead to a better understanding of not only infertility in men -- and ways to potentially reverse it -- but also the suspected epigenetic mutations being passed onto the embryo from males either naturally or through in vitro fertilization.

Epigenetics, the factors influencing an organism's genetics that are not encoded in the DNA, play a strong role in sperm and egg formation.

"For men who have unexplained infertility, everything may look normal at the doctors: normal semen counts, normal motility. Yet they can still have problems conceiving," said first author Lacey J. Luense, PhD, a research associate in the lab of study senior author, Shelley L. Berger, PhD, the Daniel S. Och University Professor in the departments of Cell and Developmental Biology and Biology, and director of the Penn Epigenetics Institute. "One explanation for persistent problems is histones being in the wrong location, which may affect sperm and then early development. Now, we have a really good model to study what happens when you don't get rid of the histones appropriately in the sperm and what that may look like in the embryo."

Healthy sperm lose 90 to 95 percent of histones, the main proteins in chromatin that package DNA and turn genes on and off, and replace them with protamines, which are smaller proteins able to properly pack the DNA into tiny sperm. Given the role of retained histones in infertility and embryonic development, there is great interest in determining the genomic locations so they could potentially be utilized for further study and ultimately treatment.

Past studies have produced conflicting results on the whereabouts of histones. A technology known as MNase-sequencing, which uses an enzymatic reaction to pinpoint location, has placed the retained histones on important gene promoters. Other studies with the same approach found histones at DNA repeats and in so-called "gene deserts," where they play less of a role in regulation.

"There has been controversy in the field trying to understand these discrepant data," Luense said. "In this new study, we found that both of these previously described models are correct. We find histones on genes that appear to be important for embryo development, but we also find them at repetitive elements, places that do need to be turned off and to prevent expression of these regions in the embryo."

The researchers applied a technology known as ATAC-sequencing, a more precise and faster approach, to track waves of histones at unique sites across the genome during the early and late stages of sperm development in mice. ATAC-seq can identify the parts of the genome that are open or closed -- in this case, the regions that retain sperm histones -- then cut and tag the DNA, which can then be sequenced.
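As a rough illustration of the kind of downstream analysis such sequencing data enables, the sketch below classifies retained-histone intervals by whether they overlap promoter or repeat-element annotations, the two kinds of locations discussed above. All coordinates, annotation sets and the simple overlap rule are hypothetical, not the study's actual pipeline.

```python
# Illustrative sketch only: classifying retained-histone intervals by overlap with
# promoter vs. repeat annotations. File contents, coordinates and the overlap rule
# are invented placeholders, not the authors' analysis.

def overlaps(a_start, a_end, b_start, b_end):
    """Return True if two half-open genomic intervals on the same chromosome overlap."""
    return a_start < b_end and b_start < a_end

def classify_regions(retained, promoters, repeats):
    """Label each retained-histone interval as 'promoter', 'repeat', both, or 'other'."""
    labels = []
    for chrom, start, end in retained:
        hit_promoter = any(overlaps(start, end, s, e) for c, s, e in promoters if c == chrom)
        hit_repeat = any(overlaps(start, end, s, e) for c, s, e in repeats if c == chrom)
        if hit_promoter and hit_repeat:
            labels.append("promoter+repeat")
        elif hit_promoter:
            labels.append("promoter")
        elif hit_repeat:
            labels.append("repeat")
        else:
            labels.append("other")
    return labels

# Toy example: one region overlapping a promoter, one overlapping a repeat element.
retained = [("chr1", 1000, 1400), ("chr2", 5000, 5300)]
promoters = [("chr1", 900, 1100)]
repeats = [("chr2", 5200, 6000)]
print(classify_regions(retained, promoters, repeats))  # ['promoter', 'repeat']
```

In practice such a classification would run against genome-wide annotation files rather than hand-written tuples, but the logic is the same.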

In the mouse models created with the mutated Gcn5 gene, the researchers found these mice to have very low fertility. The researchers also showed that retained histones in normal mice sperm correlated with histone positions in very early embryos, supporting the hypothesis that paternal histones transfer epigenetic information to the next generation.

Having this type of mutant model gives scientists a tool to closely study the mechanisms underlying the mutated sperm's trajectory and understand what effect it may have on the embryo and in development. It also opens an opportunity to study potential therapeutic targets.

"Right now, the burden of IVF and other assisted-reproductive technologies fall on women. Even it's the male factor, it's still women who have to go through hormone injections and procedures," Berger said. "Now imagine being able to apply epigenetic therapeutics to change the levels of histones and protamines in males before embryogenesis? That's one of the questions we want to explore and this model will allow us to move toward that direction."

There are numerous available epigenetic drugs used to treat cancer and other diseases. Given their mechanisms, treating sperm with drugs to increase histone eviction is one potential route to explore.

Restrictions on research involving human embryos have contributed to a lack of overall research on infertility and on the role of the father's epigenome in embryo development, which underscores the importance of studies such as this, the researchers said.

"There are a lot different factors that can alter the sperm epigenome: diet, drugs, alcohol, for example," Luense said. "We are just now starting to understand how that can affect the child and affect development. These initial, basic studies that we are doing are critical, so we can better understand what's driving these epigenetic mutations."

Credit: 
University of Pennsylvania School of Medicine

Limiting the loss of nature

image: Growing infrastructure is essential for human development in less-wealthy countries, but limiting impacts on already-depleted natural habitats is key.

Image: 
The University of Queensland

With only about half of Earth's terrestrial surface remaining as natural vegetation, a University of Queensland-led team has proposed an international goal to halt its continued loss.

The team, led by Professor Martine Maron, examined how a global goal of 'no net loss' of natural ecosystems could work, where some nations seek net increases in over-depleted natural vegetation, while recognising that for others, limited further losses of ecosystems might be unavoidable.

"Across the globe, our natural habitats are suffering, with alarming impacts on biodiversity, the climate and other critical natural systems - impacts that affect people too," Professor Maron said.

"To stop the loss, there have been calls for global policy-makers to set targets to protect the nature we have left.

"It's a lofty goal, but for it to be achievable, it needs to be equitable.

"And that means recognising that some nations might need to contribute more to conservation and restoration than others."

The researchers calculated the depletion of natural ecosystems in 170 countries and considered the socioeconomic factors at play in each.
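As a minimal sketch of what such a country-level screening might look like, the following assigns each country an indicative contribution toward a global "no net loss" goal from two inputs: the fraction of natural vegetation remaining and a human-development index. The inputs, thresholds and three-way rule are invented for illustration; the paper's actual depletion metric and equity weighting are not reproduced here.

```python
# Hypothetical screening rule, assuming a simple depletion measure (1 - remaining
# fraction) and a development index; cutoffs are arbitrary illustration values.

countries = {
    # name: (fraction of natural vegetation remaining, human development index)
    "Country A": (0.15, 0.90),   # heavily converted, wealthy
    "Country B": (0.85, 0.45),   # largely intact, urgent development needs
    "Country C": (0.50, 0.70),
}

def national_target(remaining_fraction, hdi, depletion_cutoff=0.5, hdi_cutoff=0.6):
    """Assign an indicative contribution toward a global 'no net loss' goal."""
    depletion = 1.0 - remaining_fraction
    if depletion > depletion_cutoff:
        return "net gain (restoration needed)"
    if hdi < hdi_cutoff:
        return "limited, controlled loss permitted"
    return "no net loss"

for name, (remaining, hdi) in countries.items():
    print(name, "->", national_target(remaining, hdi))
```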

"There is plenty of divergence across the world," Professor Maron said.

"Many countries have already converted the vast majority of their natural ecosystems, so ecosystem restoration might be needed to contribute equitably to a global 'no net loss' goal.

"On the other hand, there are some countries with largely intact remaining ecosystems and urgent human development imperatives, which may need to accept limited and controlled depletion.

"The latter include some of the world's poorest countries, so finding a way for essential development to proceed without locking in the current ongoing declines of natural ecosystems is critical.

"A global goal of no net loss could allow this kind of development in an equitable, limited and transparent way."

The team's work on a global no net loss goal comes at a critical time, with the UN's Convention on Biological Diversity due for a fresh Global Biodiversity Framework in 2020.

"Now's the time to work out what we really want a future Earth to look like, and soon our governments will be collectively deciding just that," Professor Maron said.

"Loss without limit is the paradigm under which natural ecosystems are currently being destroyed - this needs to stop.

"We need a strong, overarching goal to retain, restore and protect natural ecosystems, while dramatically increasing conservation ambitions globally.

"A global NNL goal sets a limit to the loss we -- and biodiversity -- can tolerate, while allowing for human development where it is most urgently needed."

Credit: 
University of Queensland

Evolutionarily novel genes work in tumors

image: Scientists from Peter the Great St. Petersburg Polytechnic University studied the evolutionary ages of human genes and identified a new class of them expressed in tumors -- tumor specifically expressed, evolutionarily novel (TSEEN) genes

Image: 
Peter the Great St. Petersburg Polytechnic University

A team of scientists from Peter the Great St. Petersburg Polytechnic University (SPbPU) studied the evolutionary ages of human genes and identified a new class of genes expressed in tumors -- tumor specifically expressed, evolutionarily novel (TSEEN) genes. This confirms the team's earlier theory about the evolutionary role of neoplasms.

A report about the study was published in Scientific Reports.

A tumor is a pathological new growth of tissue. Due to genetic changes, it has impaired cellular regulation and therefore defective functionality. Tumors can be benign or malignant. Unlike the latter, the former grow slowly, don't metastasize, and are easy to remove. Malignant tumors (cancer) are among the leading causes of death worldwide.

A team of scientists from Saint Petersburg discovered a new class of evolutionarily novel genes present in all tumors -- the so-called TSEEN (Tumor Specifically Expressed, Evolutionarily Novel) genes. "The evolutionary role of these genes is to provide genetic material for the origin of new progressive characteristics. TSEEN genes are expressed in many neoplasms and therefore can be excellent tumor markers," said Prof. Andrei Kozlov, PhD, head of the Molecular Virology and Oncology Laboratory at Peter the Great St. Petersburg Polytechnic University.

The new research confirms a theory proposed earlier by Prof. Kozlov, according to which the number of human oncogenes should correspond to the number of differentiated cell types. The theory also suggests that the evolution of oncogenes, tumor suppressor genes, and the genes that determine cell differentiation proceeds concurrently. It is based on the hypothesis of evolution through tumor neofunctionalization, according to which hereditary neoplasms might have played an important role during the early stages of metazoan evolution by providing additional cell masses for the origin of new cell types, tissues, and organs. Evolutionarily novel genes that originate in the DNA of germ cells are expressed in these extra cells.

Prof. Kozlov also referred to the article "Evolutionarily Novel Genes Are Involved in Development of Progressive Traits in Humans" (2019), recently published by his laboratory. In that article the team confirmed their hypothesis using transgenic fish tumors and evolutionarily novel fish genes. The orthologs of such genes are found in the human genome, but in humans they play a role in the development of progressive characteristics not encountered in fish, such as lungs, breasts, the placenta and the ventricular septum of the heart. This supports the hypothesis about the evolutionary role of tumors. The studies referred to in the article lasted several years and drew on a wide range of methods from bioinformatics and molecular biology.

"Our work is of great social importance, as the cancer problem hasn't been solved yet. Our theory suggests new prevention and therapy strategies," said Prof. Kozlov. According to him, to fight cancer, a new paradigm should be developed in oncology. TSEEN genes may be used to create new cancer test systems and antitumor vaccines.

Credit: 
Peter the Great Saint-Petersburg Polytechnic University

New study sheds light into origins of neurodegenerative disease

image: Al La Spada, MD, PhD

Image: 
Duke Department of Neurology

New research has shed light on the origins of spinocerebellar ataxia type 7 (SCA7) and demonstrates effective new therapeutic pathways for SCA7 and the more than 40 other types of spinocerebellar ataxia. The study, which appears online Monday on the website of the journal Neuron, implicates metabolic dysregulation leading to altered calcium homeostasis in neurons as the underlying cause of cerebellar ataxias.

"This study not only tells us about how SCA7 begins at a basic mechanistic level,but it also provides a variety of therapeutic opportunities to treat SCA7 and other ataxias," said Al La Spada, MD, PhD, professor of Neurology, Neurobiology, and Cell Biology, at the Duke School of Medicine, and the study's senior author.

SCA7 is an inherited neurodegenerative disorder that causes progressive problems with vision, movement, and balance. Individuals with SCA7 have CAG-polyglutamine repeat expansions in one of their genes; these expansions lead to progressive neuronal death in the cerebellum. SCA7 has no cure or disease-modifying therapies.

La Spada and colleagues performed transcriptome analysis on mice with SCA7. These mice displayed down-regulation of genes that control calcium flux, as well as abnormal calcium-dependent membrane excitability, in neurons of the cerebellum.

La Spada's team also linked dysfunction of the protein Sirtuin 1 (Sirt1) to the development of cerebellar ataxia. Sirt1 is a "master regulator" protein associated both with improved neuronal health and with reduced overall neurodegenerative effects associated with aging. La Spada's team observed reduced activity of Sirt1 in SCA7 mice; this reduced activity was associated with depletion of NAD+, a molecule important for metabolic functions and for catalyzing the activity of numerous enzymes, including Sirt1.

When the team crossed mouse models of SCA7 with Sirt1 transgenic mice, they found improvements in cerebellar degeneration, calcium flux defects, and membrane excitability. They also found that NAD+ repletion rescued SCA7 disease phenotypes in both mouse models and human stem cell-derived neurons from patients.

These findings elucidate Sirt1's role in neuroprotection by promoting calcium regulation and describe changes in NAD+ metabolism that reduce the activity of Sirt1 in neurodegenerative disease.

"Sirt1 has been known to be neuroprotective, but it's a little unclear as to why," said Colleen Stoyas, PhD, first author of the study, and a postdoctoral fellow at the Genomics Institute of the Novartis Research Foundation in San Diego. "Tying NAD+ metabolism and Sirt1 activity to a crucial neuronal functional pathway offers a handful of ways to intervene that could be potentially useful and practical to patients."

Credit: 
Duke Department of Neurology

Physics of Living Systems: How cells muster and march out

Many of the cell types in our bodies are constantly on the move. Physicists at Ludwig-Maximilians-Universitaet (LMU) in Munich have developed a mathematical model that describes, for the first time, how single-cell migration can coalesce into coordinated movements of cohorts of cells.

Many vital biological processes, such as growth, wound healing and immune responses to pathogens, require the active movement of cells. Inflammation and metastasis also involve the migration of specific kinds of cells through tissues to distant sites. A detailed understanding of the mechanisms that underlie cell migration - of single cells and small cohorts of cells, and the coordinated locomotion of tissue-level cell collectives - promises to elucidate the basis for one of the fundamental properties of cells. A team of researchers led by LMU theoretical physicist Erwin Frey (Professor of Statistical Physics and Biophysics at LMU) has now developed a model that describes, on both microscopic and macroscopic scales, the motions of cells on planar surfaces and yields new insights into their collective dynamics. The authors report their findings in the online journal eLife.

Many models have been constructed that seek to account for either the dynamics of single cells or the motions of cell sheets. However, the integration of both approaches into a single model presents a considerable challenge. This is largely because the levels of abstraction needed to capture the requisite phenomenology vary widely, owing to the differences in scale involved. The theoretical model constructed by Frey and his students is specifically designed to close the gap between the paradigms that have been applied to the analysis of cell locomotion at both single-cell and the multicellular scales. It does so by representing the interaction of cells with the underlying substrate in terms of a honeycomb lattice of contact sites, while also taking adhesive contacts between cells into account. "In contrast to the typically macroscopic approaches to the modelling of locomotion at the tissue level, our model explicitly incorporates the relevant properties of the individual cells, such as cell polarization, the structure of the cytoskeleton and the ability to actively reconfigure cytoskeletal organization in response to mechanical cues," explains Andriy Goychuk, joint first author of the paper. "Nonetheless, unlike strategies that depend on the microscopic analysis of shape changes in single cells, which are computationally costly, our framework is entirely rule-based and efficient enough to make simulations at the tissue level possible."
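To give a flavor of what "rule-based" lattice dynamics means in practice, here is a toy sketch of a single polarized cell hopping between neighboring sites of a hexagonal lattice, with a persistence parameter standing in for cell polarization. It is emphatically not the published LMU model, which also encodes substrate adhesion, cell-cell contacts and cytoskeletal forces; it only illustrates the style of update rule.

```python
# Toy, rule-based sketch of polarized motion on a hexagonal lattice (axial coordinates).
# Assumption: a single cell occupies one site and carries a polarity direction that
# persists between steps with a given probability. Not the authors' model.
import random

# Axial-coordinate offsets of the six neighbors on a hexagonal lattice.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def step(position, polarity_index, persistence=0.7):
    """Move one lattice site: keep the current polarity with probability `persistence`,
    otherwise repolarize toward a random neighboring direction."""
    if random.random() > persistence:
        polarity_index = random.randrange(6)
    dq, dr = HEX_DIRS[polarity_index]
    q, r = position
    return (q + dq, r + dr), polarity_index

random.seed(0)
pos, pol = (0, 0), 0
track = [pos]
for _ in range(20):
    pos, pol = step(pos, pol)
    track.append(pos)
print(track[-1])  # final lattice site after 20 persistent-random-walk steps
```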

As the new study shows, the model can be used to investigate the migratory behavior of single cells, the transition to collective cell motion, and the coordinated movement of advancing epithelial sheets consisting of several thousands of cells that is involved in wound repair. The analyses and simulations based on the model uncovered links between specific cellular parameters and characteristic patterns of movement, which accurately reflect the experimental findings. Among other things, the authors found that the forces exerted by the cytoskeleton at cell-substrate contact sites and the contractility of the cytoskeletal network on the inner face of the cell membrane both play vital roles in locomotory behavior. In addition, there is a defined relationship between the expansion of cells owing to mechanical pressure within a monolayer and density-dependent cell growth, which leads to specific patterns of multicellular migration. "Our results constitute a considerable advance in our understanding of collective migration on flat substrates," says Frey. "Furthermore, our new model provides us with a highly flexible instrument for studying the migratory behavior of cells in a wide range of contexts, and a very versatile research tool for further studies in this field."

Credit: 
Ludwig-Maximilians-Universität München

Study exposes surprise billing by hospital physicians

Patients with private health insurance face a serious risk of being treated and billed by an out-of-network doctor when they receive care at in-network hospitals, according to a new study by Yale researchers. Addressing the issue could reduce health spending by 3.4% -- $40 billion annually, the researchers conclude.

The study, published Dec. 16 in the journal Health Affairs, analyzes 2015 data from a large commercial insurer covering tens of millions of individuals throughout the United States to show that anesthesiologists, pathologists, radiologists, and assistant surgeons at in-network hospitals billed out of network in about 10% of cases.

"When physicians whom patients do not choose and cannot avoid bill out of network, it exposes people to unexpected and expensive medical bills and undercuts the functioning of U.S. health care markets," said Zack Cooper, associate professor of public health at the Yale School of Public Health and in the Department of Economics, and one of the study's authors. "Moreover, the ability to bill out of network allows specialists to negotiate inflated in-network rates, which are passed on to consumers in the form of higher insurance premiums."

The study, which was supported by the James Tobin Center for Economic Policy at Yale, adds to a body of work by Cooper and his colleagues analyzing the causes of surprise medical billing in the United States. A 2018 study in the New England Journal of Medicine found that over 1 in 5 patients who went to in-network emergency departments were treated by out-of-network emergency physicians. A 2019 study analyzed the drivers of surprise medical billing and New York State's approach of protecting consumers by introducing binding arbitration between insurers and out-of-network physicians.

Their research triggered the recent push in Congress to pass federal protections against surprise medical billing. Several relevant bills are currently under consideration in Congress. Cooper's research has been cited by the White House, highlighted by congressional leaders, and featured extensively in the media.

The latest paper focused on anesthesiologists, pathologists, radiologists, and assistant surgeons -- hospital-based physicians who are not chosen by patients. After analyzing more than 3.9 million cases involving at least one of the four specialties, the researchers found out-of-network billing at in-network hospitals occurred in 12.3% of pathology cases, 11.8% of anesthesiology care, 11.3% of cases involving an assistant surgeon, and 5.6% of claims for radiologists.
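For readers who want to see what such a tabulation looks like computationally, the snippet below computes the out-of-network billing rate per specialty from a toy claims table. The column names and rows are hypothetical; the study's actual claims data are of course not reproduced here.

```python
# Illustrative sketch (not the study's code): share of in-network-hospital cases
# billed out of network, by specialty, from a hypothetical claims table.
import pandas as pd

claims = pd.DataFrame({
    "specialty": ["pathology", "pathology", "anesthesiology", "radiology",
                  "assistant_surgeon", "anesthesiology"],
    "out_of_network": [True, False, True, False, False, False],
})

# Fraction of cases with an out-of-network bill, per specialty.
oon_rate = claims.groupby("specialty")["out_of_network"].mean().sort_values(ascending=False)
print((oon_rate * 100).round(1))
```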

Out-of-network billing was more prevalent at for-profit hospitals and at hospitals located in concentrated hospital and insurance markets where there is little competition, according to the study.

When a private insurance company declines to cover care delivered by an out-of-network provider, patients can get stuck with exorbitant bills. Mean out-of-network charges were $7,889 for assistant surgeons, $2,130 for anesthesiologists, $311 for pathologists, and $194 for radiologists.

The study analyzes several potential policy measures to address the problem. The researchers' preferred approach would be to regulate the contracts of physicians who work in hospitals and are not chosen by patients. The policy would require hospitals to sell a bundled package of services that included fees for anesthesiologists, pathologists, radiologists, assistant surgeons, and emergency department physicians.

"This approach eliminates the possibility of out-of-network specialists treating patients at in-network hospitals," said Cooper, who is associate director of the James Tobin Center for Economic Policy at Yale. "It wouldn't require patients to take any action and it would restore competitively set rates for specialists who patients cannot choose."

The authors emphasize the need for a federal policy to protect patients. Cooper has hosted webinars with policymakers from New York and California to better understand what they're doing and describe how their efforts could form the basis of a national policy.

"Ultimately, a well-designed arbitration system that allows arbitrators to consider in-network rates could work," said Cooper. "So could a benchmark-style approach, where out-of-network providers are paid mean in-network payments. Then there are hybrid models, like California, which seems to be working well, where there's a benchmark rate and providers can go to arbitration if they choose. At the end of the day, patients are getting crushed, and we need a change to protect them."

Credit: 
Yale University

Children allergic to cow's milk smaller and lighter

image: This is Karen A. Robbins, M.D., lead study author.

Image: 
Children's National Hospital

Children who are allergic to cow's milk are smaller and weigh less than peers who have allergies to peanuts or tree nuts, and these findings persist into early adolescence. The results of the longitudinal study - believed to be the first to characterize growth patterns from early childhood to adolescence in children with persistent food allergies - were published online in The Journal of Allergy and Clinical Immunology.

"Published data about growth trajectories for kids with ongoing food allergies is scarce," says Karen A. Robbins, M.D.*, lead study author and an allergist in the Division of Allergy and Immunology at Children's National Hospital when the study was conducted. "It remains unclear how these growth trends ultimately influence how tall these children will become and how much they'll weigh as adults. However, our findings align with recent research that suggests young adults with persistent cow's milk allergy may not reach their full growth potential," Dr. Robbins says.

According to the Centers for Disease Control and Prevention, 1 in 13 U.S. children has a food allergy, with milk, eggs, fish, shellfish, wheat, soy, peanuts and tree nuts accounting for the most serious allergic reactions. Because there is no cure and such allergies can be life-threatening, most people eliminate one or more major allergens from their diets.

The multi-institutional research team reviewed the charts of pediatric patients diagnosed with persistent immunoglobulin E-mediated allergy to cow's milk, peanuts or tree nuts based on their clinical symptoms, food-specific immunoglobulin levels, skin prick tests and food challenges. To be included in the study, the children had to have at least one clinical visit during three defined time frames from the time they were age 2 to age 12. During those visits, their height and weight had to be measured with complete data from their visit available to the research team. The children allergic to cow's milk had to eliminate it completely from their diets, even extensively heated milk.
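A hedged sketch of that inclusion rule is shown below: a child qualifies only if at least one visit in each of three age windows has complete height and weight data. The exact window boundaries (here 2-4, 5-8 and 9-12 years, following the age groupings mentioned later in the article) and the record layout are assumptions made for illustration, not the study's chart-review protocol.

```python
# Hypothetical inclusion filter: require one complete (height + weight) visit in each
# of three assumed age windows between ages 2 and 12.

AGE_WINDOWS = [(2, 4), (5, 8), (9, 12)]

def eligible(visits):
    """visits: list of dicts with 'age', 'height_cm', 'weight_kg' (None if missing)."""
    def complete_in(lo, hi):
        return any(lo <= v["age"] <= hi and v["height_cm"] is not None
                   and v["weight_kg"] is not None for v in visits)
    return all(complete_in(lo, hi) for lo, hi in AGE_WINDOWS)

example_child = [
    {"age": 3, "height_cm": 95.0, "weight_kg": 14.1},
    {"age": 6, "height_cm": 112.0, "weight_kg": 19.5},
    {"age": 10, "height_cm": 134.0, "weight_kg": 29.0},
]
print(eligible(example_child))  # True: one complete visit in each window
```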

From November 1994 to March 2015, 191 children were enrolled in the study, 111 with cow's milk allergies and 80 with nut allergies. All told, they had 1,186 clinical visits between the ages of 2 and 12. Sixty-one percent of children with cow's milk allergies were boys, as were 51.3% of children with peanut/tree nut allergies.

Children allergic to cow's milk were not only shorter; the height discrepancy became more pronounced at ages 5 to 8 and ages 9 to 12. And for the 53 teens who had clinical data gathered after age 13, differences in weight and height were even more notable.

"As these children often have multiple food allergies and other conditions, such as asthma, there are likely factors besides simply avoiding cow's milk that may contribute to these findings. These children also tend to restrict foods beyond cow's milk," she adds.

The way such food allergies are handled continues to evolve with more previously allergic children now introducing cow's milk via baked goods, a wider selection of allergen-free foods being available, and an improving understanding of the nutritional concerns related to food allergy.

Dr. Robbins cautions that while most children outgrow cow's milk allergies in early childhood, children who do not may be at risk for growth discrepancies. Future research should focus on improving understanding of this phenomenon.

Credit: 
Children's National Hospital

The sympathetic nervous system can inhibit the defense cells in autoimmune disease

The results of a study conducted in Brazil suggest that the sympathetic nervous system - the part of the autonomic nervous system that controls responses to danger or stress - can modulate the action of defense cells in patients with autoimmune diseases.

Using an experimental model of multiple sclerosis, the scientists found that the sympathetic nervous system can limit the generation of effector responses by inhibiting the action of the cells that attack an antigen taken as a threat by the immune system.

The study, which was supported by São Paulo Research Foundation - FAPESP, was conducted at the Federal University of São Paulo (UNIFESP), with Alexandre Basso as principal investigator. Basso is a professor in the Department of Microbiology, Immunology and Parasitology at UNIFESP's Medical School (Escola Paulista de Medicina). The findings are published in the journal Cell Reports.

"Our study opens up an opportunity for the development of novel therapies. The model we describe could theoretically be applied to other autoimmune diseases besides multiple sclerosis," Basso told Agência FAPESP.

According to the Brazilian Multiple Sclerosis Association (ABEM), more than 35,000 Brazilians suffer from the disease, which affects more women than men. Patients are usually between 20 and 40 years old when symptoms begin.

The first author of the article is Leandro Pires Araújo, a researcher in the same department of UNIFESP. The study was funded by FAPESP via a Regular Research Grant, a Young Investigator Grant and a doctoral scholarship.

Contradictory research findings

The most widely used model in research on multiple sclerosis and comparable autoimmune diseases is an animal model known as experimental autoimmune encephalomyelitis, which consists of inducing an inflammatory response in the animal's central nervous system by means of immunization with antigens from myelin, the lipid-rich insulating substance that surrounds nerve fibers and helps transmit electrical pulses. The model can involve different animals depending on the requirements of the experiment.

In the case of multiple sclerosis, defense cells attack the antigens, causing nerve fiber demyelination (loss of myelin) and impairing communication between neurons. Alterations in the transmission of electrical pulses result in problems such as muscle weakness, loss of balance and motor coordination, and joint pain.

In previous studies using these models, the animals were treated with a substance called 6-hydroxydopamine (6-OHDA) in an attempt to find out how the sympathetic nervous system influences the development of autoimmune disease. The synthetic neurotoxin eliminates fibers in the sympathetic nervous system that release noradrenaline, one of the neurotransmitters that control involuntary bodily functions. The absence of these fibers prevents the release of noradrenaline in the organs innervated by the sympathetic nervous system.

"6-Hydroxydopamine enters the noradrenaline synthesis pathway where it's taken up by sympathetic nerve fibers that express tyrosine hydroxylase, an enzyme present in neurons and in immune system cells. It's a key enzyme in the noradrenaline synthesis pathway," Basso explained.

"Neurons and cells that express tyrosine hydroxylase are also capable of taking up 6-hydroxydopamine through specific transporters. Because of its toxicity, 6-OHDA eventually eliminates the cells and fibers of the sympathetic nervous system."

The results of studies using 6-OHDA are contradictory. Some suggest that the process limits the development of autoimmune disease, while others show exactly the opposite - the disorder becomes even more severe in the absence of these nerve fibers.

Some studies point to the possibility that treatment with 6-OHDA could eliminate immune system cells that are important to the development of the disease. "Based on this finding, we formulated the hypothesis that the contradictions in the studies using 6-OHDA could reflect the fact that some immune system cells with which the nervous system interacts also express tyrosine hydroxylase and are capable of synthesizing and secreting noradrenaline, so they're targets of 6-OHDA," Basso said.

Alternative model

Basso's research group then proposed an alternative experimental strategy to study the influence of the sympathetic nervous system on the development of autoimmune disease, using mice genetically modified to lack certain adrenergic receptors with a key role in the process of controlling release of the neurotransmitter by sympathetic nervous system fibers.

Animals that lack these receptors release much more noradrenaline. "We opted for the opposite strategy: instead of using a model that eliminated the fibers [reducing production of noradrenaline], we used a model in which the sympathetic nervous system was hyperactive [and released more noradrenaline]," Basso said.

"After finding that animals with sympathetic nervous system hyperactivity did indeed develop a milder form of the disease with an impaired effector immune response [which should destroy myelin antigens], we wondered how the higher level of noradrenaline released by the sympathetic nervous system might influence development of the disease in these animals."

To answer this question, the scientists pharmacologically blocked the β2-adrenergic receptor, one of the cell receptors activated by noradrenaline. After this procedure, the animals developed a more severe form of the disease than that in the control group (with a hyperactive sympathetic nervous system), confirming that the sympathetic nervous system influences the development of autoimmune disease.

"In sum, we concluded that the higher level of noradrenaline released by the sympathetic nervous system regulated development of the disease by augmenting activation of the ß2-adrenergic receptor in immune system cells, especially CD4+ T lymphocytes," Basso said. This type of T cell plays a key role in the activation and stimulation of other leukocytes and orchestrated the central nervous system's inflammatory response in the animals with encephalomyelitis.

The new model is being used at UNIFESP to study the mechanism whereby the sympathetic nervous system influences allergic responses in the lungs. There are molecules that activate or block the β2-adrenergic receptor and are used in various situations. "One of them is fenoterol, used to relax the airways in patients with asthma and bronchoconstriction, so they can breathe more easily. How does its use affect the immune response? Our research is now pursuing answers to such questions," Basso said.

Credit: 
Fundação de Amparo à Pesquisa do Estado de São Paulo

Paper: Cultural variables influence consumer demand for private-label brands

image: Consumer attitudes toward private-label store brands might be driven more by social variables than price, says new research co-written by Carlos Torelli, a professor of business administration and James F. Towey Faculty Fellow at Illinois.

Image: 
Photo by Gies College of Business

CHAMPAIGN, Ill. -- New research co-written by a University of Illinois expert in consumer behavior and global marketing explores why certain segments of consumers prefer national or global brands over their less-pricey private-label equivalents, and the managerial and marketing implications of those choices.

Private-label brands - think not-so-generic store brands such as Costco's Kirkland Signature line or Target's "up & up" labeled products - contribute significantly to retailer profits by catering to bargain-driven consumers who also value quality. But consumer attitudes toward store brands might be driven by the consumer's own social status and beliefs about societal hierarchy more generally, with results varying between products of high symbolism (sunglasses or jeans, for example) and products of low symbolism, such as bleach, according to a paper co-written by Carlos Torelli, a professor of business administration and the James F. Towey Faculty Fellow at Illinois.

"Private-label brands have been around for many years, but they've been undergoing an evolution lately," Torelli said. "In the past, they were considered and branded as generic products - laundry detergent or dish soap that didn't have a name on the label other than what it was. Just a container with the product inside. Now we have store brands that mimic the elements, attributes and packaging of their big-name competitors but cost less."

Although store brands are popular with consumers, their market share hasn't increased proportionally and has remained steady at 10-15% in most countries.

"Given the widespread belief that private-label brands offer good value, it's surprising that the market share of such brands has remained stubbornly low," said Torelli, also the executive director of Professional and Executive Education at the Gies College of Business. "The preference for national brands has puzzled marketers, who are continuously striving to understand the factors that drive consumer choice."

Torelli and his co-authors examined the interactive effect of "power distance belief" - the acceptance and expectation of hierarchies and inequalities in society - and consumers' social status on preference for private-label versus national brands. They used a data set spanning 32 countries from 2006-10 on the aggregate market share of private-label brands in 21 common product categories.
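The interaction the authors describe can be pictured with a simple regression sketch like the one below, in which a power-distance measure and a low-status indicator jointly predict private-label share. The variable names and simulated data are placeholders; this is not the paper's model specification or dataset.

```python
# Illustrative interaction regression, assuming made-up variables and simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "power_distance": rng.normal(size=n),       # country-level belief in hierarchy
    "low_status": rng.integers(0, 2, size=n),   # 1 = low-status consumer segment
})
# Simulated outcome: private-label share drops when low-status consumers sit in
# high power-distance societies (a negative interaction), plus noise.
df["pl_share"] = 0.3 - 0.05 * df["power_distance"] * df["low_status"] + rng.normal(0, 0.02, n)

model = smf.ols("pl_share ~ power_distance * low_status", data=df).fit()
print(model.params)  # the 'power_distance:low_status' term captures the interaction
```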

The researchers found that in societies high in power distance belief (countries such as China, Indonesia and Mexico), low-status consumers preferred national brands when purchasing low-status-symbol products such as laundry detergent - even though the national brands were more expensive than their private-label equivalent - in order to fulfill their need for "heightened status." High-status consumers, on the other hand, preferred private-label brands for everyday products.

"You would assume that it would be the other way around - that low-status consumers would buy the cheaper private-label brand because they have less disposable income, but that's not what we found," Torelli said.

The research has implications for how private-label marketers can penetrate the developing markets of countries where people accept and endorse hierarchy, including the potentially lucrative markets of Brazil, China, India and Russia, Torelli said.

"There's an opening for the national brand to target low-status consumers who are not traditionally thought of as part of their consumer demographic," he said. "If national brands manage the size and certain other parameters to make the product slightly more affordable, then there is a market for premium brands in that demographic - as long as they don't cheapen or water down the quality of the product itself to make it more price competitive with the store brand."

The results also suggest that enhancing the prestige of private-label brands may more successfully attract low-status consumers than offering lower quality products at lower prices, Torelli said.

"If you're a private-label brand, the one thing you could possibly do is burnish your image by 'branding up,' much like what Target did, and create a higher-end private label to sell exclusively in your stores," Torelli said. "That's a trend we're seeing - a movement among retailers to do their own branding. Our research would suggest that just because it's a private-label brand doesn't mean it's destined to be low status.

"If you do a good enough job branding it, you spin it off into its own brand, much like how The Limited spun off Victoria's Secret, which was originally a private-label brand. We don't think of Victoria's Secret as a private label now, but that's how it started. In order to do that, the parent company really has to be invested in the brand - invested in the packaging, the advertising, the signage, everything."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Hospital patient portals lack specific and informative instructions for patients

image: Regenstrief Institute research scientists Joy Lee, PhD, and Michael Weiner, M.D., MPH, conducted a study of hospital patient portals, the secure online websites that give patients access to their personal health information. Among their findings: over half of the 200 patient portals they studied lacked specific instructions on how they should be used.

Image: 
Regenstrief Institute

INDIANAPOLIS -- Most hospitals in the United States, but not all, have secure online websites called patient portals that give patients access to their personal health information. However, many hospitals fail to inform patients fully about using the portals, according to new research from Regenstrief Institute and Indiana University School of Medicine.

Patient portals offer the opportunity to expand people's access to both their own health information and communication with their clinicians. A federal law, known as the Health Information Technology for Economic and Clinical Health (HITECH) Act, has provided financial incentives for healthcare providers to adopt these portals, and patients' access to them has increased significantly over the last ten years. Clinicians, however, say they are concerned about patients misusing the portals, especially when it comes to electronic messages. Likewise, patients have expressed a desire for more guidance on using portals and secure messaging.

The goal of the new study, the most recent in Regenstrief Institute's extensive work in the field of doctor-patient communications, was to determine the availability of hospital portals in the U.S. and what instructions were given to patients about using them. Researchers found:

Portal instructions were more focused on operational and legal information, like how to sign on and liability limits, than on instructing the patient on what medical circumstances are best suited for portal use.

More than half of portals with secure messaging did not have available guidance describing the appropriate uses of messages and practices relating to them. Many had generic statements describing secure messaging, such as "send and receive messages from staff," but included no information on what message content would be considered appropriate.

Some guidance used complicated language and vocabulary, which may hinder understanding by a general audience.

"We found that many instructional materials had more of a medicolegal focus, rather than a focus on the patient as a user," said Joy L. Lee, PhD, M.S., Regenstrief research scientist and lead author of the paper. "This research indicates there is room for improvement when it comes to educating patients on the portals, especially related to secure messaging. The guidance that exists includes a lot of 'don'ts', but not very many 'dos'. This makes it difficult for patients to properly utilize and benefit from the service."

Content of patient portal guidance

Dr. Lee and the research team collected information from a random sample of 200 acute-care hospitals from across the U.S. The study team accessed publicly available portal information from hospital websites and called the hospitals to request any additional information that was distributed to patients about portals or messaging. Then they read and analyzed the content.

Some key results of the analysis were:

Only 89 percent of hospitals had patient portals

66 percent of patient portals included secure messaging

58 percent of secure messaging portals did not detail how the patient was supposed to use the messaging.

Many hospitals included disclaimers that the messaging was not for emergencies; however, 23 included that disclaimer inside the "Terms and Conditions" section, which few patients may actually read.

"Hospitals and healthcare systems have invested a lot of money in patient portals, but the investment won't pay off for them or the people they provide care for if patients are confused about how to use the portals or don't understand how to get the most out of the tool," said Dr. Lee.

"Hospitals and health systems are expanding their uses and provision of online resources, including patient portals," said Michael Weiner, M.D., MPH, senior author of the article and associate director of Regenstrief Institute's William M. Tierney Center for Health Services Research. "Health systems need to be active participants in engaging patients, providing them with more and better information, and clarifying expectations. As guidance is developed at a system level, clinicians can also guide conversations with their patients about how to use messaging tools."

Study authors add that while many instructions could be improved, several good examples of complete and informative patient guidance do exist.

Credit: 
Regenstrief Institute

Hydrogels control inflammation to help healing

image: An illustration shows how effective a selection of custom-designed peptide hydrogels are in controlling inflammation. The gels developed at Rice University serve as scaffolds for new tissue and show promise for treating wounds and cancer and for delivering drugs. The hydrogels are designed to dissolve in the body as they are replaced by natural, functional tissue.

Image: 
Illustration by Tania Lopez-Silva/Rice University

HOUSTON - (Dec. 16, 2019) - Hydrogels for healing, synthesized from the molecules up by Rice University bioengineers, are a few steps closer to the clinic.

Rice researchers and collaborators at Texas Heart Institute (THI) have established a baseline set of injectable hydrogels that promise to help heal wounds, deliver drugs and treat cancer. Critically, they've analyzed how the chemically distinct hydrogels provoke the body's inflammatory response -- or not.

Hydrogels developed at Rice are designed to be injectable and create a mimic of cellular scaffolds in a desired location. They serve as placeholders while the body naturally feeds new blood vessels and cells into the scaffold, which degrades over time to leave natural tissue in its place. Hydrogels can also carry chemical or biological prompts that determine the scaffold's structure or affinity to the surrounding tissue.

The study, led by chemist and bioengineer Jeffrey Hartgerink and graduate student Tania Lopez-Silva at Rice and Darren Woodside, vice president for research and director of the flow cytometry and imaging core at THI, demonstrates that it should be possible to tune multidomain peptide hydrogels to produce an inflammatory response appropriate to what they're treating.

The research appears in Biomaterials.

"We've been working on peptide-based hydrogels for a number of years and have produced about 100 different types," Hartgerink said. "In this paper, we wanted to back up a bit and understand some of the fundamental ways in which they modify biological environments."

The researchers wanted to know specifically how synthetic hydrogels influence the environment's inflammatory response. The two-year study offered the first opportunity to test a variety of biocompatible hydrogels for the levels of inflammatory response they trigger.

"Usually, we think of inflammation as bad," Hartgerink said. "That's because inflammation is sometimes associated with pain, and nobody likes pain. But the inflammatory response is also extremely important for wound healing and in clearing infection.

"We don't want zero inflammation; we want appropriate inflammation," he said. "If we want to heal wounds, inflammation is good because it starts the process of rebuilding vasculature. It recruits all kinds of cells that are regenerative to that site."

The labs tested four basic hydrogel types -- two with positive charge and two negative -- to see what kind of inflammation they would trigger. They discovered that positively charged hydrogels triggered a much stronger inflammatory response than negatively charged ones.

"Among the positive materials, depending on the chemistry generating that charge, we can either generate a strong or a moderate inflammatory response," Hartgerink said. "If you're going for wound-healing, you really want a moderate response, and we saw that in one of the four materials.

"But if you want to go for a cancer treatment, the higher inflammatory response might be more effective," he said. "For something like drug delivery, where inflammation is not helpful, one of the negatively charged materials might be better.

"Basically, we're laying the groundwork to understand how to develop materials around the inflammatory responses these materials provoke. That will give us our best chance of success."

The THI team helped analyze the cellular response to the hydrogels through multidimensional flow cytometry.

"The results of this work lay the groundwork for specifically tailoring delivery of a therapeutic by a delivery vehicle that is functionally relevant and predictable," Woodside said. "Aside from delivering drugs, these hydrogels are also compatible with a variety of cell types.

"One of the problems with stem cell therapies at present is that adoptively transferred cells don't necessarily stay in high numbers at the site of injection," he said. "Mixing these relatively inert, negatively charged hydrogels with stem cells before injection may overcome this limitation."

Hartgerink said the work is foundational, rather than geared toward a specific application, but is important to the long-term goal of bringing synthetic hydrogels to the clinic. "We have been speculating about a lot of the things we think are good and true about this material, and we now have more of a sound mechanistic understanding of why they are, in fact, true," Hartgerink said.

Credit: 
Rice University

Tiny insects become 'visible' to bats when they swarm

Bats use echolocation to hunt insects, many of which fly in swarms. In this process, bats emit a sound signal that bounces off the target object, revealing its location. Smaller insects like mosquitos are individually hard to detect through echolocation, but a new Tel Aviv University study reveals that they become perceptible when they gather in large swarms.

The findings could provide new insights into the evolution of bat echolocation and explain why tiny insects are found in the diets of bats that seem to use sound frequencies that are too high to effectively detect them.

The new research was conducted by Dr. Arjan Boonman and Prof. Yossi Yovel at TAU's Department of Zoology and colleagues at Canada's Western University. It was published in PLOS Computational Biology on December 12.

Few studies have addressed what swarms of insects -- as opposed to single insects -- "look" like to bats. To find out, Dr. Boonman and colleagues combined three-dimensional computer simulations of insect swarms with real-world measurements of bat echolocation signals to examine how bats sense swarms that vary in size and density.

They found that small insects that are undetectable on their own, such as mosquitos, suddenly become "visible" to bats when they gather in large swarms. They also found that bats' use of signals spanning multiple frequencies is well suited to detecting insect swarms. These signals appear to be ideal for detecting an object when more than one target falls inside the echolocation beam at once.
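A toy calculation helps show why a swarm can cross a detection threshold that a single insect cannot: if each insect returns a weak echo with an essentially random phase, the magnitude of the summed echo still grows with the number of insects in the beam. The echo amplitude, threshold and phase assumptions below are arbitrary illustration values, not the study's acoustic simulation.

```python
# Toy phasor-sum sketch: combined echo magnitude from n weak scatterers with random
# phases, compared against an arbitrary detection threshold.
import numpy as np

rng = np.random.default_rng(1)

def swarm_echo_amplitude(n_insects, single_amplitude=0.05, trials=2000):
    """Mean magnitude of the summed echo from n insects with random echo phases."""
    phases = rng.uniform(0, 2 * np.pi, size=(trials, n_insects))
    total = (single_amplitude * np.exp(1j * phases)).sum(axis=1)
    return np.abs(total).mean()

detection_threshold = 0.2
for n in [1, 10, 100, 1000]:
    amp = swarm_echo_amplitude(n)
    status = "detectable" if amp > detection_threshold else "below threshold"
    print(f"{n:5d} insects: mean echo {amp:.2f} -> {status}")
```

With these toy numbers, a single insect and a loose group of ten stay below the threshold, while swarms of hundreds or thousands rise well above it, which is the qualitative effect described above.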

"Using simulations, we investigated something that could never have been measured in reality," Dr. Boonman says. "Modeling enabled us to have full control over any aspect of an insect swarm, even the full elimination of the shape of each insect within the swarm."

The insect model the researchers used has a tiny mesh (skeleton) and minuscule legs and wings. "We are still adding new features, such as the bat's acoustic beam or ears, which were not in the original model," says Prof. Yovel. "We also developed a faster version of the algorithm. All of this will open a new world for us in which we can get echoes even from entire landscapes, so we can learn what a bat or sonar-robot would 'see' much more quickly."

The study could also affect technology being developed to improve defense systems. "The algorithms developed for this study could potentially be applied to radar echoes of drone swarms in order to lower the probability of detection by enemy radar," Dr. Boonman explains. "Since drones are playing an ever more prominent role in warfare, our biological study could spawn new ideas for the defense industry."

Credit: 
American Friends of Tel Aviv University

Neutrons optimize high efficiency catalyst for greener approach to biofuel synthesis

image: Illustration of the optimized zeolite catalyst (NbAlS-1), which enables a highly efficient chemical reaction to create butene, a renewable source of energy, without expending high amounts of energy for the conversion.

Image: 
ORNL/Jill Hemman

OAK RIDGE, Tenn., December 16, 2019--Researchers led by the University of Manchester have designed a catalyst that converts biomass into fuel sources with remarkably high efficiency and offers new possibilities for manufacturing advanced renewable materials.

Neutron scattering experiments at the Department of Energy's Oak Ridge National Laboratory played a key role in determining the chemical and behavioral dynamics of a zeolite catalyst--zeolite is a common porous material used in commercial catalysis--to provide information for maximizing its performance.

The optimized catalyst, called NbAlS-1, converts biomass-derived raw materials into light olefins--a class of petrochemicals such as ethene, propene, and butene, used to make plastics and liquid fuels. The new catalyst has an impressive yield of more than 99% but requires significantly less energy compared to its predecessors. The team's research is published in the journal Nature Materials.

"Industry relies heavily on the use of light olefins from crude oil, but their production can have negative impacts on the environment," said lead author Longfei Lin at the University of Manchester. "Previous catalysts that produced butene from purified oxygenated compounds required lots of energy, or extremely high temperatures. This new catalyst directly converts raw oxygenated compounds using much milder conditions and with significantly less energy and is more environmentally friendly."

Biomass is organic matter that can be converted and used for fuel and feedstock. It is commonly derived from leftover agricultural waste such as wood, grass, and straw that gets broken down and fed into a catalyst that converts it to butene--an energy-rich gas used by the chemical and petroleum industries to make plastics, polymers and liquid fuels that are otherwise produced from oil.

Typically, a chemical reaction requires a tremendous amount of energy to break the strong bonds formed between elements such as carbon, oxygen, and hydrogen. Some bonds might require heating to 1,000°C (more than 1,800°F) or hotter before they break.

For a greener design, the team doped the catalyst by replacing the zeolite's silicon atoms with niobium and aluminum. The substitution creates a chemically unbalanced state that promotes bond separation and radically reduces the need for high-temperature heat treatments.

"The chemistry that takes place on the surface of a catalyst can be extremely complicated. If you're not careful in controlling things like pressure, temperature, and concentration, you'll end up making very little butene," said ORNL researcher Yongqiang Cheng. "To obtain a high yield, you have to optimize the process, and to optimize the process you have to understand how the process works."

Neutrons are well suited to study chemical reactions of this type due to their deeply penetrating properties and their acute sensitivity to light elements such as hydrogen. The VISION spectrometer at ORNL's Spallation Neutron Source enabled the researchers to determine precisely which chemical bonds were present and how they were behaving based on the bonds' vibrational signatures. That information allowed them to reconstruct the chemical sequence needed to optimize the catalyst's performance.

"There's a lot of trial and error associated with designing such a high-performance catalyst such as the one we've developed," said corresponding author Sihai Yang at University of Manchester. "The more we understand how catalysts work, the more we can guide the design process of next-generation materials."

Synchrotron X-ray diffraction measurements at the UK's Diamond Light Source were used to determine the catalyst's atomic structure and complementary neutron scattering measurements were made at the Rutherford Appleton Laboratory's ISIS Neutron and Muon Source.

Credit: 
DOE/Oak Ridge National Laboratory

Research brief: New methods promise to speed up development of new plant varieties

image: Researchers triggered seedlings to develop new shoots that contain edited genes.

Image: 
Kit Leffler, University of Minnesota.

A University of Minnesota research team recently developed new methods that will make it significantly faster to produce gene-edited plants. They hope to alleviate a long-standing bottleneck in gene editing and, in the process, make it easier and faster to develop and test new crop varieties with two new approaches described in a paper recently published in Nature Biotechnology.

Despite dramatic advances in scientists' ability to edit plant genomes using gene-editing tools such as CRISPR and TALENs, researchers were stuck using an antiquated approach -- tissue culture. It has been in use for decades and is costly, labor intensive and requires precise work in a sterile environment. Researchers use tissue culture to deliver genes and gene editing reagents, or chemicals that drive the reaction, to plants.

"A handful of years ago the National Academy of Sciences convened a meeting of plant scientists, calling on the community to solve the tissue culture bottleneck and help realize the potential of gene editing in plants," said Dan Voytas, professor in Genetics, Cell Biology and Development in the College of Biological Sciences and senior author on the paper. "We have advanced genome editing technology but we needed a novel way to efficiently deliver gene editing reagents to plants. The methods in this paper present a whole new way of doing business."

The new methods will:

drastically reduce the time needed to edit plant genes from as long as nine months to as short as a few weeks;

work in more plant species than was possible using tissue culture, which is limited to specific species and varieties;

allow researchers to produce genetically edited plants without the need for a sterile lab, making it a viable approach for small labs and companies to utilize.

To eliminate the arduous work that goes into gene-editing through tissue culture, co-first authors Ryan Nasti and Michael Maher developed new methods that leverage important plant growth regulators responsible for plant development.

Using growth regulators and gene editing reagents, researchers trigger seedlings to develop new shoots that contain edited genes. Researchers collect seeds from these gene-edited shoots and continue experiments. No cell cultures needed.

The approaches differ in how the growth regulators are applied and at what scale. The approach developed by Nasti allows small-scale rapid testing -- with results in weeks instead of months or years -- of different combinations of growth regulators. "This approach allows for rapid testing so that researchers can optimize combinations of growth regulators and increase their efficacy," he said.

Maher used the same basic principles to make the process more accessible by eliminating the need for a sterile lab environment. "With this method, you don't need sterile technique. You could do this in your garage," he said. He added that this technique opens up the possibility that smaller research groups with fewer resources can gene-edit plants and test how well they perform.

"Nasti and Maher have democratized plant gene editing. It will no longer take months in a sterile lab with dozens of people in tissue culture hoods," Voytas said.

The researchers used a tobacco species as their model, but have already demonstrated the method works in grape, tomato and potato plants. They believe the findings will likely transfer across many species. Plant geneticists and agricultural biotechnologists aim to ensure stable food sources for a growing global population in a warming climate, where pest outbreaks and extreme weather events are commonplace. These new methods will allow them to work more efficiently.

Credit: 
University of Minnesota

Simple test could prevent fluoride-related disease

image: The test tube on the left shows a real positive result from water sampled in Costa Rica. The middle tube is a negative control. The tube on the right is a positive control.

Image: 
Julius B. Lucks/Northwestern University

With one drop of water, test detects fluoride levels that exceed EPA standards

Test costs pennies to make, is easy to read and requires no expertise to use

Method works by using an RNA riboswitch, which switches on production of a yellow pigment when fluoride is present

Researchers tested device in Costa Rica, where fluorosis has been reported

EVANSTON, Ill. -- Northwestern University synthetic biologists developed a simple, inexpensive new test that can detect dangerous levels of fluoride in drinking water.

Costing just pennies to make, the system only needs a drip and a flick: Drip a tiny water droplet into a prepared test tube, flick the tube once to mix it and wait. If the water turns yellow, then an excessive amount of fluoride -- exceeding the EPA's most stringent regulatory standards -- is present.
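
To make the readout logic explicit, here is a minimal sketch in Python of how a single tube result might be interpreted. The test itself is purely visual; the 2.0 mg/L figure is the EPA's secondary standard for fluoride, and treating it as the tube's exact color-change threshold is an assumption made for illustration, not a detail reported here.

    # Toy interpretation of the "drip and flick" readout described above.
    # Assumes the color change corresponds to the EPA secondary standard of
    # 2.0 mg/L fluoride; the tube's real detection threshold is not stated here.
    EPA_SECONDARY_STANDARD_MG_PER_L = 2.0

    def interpret_tube(turned_yellow: bool) -> str:
        """Translate the visual readout into a plain-language result."""
        if turned_yellow:
            return f"Likely above {EPA_SECONDARY_STANDARD_MG_PER_L} mg/L fluoride -- exceeds the standard."
        return f"Likely below {EPA_SECONDARY_STANDARD_MG_PER_L} mg/L fluoride -- no excess detected."

    print(interpret_tube(turned_yellow=True))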

This method is starkly different from current tests, which cost hundreds of dollars and often require scientific expertise to use.

The researchers tested the system both in the laboratory at Northwestern and in the field in Costa Rica, where fluoride is naturally abundant near the Irazu volcano. When consumed in high amounts over long periods of time, fluoride can cause skeletal fluorosis, a painful condition that hardens bones and joints.

Americans tend to think of the health benefits of small doses of fluoride that strengthen teeth. But elsewhere in the world, specifically across parts of Africa, Asia and Central America, fluoride naturally occurs at levels that are dangerous to consume.

"In the United States, we hear about fluoride all the time because it's in toothpaste and the municipal water supply," said Northwestern's Julius Lucks, who led the project. "It makes calcium fluoride, which is very hard, so it strengthens our tooth enamel. But above a certain level, fluoride also hardens joints. This mostly isn't an issue in the U.S. But it can be a debilitating problem in other countries if not identified and addressed."

The research was published online last week (Dec. 13) in the journal ACS Synthetic Biology.

Lucks is an associate professor of chemical and biological engineering in the McCormick School of Engineering and a member of Northwestern's Center for Synthetic Biology. The work was performed in collaboration with Michael Jewett, professor of chemical and biological engineering in McCormick and director of the Center for Synthetic Biology. Graduate students Walter Thavarajah, Adam Silverman and Matthew Verosloff spearheaded the research.

Field test success

Fluoride, a naturally occurring form of the element fluorine, can seep out of bedrock into groundwater. Also found in volcanic ash, fluoride is particularly abundant in regions surrounding volcanoes.

Home to three volcanic range systems, Costa Rica seemed like a natural place to test the device in the field. Matthew Verosloff, a Ph.D. candidate in Lucks' laboratory, traveled to Costa Rica and collected water samples from mud puddles, ponds and ditches.

"Every test on these field samples worked," Lucks said. "It's exciting that it works in the lab, but it's much more important to know that it works in the field. We want it to be an easy, practical solution for people who have the greatest need. Our goal is to empower individuals to monitor the presence of fluoride in their own water."

How it works

Although the device is simple to use, the prepared test tube houses a sophisticated synthetic biology reaction. Lucks has spent years working to understand RNA folding mechanisms. In his new test, he puts this folding mechanism to work.

"RNA folds into a little pocket and waits for a fluoride ion," he explained. "The ion can fit perfectly into that pocket. If the ion shows up, then RNA expresses a gene that turns the water yellow. If the ion doesn't show up, then RNA changes shape and stops the process. It's literally a switch."

According to Lucks, organisms already perform this function in nature. "Fluoride is toxic to bacteria," he said. "They use RNA to sense fluoride in the cell, then they make a protein to pump it out and detoxify."

Lucks' system works in the same way. But instead of producing a protein pump, his test produces a protein enzyme that makes a yellow pigment, so people can see the results with a simple glance.
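
For readers who want the switch behavior spelled out, below is a toy Python model of the logic Lucks describes: the pigment-producing enzyme is expressed only when a fluoride ion occupies the RNA pocket. It is a plain conditional standing in for the biochemistry; the names are illustrative and nothing here models the actual RNA folding.

    # Toy model of the fluoride riboswitch logic described above.
    # Fluoride bound in the RNA pocket -> reporter enzyme expressed -> yellow pigment.
    # No fluoride -> the RNA refolds and expression stops -> no color change.
    def riboswitch_readout(fluoride_bound: bool) -> str:
        if fluoride_bound:
            return "yellow"      # enzyme made, pigment produced
        return "colorless"       # expression shut off

    assert riboswitch_readout(True) == "yellow"
    assert riboswitch_readout(False) == "colorless"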

Lucks' team freeze-dried the RNA reaction, which looks like a tiny cotton ball, and put it into a test tube. In this form, the reaction is safe and shelf-stable. A small pipette accompanies the test tube. When placed in water, the pipette absorbs exactly 20 microliters -- just the small drop that's needed to rehydrate the reaction. From there, it takes two hours to get a result, which Lucks intends to accelerate in future iterations.

"We're currently limited to testing for fluoride," said Thavarajah, the paper's first author. "But we're trying to engineer other RNAs to detect all sorts of targets."

Credit: 
Northwestern University