Culture

New improved dog reference genome will aid a new generation of investigation

Researchers at Uppsala University and the Swedish University of Agricultural Sciences have used new methods for DNA sequencing and annotation to build a new, and more complete, dog reference genome. This tool will serve as the foundation for a new era of research, helping scientists to better understand the link between DNA and disease, in dogs and in their human friends. The research is presented in the journal Communications Biology.

The dog has been aiding our understanding of the human genome since both genomes were released in the early 2000s. At that time, a comparison of both genomes, and two others, revealed that the human genome contained circa 20,000 genes, down from the roughly 100,000 predicted earlier. In the new study, researchers led by Dr Jennifer Meadows and Professor Kerstin Lindblad-Toh have greatly improved the dog genome, identifying missing genes and highlighting regions of the genome that regulate when these genes are on or off.

A key factor was the move from short- to long-read technology, reducing the number of genome gaps from over 23,000 to a mere 585.

"We can think of the genome as a book," says Meadows. "In the previous assembly, many words and sometimes whole sentences were in the wrong order or even missing. Long-read technology allowed us to read whole paragraphs at once, greatly improving our comprehension of the genome."

"Additional tools which measure the DNA's 3D structure allowed us to place the paragraphs in order," adds Dr Chao Wang, first author of the study.

A better reference genome also helps disease research. Domestic dogs have lived alongside humans for tens of thousands of years and suffer from similar diseases to humans, including neurological and immunological diseases as well as cancer. Studying dog disease genetics can provide precise clues to the causes of corresponding human diseases.

"The improved canine genome assembly will be of great importance and use in canine comparative medicine, where we study diseases in dogs, for example osteosarcoma, systemic lupus erythematosus (SLE) and amyotrophic lateral sclerosis (ALS), with the goal of helping both canine and human health," says Lindblad-Toh.

Credit: 
Uppsala University

Lipid epoxides target pain, inflammatory pathways in neurons

image: Comparative biosciences professor Aditi Das led a study of modified lipids that target the body's endocannabinoid system, which is involved in regulating pain and inflammation.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- When modified using a process known as epoxidation, two naturally occurring lipids are converted into potent agents that target multiple cannabinoid receptors in neurons, interrupting pathways that promote pain and inflammation, researchers report. These modified compounds, called epo-NA5HT and epo-NADA, have much more powerful effects than the molecules from which they are derived, which also regulate pain and inflammation.

Reported in the journal Nature Communications, the study opens a new avenue of research in the effort to find alternatives to potentially addictive opioid pain killers, researchers say.

The work is part of a long-term effort to understand the potentially therapeutic byproducts of lipid metabolism, a largely neglected area of research, said University of Illinois Urbana-Champaign comparative biosciences professor Aditi Das, who led the study. While many people appreciate the role of dietary lipids such as omega-3 and omega-6 fatty acids in promoting health, the body converts these fat-based nutrients into other forms, some of which also play a role in the healthy function of cells, tissues and organ systems.

"Our bodies use a lot of genes for lipid metabolism, and people don't know what these lipids do," said Das, also an affiliate of the Beckman Institute for Advanced Science and Technology and of the Cancer Center at Illinois. "When we consume things like polyunsaturated fatty acids, within a few hours they are transformed into lipid metabolites in the body."

Scientists tend to think of these molecules as metabolic byproducts, "but the body is using them for signaling processes," Das said. "I want to know the identity of those metabolites and figure out what they are doing."

She and her colleagues focused on the endocannabinoid system, as cannabinoid receptors on cells throughout the body play a role in regulating pain. When activated, cannabinoid receptors 1 and 2 tend to reduce pain and inflammation, while a third receptor, TRPV1, promotes the sensation of pain and contributes to inflammation. These receptors work together to modulate the body's responses to injury or disease.

"Understanding pain regulation in the body is important because we know we have an opioid crisis," Das said. "We're looking for lipid-based alternatives to opioids that can interact with the cannabinoid receptors and in the future be used to design therapeutics to reduce pain."

Previous research identified two lipid molecules, known as NA5HT and NADA, that naturally suppress pain and inflammation. Das and her colleagues discovered that brain cells possess the molecular machinery to epoxidize NA5HT and NADA, converting them to epo-NA5HT and epo-NADA. Further experiments revealed that these two epoxidated lipids are many times more potent than the precursor molecules in their interactions with the cannabinoid receptors.

"For example, we found that epo-NA5HT is a 30-fold stronger antagonist of TRPVI than NA5HT and displays a significantly stronger inhibition of TRPV1-mediated responses in neurons," Das said. It inhibits pathways associated with pain and inflammation, and promotes anti-inflammatory pathways.

The team was unable to determine whether neurons naturally epoxidate NA5HT and NADA in the brain, but the findings hold promise for the future development of lipid compounds that can combat pain and inflammation without the dangerous side effects associated with opioids, Das said.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Discovery of a new law of phase separation

image: Researchers at The University of Tokyo discover a new law about how the complex network of phase-separated structures grows with time, which may lead to more efficient batteries and industrial catalysts

Image: 
Institute of Industrial Science, the University of Tokyo

Tokyo, Japan - Researchers from the Institute of Industrial Science at The University of Tokyo used computer simulations to investigate the mechanism of phase separation into two phases with very different particle mobilities. They found that the slow dynamics of complex connected networks control the rate of demixing, a finding that can assist in the design of new functional porous materials, such as those used in lithium-ion batteries.

According to the old adage, oil and water don't mix. If you try to do it anyway, you will see the fascinating process of phase separation, in which the two immiscible liquids spontaneously "demix." In this case, the minority phase always forms droplets. Contrary to this, the researchers found that if one phase has much slower dynamics than the other phase, even the minority phase forms complex networks instead of droplets. For example, in phase separation of colloidal suspensions (or protein solutions), the colloid-rich (or protein-rich) phase with slow dynamics forms a space-spanning network structure. The network structure thickens and coarsens with time while having the remarkable property of looking similar over a range of length scales, so the individual parts resemble the whole.

In the case of spontaneous demixing, the self-similar property causes the typical size of the domains to increase as a function of the elapsed time while obeying a power law. Classical theories predict that the growth exponent of the domains should be 1/3 and 1 for droplet and bicontinuous structures, respectively. However, for network-forming phase separation, it has not been explored how the structure grows or whether such a law exists.

Now, using large-scale computer simulations, researchers at The University of Tokyo studied how the typical size of phase domains grows over time when a system is deeply quenched. "In such a situation, the particle mobility can be significantly different between the two phases, and the classical theory does not necessarily apply," first author Michio Tateno says. The team studied the phase separation of a fluid into a gas and liquid and the demixing of a colloidal suspension consisting of insoluble particles and a liquid, using molecular dynamics simulations and hydrodynamic calculations, respectively. They found that the minority phase of slow dynamics universally forms a network structure that grows with a growth exponent of 1/2, and provided a theoretical explanation for the mechanism.
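
Expressed compactly, these coarsening laws amount to power-law growth of the characteristic domain size. The exponents below are the ones stated in the text; the symbol \ell(t) for the characteristic domain size is introduced here purely for illustration:

\[
\ell(t) \propto t^{\alpha}, \qquad
\alpha =
\begin{cases}
1/3 & \text{droplet coarsening (classical theory)} \\
1 & \text{bicontinuous coarsening (classical theory)} \\
1/2 & \text{network-forming phase separation with a slow minority phase (this study)}
\end{cases}
\]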

"Significant differences in the particle mobility between the two phases plays a critical role in controlling the speed of the demixing process," senior author Hajime Tanaka says. Because many devices, like rechargeable batteries and catalysts, rely on the creation of intricate porous networks, this research may lead to advances in these areas. In addition, it may shed light on certain cellular functions that have been hypothesized to be controlled by internal biological phase separations.

Credit: 
Institute of Industrial Science, The University of Tokyo

The chemistry lab inside cells

image: (A) X-ray crystal structure of QhpG and schematic of crosslinked QhpC. The substrate QhpC is bound to the pocket formed by the catalytic domain, which includes the FAD cofactor and the small domain. (B) QhpG-catalyzed dihydroxylation reaction.

Image: 
Osaka University

Osaka, Japan - Investigators from the Institute of Scientific and Industrial Research at Osaka University, together with Hiroshima Institute of Technology, have announced the discovery of a new protein that allows an organism to conduct an initial and essential step in converting amino acid residues on a crosslinked polypeptide into an enzyme cofactor. This research may lead to a better understanding of the biochemistry underlying catalysis in cells.

Every living cell is constantly pulsing with an array of biochemical reactions. The rates of these reactions are controlled by special proteins called enzymes, which catalyze specific processes that would otherwise take much longer. A number of enzymes require specialized molecules called "cofactors," which can help shuttle electrons back and forth during oxidation-reduction reactions. But these cofactors themselves must be produced by the organisms, and often require the assistance of previously existing proteins.

Now, a team of scientists at Osaka University has identified a novel protein called QhpG that is essential for the biogenesis of the enzyme cofactor cysteine tryptophylquinone (CTQ). By analyzing the mass of the reaction products and determining the crystal structure of QhpG, they were able to deduce its catalytic function: adding two hydroxyl groups to a specific tryptophan residue within QhpC, the active-site subunit of quinohemoprotein amine dehydrogenase, a bacterial enzyme that catalyzes the oxidation of various primary amines. The resulting dihydroxylated tryptophan and an adjacent cysteine residue are finally converted to the cofactor CTQ.

However, the action of QhpG is somewhat unusual compared with other protein-modifying enzymes in that it reacts with the tryptophan residue on QhpC only after QhpC has been triply crosslinked by another enzyme, QhpD, in a process called post-translational modification. Tryptophan, which naturally contains rings with conjugated bonds, needs the fewest changes to become a quinone cofactor. "Although several enzymes are known to contain a quinone cofactor derived from a tryptophan residue, the mechanism involved in post-translational modification, as well as the structures of the enzymes involved in their biogenesis, remains poorly understood," lead author Toshinori Oozeki says.

The proteins were obtained by introducing plasmids carrying the corresponding genes into E. coli bacteria and were then crystallized. X-ray diffraction data collected from the crystals were used to determine the QhpG protein structure. The team then used computer software to simulate the docking of the target molecule, the triply crosslinked polypeptide QhpC, onto the crystal structure they had determined for QhpG. The two post-translational modifications of QhpC are carried out successively by the modification enzyme complex QhpD-QhpG. "Our findings can be applied to the development of novel bioactive peptides using enzymes that modify amino acids," senior author Toshihide Okajima says. Some of these applications include creating new enzymes for the bioremediation of toxic chemicals.

Credit: 
Osaka University

Virtual post-sepsis recovery program may also help recovering COVID-19 patients

image: Post-sepsis model may help COVID-19 patients after discharge.

Image: 
ATS

Feb. 10, 2021 - A new paper published online in the Annals of the American Thoracic Society describes a "virtual" recovery program for sepsis patients that may also help post-COVID-19 patients and survivors of other serious illnesses.

In "Translating Post-Sepsis Care to Post-COVID Care: The Case for a Virtual Recovery Program," Stephanie Parks Taylor, MD, Department of Internal Medicine, Atrium Health, Charlotte, North Carolina, and co-authors describe a model of virtual care they developed and successfully implemented for patients who have left the hospital after being treated for sepsis. They also address ways that this model of care may help severe COVID-19 patients who have survived their illness but need continuing care.

"Our initial health services research indicated that recommended post-sepsis care practices were inconsistently applied for sepsis survivors, but the application of these practices was associated with fewer rehospitalizations and deaths at 90 days post-discharge," said Dr. Taylor. "We decided to engage a multidisciplinary stakeholder group to develop a mechanism to deliver best-practice post-sepsis care. Given the challenges many patients experience returning for face-to-face visits after critical illness, the virtual transition program emerged as an ideal approach that combined quality, patient-centeredness and scalability."

This multicomponent sepsis transition and recovery program, known as "STAR," is conducted virtually by a specialized nurse navigator, who provides best-practice care for high-risk sepsis survivors post-hospital discharge. The nurse navigator helps deliver care through low-technology telehealth methods, including electronic health records (EHRs), secure messaging services and telephone. The sepsis nurse navigators monitor and support patients from a centralized, geographically distant location.

"While there is still a lot to learn about COVID-19 survivorship, based on what we currently know we can assume COVID survivors experience many of the same issues as recovering sepsis patients," said Dr. Taylor. "The STAR program leverages a virtual platform that addresses the challenges of care delivery in a pandemic setting."

According to Dr. Taylor, these challenges include strain on the health care system due to a rapid surge in survivors and reduced access to traditional primary care follow-up due to physical distancing.

Dr. Taylor and colleagues cite a number of factors that they have found important to successfully implementing their program:

Adequate human, financial and technological resources. Upfront funding and adequate training for the navigators are critical for success. Navigators should be trained in a number of areas, such as sepsis education, communication skills and cultural awareness.

A method to identify high-risk patients. The team developed and deployed a data-driven, EHR-embedded algorithm and risk models to identify patients at high risk of post-discharge death or rehospitalization.

Robust and effective operational processes. Among the additional elements that should be part of the program's operation are: optimization of medications, including frequent reassessment and adjustment of dosages; screening and early intervention for functional, cognitive and mental health problems, which are common among sepsis survivors and appear to be even worse for COVID-19 survivors; symptom monitoring to catch new infections and to look for signs that other pre-existing conditions are worsening (such as weight gain for heart failure patients) or indications of adverse drug reactions (such as bleeding for patients receiving anticoagulants); and establishing goals of care, in partnership with patients, and communicating this information to the patient's primary care physician. STAR navigators are supported by a hospital-based physician (hospitalist) who reviews cases and discusses issues that arise.

Dr. Taylor notes that Atrium Health is an integrated health system, and that home health or community paramedicine providers can be activated for evaluation or treatment in patients' homes. "Whether the implementing site is an integrated health system or not, programs will need to establish robust communication pathways for efficient exchange of information between navigators and relevant partners," she said. "I think a potential misstep could be implementing a transition program that identifies problems among sepsis survivors but lacks an efficient process for responding effectively to those problems."

Additionally, much of the home care and some of the post-acute skilled nursing facilities are integrated within the health system, but many are not. Dr. Taylor and co-author Marc A. Kowalkowski, PhD, are now studying how to overcome barriers to providing extra support to post-acute care settings, as these patients are particularly vulnerable after hospital discharge.

The authors add that there are a number of advantages of a virtual navigator platform for sepsis transition and recovery:

Improved access and adherence to follow-up. This type of program can help ensure sepsis survivors get follow-up care, which is frequently not provided or easily accessible. It may also reduce rural, socioeconomic and disability disparities.

Frequent reassessment and adjustment of patients' care plan. This is made possible by virtual visits with nurse navigators, who can provide long-term follow up at short intervals. The STAR navigator program continues for 90 days after hospital discharge, and patients with persistent challenges have the most frequent navigator contact.

Consistent "check-ins" from a familiar health professional may have psychological benefits. The nurse navigator can help alleviate the stress disorders that are common among sepsis survivors.

The program is cost-effective and scalable. This model of care delivery enables one STAR navigator to accept 20-30 new patients a month and provide 90 days of support.
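
The scalability figure quoted above implies a modest steady-state caseload per navigator. The back-of-the-envelope arithmetic below simply multiplies the stated intake rate by the 90-day (roughly three-month) support window; it is an illustration, not a figure reported by the authors.

# Back-of-the-envelope caseload estimate (illustrative only; derived from the
# intake rate and support window quoted above, not reported by the authors).
new_patients_per_month = (20, 30)  # stated intake range per STAR navigator
support_months = 3                 # 90 days of post-discharge support

steady_state_caseload = tuple(n * support_months for n in new_patients_per_month)
print(steady_state_caseload)  # (60, 90) patients per navigator at any one time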

Dr. Taylor concludes, "Since severe COVID can be considered a type of sepsis, COVID patients are entering the STAR program if they meet its eligibility criteria. Currently, post-COVID patients are receiving the same elements of care as sepsis survivors, with special attention to respiratory symptoms and associated complications. Part of our research involves evaluating the extent to which post-COVID recovery differs from non-COVID sepsis recovery, so we hope to have data to determine whether there will be factors that are unique to COVID survivorship."

Credit: 
American Thoracic Society

Industrial compound gets eco-friendly reaction

image: Sodium or magnesium aryloxides can catalyse the transesterification of methyl (meth)acrylate at room temperature, with high chemoselectivity, producing a high yield of (meth)acrylate ester, and without the use of toxic metals or ligands.

Image: 
Kazuaki Ishihara

Nagoya University scientists have developed a chemical reaction that produces high yields of a compound used in a wide variety of industries, without needing high temperatures or toxic catalysts. The approach was described in the journal ACS Catalysis and offers a practical and sustainable solution for industrial (meth)acrylate (= acrylate or methacrylate) ester synthesis.

(Meth)acrylate esters are used in industrial coatings and masonry, and to make plastics, dyes and adhesives. But the chemical process for making them from methyl (meth)acrylates involves high temperatures, long reaction times and toxic compounds. It can also result in unwanted side reactions.

Scientists, including Nagoya University professor Kazuaki Ishihara and colleagues, have been working on improving this process to make it more eco-friendly. Specifically, they worked on improving the catalyst involved in the chemical reaction that turns methyl (meth)acrylates into (meth)acrylate esters, called transesterification.

"Millions of tons of (meth)acrylate esters are produced annually and are among the most important manufactured chemicals around," says Ishihara. "Their transesterification, using alcohol and a catalyst, fine-tunes their properties, producing a wide range of (meth)acrylate esters."

Ishihara and his colleagues found that sterically bulky sodium and magnesium aryloxides worked very well as non-toxic alternatives. They catalysed the transesterification of methyl (meth)acrylates at the relatively mild temperature of 25°C, producing high yields of a broad range of (meth)acrylate esters depending on the type of alcohol used in the reaction.

The team also conducted computational calculations to uncover the details of what happened during the chemical reaction, showing that it had high chemoselectivity; in other words, the reaction happened the way the scientists wanted it to without having undesirable side reactions.

"Our transesterification process is a practical and sustainable candidate for industrial (meth)acrylate ester synthesis, providing excellent chemoselectivity, high yields, mild reaction conditions and a lack of any toxic metal salts," says Ishihara.

The team next aims to collaborate with colleagues in industry to use their approach in (meth)acrylate ester production. They also aim to continue searching for efficient catalysts for the transesterification of methyl (meth)acrylates and to develop recyclable catalysts.

Credit: 
Nagoya University

Sleep keeps teens on track for good mental health

image: Teenagers need at least eight hours of sleep each night.

Image: 
Kinga Cichewicz

As families settle back into a new school year, sleep experts at the University of South Australia are reminding parents about the importance of teenagers getting enough sleep, cautioning that insufficient sleep can negatively affect teenagers' mental health.

In a new research paper, UniSA sleep experts Dr Alex Agostini and Dr Stephanie Centofanti confirm that sleep is intrinsically linked to mental health, but is commonly overlooked by health practitioners as a contributing factor.

Dr Agostini says it's imperative that parents and medical practitioners are aware of the bi-directional relationship between sleep and mental health, particularly across the teenage years.

"Getting enough sleep is important for all of us ¬¬- it helps our physical and mental health, boosts our immunity, and ensures we can function well on a daily basis," Dr Agostini¬ says.

"But for teenagers, sleep is especially critical because they're at an age where they're going through a whole range of physical, social, and developmental changes, all of which depend on enough sleep.

"Research shows that teenagers need at least eight hours of sleep each night. Without this, they're less able to deal with stressors, such as bullying or social pressures, and run the risk of developing behavioural problems, as well as anxiety and depression.

"If sleep drops to less than six hours a night, research shows that teens are twice as likely to engage in risky behaviours such as dangerous driving, marijuana, alcohol or tobacco use, risky sexual behaviour, and other aggressive or harmful activities."

In Australia, almost one in seven children and adolescents (aged 4-17 years) will experience a mental health disorder. The World Health Organization says that while half of all mental health conditions start by age 14, most cases go undetected and untreated.

Co-researcher, Dr Centofanti says while many factors contribute to later bedtimes for teenagers, technology is one of the greatest offenders.

"Teens spend a lot of time on devices, whether it's texting friends, playing games, or watching videos, using technology late into the night is one of the most common disruptors of good sleep. Overuse of technology can also contribute to mental health issues likely to increase anxiety," Dr Centofanti says.

"Not only can technology use make us feel anxious and awake, but the blue light emitted from technology inhibits the production of the sleep hormone melatonin to delay the natural onset of sleep. This is problematic because teens already have a biological tendency to want to stay up late and sleep in.

"To make a real difference to teenage mental health, both parents and medical practitioners must understand how sleep can affect mental health in teenagers."

Credit: 
University of South Australia

Depressed moms who breastfeed boost babies' mood, neuroprotection and mutual touch

image: Researchers examined the infant's electroencephalogram activity (EEG) during development. Affectionate touch was coded during the mother-infant feeding context and included stroking, massaging and caressing initiated by either mother or infant.

Image: 
Florida Atlantic University

About 1 in 9 mothers suffers from maternal depression, which can affect the mother-infant bond as well as infant development. Touch plays an important role in an infant's socio-emotional development. Mothers who are depressed are less likely to provide their babies with soothing touch, less able to detect changes in facial expressions, and more likely to have trouble regulating their own emotions. In addition, infants of depressed mothers exhibit similar brain functioning patterns as their depressed mothers, which also are linked to temperament characteristics. Infants of depressed mothers are at a high risk of atypical and potentially dysregulated social interaction.

A first-of-its-kind study by researchers at Florida Atlantic University's Charles E. Schmidt College of Science examined the developing mother-infant relationship by studying feeding method (breastfeeding and/or bottle-feeding) and affectionate touch patterns in depressed and non-depressed mother-infant dyads, as well as examining infants' electroencephalogram (EEG) activity during development. Affectionate touch was coded during the mother-infant feeding context and included stroking, massaging and caressing initiated by either mother or infant.

For the study, researchers evaluated 113 mothers and their infants and assessed maternal depressive symptoms, feeding and temperament or mood. They collected EEG patterns (asymmetry and left and right activity) from infants at 1 and 3 months old and videotaped mother-infant dyads during feeding to assess affectionate touch patterns in both mother and baby. They specifically focused on alterations in EEG activation patterns in infants across development to determine whether feeding and maternal depression are interactively related to changes in resting frontal EEG asymmetry and power.

Data from EEG activity, published in the journal Neuropsychobiology, revealed that mother-infant affectionate touch differed as a function of mood and feeding method (breastfeeding vs. bottle-feeding), affecting outcomes for infants of depressed mothers compared to non-depressed mothers. Researchers observed a reduction in infant touch toward their mothers only with the infants in the depressed and bottle-fed group. Affectionate touch of mothers and infants varied by depression interacting with feeding type, with breastfeeding having a positive effect on both maternal and infant affectionate touch. Infants of depressed and breastfeeding mothers showed neither behavioral nor brain development dysregulation previously found in infants of depressed mothers.

"We focused on mother-infant affectionate touch patterns during feeding in our study because touch is a form of mutual interaction established in early infancy, used to communicate needs, soothe, and downregulate stress responses, and because mothers and infants spend a significant amount of time feeding across the first three months postpartum," said Nancy Aaron Jones, Ph.D., lead author, an associate professor, and director of the FAU WAVES Emotion Laboratory in the Department of Psychology in the Charles E. Schmidt College of Science, and a member of the FAU Brain Institute. "As experience with maternal mood and feeding pervade the infant's early environment, we chose to examine how these factors interact to affect mother-infant affectionate touch, focusing fastidiously on the key roles of individual variation in temperament and EEG activation patterns."

Asymmetry patterns in certain infant populations, such as those of depressed mothers, differ from the asymmetry patterns of typically developing infants and children. While EEG asymmetry measures the balance of the right and left hemisphere activity, infants of depressed mothers exhibit patterns of right frontal asymmetry, due in part to hypoactivation of the left hemisphere within the frontal region. This pattern of brain activation (greater right asymmetry) is similar to the pattern observed in depressed adults and is thought to represent heightened negative affect as well as motor tendencies for withdrawal and inhibited approach behaviors.
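
The study's exact EEG pipeline is not described in this release, but a widely used convention in the frontal-asymmetry literature scores asymmetry as the difference of log-transformed spectral power at homologous right and left frontal electrodes. The sketch below follows that common convention purely as an illustration; the function name and the example power values are hypothetical.

import numpy as np

def frontal_asymmetry_score(left_alpha_power, right_alpha_power):
    """Common convention: ln(right-hemisphere power) - ln(left-hemisphere power).

    Because alpha power is typically inversely related to cortical activation,
    a lower (more negative) score is usually read as relatively greater
    right-frontal activation -- the pattern the text associates with infants
    of depressed mothers.
    """
    return np.log(right_alpha_power) - np.log(left_alpha_power)

# Illustrative alpha-band power values (arbitrary units), not study data
print(frontal_asymmetry_score(left_alpha_power=5.0, right_alpha_power=4.0))  # negative score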

In addition to the tactile behavior changes, the infants in this study displayed differential brain activation patterns as a function of maternal depression and feeding group status. Not only were the infants' EEG patterns affected by their mother's depression status, stable breastfeeding experience also interacted with the depression group to impact EEG patterns across early development. Left frontal asymmetry in infants was associated with having a non-depressed mother and infant care experiences in the form of stable breastfeeding. Left frontal activity has been associated with advancing maturation, positive emotions, as well as higher order processing skills. Notably, EEG patterns of infants of depressed mothers showed right frontal asymmetry; however, shifts to greater left frontal activation (left frontal hyperactivation change) were found in those infants with stable breastfeeding experiences.

Analysis from the study also revealed that infant breastfeeding duration and positive temperamental characteristics predicted infant affectionate touch patterns, suggesting that early infant experiences, and more broadly, their underlying neurochemical regulatory processes during feeding could influence the development of infant physiology and behavior, even for infants of depressed mothers.

"Ultimately, our study provides evidence that the sensitive caretaking that occurs, even for mothers with postnatal depression in the context of more predominant breastfeeding, may redirect neurophysiological, temperamental, and socio-emotional risk through dyadic tactile experiences across early development," said Aaron Jones.

Credit: 
Florida Atlantic University

Where and when is economic decision-making represented in the brain?

image: State-space analysis showed that, as a population, only the cOFC and VS stably represented the calculated expected value.

Image: 
University of Tsukuba

Tsukuba, Japan -- Economists have been using game theory to study decision-making since the 1950s. More recently, the interdisciplinary field of neuroeconomics has gained popularity as scientists try to understand how economic decisions are made in the brain. Researchers led by Professor Masayuki Matsumoto and Assistant Professor Hiroshi Yamada at the University of Tsukuba in Japan studied populations of neurons across the monkey brain reward network to find out where and when expected value is calculated.

The team trained monkeys to perform a lottery task for a reward. The monkeys saw two pie charts on a computer screen. The colors in the charts told the monkeys the size of the reward and the probability of getting it. Mathematically, the expected value is the size of the reward multiplied by the chance of getting it. Thus, a highly probable large reward would create a high expected value and a small reward with a low probability would create a low expected value. The monkeys consistently chose the pie chart that depicted the higher expected value, and behavioral models showed that their decisions were indeed based on this integrated value, not simply the probability or the size of the reward.
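
As a concrete illustration of the calculation described above (the reward amounts and probabilities below are invented, not values from the study):

# Expected value = reward size x probability of receiving it.
# The numbers are illustrative only.
option_a = {"reward_ml": 0.8, "probability": 0.9}  # large, likely reward
option_b = {"reward_ml": 0.4, "probability": 0.3}  # small, unlikely reward

ev_a = option_a["reward_ml"] * option_a["probability"]  # 0.72
ev_b = option_b["reward_ml"] * option_b["probability"]  # 0.12

# The monkeys' choices tracked this integrated value rather than reward size
# or probability alone, so option_a would be preferred here.
print(ev_a, ev_b)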

The brain has a network of connected regions that all have functions related to processing rewards. The researchers recorded brain activity from four regions that have been implicated in decision-making: the central orbitofrontal cortex (cOFC), the medial orbitofrontal cortex (mOFC), the ventral striatum (VS) and the dorsal striatum (DS). They analyzed brain activity when monkeys simply saw one pie chart, but did not have to make any choice. This allowed the researchers to identify brain regions involved in calculating the expected value, not those involved in making choices. They found that all four regions contained neurons that responded to parts of the calculation. Nevertheless, state-space analysis showed that as a population, only the cOFC and VS stably represented the calculated expected value. Additionally, these two regions also stably represented risk-return.

Although both the cOFC and VS integrated reward size and probability, the team noticed that brain cells in these regions did not behave the same way over time. Analysis showed that the expected value signals in the cOFC developed quickly, while those in the VS developed gradually each time the monkey saw one of the pie charts.

The finding that only the cOFC and VS signal expected value differs from previous studies. "Our use of state-space analysis as a means to characterize the dynamics of neuronal populations was the key," explains Matsumoto. "Using this method, we were able to see differences in both the stability and the time course of the signal."

Credit: 
University of Tsukuba

Pooping out miracles: scientists reveal mechanism behind fecal microbiota transplantation

image: Restoration of intestinal microflora functions reflects the success of FMT

Image: 
Satoshi Uematsu, Osaka City University

Recurrent Clostridioides difficile infection (rCDI) occurs in the gut and is caused by the Gram-positive, spore-forming anaerobic bacterium C. difficile when its spores attach to fecal matter and are transferred from hand to mouth by health care workers. Patients undergoing antibiotic treatment are especially susceptible because the microorganisms that maintain a healthy gut are greatly damaged by the antibiotics.

Treatment of rCDI involves withdrawing the antibiotics that triggered the infection and initiating antibiotic therapy directed against C. difficile, although this can be very challenging. Fecal microbiota transplantation (FMT) is considered an effective alternative therapy as it addresses the issue from the ground up by replacing the damaged microflora with a healthy one through a stool transplant.

However, two deaths caused by antibiotic-resistant bacterial infections after FMT were reported in 2019, suggesting that a modification of FMT or alternatives are required to resolve safety concerns surrounding the treatment.

Researchers at Osaka City University and the Institute of Medical Science, The University of Tokyo tackled this challenge head on in a study now published in Gastroenterology.

Using their original analysis pipeline reported in 2020, the researchers obtained intestinal bacterial and viral metagenome information from fecal samples of nine rCDI patients at Brigham and Women's Hospital in Boston who had successfully undergone FMT. They identified the bacteria and phages involved in the pathogenesis of rCDI and the pathways important for the recovery of intestinal flora function.

By revealing how the bacteriome and virome in the intestine work together as an organ, the research team was able to show how FMT can be as safe as swapping out a bad organ with a good one.

"Intestinal microbiota should definitely be treated as an 'organ'!" says principal investigator Professor Satoshi Uematsu, "FMT drastically changed the intestinal bacteriome and virome and is sure to restore the intestinal bacterial and viral functions."

In the post-COVID-19 world, rCDI will become one of the more pressing international diseases. There is no doubt that FMT is an important therapeutic strategy for rCDI. "In addition to a variety of clinical surveys, comprehensive metagenomic analysis is very important in considering the safety of FMT," say Dr. Kosuke Fujimoto and Prof. Seiya Imoto.

Credit: 
Osaka City University

Novel analytical tools developed by SMART key to next-generation agriculture

image: Species-independent analytical platforms can facilitate the creation of feedback-controlled high-density agriculture.

Image: 
Betsy Skrip, Massachusetts Institute of Technology

Plant nanosensors and Raman spectroscopy are two emerging analytical technologies and tools to study plants and monitor plant health, enabling research opportunities in plant science that have so far been difficult to achieve with conventional technologies such as genetic engineering techniques.

The species-independent analytical tools are rapid and non-destructive, overcoming current limitations and providing a wealth of real-time information, such as early plant stress detection and hormonal signalling, that is important to plant growth and yield.

The perspective study evaluates further development of the tools and their economic potential, and discusses implementation strategies for successful integration into future farming practices for traditional and urban agriculture.

Singapore, 10 February 2021 - Researchers from the Disruptive & Sustainable Technologies for Agricultural Precision (DiSTAP) Interdisciplinary Research Group (IRG) of Singapore-MIT Alliance for Research and Technology (SMART), MIT's research enterprise in Singapore, and Temasek Life Sciences Laboratory (TLL), highlight the potential of recently developed analytical tools that are rapid and non-destructive, with a proof of concept through first-generation examples. The analytical tools are able to provide tissue-cell or organelle-specific information on living plants in real-time and can be used on any plant species.

In a perspective paper titled "Species-independent analytical tools for next-generation agriculture" published in the scientific journal Nature Plants, SMART DiSTAP researchers review the development of two next-generation tools, engineered plant nanosensors and portable Raman spectroscopy, to detect biotic and abiotic stress, monitor plant hormonal signalling, and characterise soil, phytobiome and crop health in a non- or minimally invasive manner. The researchers discussed how the tools bridge the gap between model plants in the laboratory and field application for agriculturally relevant plants. An assessment of the future outlook, economic potential, and implementation strategies for the integration of these technologies in future farming practices was also provided in the paper.

According to UN estimates, the global population is expected to grow by 2 billion within the next 30 years, giving rise to an expected increase in demand for food and agricultural products to feed the growing population. Today, biotic and abiotic environmental stresses such as plant pathogens, sudden fluctuations in temperature, drought, soil salinity, and toxic metal pollution - made worse by climate change - impair crop productivity and lead to significant losses in agriculture yield worldwide.

An estimated 11-30% yield loss of five major crops of global importance (wheat, rice, maize, potato, and soybean) is caused by crop pathogens and insects, with the highest crop losses observed in regions already suffering from food insecurity. Against this backdrop, research into innovative technologies and tools is required to enable sustainable agricultural practices and meet the rising demand for food and food security - an issue that has drawn the attention of governments worldwide due to the COVID-19 pandemic.

Plant nanosensors, developed at SMART DiSTAP, are tiny sensors - smaller than the width of a hair - that can be inserted into the tissues and cells of plants to understand complex signalling pathways. Portable Raman spectroscopy, also developed at SMART DiSTAP, uses a laser-based device that measures molecular vibrations induced by laser excitation, yielding highly specific Raman spectral signatures that serve as a fingerprint of a plant's health. These tools are able to monitor stress signals on short time-scales, ranging from seconds to minutes, which allows for early detection of stress signals in real-time.

"The use of plant nanosensors and Raman spectroscopy has the potential to advance our understanding of crop health, behaviour, and dynamics in agricultural settings," said Dr Tedrick Thomas Salim Lew, the paper's first author and a recent graduate student of the Massachusetts Institute of Technology (MIT). "Plants are highly complex machines within a dynamic ecosystem, and a fundamental study of its internal workings and diverse microbial communities of its ecosystem is important to uncover meaningful information that will be helpful to farmers and enable sustainable farming practices. These next-generation tools can help answer a key challenge in plant biology, which is to bridge the knowledge gap between our understanding of model laboratory-grown plants and agriculturally-relevant crops cultivated in fields or production facilities."

Early plant stress detection is key to timely intervention and increasing the effectiveness of management decisions for specific types of stress conditions in plants. The development of these tools capable of studying plant health and reporting stress events in real-time will benefit both plant biologists and farmers. The data obtained from these tools can be translated into useful information for farmers to make management decisions in real-time to prevent yield loss and reduced crop quality.

The species-independent tools also offer new study opportunities in plant science for researchers. In contrast to conventional genetic engineering techniques that are only applicable to model plants in laboratory settings, the new tools apply to any plant species, which enables the study of agriculturally-relevant crops that were previously understudied. The adoption of these tools can enhance researchers' basic understanding of plant science and potentially bridge the gap between model and non-model plants.

"The SMART DiSTAP interdisciplinary team facilitated the work for this paper and we have both experts in engineering new agriculture technologies and potential end-users of these technologies involved in the evaluation process," said Professor Michael Strano, the paper's co-corresponding author, DiSTAP co-lead Principal Investigator, and Carbon P. Dubbs Professor of Chemical Engineering at MIT. "It has been the dream of an urban farmer to continually, at all times, engineer optimal growth conditions for plants with precise inputs and tightly controlled variables. These tools open the possibility of real-time feedback control schemes that will accelerate and improve plant growth, yield, nutrition, and culinary properties by providing optimal growth conditions for plants in the future of urban farming."

"To facilitate widespread adoption of these technologies in agriculture, we have to validate their economic potential and reliability, ensuring that they remain cost-efficient and more effective than existing approaches," the paper's co-corresponding author, DiSTAP co-lead Principal Investigator, and Deputy Chairman of TLL Professor Chua Nam Hai explained. "Plant nanosensors and Raman spectroscopy would allow farmers to adjust fertiliser and water usage, based on internal responses within the plant, to optimise growth, driving cost efficiencies in resource utilisation. Optimal harvesting conditions may also translate into higher revenue from increased product quality that customers are willing to pay a premium for."

Collaboration among engineers, plant biologists, and data scientists, and further testing of new tools under field conditions with critical evaluations of their technical robustness and economic potential will be important in ensuring sustainable implementation of technologies in tomorrow's agriculture.

Credit: 
Singapore-MIT Alliance for Research and Technology (SMART)

Scientists propose three-step method to reverse significant reforestation side effect

While deforestation levels have decreased significantly since the turn of the 21st century, the United Nations (UN) estimates that 10 million hectares of trees have been felled in each of the last five years.

Aside from their vital role in absorbing CO2 from the air, forests play an integral part in maintaining the delicate ecosystems that cover our planet.

Efforts are now underway across the world to rectify the mistakes of the past, with the UN Strategic Plan for Forests setting out the objective of increasing global forest coverage by 3% by 2030.

With time being of the essence, one of the most popular methods of reforestation in humid, tropical regions is the planting of a single fast-growing species (monoculture) in a large area. This is especially important as a means of quickly preventing landslides in these regions that experience frequent typhoons and heavy rains.

However, new research published in Frontiers in Ecology and Evolution by a team from Hainan University and the Chinese Academy of Sciences has not only found that this practice could have a detrimental effect on the surrounding soil water content, but has also developed a three-step method to remedy it.

To determine whether the monoculture plantings can help recover soil water content, the team reforested a small patch of extremely degraded tropical monsoon forest measuring 0.2 sq km near Sanya City, Hainan.

Rapid soil water loss

The team, which included corresponding authors Dr Chen Wang and Dr Hui Zhang, hypothesized that the significantly higher transpiration rate - the amount of water lost by plants over a period of time - of the fast-growing species in their test forest would deplete soil water faster than that of the slow-growing species found in the adjacent rainforest.

This would hold during both the wet and dry seasons, resulting in much lower soil water content.

Testing showed that the transpiration rate and transpiration-related trait values were between 5 and 10 times greater in the fast-growing species than in the slow-growing species in both the rainy and dry seasons.

It also found that soil water content surrounding the dominant slow-growing species in a nearby forest was between 1.5 and 3 times greater than that surrounding the fast-growing species, in both the rainy and dry seasons.

Three-step remedy

Despite this drawback, the monoculture planting of a fast-growing species is seen as important to prevent landslides following frequent typhoons and heavy rain. To help prevent the loss of soil water content, the team has proposed a three-step method that it describes as easy to implement.

This includes:

Reconstructing the slope and soil layers based on a reference of undisturbed old-growth tropical forest.

Refilling the same soils from the undisturbed old-growth tropical monsoon forest before planting fast-growing tree species, to minimize impacts from landslides and other soil disturbance events.

Planting slow-growing tree species from an undisturbed forest area within the fast-growing species stands to increase soil water content.

"Past and current human disturbances - such as ore mining and the plantation of commercial trees - have resulted in high rates of deforestation and ecosystem degradation across the world," said Dr Wang, based at the South China Botanical Gardens in Guangzhou, China.

"These, in turn, result in a major threat to the global supply of freshwater. It is therefore urgent to initiate and maintain reforestation projects aimed at recovering soil water content and increasing freshwater supply to human society."

Writing in their paper, the scientists added: "We expect that this simple three-step method can be an effective means of restoring extremely degraded tropical forests in other parts of the world."

Credit: 
Frontiers

Pre-COVID subway air polluted from DC to Boston, but New York region's is the worst

Commuters now have yet another reason to avoid packing themselves into subway stations. New York City's transit system exposes riders to more inhaled pollutants than any other metropolitan subway system in the Northeastern United States, a new study finds. Yet even its "cleaner" neighbors struggle with enough toxins to give health-conscious travelers pause.

Led by NYU Grossman School of Medicine researchers, the study measured air quality samples in 71 stations at morning and evening rush hours in Boston, New York City, Philadelphia, and Washington, D.C. Among the 13 underground stations tested in New York, the investigators found concentrations of hazardous metals and organic particles that ranged anywhere from two to seven times that of outdoor air samples.

Notably in the report, publishing online Feb. 10 in the journal Environmental Health Perspectives, one underground platform on the PATH line connecting New Jersey and Manhattan (Christopher Street Station) reached up to 77 times the typical concentration of potentially dangerous particles in outdoor, aboveground city air. This figure is comparable to sooty contamination from forest fires and building demolition, the study authors say.

Air quality was also measured in another 58 stations during rush hours in Boston, Philadelphia, and Washington. While no station's readings reached the severe levels of contamination seen in New York's worst transit lines, underground subway stations within each of these cities still showed at least twice the airborne particle concentrations as their respective outside samples at morning and evening rush hours.

"Our findings add to evidence that subways expose millions of commuters and transit employees to air pollutants at levels known to pose serious health risks over time," says study lead author David Luglio, a doctoral student at NYU Grossman.

"As riders of one of the busiest, and apparently dirtiest, metro systems in the country, New Yorkers in particular should be concerned about the toxins they are inhaling as they wait for trains to arrive," adds co-senior study author Terry Gordon, PhD, a professor in the Department of Environmental Medicine at NYU Grossman.

Further analysis of air samples showed that iron and organic carbon, a chemical produced by the incomplete breakdown of fossil fuels or from decaying plants and animals, composed three-quarters of the pollutants found in the underground air samples for all measured subway stations. Although iron is largely nontoxic, some forms of organic carbon have been linked to increased risk of asthma, lung cancer, and heart disease, the study authors say. Gordon notes that further research is needed to assess potentially higher risk for transit workers who spend far longer periods of time in the stations than riders.

The Metropolitan Transportation Authority reported that 5.5 million people rode New York City's subways every day in 2019, while PATH puts its daily ridership at more than 284,000.

For the investigation, the research team took over 300 air samples during rush hour in stations in Manhattan, Philadelphia, Washington, Boston, and along train lines connecting New York City to New Jersey and Long Island. The data reflects more than 50 total hours of sampling across about 70 subway stops. In addition to real-time monitoring of the air quality, the team also used filters to collect airborne particles for later analysis.

According to the findings, the PATH-New York/New Jersey system had the highest airborne particle concentration at 392 micrograms per cubic meter, followed by the MTA-New York at 251 micrograms per cubic meter. Washington had the next highest levels at 145 micrograms per cubic meter, followed by Boston at 140 micrograms per cubic meter. Philadelphia was comparatively the cleanest system at 39 micrograms per cubic meter. By comparison, aboveground air concentrations for all measured cities averaged just 16 micrograms per cubic meter.

Meanwhile, the Environmental Protection Agency advises that daily exposures at fine particle concentrations exceeding 35 micrograms per cubic meter pose serious health hazards.
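
Putting those concentrations side by side with the outdoor average and the EPA guideline makes the contrast explicit. The short calculation below uses only the figures quoted in this article; the ratios are illustrative arithmetic, not results reported by the study authors.

# Airborne particle concentrations (micrograms per cubic meter) quoted above.
EPA_DAILY_GUIDELINE = 35   # fine-particle level above which daily exposure poses serious hazards
OUTDOOR_AVERAGE = 16       # average aboveground concentration across the measured cities

systems = {
    "PATH (NY/NJ)": 392,
    "MTA New York": 251,
    "Washington": 145,
    "Boston": 140,
    "Philadelphia": 39,
}

for name, concentration in systems.items():
    print(f"{name}: {concentration / EPA_DAILY_GUIDELINE:.1f}x EPA guideline, "
          f"{concentration / OUTDOOR_AVERAGE:.1f}x outdoor average")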

Besides the Christopher Street PATH station, the most polluted stations in the Northeast included Capitol South in Washington, Broadway in Boston, 2nd Avenue on the F line in New York City, and 30th Street in Philadelphia, according to the findings.

Gordon cautions that the researchers did not measure riders' short-term exposure to the airborne substances, which would more closely mimic their experiences dashing to catch a train at the last minute. In addition, it remains unclear whether the steep drop in New York subway ridership due to the COVID-19 pandemic has influenced the metro's air quality, he adds.

Next, Gordon says he plans to investigate sources of subway station air contamination, such as exhaust given off by diesel maintenance locomotives, whipped up dust from the remains of dead rodents, and poor ventilation as potential culprits. He also encourages researchers and transit authorities to examine why some systems are less polluted than others in a bid to adopt practices that might relatively quickly make stations safer for riders.

Credit: 
NYU Langone Health / NYU Grossman School of Medicine

Research reveals why plant diversity is so important for bee diversity

image: A honey bee on a lavender plant, one of the species studied in the research

Image: 
Professor Francis Ratnieks, University of Sussex

Bumble bees and honey bees are abundant and widespread, and it is common to see both foraging on the same flower species during the summer, whether in Britain or many other countries.

Yet researchers at the Laboratory of Apiculture and Social Insects (LASI) at the University of Sussex have shown that these two bees dominate on different flower species, and have found out why.

By studying 22 flower species in southern England and analysing the behaviour of more than 1000 bees, they found that ‘energy efficiency’ is a key factor when it comes to mediating competition.

Bee bodyweight and the rate at which a bee visits flowers determine how energy efficient they are when foraging. Bodyweight determines the energy used while flying and walking between flowers, with a bee that is twice as heavy using twice as much energy. The rate at which a bee visits flowers, the number of flowers per minute, determines how much nectar, and therefore energy, it collects. Together, the ratio of these factors determines bee foraging energy efficiency.
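
A simplified sketch of that ratio follows. Only the roughly two-fold differences in body mass and flower-visit rate reported later in this article are taken from the research; the absolute energy and mass values are invented for illustration.

# Foraging energy efficiency ~ energy gained / energy spent, where
#   energy gained ~ flower visits per minute x nectar energy per flower
#   energy spent  ~ body mass x metabolic cost per unit mass
# All absolute numbers are illustrative; only the ~2x ratios come from the article.

def foraging_efficiency(visits_per_min, energy_per_flower, body_mass, cost_per_unit_mass):
    gain = visits_per_min * energy_per_flower
    cost = body_mass * cost_per_unit_mass
    return gain / cost

honey_bee = foraging_efficiency(visits_per_min=10, energy_per_flower=1.0,
                                body_mass=1.0, cost_per_unit_mass=1.0)
bumble_bee = foraging_efficiency(visits_per_min=20, energy_per_flower=1.0,
                                 body_mass=2.0, cost_per_unit_mass=1.0)

# Doubling both the visit rate and the body mass leaves the ratio unchanged;
# flower shape shifts the visit rate and can tip the balance either way.
print(honey_bee, bumble_bee)  # identical here: 10.0 and 10.0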

Professor of Apiculture, Francis Ratnieks, said: “While they forage on the same flowers, frequently we find that bumble bees will outnumber honey bees on a particular flower species, while the reverse will be true on other species growing nearby.

“What was remarkable was that differences in foraging energy efficiency explained almost fully why bumble bees predominated on some flower species and honey bees on others.

“In essence, bumble bees have an advantage over honey bees in being faster at visiting flowers, so can gather more nectar (energy), but a disadvantage in being larger, and so using more of the nectar energy to power their foraging. On some flower species this gave an overall advantage to bumble bees, but on others to honey bees.”

In the study, published in the journal Ecology, the researchers used stopwatches to determine how many flowers a bee visited in one minute. Using a portable electronic balance to weigh each bee, researchers found that, on average, bumble bees are almost twice as heavy as honey bees. This means that they use almost twice as much energy as honey bees. The stopwatch results showed that they visit flowers at twice the rate of honey bees, which compensates in terms of energy efficiency.

On some flower species such as lavender, bumble bees dominated and were visiting flowers at almost three times the rate of honeybees.

Differences in flower morphology likely affect how energy efficient the two bee types are. Ling heather, with its mass of small flowers, was better suited to the nimbler honey bees, which were able to visit more flowers per minute than the bumble bees. By contrast, Erica heather, which was growing beside the ling heather in the same nature reserve, has large, bell-shaped flowers and was better suited to bumble bees.

Author Dr Nick Balfour said: “The energy efficiency of foraging is particularly important to bees. The research showed that the bees were walking (and flying) a challenging energy tightrope; half the energy they obtained from the nectar was expended in its collection.”

Energy (provided to bees by nectar) is a fundamental need, but the fact that honey bees and bumble bees do not compete head-on for nectar is reassuring in terms of conservation and co-existence.

Prof Ratnieks explained: “Bumble bees have a foraging advantage on some plants, and predominate on them, while honey bees have an advantage on others and predominate on these.

“Bee conservation therefore benefits from flower diversity, so that should certainly be a focus of bee conservation efforts. But fortunately, flowering plants are diverse.”

The research team, which included Sussex PhD student Kyle Shackleton, Life Sciences undergraduates Natalie A. Arscott, Kimberley Roll-Baldwin and Anthony Bracuti, and Italian student volunteer Gioelle Toselli, studied flower species in a variety of local locations, including a nature reserve, the wider countryside, Brighton parks, Prof Ratnieks’s own garden and a flower bed outside Sussex House on the University campus.

Dr Balfour said: “Whether you have a window box, allotment or a garden, planting a variety of summer-blooming flowers or cutting your grass less often can really help pollinators during late summer.”

Credit: 
University of Sussex

Arizona economic burden of valley fever totals $736 million

A University of Arizona Health Sciences study has estimated total lifetime costs at $736 million for the 10,359 valley fever patients diagnosed in Arizona in 2019, underscoring the economic burden the disease places on the state and its residents.

The prevalence of valley fever, formally known as coccidioidomycosis or cocci, has increased in recent years, from 5,624 cases diagnosed in Arizona in 2014 to 10,359 cases in 2019. There currently are no certain means of prevention or vaccination for the fungal disease, which is caused by spores of Coccidioides, a genus of fungi found in soils of the Southwest.

The findings highlight the need for a vaccine, better therapeutic options and more consistent use of rapid diagnostic testing - all areas of focus at the UArizona College of Medicine - Tucson's Valley Fever Center for Excellence. The study, "Clinical and Economic Burden of Valley Fever in Arizona: An Incidence-Based Cost-of-Illness Analysis," was recently published in the journal Open Forum Infectious Diseases.

"I was overwhelmed by how important this disease is in Arizona and how preventable some of the costs may be," said lead author Amy Grizzle, PharmD, associate director of the Center for Health Outcomes & PharmacoEconomic Research (or HOPE Center) in the UArizona College of Pharmacy. "Because it's kind of an isolated disease, I didn't necessarily think it was that expensive. I'm gratified that this study was able to shed some light on how many people this disease affects and how costly it is, especially taking into consideration some of the long-term complications."

Dr. Grizzle said she was surprised by the $736 million total, which can be broken down into direct and indirect costs of $671 million and $65 million, respectively. Among direct costs, she said, are health expenses expected over a person's lifetime, including hospitalization, diagnosis and treatment (chest X-rays, rapid diagnostic tests, medications, surgery, etc.), and follow-up care including skilled nursing facilities for rehabilitation. Indirect costs include short-term work loss and lost earnings due to premature mortality.
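A quick back-of-the-envelope tally of the figures quoted above (a sketch for orientation only; the per-case average is our own rough division, not a number reported by the authors):

# Checking the reported totals and deriving a rough per-case figure.
direct_costs = 671_000_000    # USD, lifetime direct costs reported in the study
indirect_costs = 65_000_000   # USD, lifetime indirect costs reported in the study
cases_2019 = 10_359           # valley fever diagnoses in Arizona in 2019

total = direct_costs + indirect_costs
print(f"Total: ${total:,}")                              # Total: $736,000,000
print(f"Average per case: ${total / cases_2019:,.0f}")   # about $71,000 per diagnosis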

The study examined the cost-of-illness for valley fever's five primary manifestations:

Primary uncomplicated pneumonia, which comprises 85% of diagnosed valley fever cases;

Chronic pneumonia, which often requires treatment for two to three years;

Disseminated infection, which requires lifelong medication, periodic testing, and recurring hospitalization, and is the leading cause of death among valley fever patients;

Other pulmonary changes - pulmonary nodules, which account for about 7% of all valley fever cases; and,

Other pulmonary changes - pulmonary cavities, which affect about 3% of all valley fever patients. Patients with pulmonary changes, whether nodules or cavities, often require expensive diagnostics to rule out lung cancer, and about 40% of such cases require hospitalization.

Researchers found that severe cases of valley fever where the disease spreads to other parts of the body, known as disseminated valley fever, resulted in the highest economic burden at nearly $1.4 million per person.

The study's estimated lifetime costs are comparable to a 2019 California study that put total lifetime costs for the 7,466 people diagnosed with valley fever in the Golden State in 2017 at just under $700 million, according to co-author John Galgiani, MD, an infectious diseases specialist with the College of Medicine - Tucson who also is director of the UArizona Valley Fever Center for Excellence and a member of the BIO5 Institute.

Leslie Wilson, PhD, of the University of California San Francisco, led the California study and is a co-author on the Arizona study along with Drs. Grizzle and Galgiani, and David Nix, PharmD, of the UArizona College of Pharmacy.

Dr. Galgiani, who founded the Valley Fever Center in 1996, said two-thirds of all valley fever cases reported in the U.S. occur in Arizona, with half in Maricopa County. Only about 5% of U.S. valley fever cases are reported outside of Arizona and California.

The primary symptoms of valley fever are respiratory problems similar to bacterial pneumonia, though more serious complications can arise when the infection spreads beyond the lungs to other areas including bones, joints, and the brain and central nervous system.

Delays in diagnosis are common because valley fever symptoms can mimic those of the flu or bacterial pneumonia. Such confusion often means ineffective treatment with antibiotics, instead of antifungals like fluconazole, for days or weeks, during which time a patient's condition might deteriorate. The situation has worsened with the emergence of COVID-19, Dr. Galgiani said.

"Basically, doctors were under-diagnosing valley fever in the springtime, because of several reasons. One, people were being tested for COVID and, when they were negative, they didn't do any more testing," Dr. Galgiani said. "And two, people weren't seeking hospital care for anything if they didn't think it was COVID."

Drs. Galgiani and Grizzle say the economic impacts highlighted in the study underscore the value of supporting research into developing more rapid diagnostic tests, better therapies and ultimately a preventative vaccine to address this important public health problem in Arizona.

"Hopefully, after more treatments and/or a vaccine are available, we can do another cost of illness analysis 10 years from now and see that these medications have greatly reduced the economic burden to Arizona," Dr. Grizzle said.

Credit: 
University of Arizona Health Sciences