Culture

More than half of COVID-19 health care workers at risk for mental health problems

The daily toll of COVID-19, as measured by new cases and the growing number of deaths, overlooks a shadowy set of casualties: the rising risk of mental health problems among health care professionals working on the frontlines of the pandemic.

A new study, led by University of Utah Health scientists, suggests more than half of doctors, nurses, and emergency responders involved in COVID-19 care could be at risk for one or more mental health problems, including acute traumatic stress, depression, anxiety, problematic alcohol use, and insomnia. The researchers found that the risk of these mental health conditions was comparable to rates observed in the aftermath of other large-scale disasters, such as the 9/11 attacks and Hurricane Katrina.

"What health care workers are experiencing is akin to domestic combat," says Andrew J. Smith, Ph.D., director of the U of U Health Occupational Trauma Program at the Huntsman Mental Health Institute and the study's corresponding author. "Although the majority of health care professionals and emergency responders aren't necessarily going to develop PTSD, they are working under severe duress, day after day, with a lot of unknowns. Some will be susceptible to a host of stress-related mental health consequences. By studying both resilient and pathological trajectories, we can build a scaffold for constructing evidence-based interventions for both individuals and public health systems."

The study appears in the Journal of Psychiatric Research. In addition to U of U Health scientists, contributors include researchers from the University of Arkansas for Medical Sciences; University of Colorado, Colorado Springs; Central Arkansas VA Health Care System; Salt Lake City VA Healthcare System; and the National Institute for Human Resilience.

The researchers surveyed 571 health care workers, including 473 emergency responders (firefighters, police, EMTs) and 98 hospital staff (doctors, nurses), in the Mountain West between April 1 and May 7, 2020. Overall, 56% of the respondents screened positive for at least one mental health disorder. The prevalence for each specific disorder ranged from 15% to 30% of the respondents, with problematic alcohol use, insomnia, and depression topping the list.

"Frontline providers are exhausted, not only from the impact of the pandemic itself, but also in terms of coping day to day," says Charles C. Benight, Ph.D., co-author of the study and a professor of psychology at the University of Colorado, Colorado Springs. "They're trying to make sure that their families are safe [and] they're frustrated over not having the pandemic under control. Those things create the sort of burnout, trauma, and stress that lead to the mental health challenges we're seeing among these caregivers."

In particular, the scientists found that health care workers who were exposed to the virus or who were at greater risk of infection because they were immunocompromised had a significantly increased risk of acute traumatic stress, anxiety, and depression. The researchers suggest that identifying these individuals and offering them alternative roles could reduce anxiety, fear, and the sense of helplessness associated with becoming infected.

Alcohol abuse was another area of concern. About 36% of health care workers reported risky alcohol usage. In comparison, estimates suggest that less than 21% of physicians and 23% of emergency responders abuse alcohol in typical circumstances. Caregivers who provided direct patient care or who were in supervisory positions were at greatest risk, according to the researchers. They say offering these workers preventative education and alcohol abuse treatment is vital.

Surprisingly, health care workers in this study felt less anxious as they treated more COVID-19 cases.

"As these health care professionals heard about cases elsewhere before COVID-19 was detected in their communities, their anxiety levels likely rose in anticipation of having to confront the disease," Smith says. "But when the disease started trickling in where they were, perhaps it grounded them back to their mission and purpose. They saw the need and they were in there fighting and working hard to make a difference with their knowledge and skills, even at risk to themselves."

The study's limitations include its small sample size. It was also conducted early in the pandemic, in a region that wasn't as hard hit by the disease as areas with higher infection and death rates.

Moving forward, the researchers are in the final stages of a similar but larger study conducted in late 2020 that they hope will build on these findings.

"This pandemic, as horrific as it is, offers us the opportunity to better understand the extraordinary mental stress and strains that health care providers are dealing with right now," Smith says. "With that understanding, perhaps we can develop ways to mitigate these problems and help health care workers and emergency responders better cope with these sorts of challenges in the future."

Credit: 
University of Utah Health

NYUAD study finds fragmented sleep patterns can predict vulnerability to chronic stress

image: NYUAD Assistant Professor of Biology Dipesh Chaudhury

Image: 
NYU Abu Dhabi

Abu Dhabi, United Arab Emirates, January 12, 2021: New research from NYU Abu Dhabi's Laboratory of Neural Systems and Behavior has, for the first time, used an animal model to demonstrate how abnormal sleep architecture can be a predictor of stress vulnerability. These important findings have the potential to inform the development of sleep tests that can help identify who may be susceptible -- or resilient -- to future stress.

In the study, Abnormal Sleep Signals Vulnerability to Chronic Social Defeat Stress, which appears in the journal Frontiers in Neuroscience, NYUAD Assistant Professor of Biology Dipesh Chaudhury and Research Associate Basma Radwan describe their development of a mouse model to detect how disruptions in Non-rapid Eye Movement (NREM) sleep result in increased vulnerability to future stress.

The researchers assessed the sleep characteristics of both stress-susceptible and stress-resilient mice before and after they experienced chronic social defeat (CSD) stress. The social behavior of the mice post-stress was classified into two main phenotypes: those susceptible to stress, which displayed social avoidance, and those that were resilient to stress. Pre-CSD, mice susceptible to stress displayed more fragmented NREM sleep, due to increased switching between NREM and wake and shorter average NREM bout durations, than mice resilient to stress. Their analysis showed that the pre-CSD sleep features of both phenotypes allowed prediction of susceptibility to stress with more than 80 percent accuracy. Post-CSD, susceptible mice maintained high NREM fragmentation during both the light and dark phases, while resilient mice exhibited high NREM fragmentation only in the dark.
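
To make the prediction step concrete: in machine-learning terms, forecasting susceptibility from pre-stress sleep amounts to fitting a classifier on fragmentation features such as mean NREM bout duration and the rate of NREM-to-wake transitions. The sketch below is only an illustration of that idea, not the authors' analysis; the feature values, labels, and the choice of logistic regression are all assumptions.

```python
# Hypothetical sketch: predict stress susceptibility from pre-stress NREM sleep
# fragmentation features. Feature values and labels are made up for illustration;
# the actual study used EEG/EMG-derived sleep metrics in mice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row: [mean NREM bout duration (s), NREM<->wake transitions per hour]
X = np.array([
    [180, 22], [165, 25], [150, 30], [140, 28],   # susceptible-like profiles
    [240, 12], [260, 10], [230, 14], [255, 11],   # resilient-like profiles
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = became susceptible after stress

model = LogisticRegression()
accuracy = cross_val_score(model, X, y, cv=4).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```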

The findings demonstrate that mice that become susceptible to CSD stress exhibit pre-existing abnormal sleep/wake characteristics prior to stress exposure. In addition, subsequent exposure to stress further impairs sleep and the homeostatic response.

"Our study is the first to provide an animal model to investigate the relationship between poor sleep continuity and vulnerability to chronic stress and depressive disorders," said Chaudhury and Radwan. "This marker of vulnerability to stress opens up avenues for many possible future studies that could further explain the underlying molecular processes and neural circuitry that lead to mood disorders."

Credit: 
New York University

Tweaking AI software to function like a human brain improves computer's learning ability

WASHINGTON - Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning.

In the journal Frontiers in Computational Neuroscience, Maximilian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, a postdoctoral scholar at UC Berkeley, explain how the new approach vastly improves the ability of AI software to quickly learn new visual concepts.

"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," says Riesenhuber. "We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing."

Humans can quickly and accurately learn new visual concepts from sparse data - sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to "see" many examples of the same object to know what it is, Riesenhuber explains.

The big change needed was in designing software to identify relationships between entire visual categories, instead of trying the more standard approach of identifying an object using only low-level and intermediate information, such as shape and color, Riesenhuber says.

"The computational power of the brain's hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects," he says.

Riesenhuber and Rule found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.

Rule explains, "Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter."
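
One way to picture this strategy (a rough sketch under stated assumptions, not the published model): describe any image by its similarity to previously learned category representations, so that a single example of a new concept such as "platypus" defines a prototype in that concept space. The embeddings, class names, and the nearest-prototype rule below are illustrative stand-ins.

```python
# Hypothetical sketch: represent a new visual concept in terms of similarities
# to previously learned high-level concepts, then classify with a nearest-
# prototype rule. The embeddings are random stand-ins for a trained network's
# category representations.
import numpy as np

rng = np.random.default_rng(0)
known_concepts = {name: rng.normal(size=64) for name in
                  ["duck", "beaver", "sea_otter", "horse", "cat"]}

def concept_signature(embedding):
    """Describe an image embedding by its similarity to each known concept."""
    return np.array([
        np.dot(embedding, v) / (np.linalg.norm(embedding) * np.linalg.norm(v))
        for v in known_concepts.values()
    ])

# A single example of the new concept ("platypus") defines its prototype.
platypus_example = (0.4 * known_concepts["duck"] + 0.4 * known_concepts["beaver"]
                    + 0.2 * known_concepts["sea_otter"] + rng.normal(scale=0.1, size=64))
platypus_prototype = concept_signature(platypus_example)

# A new query image is compared to the prototype in concept space.
query = (0.5 * known_concepts["duck"] + 0.5 * known_concepts["beaver"]
         + rng.normal(scale=0.1, size=64))
distance = np.linalg.norm(concept_signature(query) - platypus_prototype)
print(f"distance to platypus prototype: {distance:.3f}")
```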

The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition. The anterior temporal lobe of the brain is thought to contain "abstract" concept representations that go beyond shape. These complex neural hierarchies for visual recognition allow humans to learn new tasks and, crucially, leverage prior learning.

"By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe," Riesenhuber says.

Despite advances in AI, the human visual system is still the gold standard in terms of ability to generalize from few examples, robustly deal with image variations, and comprehend scenes, the scientists say.

"Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood," Riesenhuber concludes.

Credit: 
Georgetown University Medical Center

Comprehensive characterization of vascular structure in plants

image: Isolated vascular cells of Arabidopsis; phloem parenchyma cells are labeled in cyan.

Image: 
HHU / Ji-Yun Kim

The leaf vasculature of plants plays a key role in transporting solutes from where they are made - for example from the plant cells driving photosynthesis - to where they are stored or used. Sugars and amino acids are transported from the leaves to the roots and the seeds via the conductive pathways of the phloem.

Phloem is the part of the tissue in vascular plants that comprises the sieve elements - where actual translocation takes place - and the companion cells as well as the phloem parenchyma cells. The leaf veins consist of at least seven distinct cell types, with specific roles in transport, metabolism and signalling.

Little is known about the vascular cells in leaves, in particular the phloem parenchyma. Two teams led by Alexander von Humboldt professorship holders from Düsseldorf and Tübingen, a colleague from Champaign-Urbana in Illinois, USA, and a chair of bioinformatics from Düsseldorf have presented the first comprehensive analysis of the vascular cells in the leaves of thale cress (Arabidopsis thaliana) using single-cell sequencing.

The team led by Alexander von Humboldt Professor Dr. Marja Timmermans from Tübingen University was the first to use single-cell sequencing in plants to characterise root cells. In collaboration with Prof. Timmermans' group, researchers working with Alexander von Humboldt Professor Dr. Wolf Frommer in Düsseldorf succeeded for the first time in isolating the cells of the leaf vasculature to create an atlas of their RNA molecules (the transcriptome). They were able to define the roles of the different cells by analysing the metabolic pathways.

Among other things, the research team demonstrated for the first time that transcripts of the sugar (SWEET) and amino acid (UmamiT) transporters are found in the phloem parenchyma cells, which carry these compounds from where they are produced to the vascular system. The compounds are subsequently actively imported into the sieve element-companion cell complex via a second group of transporters (SUT or AAP) and then exported from the source leaf.
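
In practice, a claim like this rests on showing that the transporter transcripts are enriched in the cells annotated as phloem parenchyma relative to other cell types in the single-cell data. The toy check below only illustrates that kind of comparison; the gene names are used as labels, and the counts and cluster assignments are invented rather than the study's data.

```python
# Hypothetical sketch: compare mean expression of transporter genes between the
# phloem parenchyma cluster and all other cells in a single-cell count table.
# The counts are invented; the study used genome-wide single-cell data.
import pandas as pd

counts = pd.DataFrame(
    {"SWEET11": [9, 12, 0, 1, 10, 0],
     "UmamiT18": [7, 8, 1, 0, 9, 1],
     "LHCB2":    [2, 1, 15, 14, 1, 16]},   # a photosynthesis gene as a contrast
    index=["cell1", "cell2", "cell3", "cell4", "cell5", "cell6"],
)
clusters = pd.Series(["phloem_parenchyma", "phloem_parenchyma", "mesophyll",
                      "mesophyll", "phloem_parenchyma", "mesophyll"],
                     index=counts.index)

# Fold enrichment of each gene in the phloem parenchyma cluster vs. the rest.
enrichment = (counts[clusters == "phloem_parenchyma"].mean()
              / (counts[clusters != "phloem_parenchyma"].mean() + 1e-9))
print(enrichment.sort_values(ascending=False))
```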

These extensive investigations involved close collaborations with HHU bioinformatics researchers in Prof. Dr. Martin Lercher's working group. Together they were able to determine that phloem parenchyma and companion cells have complementary metabolic pathways and are therefore in a position to control the composition of the phloem sap.

First author and leader of the work group Dr. Ji-Yun Kim from HHU explains: "Our analysis provides completely new insights into the leaf vasculature and the role and relationship of the individual leaf cell types." Institute Head Prof. Frommer adds: "The cooperation between the four working groups made it possible to use new methods to gain insights for the first time into the important cells in plant pathways and to therefore obtain a basis for a better understanding of plant metabolism."

Credit: 
Heinrich-Heine University Duesseldorf

Food insufficiency linked to depression, anxiety during the COVID-19 pandemic

A new study published in the American Journal of Preventive Medicine found a 25% increase in food insufficiency during the COVID-19 pandemic. Food insufficiency, the most extreme form of food insecurity, occurs when families do not have enough food to eat. Among the nationally representative sample of 63,674 adults in the US, Black and Latino Americans had over twice the risk of food insufficiency compared to White Americans.

"People of color are disproportionately affected by both food insufficiency and COVID-19," said Jason Nagata, MD, MSc, assistant professor of pediatrics at the University of California, San Francisco and lead author on the study. "Many of these individuals have experienced job loss and higher rates of poverty during the pandemic."

Overall, 65% of Americans reported anxiety symptoms and 52% reported depressive symptoms in the week prior to completing the survey. Those who did not have enough to eat during that week reported worse mental health, with 89% of food-insufficient Americans reporting anxiety symptoms compared to 63% of food-sufficient Americans. Similarly, 83% of food-insufficient Americans, compared to 49% of food-sufficient Americans, reported depressive symptoms.

"Hunger, exhaustion, and worrying about not getting enough food to eat may worsen depression and anxiety symptoms," said Nagata.

Researchers found that receipt of free groceries or meals alleviated some of the mental health burden of food insufficiency.

"Policymakers should expand benefits and eligibility for the Supplemental Nutrition Assistance Program (SNAP) and other programs to address both food insecurity and mental health," said Kyle Ganson, PhD, MSW, assistant professor at the University of Toronto, a co-author of the study.

Credit: 
University of Toronto

Johns Hopkins scientist develops method to find toxic chemicals in drinking water

Most consumers of drinking water in the United States know that chemicals are used in the treatment processes to ensure the water is safe to drink. But they might not know that the use of some of these chemicals, such as chlorine, can also lead to the formation of unregulated toxic byproducts.

Johns Hopkins Environmental Health and Engineering Prof. Carsten Prasse proposes a new approach to assessing drinking water quality that could result in cleaner, safer taps.

"We are exposing people in the United States to these chemical compounds without knowing what they even do," Prasse said. "I'm not saying that chlorination is not important in keeping our drinking water safe. But there are unintended consequences that we have to address and that the public needs to know about. We could do more than what we're doing."

Among disinfection byproducts, only 11 compounds are currently regulated in drinking water, according to his paper published in the Royal Society of Chemistry journal Environmental Science: Processes & Impacts. This is in stark contrast to the more than 700 disinfection byproducts that have so far been identified in chlorinated drinking water, he said.

Prasse said the number of disinfection byproducts that are regulated in drinking water has not changed since the 1990s, despite clear scientific evidence for the presence of other toxic compounds.

The existing approach to evaluate chemicals in drinking water is extremely tedious and based on methods that are often outdated, he said. For instance, chemicals are currently evaluated for toxicity by expensive, time-consuming animal studies.

Applying those same methods to the growing number of chemicals in drinking water would not be economically feasible, Prasse said. At a minimum, he added, new methods are needed to identify chemicals that are of highest concern.

Prasse proposes casting a bigger net to capture a more diverse mix of chemicals in water samples. The "reactivity-directed analysis" can provide a broader readout of what's present in drinking water by targeting the largest class of toxic chemicals known as "organic electrophiles."

"This method can help us prioritize which chemicals we need to be paying closer attention to with possible new regulations and new limits while saving time and resources," Prasse said.

This new approach, which takes advantage of recent advances in analytical chemistry and molecular toxicology, identifies toxicants based on their reactivity with biomolecules such as amino acids, the building blocks of proteins, and simulates this process in order to flag toxic chemicals in drinking water.

"We know that the toxicity of many chemicals is caused by their reaction with proteins or DNA which alter their function and can result, for example, in cancer," Prasse said.

Credit: 
Johns Hopkins University

NASA missions help investigate an 'Old Faithful' active galaxy

video: Watch as a monster black hole partially consumes an orbiting giant star. In this illustration, the gas pulled from the star collides with the black hole's debris disk and causes a flare. Astronomers have named this repeating event ASASSN-14ko. The flares are the most predictable and frequent yet seen from an active galaxy.

Watch on YouTube: https://youtu.be/4esMWZZAaA8

Download in HD: https://svs.gsfc.nasa.gov/13798

Image: 
NASA's Goddard Space Flight Center

During a typical year, over a million people visit Yellowstone National Park, where the Old Faithful geyser regularly blasts a jet of boiling water high in the air. Now, an international team of astronomers has discovered a cosmic equivalent, a distant galaxy that erupts roughly every 114 days.

Using data from facilities including NASA's Neil Gehrels Swift Observatory and Transiting Exoplanet Survey Satellite (TESS), the scientists have studied 20 repeated outbursts of an event called ASASSN-14ko. These various telescopes and instruments are sensitive to different wavelengths of light. By using them collaboratively, scientists obtained more detailed pictures of the outbursts.

"These are the most predictable and frequent recurring multiwavelength flares we've seen from a galaxy's core, and they give us a unique opportunity to study this extragalactic Old Faithful in detail," said Anna Payne, a NASA Graduate Fellow at the University of Hawai'i at Mānoa. "We think a supermassive black hole at the galaxy's center creates the bursts as it partially consumes an orbiting giant star."

Payne presented the findings on Tuesday, Jan. 12, at the virtual 237th meeting of the American Astronomical Society. A paper on the source and these observations, led by Payne, is undergoing scientific review.

Astronomers classify galaxies with unusually bright and variable centers as active galaxies. These objects can produce much more energy than the combined contribution of all their stars, including higher-than-expected levels of visible, ultraviolet, and X-ray light. Astrophysicists think the extra emission comes from near the galaxy's central supermassive black hole, where a swirling disk of gas and dust accumulates and heats up because of gravitational and frictional forces. The black hole slowly consumes the material, which creates random fluctuations in the disk's emitted light.

But astronomers are interested in finding active galaxies with flares that happen at regular intervals, which might help them identify and study new phenomena and events.

"ASASSN-14ko is currently our best example of periodic variability in an active galaxy, despite decades of other claims, because the timing of its flares is very consistent over the six years of data Anna and her team analyzed," said Jeremy Schnittman, an astrophysicist at NASA's Goddard Space Flight Center in Greenbelt, Maryland, who studies black holes but was not involved in the research. "This result is a real tour de force of multiwavelength observational astronomy."

ASASSN-14ko was first detected on Nov. 14, 2014, by the All-Sky Automated Survey for Supernovae (ASAS-SN), a global network of 20 robotic telescopes headquartered at Ohio State University (OSU) in Columbus. It occurred in ESO 253-3, an active galaxy over 570 million light-years away in the southern constellation Pictor. At the time, astronomers thought the outburst was most likely a supernova, a one-time event that destroys a star.

Six years later, Payne was examining ASAS-SN data on known active galaxies as part of her thesis work. Looking at the ESO 253-3 light curve, or the graph of its brightness over time, she immediately noticed a series of evenly spaced flares - a total of 17, all separated by about 114 days. Each flare reaches its peak brightness in about five days, then steadily dims.

Payne and her colleagues predicted that the galaxy would flare again on May 17, 2020, so they coordinated joint observations with ground- and space-based facilities, including multiwavelength measurements with Swift. ASASSN-14ko erupted right on schedule. The team has since predicted and observed subsequent flares on Sept. 7 and Dec. 20.
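
The prediction itself is straightforward arithmetic: once the roughly 114-day period is pinned down by the earlier flares, future peak dates follow from adding multiples of the period to a reference flare. The snippet below sketches that calculation using the approximate period and the May 17, 2020 flare quoted in this article, so the projected dates are illustrative rather than the team's published ephemeris.

```python
# Sketch: project future flare dates from a reference peak and a ~114-day period.
# The period and reference date are approximate figures quoted in the article.
from datetime import date, timedelta

reference_peak = date(2020, 5, 17)   # predicted/observed flare from the article
period = timedelta(days=114)         # approximate recurrence interval

for n in range(1, 4):
    print(f"flare {n} after reference: {reference_peak + n * period}")
```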

The researchers also used TESS data for a detailed look at a previous flare. TESS observes swaths of the sky called sectors for about a month at a time. During the mission's first two years, the cameras collected a full sector image every 30 minutes. These snapshots allowed the team to create a precise timeline of a flare that began on Nov. 7, 2018, tracking its emergence, rise to peak brightness, and decline in great detail.

"TESS provided a very thorough picture of that particular flare, but because of the way the mission images the sky, it can't observe all of them," said co-author Patrick Vallely, an ASAS-SN team member and National Science Foundation graduate research fellow at OSU. "ASAS-SN collects less detail on individual outbursts, but provides a longer baseline, which was crucial in this case. The two surveys complement one another."

Using measurements from ASAS-SN, TESS, Swift and other observatories, including NASA's NuSTAR and the European Space Agency's XMM-Newton, Payne and her team came up with three possible explanations for the repeating flares.

One scenario involved interactions between the disks of two orbiting supermassive black holes at the galaxy's center. Recent measurements, also under scientific review, suggest the galaxy does indeed host two such objects, but they don't orbit closely enough to account for the frequency of the flares.

The second scenario the team considered was a star passing on an inclined orbit through a black hole's disk. In that case, scientists would expect to see asymmetrically shaped flares caused when the star disturbs the disk twice, on either side of the black hole. But the flares from this galaxy all have the same shape.

The third scenario, and the one the team thinks most likely, is a partial tidal disruption event.

A tidal disruption event occurs when an unlucky star strays too close to a black hole. Gravitational forces create intense tides that break the star apart into a stream of gas. The trailing part of the stream escapes the system, while the leading part swings back around the black hole. Astronomers see bright flares from these events when the shed gas strikes the black hole's accretion disk.

In this case, the astronomers suggest that one of the galaxy's supermassive black holes, one with about 78 million times the Sun's mass, partially disrupts an orbiting giant star. The star's orbit isn't circular, and each time it passes closest to the black hole, it bulges outward, shedding mass but not completely breaking apart. Every encounter strips away an amount of gas equal to about three times the mass of Jupiter.

Astronomers don't know how long the flares will persist. The star can't lose mass forever, and while scientists can estimate the amount of mass it loses during each orbit, they don't know how much it had before the disruptions began.

Payne and her team plan to continue observing the event's predicted outbursts, including upcoming dates in April and August 2021. They'll also be able to examine another measurement from TESS, which captured the Dec. 20 flare with its updated 10-minute snapshot rate.

"TESS was primarily designed to find worlds beyond our solar system," said Padi Boyd, the TESS project scientist at Goddard. "But the mission is also teaching us more about stars in our own galaxy, including how they pulse and eclipse each other. In distant galaxies, we've seen stars end their lives in supernova explosions. TESS has even previously observed a complete tidal disruption event. We're always looking forward to the next exciting and surprising discoveries the mission will make."

Credit: 
NASA/Goddard Space Flight Center

Boomerang employees' performance is on par with that of internal employees who never left the firm, new paper finds

Organizations seeking to fill internal roles traditionally have two options: promote from within or hire externally. Internal candidates are vetted talent who possess firm-specific skills, while outside hires bring external knowledge that can infuse an organization with new energy. Though this dichotomy is often accepted as unavoidable, there is a third option: boomerang employees.

Boomerang employees are those who return to an organization after an amicable absence. Whether the absence was for personal or professional reasons, their return provides unique value to an organization: like external hires they bring in outside knowledge, yet they also have internal job experience. There is evidence to suggest that boomerangs gain practical experience and develop their networks at interim firms, while also outperforming non-boomerang external hires when placed in roles that involve relational demands and administrative coordination.

However, previous research has not fully assessed the financial, psychological, and behavioral differences between boomerangs and those developed internally. To address this gap, a new paper contrasts the outcomes for boomerangs with those of internally promoted employees to help firms determine whether to invest in talent management strategies that include boomerang rehiring or to focus on internal strategies that develop current employees.

Are Boomerang Employees Ideal Hires for Performance?

The paper, "Hello Again: Managing Talent with Boomerang Employees," by researchers at Carnegie Mellon University, Providence College, and University College Dublin, was published in the journal, Human Resource Management.

Researchers utilized a sample from a professional services firm in the United States and drew from literature on talent management and psychological contracts to compare the compensation, satisfaction, commitment, and performance of boomerang employees to similar employees who never left the firm ("internal hires"). They found that reentry yielded improvements in compensation, satisfaction, and organizational commitment for boomerangs relative to matched internal hires. Yet, even as evidence suggests that boomerangs are more satisfied and committed employees, they are not necessarily better performers than internal hires.

"When compared to internal employees who never left the firm, boomerangs aren't better performers but they are happier and paid significantly more," said Catherine Shea, assistant professor of Organizational Behavior and Theory at Carnegie Mellon's Tepper School of Business, who co-authored the paper. "This counters much advice in the human resources world, which pitches boomerangs as ideal hires for performance."

The researchers theorized that when boomerangs are rehired, they are able to craft a more personalized and informed employment contract that allows for a stronger exchange relationship and a reduction of the negative feelings that may have led to their initial departure from the organization. Boomerangs are also likely to receive higher compensation than internal hires due to market and institutional forces: employees who stay with a firm are more likely to experience salary compression, while those who have left a firm often did so in response to a more lucrative offer, increasing their negotiating power.

Furthermore, boomerangs are likely to feel valued by the organization, as its hiring managers actively chose to rehire them. This dynamic also encourages boomerangs and employers to increase mutual commitment out of a sense of personal obligation.

More Satisfied and Committed Employees

While the researchers found no differences in performance ratings, they also measured how boomerangs allocated their time towards billable and non-billable hours. As billable hours are for client engagements, employee performance is often indicated by the number of hours they bill. The paper defined non-billable hours as internal-facing projects such as administrative or committee work, recruiting, or business development activities. While boomerangs did not have higher billable hours, the research does indicate that, compared with internal hires, boomerangs performed a higher number of non-billable hours, suggesting they focused on different tasks, specifically tasks related to developing the organization.

Overall, the paper reveals that boomerangs are more satisfied, committed, and engaged in more tasks to benefit the firm as compared to their matched counterparts who did not leave the organization.

Credit: 
Carnegie Mellon University

Mechanisms in the kidney that control magnesium and calcium levels discovered

BOSTON - While investigating the underlying causes of a rare skin disorder, a researcher at Massachusetts General Hospital (MGH) discovered a previously unknown mechanism in the kidneys that is important for regulating levels of magnesium and calcium in the blood.

The discovery, described in the journal Cell Reports, highlights the role of a previously little-studied gene called KCTD1. The gene directs production of a protein that regulates the kidney's ability to reabsorb magnesium and calcium from urine and return it to the bloodstream.

A genetic mutation causing the loss of KCTD1 results in defects in nephrons, the basic filtration units of the kidney, reports Alexander G. Marneros, MD, PhD, an investigator at the Cutaneous Biology Research Center at MGH and an associate professor of Dermatology at Harvard Medical School. KCTD1 is particularly important in the segments of the nephron involved in the regulation of reabsorption of salt, magnesium and calcium from filtered urine into the bloodstream.

Defects in nephrons resulting from KCTD1 loss in turn cause abnormally low levels of magnesium (hypomagnesemia) and calcium (hypocalcemia) in the bloodstream. The abnormally low blood levels of calcium trigger the parathyroid hormone-producing parathyroid glands in the neck to go into overdrive, a condition known as secondary hyperparathyroidism. The resulting high levels of parathyroid hormone lead to a release of calcium from bones in an attempt to counter the low calcium blood levels, eventually causing a loss of bone mass.

Marneros described the initial identification of kidney abnormalities that occur as a consequence of KCTD1 deficiency in a study published in 2020 in the journal Developmental Cell, which demonstrated that lack of KCTD1 in mutant mice leads to progressive kidney abnormalities, in part resembling the findings in patients with chronic kidney disease. Indeed, he observed that patients with KCTD1 mutations also developed chronic kidney disease with renal fibrosis (scarring of kidney tissue). These findings suggested that KCTD1 plays an important function in the kidney.

In the current study, Marneros reports that KCTD1 acts in a part of the nephron known as the distal nephron to regulate the reabsorption of electrolytes from urine into the bloodstream and maintain balanced levels (homeostasis) of these electrolytes.

"The distal nephron is important not just for salt reabsorption, but also for reabsorption of magnesium and calcium, and this study shows that KCTD1 is critical for the ability of the distal nephron to reabsorb these electrolytes from urine," he says.

The paper provides a detailed description of changes in proteins that shuttle electrolytes across membranes in the distal nephron when KCTD1 is missing. Collectively, the findings reveal that KCTD1 is a key regulator of the ability of distal nephrons not only to reabsorb salt but also magnesium and calcium from urine, thereby maintaining a healthy balance.

Credit: 
Massachusetts General Hospital

'Old Faithful' cosmic eruption shows black hole ripping at star

You've heard of Old Faithful, the Yellowstone National Park geyser that erupts every hour or two, a geological phenomenon on a nearly predictable schedule.

Now, an international group of astronomers has discovered an astronomical "Old Faithful" - an eruption of light flashing about once every 114 days on a nearly predictable schedule. The researchers believe it is a tidal disruption event, a phenomenon that happens when a star gets so close to a black hole that the black hole "rips" away pieces of the star, causing the flare.

The team made the discovery using data from NASA and from a network of telescopes operated by The Ohio State University.

Their findings, presented today at the American Astronomical Society's annual meeting and accepted for publication in The Astrophysical Journal, are the first clear example of regular flares erupting from a galaxy's core.

"It's really exciting, because we've seen black holes do a lot of things, but we've never seen them do something like this - cause this regular eruption of light - before," said Patrick Vallely, a co-author of the study and National Science Foundation Graduate Research Fellow at Ohio State. "It's like an extra-galactic Old Faithful."

The flare is coming from the center of a galaxy in our southern skies, about 570 million light years away. When scientists have witnessed tidal disruption events in the past, they have essentially seen the star destroyed.

But in this case, scientists think the star is circling a supermassive black hole, getting closer, then zooming away again. Each time the star approaches, the black hole pulls a little bit of the star away - gas equal to about three times the mass of Jupiter - and accretes that material, sending out a flare of light. That flare - and the fact that it happened regularly - gave scientists their first clue that this was no ordinary space phenomenon.

Scientists have named the regular outbursts of light ASASSN-14ko, after the All-Sky Automated Survey for Supernovae (commonly called ASAS-SN), a network of 20 robotic telescopes headquartered at Ohio State. Data from ASAS-SN allowed Anna Payne, lead author of the paper and a NASA Fellow at the University of Hawai'i at Mānoa, to identify that something strange was happening inside that galaxy.

"Knowing the schedule of this extragalactic Old Faithful allows us to coordinate and study it in more detail," Payne said. Payne and others confirmed the finding, and learned more about it, using data from NASA's Neil Gehrels Swift Observatory and Transiting Exoplanet Survey Satellite (commonly called TESS).

The ASAS-SN network first detected the flare on Nov. 14, 2014. Astronomers initially suggested the outburst was a supernova, but a supernova is a one-time event. In early 2020, Payne examined all of ASAS-SN's data on the galaxy and noticed a series of 17 flares spaced 114 days apart. Flares with any kind of regularity had never been seen before.

Payne and her colleagues predicted that the galaxy would flare again on May 17, 2020, so they coordinated joint observations with ground- and space-based facilities, including multiwavelength measurements with Swift. ASASSN-14ko flared right on schedule. The team went on to predict and observe flares on Sept. 7 and Dec. 26, 2020.

The team also used TESS data for a detailed look at a past flare. TESS observes swaths of the sky, called sectors, for about a month at a time. During the mission's first two years, the cameras collected a full sector image every 30 minutes. These snapshots created a precise timeline of a single flare that began on Nov. 8, 2018, from dormancy to rise, peak, and decline.

"TESS provided a very thorough picture of that particular flare, but because of the way the mission images the sky, it can't observe all of them," Vallely said. "ASAS-SN collects less detail on individual outbursts, but provides a longer baseline, which was crucial in this case. The two surveys complement one another."

Scientists say the black hole that is causing the flares is very large - about 20 times the size of the black hole in the center of our Milky Way.

Chris Kochanek, Ohio Eminent Scholar, astronomy professor at Ohio State and co-lead of the ASAS-SN project, said there is evidence that a second supermassive black hole exists in that galaxy.

"The galaxy that hosts this object is something of a 'trainwreck' consisting of two galaxies in the process of merging into one," he said.

And while astronomers observed the eruptions recently, they actually happened about 600 million years ago. Because the galaxy is so far away, the light took that long to reach us.

"There was life on Earth, but it was all very primitive," said Kris Stanek, a co-author on the paper and university distinguished professor of astronomy at Ohio State.

Astronomers classify galaxies with unusually bright and variable centers as active galaxies. These objects produce much more energy than the combined contribution of all their stars, which can include excesses at visible, ultraviolet and X-ray wavelengths. Astrophysicists think the extra emission comes from near the galaxy's central supermassive black hole, where a swirling disk of gas and dust accumulates and heats up because of gravitational and frictional forces. The black hole slowly consumes the material, which creates low-level, random changes in the disk's emitted light.

Astronomers have been searching for periodic emissions from active galaxies, which might signal theoretically suggested but observationally elusive cosmic phenomena. The 2020 Nobel Prize in Physics was awarded in part to astronomers studying the supermassive black hole in the Milky Way.

"In general, we really want to understand the properties of these black holes and how they grow," Stanek said. Because the eruptions from this black hole happen regularly and predictably, Stanek said, "it gives us a truly unique opportunity to better understand the phenomenon of episodic mass accretion on supermassive black holes. The ability to exactly predict the timing of the next episode allows us to take data that we could not otherwise take, and we are taking such data already."

Credit: 
Ohio State University

New method helps pocket-sized DNA sequencer achieve near-perfect accuracy 

Researchers have found a simple way to eliminate almost all sequencing errors produced by a widely used portable DNA sequencer, potentially enabling scientists working outside the lab to study and track microorganisms like the SARS-CoV-2 virus more efficiently.  

Using special molecular tags, the team was able to reduce the five-to-15 per cent error rate of Oxford Nanopore Technologies' MinION device to less than 0.005 per cent -- even when sequencing many long stretches of DNA at a time.    

"The MinION has revolutionized the field of genomics by freeing DNA sequencing from the confines of large laboratories," says Ryan Ziels, an assistant professor of civil engineering at the University of British Columbia and the co-lead author of the study, which was published this week in Nature Methods. "But until now, researchers haven't been able to rely on the device in many settings because of its fairly high out-of-the-box error rate."

Genome sequences can reveal a great deal about an organism, including its identity, its ancestry and its strengths and vulnerabilities. Scientists use this information to better understand the microbes living in a particular environment, as well as to develop diagnostic tools and treatments. But without accurate portable DNA sequencers, crucial genetic details could be missed when research is conducted out in the field or in smaller laboratories. 

So Ziels and his collaborators at Aalborg University created a unique barcoding system that can make long-read DNA sequencing platforms like the MinION over 1000 times more accurate. After tagging the target molecules with these barcodes, researchers proceed as they usually would -- amplifying, or making multiple copies of, the tagged molecules using the standard PCR technique and sequencing the resulting DNA.  

The researchers can then use the barcodes to easily identify and group relevant DNA fragments in the sequencing data, ultimately producing near-perfect sequences from fragments that are up to 10 times longer than conventional technologies can process. Longer stretches of DNA allow the detection of even slight genetic variations and the assembly of genomes in high resolution. 
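
The error correction works because every read carrying the same molecular tag comes from the same original molecule, so positions where those reads disagree are almost certainly sequencing errors that a consensus vote can remove. The sketch below shows only that grouping-and-voting idea, under the simplifying assumption that reads within a group are already aligned and of equal length; the published workflow handles alignment and other practical details.

```python
# Hypothetical sketch: group reads by their unique molecular tag (barcode) and
# take a per-position majority vote to build one consensus sequence per molecule.
from collections import Counter, defaultdict

reads = [
    ("AAGT", "ACGTACGTAC"),   # (barcode, read sequence)
    ("AAGT", "ACGTACGTAC"),
    ("AAGT", "ACGAACGTAC"),   # one sequencing error at position 3
    ("CCTG", "TTGCAATCGG"),
    ("CCTG", "TTGCAATCGG"),
]

groups = defaultdict(list)
for barcode, seq in reads:
    groups[barcode].append(seq)

for barcode, seqs in groups.items():
    consensus = "".join(
        Counter(column).most_common(1)[0][0] for column in zip(*seqs)
    )
    print(barcode, consensus)
```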

"A beautiful thing about this method is that it is applicable to any gene of interest that can be amplified," says Ziels, whose team has made the code and protocol for processing the sequencing data available through open-source repositories. "This means that it can be very useful in any field where the combination of high-accuracy and long-range genomic information is valuable, such as cancer research, plant research, human genetics and microbiome science." 

Ziels is currently collaborating with Metro Vancouver to develop an expanded version of the method that permits the near-real-time detection of microorganisms in water and wastewater. With an accurate picture of the microorganisms present in their water systems, says Ziels, communities may be able to improve their public health strategies and treatment technologies -- and better control the spread of harmful microorganisms like SARS-CoV-2.

Credit: 
University of British Columbia

Gene-editing produces tenfold increase in superbug slaying antibiotics

image: Tetraponera penzigi, a species of African ant that uses antibiotic-producing Streptomyces formicae bacteria to protect itself against disease.

Image: 
Dr Dino J. Martins

Scientists have used gene-editing advances to achieve a tenfold increase in the production of superbug-targeting formicamycin antibiotics.

The John Innes Centre researchers used the technology to create a new strain of Streptomyces formicae bacteria which over-produces the medically promising molecules.

Discovered within the last ten years, formicamycins have great potential because, under laboratory conditions, superbugs like MRSA do not become resistant to them.

However, Streptomyces formicae only produce the antibiotics in small quantities. This has made it difficult to scale up purification for further study and is an obstacle to the molecules being taken forward for clinical trials.

In a new study, researchers used CRISPR/Cas9 genome editing to make a strain which produces ten times more formicamycins on agar plates and even more in liquid cultures.

Using DNA sequencing they found the formicamycin biosynthetic gene cluster consists of 24 genes and is controlled by the activity of three key regulators inside the cluster.

They used CRISPR/Cas-9 to make changes in regulatory genes and measured how much of the antibiotics were produced.

CRISPR/Cas9 involves using part of a microbial immune system to make targeted changes in DNA. By uncovering the roles of the three important regulators, the team were able to combine mutations to maximise production. They added an extra copy of the formicamycin-boosting genes (forGF), effectively putting a foot on the accelerator, and removed the brake by deleting the repressor gene (forJ).

Surprisingly, lifting the brake, ForJ, led to formicamycins being produced in liquid culture, which had not previously been possible and had been a barrier to scaling up production of these useful compounds. The same change also led to the production of different variants of formicamycins with promising antibiotic activity against MRSA.

"Formicamycins are promising and powerful new antibiotics and we have used gene editing to generate a strain which over-produces these molecules. This will allow us to understand how they work and determine if they have the potential for clinical development," said first author Dr Rebecca Devine.

The priority for the next steps of the research is to further understand the regulation of formicamycin biosynthesis as some of the gene deletions used to achieve the new strain had unexpected effects.

"There is still a lot to learn and we may be able to increase production even further when we've figured this out," said Professor Matt Hutchings, another author of the study and a group leader at the John Innes Centre.

"We will use the over-producing strain to purify enough formicamycins to figure out their mode of action, how they kill superbugs such as MRSA, and why these superbugs don't become resistant. This is vital to their further development as antibiotics," he added.

Streptomyces formicae is a strain of bacteria found in the nests of a species of African ant called Tetraponera penzigi. The ants use the antibiotic-producing bacteria to protect themselves and their food source from pathogens.

Half of all known antibiotics are derived from the specialised metabolites of Streptomyces bacteria, many of which were discovered in the Golden Age of antibiotic discovery more than 60 years ago.

In the meantime, few new classes of antibiotics have been introduced, increasing the threat posed by antimicrobial resistance. Antibiotics that can kill so called superbugs like methicillin-resistant Staphylococcus aureus (MRSA) are urgently needed.

Professor Hutchings' group carried out the work in collaboration with that of Professor Barrie Wilkinson at the John Innes Centre. The recent purchase of natural products chemistry equipment, following a successful capital expenditure bid to BBSRC, will significantly enhance the ability to scale up production of formicamycins in the next phase of this project, said Professor Wilkinson.

Credit: 
John Innes Centre

DNA in water used to uncover genes of invasive fish

ITHACA, NY - Invasive round goby fish have impacted fisheries in the Great Lakes and the Finger Lakes by competing with native species and eating the eggs of some species of game fish.

But the camouflaged bottom dwellers can be difficult to find and collect - especially when they first enter a new body of water, when their numbers are still low and they might be easier to remove.

In a proof-of-principle study, Cornell researchers describe a new technique in which they analyzed environmental DNA - or eDNA - from water samples in Cayuga Lake to gather nuanced information about the presence of these invasive fish.

The study, "Nuclear eDNA Estimates Population Allele Frequencies and Abundance in Experimental Mesocosms and Field Samples," was published Jan. 12 in the journal Molecular Ecology.

While eDNA techniques have been increasingly studied for the last decade, previous methods typically focused on whether a species was present in an ecosystem.

"With these new advancements to eDNA methods, we can learn not only which invasive species are present in the environment, but because we identify the genetic diversity in the samples, we can also predict how many individuals there are and possibly where they came from," said Kara Andres, the paper's first author and a graduate student in the lab of co-author David Lodge, professor of ecology and evolutionary biology in the College of Agriculture and Life Sciences (CALS) and the Francis J. DiSalvo Director of the Cornell Atkinson Center for Sustainability.

"For the first time, we demonstrate that there is sufficient genetic information in environmental samples to study the origins, connectivity, and status of invasive, elusive, threatened or otherwise difficult to monitor species without the need for direct contact," added Jose Andrés, senior research associate in the Department of Ecology and Evolutionary Biology in CALS and a senior author of the study.

Since the method provides a genetic signature of individuals in a sample, scientists might be able to pinpoint where they came from by matching their DNA with populations from other areas.

"We would be able to tell genetically if round gobies were introduced by ships from Europe, which is how they originally got to the Great Lakes, or by some other means of introduction. Knowing this information might be helpful if we hope to stem new introductions at early stages," Kara Andres said.

In addition, knowing the genetic diversity of a species could prove useful in conservation efforts; low genetic diversity can indicate a dwindling or vulnerable population whose genetics may need to be actively managed.

"In the near future, this type of technique is likely to revolutionize how environmental and conservation management agencies monitor wild populations," Jose Andrés said.

The researchers conducted controlled experiments using small artificial environments - water-filled bins with one, three, five or 10 gobies in them. After collecting genetic information from all the gobies, they took water samples from each bin to see if they could match DNA from the samples with individuals in the bins. They also tried to estimate the number of fish in each bin, based on the water sample alone. They were successful in both instances, Kara Andres said.
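
One intuition behind estimating abundance from a water sample, shown here as a simplified sketch rather than the study's statistical model: each diploid fish carries at most two alleles at a given locus, so the number of distinct alleles recovered in the eDNA sets a floor on how many individuals must have contributed DNA. The allele labels below are invented.

```python
# Hypothetical sketch: a lower bound on the number of contributing individuals
# from the distinct alleles observed in an eDNA sample at one diploid locus.
# The study itself models allele frequencies across many loci; this shows only
# the minimum-count intuition.
import math

observed_alleles = {"A1", "A3", "A4", "A7", "A9"}       # invented allele labels
min_individuals = math.ceil(len(observed_alleles) / 2)  # 2 alleles per diploid fish
print(f"at least {min_individuals} individuals contributed to this sample")
```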

The researchers further validated their methods in Cayuga Lake, where they found high numbers of gobies, especially in shallow areas.

"This sensitive approach," Kara Andres said, "may overcome many of the logistical and financial challenges faced by scientists and conservation managers studying these species, allowing precious resources to be best allocated for improving conservation outcomes."

Credit: 
Cornell University

First-degree relative with kidney disease increases disease risk three-fold

In a large population-based family study, family history of kidney disease was strongly associated with increased risk of chronic kidney disease.

In this large population-based family study, recently published in the American Journal of Kidney Diseases, researchers investigated the familial aggregation of chronic kidney disease (CKD) by comparing the risk of CKD in individuals with an affected first-degree relative to that in the general population. Participants with an affected first-degree relative had a threefold higher risk of CKD than the general population, independent of BMI, hypertension, diabetes, hypercholesterolemia, history of cardiovascular disease (CVD), and smoking status. The authors also observed a 1.56-fold higher risk in those with an affected spouse, suggesting that shared environmental factors and/or assortative mating play a role. Heritability of estimated glomerular filtration rate (eGFR) was considerable (44%), whereas heritability of urinary albumin excretion (UAE) was moderate (20%). Heritability of kidney-related markers and serum electrolytes ranged between 20% and 50%. These results indicate an important role for genetic factors in modulating susceptibility to kidney disease in the general population.

Credit: 
National Kidney Foundation

New process evaluates patients for elective surgeries following COVID-19

Acknowledging that COVID-19 may be here to stay, Oregon Health & Science University has laid out a series of steps to prepare patients for elective surgery following their illness.

The evaluation, outlined in a commentary published in the journal Perioperative Medicine, is believed to be the first published protocol laying out a COVID-era path forward in American medicine.

"We think this is groundbreaking," said senior author Avital O'Glasser, M.D., associate professor of medicine (hospital medicine) in the OHSU School of Medicine. "We are hoping other clinics and surgical centers can use this to keep their patients safe."

The work started around Memorial Day, when OHSU clinicians began to see an increasing number of patients who had survived COVID-19 but now were in need of myriad types of elective surgeries. These patients needed hip replacements, fracture repairs, colonoscopies and other procedures that normally flow through OHSU Hospital.

"At the time, the main focus was on patient de-isolation protocols and determining appropriate PPE for providers," said co-author Katie Schenning, M.D., M.P.H., associate professor of anesthesiology and perioperative medicine in the OHSU School of Medicine. "How to safely manage surgical patients who had recovered from COVID was a big black box."

Surgeons had concerns about blood clots, cardiac scarring and other early reports of the compromised condition of COVID survivors. They wanted to be sure surgery would be safe for these patients.

"This is not the flu," O'Glasser said. "It's not a garden-variety upper respiratory infection. It's a serious, potentially fatal illness that affects any and all organs in the body. That's why we want to slow down and make sure it's safe enough for a patient to undergo surgery."

Researchers combed through data published worldwide about health outcomes of patients who underwent surgery following illness. By mid-August, OHSU had adopted a set of guidelines based on the research.

Among the key recommendations:

Minimum recovery time: The protocol calls for waiting a minimum of four weeks from the initial COVID-positive test for patients who had an asymptomatic infection and six to eight weeks for those who were more severely ill, "acknowledging that there is currently little data on the timeframe of recovery."

Evaluation: A patient history and physical assessment to identify any potential complications of surgery and to determine whether the patient has returned to their "pre-COVID" baseline health.

Objective testing: The protocol includes guidance for specific tests, such as blood work, based on a patient's age, the severity of symptoms, whether it's a major or minor procedure, and whether it includes putting the patient under general anesthesia.

The protocol does not account for patients who have not recovered from the illness, known as COVID long-haulers.

"As millions of people in the USA have recovered from COVID and are presenting for elective and non-urgent surgeries, we feel it is appropriate to set a standard for preoperative evaluation given the high risk of complications and high degree of clinical uncertainty," the authors write.

Credit: 
Oregon Health & Science University