
Studies of U.S. national parks focus on a few popular parks and are trending down

Research conducted in U.S. national parks has concentrated largely on five iconic parks, with more than a third of academic papers centered on Yellowstone National Park, researchers from North Carolina State University found in a new analysis.

They also found that the number of publications per year increased during the 1990s and 2000s, but has dropped since 2013. The findings, published in the journal Conservation Science and Practice, were drawn from an analysis of nearly 7,000 published, peer-reviewed studies conducted at U.S. designated national parks since 1970.

"Looking at the data was a surprise and perhaps a wake-up call," said the study's lead author Jelena Vukomanovic, assistant professor of parks, recreation and tourism management at NC State. "Overall, there's more research being published than ever before, but we're actually seeing a decline in publications of research conducted in the national parks."

In the study, researchers used the Web of Science database to analyze academic journal articles on national parks published between 1970 and 2018. While the National Park Service manages more than 400 properties including battlefields and national seashores, the researchers focused on 59 designated national parks. They excluded research on three newly established parks, and did not include reports published outside of peer-reviewed journals.

"National parks are large, relatively intact natural areas that serve conservation objectives and provide aesthetic and other connections between people and nature," Vukomanovic said. "We wanted to look at what research was being conducted in national parks and what kind of biases there might be in our understanding of these unique and special places."

They found more than half of all studies since 1970 have focused on five national parks - the parks that see the greatest number of visitors per year. Yellowstone accounted for 36.2% of studies; the Everglades represented 6.8%; Great Smoky Mountains accounted for 6.2%; Glacier represented 5.6%; and Yosemite was at 5.3%.
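As a rough illustration of how such shares can be tallied from a bibliographic export, consider the sketch below; the file name and the "park" column are hypothetical stand-ins, not the authors' actual Web of Science pipeline.

    # Sketch: tally per-park publication shares from a bibliographic export.
    # The CSV name and "park" column are hypothetical; the study's actual
    # Web of Science workflow is not described in this release.
    import csv
    from collections import Counter

    counts = Counter()
    with open("park_studies.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["park"]] += 1

    total = sum(counts.values())
    for park, n in counts.most_common(5):
        print(f"{park}: {n} studies ({100 * n / total:.1f}%)")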

"We need to understand that this is the context behind the conclusions we come to about our landscapes," said study co-author Josh Randall, graduate student in parks, recreation and tourism management at NC State. "Many of the findings - and their management implications - that we may apply broadly come from a small number of parks."

The location of parks near research universities wasn't necessarily linked to the amount of published research on them.

"We know that well-funded researchers can travel to interesting places that have unique phenomena," Vukomanovic said. "We found some relatively understudied parks are close - within a day trip distance - from a lot of research universities. This is a call for future work on the drivers of research, which could help both park and university administrators identify and address barriers."

More than half of studies were conducted in parks in northwestern forested mountains. Seventeen percent of studies were done in North American deserts, 10% were in eastern temperate forests, tropical wet forests represented 7% of studies and marine west coast forests were at 6%.

"The national parks of the east represent extraordinary biodiversity and cultural heritage," Vukomanovic said. "How can we bolster research activity there?"

Life sciences research - and ecology and evolutionary biology studies in particular - made up the majority of park research, representing 60% of total publications.

Physical sciences and mathematics research made up 25% of studies, with the largest share of research in that field going to the earth sciences. Social and behavioral science research made up 8% of studies; engineering made up 3%; and education and multidisciplinary research made up 3%. The arts and humanities represented less than 1% of total research.

Researchers said availability of funding could be playing a role in the decline in publications per year since 2013, and should be studied further.

Federal agencies in the United States were the most commonly cited source of research funding. Notably, more than a quarter of research funding sources were based internationally.

"These are the world's living laboratories," Vukomanovic said. "This interest from international sources speaks to the fact that everyone benefits from the research and knowledge that's being garnered at the national parks. There could be opportunities for that to grow. How can we make it easier for international collaborators to work in national parks?"

Researchers said increasing availability of data from the national parks could spur research.

"This analysis offers a comprehensive overview that can be built on to shape future work, to identify and address research gaps and determine funding priorities," Vukomanovic said. "It's a blueprint to consider for the future."

Credit: 
North Carolina State University

Research news tip sheet: Story ideas from Johns Hopkins Medicine


JOHNS HOPKINS MEDICINE HELPS FIND CAUSE OF GUILLAIN-BARRÉ SYNDROME OUTBREAK IN PERU

Media Contact: Michel Morris, melben1@jhmi.edu

In the spring of 2019, Peruvian neurologists saw a significant increase in the number of patients with Guillain-Barré syndrome (GBS). This rare disorder occurs when a person's immune system damages the nerves, causing muscle weakness and sometimes paralysis. Although GBS typically affects only 3,000 to 6,000 people annually in the United States (about 1 in 100,000), Peruvian doctors documented hundreds of cases between May and July 2019 and deemed it an outbreak.

A multinational team of GBS experts collaborated with Peruvian neurologists led by Ana Ramos of the Hospital Cayetano Heredia in Lima to identify the cause of the outbreak. Among the members of the international team was Carlos Pardo, M.D., a neurology professor at the Johns Hopkins University School of Medicine.

Pardo and other Johns Hopkins Medicine neurologists, microbiologists and computational biologists helped the Peruvian neurologists trace the cause to infection with Campylobacter jejuni, a common diarrheal bacterium. The team's findings are reported in the March 2021 issue of the journal Neurology: Neuroimmunology & Neuroinflammation.

While a third of GBS patients recover completely, another third may be permanently left with significant muscle impairment or mobility problems.

"It is a major problem for Peru's public health system," says Pardo. "Just having even 10 to 20 patients can overwhelm a system. They were seeing hundreds of cases in just over a few months."

The multinational team identified the cause of the outbreak by investigating blood, feces and spinal fluid samples from patients with GBS. The group knew that a bacterial infection was likely to be the initial trigger, but needed to identify the specific type so that it could be appropriately controlled.

Pardo and his team used genetic and molecular tools to identify the strain of bacteria in the samples. They found that it was not a new bacterium, but Campylobacter jejuni, a strain associated with other GBS outbreaks in South America and China. While trying to fight the bacteria, a patient's immune system can mistakenly attack the nervous system as well, resulting in GBS.

Once the researchers pinpointed the responsible bacterial strain, they were able to suggest potential treatments, such as vaccines and antibiotics, best suited for C. jejuni. Public health officials also were able to target measures to stop the bacteria from contaminating Peru's food and water supplies.

Pardo is available for interviews.

HEART DISEASE GAINING ON CANCER AS A MAJOR CAUSE OF DEATH IN YOUNG WOMEN, SAY RESEARCHERS

Media Contact: Brian H. Waters, bwaters3@jhmi.edu

It's no secret that women tend to put the health of others before their own, especially those who must care for children, manage a household, work full time and shoulder other responsibilities. So, it may not be surprising that a recent nationwide study by Johns Hopkins Medicine researchers revealed women younger than 65 are dying from heart disease at an increased rate compared with past years.

"Young women in the United States are becoming less healthy, which is now reversing prior improvements seen in heart disease deaths for the gender," says Erin Michos M.D., M.H.S., director of women's cardiovascular health and associate professor of medicine at the Johns Hopkins University School of Medicine. "In a previous study in December 2018, we showed that more attention should be paid to the health of young women, particularly those with the risk factors that contribute to heart disease."

"Our latest research confirms that need still exists," she says.

In the new study -- an analysis of U.S. death certificates between 1999 and 2018 from a national database -- Michos and her colleagues compared heart disease and cancer deaths in women under 65. Their findings were reported Feb. 8, 2021, in the European Heart Journal - Quality of Care and Clinical Outcomes.

The researchers found that overall during the 20-year study period, cancer was the most prevalent cause of premature death in women under 65, occurring at slightly more than twice the rate of heart disease. However, the overall cancer mortality rate (age adjusted) for women under 65 decreased from 62 to 45 deaths per 100,000 people while the overall heart disease mortality rate (age adjusted) dropped from 29 to 23 deaths per 100,000.

Another finding from the study was that the annual percentage change (APC) in age-adjusted mortality rates for cancer declined year after year during the study period, while it increased for heart disease in two specific groups from 2010 to 2018: women 25 to 34 (2.2%) and women 55 to 64 (0.5%). The APCs rose significantly after 2008 for women living in the midwestern United States, medium and small metropolitan areas, and rural areas. Additionally, APCs were found to have increased for white women from 2009 to 2013 and for Native American women from 2009 to 2018.
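For readers unfamiliar with the measure, the APC is conventionally estimated by fitting a log-linear trend to the age-adjusted rates. The sketch below shows that standard calculation on synthetic rates, not the study's data.

    # Sketch: annual percentage change (APC) from a log-linear fit, the
    # standard joinpoint-style definition. Rates are synthetic, not study data.
    import numpy as np

    years = np.arange(2010, 2019)
    rates = np.array([22.0, 22.1, 22.3, 22.4, 22.6, 22.7, 22.9, 23.1, 23.2])

    slope, _ = np.polyfit(years, np.log(rates), 1)  # log(rate) = b0 + b1 * year
    apc = 100 * (np.exp(slope) - 1)                 # APC = 100 * (e^b1 - 1)
    print(f"APC: {apc:.2f}% per year")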

Finally, the researchers determined the mortality gap between cancer and heart disease in women under 65 narrowed from a mortality rate (age adjusted) of 33 deaths per 100,000 in 1999 to 23 deaths per 100,000 in 2018.

Compounding the problem of premature death from heart disease in women under 65, say the researchers, is the commonly held misconception that women are not at risk for heart disease before menopause. But in fact, statistics show one-third of all heart issues in women occur before age 65.

Another major factor -- the gender gap in cardiac disease care -- was revealed in the 2018 study by Michos and her colleagues.

"We showed that women were not getting the same level of care as men, and they feel that way too. Women are more likely to report communication problems with health care providers and dissatisfaction with their health care experience, and we think this contributes to the disparities that we see when it comes to getting preventive and other treatment for cardiovascular disease," said Michos after the 2018 study was published.

Along with advocating for equal health care, Michos recommends that women combat the risk of premature cardiac death by eating a healthy and balanced diet, getting regular physical activity, not smoking, and maintaining healthy blood pressure, blood sugar, cholesterol level and body weight.

Michos is available for interviews.

STUDY: DOCTORS 'OVERUSING' COSTLY, RISKIER METHOD FOR CLEARING CLOGGED OR BLOCKED VESSELS

Media Contact: Michael E. Newman, mnewma25@jhmi.edu

In a recent study reviewing Medicare claims data from 2019 for nearly 59,000 patients with peripheral artery disease (PAD), a Johns Hopkins Medicine research team provides statistical evidence that one method for restoring blood flow to clogged or completely blocked vessels is being overused or inappropriately used in the United States. This occurs, the researchers say, even though the procedure has not been shown in clinical studies to be more effective than two other less-expensive, less-risky surgical methods.

The findings were reported March 22, 2021, in JACC: Cardiovascular Interventions.

"We wanted to characterize physician practice patterns in treating PAD and determine if the therapy in question, atherectomy, was being used appropriately when compared to the use of balloon angioplasty, stents or a combination of angioplasty and stents," says study lead author Caitlin Hicks, M.D., associate professor of surgery at the Johns Hopkins University School of Medicine. "What we discovered is that although there is a wide distribution of practices across the nation for the use of atherectomies and only slightly more than half of PVIs [peripheral vascular interventions] performed in 2019 relied on the technique, atherectomy accounted for 90% of all Medicare PVI payments."

Atherectomy is one of the methods that clinicians use to remove plaque (the buildup of fat, cholesterol, calcium and other substances found in the blood) from blood vessels that have become narrowed or blocked. Unlike balloon angioplasties and stents, which push plaque into the vessel wall to open the passageway, an atherectomy cuts the plaque out.

However, clinical studies have never demonstrated that atherectomies are any more effective in treating PAD than angioplasties, stents or a combination of the two. Other studies have suggested that atherectomy increases the risk of distal embolization, in which a piece of plaque breaks free of a vessel and travels to the legs, dangerously reducing blood flow to the feet.

For the period Jan. 1 - Dec. 31, 2019, the Johns Hopkins Medicine researchers reviewed Medicare fee-for-service claims for 58,552 U.S. patients who received elective PVI -- atherectomy, angioplasty or stenting -- for the first time. Patients were characterized for their demographics -- including age, sex, race and ZIP code of residence -- as well as their reason for getting PVI -- including claudication (pain or cramping in the lower leg due to reduced blood flow) and chronic limb-threatening ischemia (inadequate blood supply to a leg that causes pain at rest or leads to gangrene). Histories of other conditions, such as kidney disease and diabetes, and lifestyle behaviors, such as smoking, also were noted.

Medicare claims for 1,627 doctors who performed PVIs on 10 or more patients during 2019 were reviewed in the study. The researchers documented a number of physician characteristics, including sex, years since graduation from medical school, primary specialty, census region of practice, population density of practice location, number of patients treated for PAD, and the type of medical facility in which care was primarily given.

Analysis of the data showed that during the study period, 31,476 patients (53.8%) received atherectomies as their PVI. For age and sex, the numbers of atherectomies and non-atherectomies were approximately the same. However, the researchers found that atherectomies were performed more frequently on Black or Hispanic patients, those with claudication as their reason for treatment, and people living in urban settings and in the southern United States.

Physician use of atherectomies ranged from 0% (never used) to 100% (always used), with the latter being the case for 133 clinicians -- about 8% of the total studied. Male physicians were more likely to use atherectomies, as were doctors in practice for more than 20 years, cardiologists and radiologists, and those who practiced in regions with a higher median Medicare-allowed payment for PVIs per patient.

Additionally, Hicks says, physicians who worked in ambulatory surgical centers or office-based laboratories used atherectomy seven times more often than physicians who worked primarily in controlled facilities such as hospitals.

Overall, nearly $267 million was reimbursed by Medicare for PVIs performed in 2019. Of this, approximately $241 million -- a resounding 90.2% -- was for atherectomies.
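The mismatch between procedure share and payment share follows directly from the figures above; here is a quick check using the release's rounded numbers, which is why the payment share lands a hair above the paper's 90.2%.

    # Quick check of the headline arithmetic using the rounded figures in
    # this release; the paper's unrounded totals give 90.2%.
    atherectomy_patients, total_patients = 31_476, 58_552
    atherectomy_paid, total_paid = 241e6, 267e6  # approximate dollars

    print(f"Share of PVI procedures: {100 * atherectomy_patients / total_patients:.1f}%")  # ~53.8%
    print(f"Share of PVI payments:   {100 * atherectomy_paid / total_paid:.1f}%")          # ~90.3%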

"We feel that these numbers -- especially when there is no solid evidence that atherectomies treat PAD any more effectively than angioplasty, stents or a combination of angioplasty and stents -- suggest there is potential overuse of atherectomies in certain situations," says Hicks. "This poses a high health care burden and should be addressed with professional guidelines for more appropriate use of the procedure."

Hicks is available for interviews.

Credit: 
Johns Hopkins Medicine

Dramatic increases seen in rates of insomnia, sleep apnea among US military

SAN ANTONIO (March 31, 2021) -- Insomnia and obstructive sleep apnea have increased dramatically among active-duty military members over a 14-year period, 2005 through 2019.

Insomnia increased 45-fold and sleep apnea went up more than 30-fold, according to a study led by The University of Texas Health Science Center at San Antonio (UT Health San Antonio).

The study found that the military members most likely to be diagnosed with either sleep disorder were married, male, white, higher-ranking enlisted Army service members, and age 40 or older.

The researchers compared medical codes that represent diagnosis of sleep apnea or insomnia in active-duty Army, Navy, Marine Corps and Air Force personnel. No medical code data was available for the Coast Guard or for the Space Force, which was established in December 2019.

"Other studies have been conducted in the past, but those were based more on self-reported surveys or focused on a single branch of the military. No one has studied these sleep disorders in multiple branches of the military before, based on universally used diagnostic medical codes from health records," said principal investigator Vincent Mysliwiec, MD.

Dr. Mysliwiec is a sleep medicine physician and professor of research in the Department of Psychiatry and Behavioral Sciences at UT Health San Antonio, and a retired U.S. Army colonel.

"The most surprising result was that military members in the Army had the highest rates of obstructive sleep apnea and insomnia diagnoses. These findings are concerning because service members across the military branches are otherwise healthy and have similar physical requirements. Their sleep disorders developed and were diagnosed while they were in the military," Dr. Mysliwiec said.

Lower rate seen in women

Another finding was that women were diagnosed with sleep disorders at a much lower rate than men. Women in the military were expected to have lower rates of sleep apnea than their male counterparts, but they were not expected to have lower rates of insomnia diagnoses. "This conflicts with the rate of insomnia diagnoses among female veterans and in civilian women, which are higher," he said.

A 2017 Department of Veterans Affairs study published in Women's Health Issues showed that more than half of female veterans reported in a postal survey that they have insomnia. "We were not expecting to find that women are potentially underdiagnosed for insomnia while they are on active duty," Dr. Mysliwiec said. "This is a concerning finding. We will need to conduct more research to better understand what contributes to the potential underdiagnosis of insomnia in active-duty women."

Long deployments

Study co-author Alan Peterson, PhD, added perspective to the large number of Army personnel who were diagnosed with sleep disorders. "While military deployments were not evaluated in this epidemiological study, previous research has shown a strong correlation between deployments and sleep disorders, and deployments combined with other chronic health conditions, such as post-traumatic stress disorder and traumatic brain injury," he said.

"In the wars in Afghanistan and Iraq, there were longer and more frequent deployments between 2008 and 2012. The Army typically had the longest and most frequent deployments -- 21 months -- compared to 12 to 16 months for the other services," he explained.

"While we don't know yet exactly why Army personnel were more likely to be diagnosed with obstructive sleep apnea or insomnia, another factor besides deployments could be that Army personnel have greater access to large medical centers, which are typically located on Army posts. In contrast, Marines rely on Navy medical facilities that may not be where they are serving," Dr. Peterson said.

"Another factor that could have influenced the results is that the Army was the first service to institute a service-wide education program on military sleep practices. Having greater access to medical facilities and the Army's emphasis on education about sleep disorders may have resulted in more soldiers recognizing their sleep disturbances and seeking appropriate treatment," Dr. Peterson said.

"Overall, this study provides a comprehensive overview of the two most common sleep disorders in the U.S. military and contributes to sleep research that opens the door to learning more about the causes for these diagnoses. This will lead to more targeted prevention strategies and more effective treatments," added Dr. Peterson, professor of psychiatry, chief of the Division of Behavioral Medicine and director of two national military PTSD research consortia based at UT Health San Antonio.

Credit: 
University of Texas Health Science Center at San Antonio

Study identifies risk factors for COVID-19 infection, hospitalization, and mortality among U.S. nursing home residents

Risks of SARS-CoV-2 coronavirus infection for long-stay nursing home residents were mainly dependent on factors in their nursing homes and surrounding communities, according to a large study led by a researcher at the Johns Hopkins Bloomberg School of Public Health.

By contrast, the study found that the risks of being hospitalized with, and of dying from, COVID-19, depended more on patient-specific characteristics such as age and body mass index--although the mix of factors linked to hospitalization was distinct from the mix of factors linked to mortality.

The study, which appears online March 31 in JAMA Network Open, detailed COVID-19 risk factors among more than 480,000 long-stay nursing home residents in the United States between April 1 and September 30, 2020. It is thought to be the first national study of COVID-19 among long-stay nursing home residents in the U.S.

"Our findings suggest that the dynamics of the pandemic work differently in a nursing home setting than they do in the wider community," says study lead author Hemalkumar Mehta, PhD, assistant professor in the Department of Epidemiology at the Bloomberg School. "The findings should help community leaders and nursing home administrators in devising better protections for nursing home residents during the remainder of the COVID-19 pandemic and in future pandemics."

Among the roughly 30 million recorded cases of COVID-19 in the U.S. since the start of the pandemic, there have been more than 500,000 deaths. Of those U.S. deaths, about one-third have been in nursing homes. These facilities have been especially vulnerable to COVID-19 due to their concentrations of frail, elderly residents.

Even so, the COVID-19 case, hospitalization, and mortality rates have varied greatly among nursing homes.

For their study, the researchers used Medicare data to identify a cohort of 482,323 long-stay nursing home residents, aged 65 and up, who had not yet been diagnosed with COVID-19 as of April 1 of last year. The cohort included residents at 15,038 nursing homes across the U.S.

A total of 137,119 residents (28.4 percent) were diagnosed with SARS-CoV-2 infection during the period April 1 through September 30. The researchers found that the risk of infection was dependent mainly on which nursing home the resident lived in, and in which county, rather than patient-specific factors.

After accounting for the influence of local factors, the large nominal differences in infection rates between whites and Blacks, whites and Asians, and whites and Latinos virtually disappeared.

Among the personal characteristics examined, only body mass index, a gauge of thinness or obesity, appeared to be important in determining infection risk: Having a BMI greater than 45 ("morbidly obese") was associated with 19 percent more infection risk than having a BMI in the normal range of 18.5 to 25.

The risk of hospitalization varied more with personal factors. A BMI of 40-45 was associated with 24 percent greater risk, and a BMI over 45 with 40 percent greater risk, compared to a BMI in the normal range.

Frailty and poor health appeared to be factors too. Residents with severe functional impairment were 15 percent likelier to be hospitalized when they got COVID-19; and those who had to use a catheter were 21 percent likelier.

Race and ethnicity were a major factor in hospitalization risk, even when adjusting for nursing home facility and geography. Asian nursing home residents, for example, were 46 percent more likely than whites to be hospitalized when diagnosed with COVID-19.

Somewhat surprisingly--and contrary to the results of prior studies outside nursing homes--factors most associated with hospitalization risk appeared to have a lesser role in mortality risk after controlling for differences among nursing homes. Asians were more likely than whites to die when they got COVID-19, but only 19 percent more likely. Moreover, Blacks and Hispanics had no significant difference in mortality risk compared to whites.

Again surprisingly, BMI was not a significant risk factor for mortality--except for those with a BMI below the normal range, who had a 19 percent greater risk of dying when infected with COVID-19.

Older age was the largest apparent mortality risk factor. For example, compared with the youngest age bracket of 65-70 years, being over 90 was associated with 155 percent greater risk, and even being 81-85 brought a 76 percent increase in mortality risk.

Cognitive impairment was another mortality risk factor--severely impaired residents were 79 percent more likely to die of COVID-19 than those with no cognitive impairment. Similarly, severe functional impairment was associated with a 94 percent greater chance of mortality.

Male residents were 57 percent likelier than females to die when they had COVID-19.

On the whole, according to the researchers, the results suggest that hospitalization and mortality risks, which normally go together for people living at home, were somewhat disconnected in the nursing home setting, at least under the unusual circumstances of the COVID-19 pandemic.

"This may represent resident or family preference to avoid hospitalization, triaging decisions when local hospitals were full, or other factors yet to be determined," Mehta says.

One bright spot in the data, the researchers note, is that the mortality rate dropped dramatically during the study period, from 29.9 percent in April to 15.8 percent in September.

Credit: 
Johns Hopkins Bloomberg School of Public Health

In search of the first bacterium

image: The metabolic network of the last bacterial common ancestor, LBCA. The small circles are metabolites or compounds; the diamonds are reactions. Arrows indicate the flow of compounds to and from reactions. Three large functional modules of the network are highlighted as large regions.

Image: 
HHU / Joana Xavier

What did the ancestor of all bacteria look like, where did it live and what did it feed on? A team of researchers from the Institute of Molecular Evolution at Heinrich Heine University Düsseldorf (HHU) has now found answers to these questions by analysing biochemical metabolic networks and evolutionary trees. In the journal Communications Biology, they report on how they can now even infer the shape of the first bacterium.

Roughly five years ago, Institute Head Prof. Dr. William (Bill) Martin and his team introduced the last universal common ancestor of all living organisms and named it "LUCA". It lived approximately 3.8 billion years ago in hot deep-sea hydrothermal vents.

Now the evolutionary biologists in Duesseldorf have described a further ancient cell named "LBCA" ("Last Bacterial Common Ancestor"). It is the ancestor of today's largest domain of all living organisms: Bacteria. In Communications Biology, they report on the new research approaches that led to the successful prediction of LBCA's biochemistry and phylogenetic links.

Bacteria are almost as old as life itself. LBCA lived around 3.5 billion years ago in a similar environment to LUCA. In order to unlock LBCA's genetic code, its properties and its story, the research team examined the genomes of 1,089 bacterial anaerobes, bacteria that survive without oxygen. "Abandoning aerobes made sense for our work," explains first author Dr. Joana C. Xavier. "If bacteria originated at a time when the Earth was anoxic, it does not make sense to investigate their origin considering species full of adaptations caused by oxygen."

Higher life forms pass on their genetic code from parent to offspring via vertical gene transfer. As a result, the genome provides information on phylogenetic history. But bacteria are masters of another form of gene transfer, namely lateral gene transfer (LGT), which allows them to exchange genetic information across different strains. This posed a major challenge in reconstructing the LBCA genome, as it rendered traditional phylogenetic methods incapable of inferring the root of the bacterial evolutionary tree.

For this reason, the researchers in Duesseldorf used biochemical networks together with thousands of individual trees. They investigated 1,089 anaerobic genomes and identified 146 protein families conserved in all bacteria. These proteins make up a nearly complete core metabolic network.
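Conceptually, finding families conserved across all genomes is a set intersection over presence/absence data. The toy sketch below illustrates that core step; the input format is hypothetical, and the authors' actual clustering and reconstruction pipeline is far more involved.

    # Toy sketch: find protein families present in every genome, the core
    # presence/absence step behind identifying universally conserved families.
    # The input format is hypothetical, not the authors' pipeline.
    from functools import reduce

    genomes = {
        "genome_A": {"fam1", "fam2", "fam3"},
        "genome_B": {"fam1", "fam3", "fam4"},
        "genome_C": {"fam1", "fam3"},
    }

    universal = reduce(set.intersection, genomes.values())
    print(f"{len(universal)} families conserved in all {len(genomes)} genomes: {sorted(universal)}")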

To complete LBCA's biochemistry, just nine further genes had to be added for the reconstructed metabolic network to include all essential and universal metabolites. To be fully independent and self-generated, LBCA's network would still require further genes inherited from the last universal common ancestor, LUCA, and nutrients from the environment.

With LBCA's metabolic network in hand, the authors then used statistical methods to determine which of the modern bacterial groups are most similar to LBCA. They did this using a method called Minimal Ancestor Deviation, MAD, previously developed by one of the co-authors, Fernando D. K. Tria: "The analyses revealed that the earliest branch of Bacteria to diverge was most similar to modern Clostridia, followed closely by Deltaproteobacteria, Actinobacteria and some members of Aquifex. In common, these groups have the acetyl-CoA pathway for carbon fixation and/or energy metabolism."

Prof. William Martin, senior author of the study, explains: "This is the only carbon fixation pathway present in both archaea and bacteria and that traces to LUCA. This result, obtained independently, is also in line with our most recent findings on the origin and early evolution of life in hydrothermal vents."

"We can infer with confidence that LBCA was most likely rod-shaped", says Xavier. "If it was similar to Clostridia, it is possible that LBCA was able to sporulate." This hypothesis was recently laid out by other researchers "and is highly compatible with our results", says Xavier. Forming spores would allow early cells to survive the inhospitable environment of the early Earth.

Credit: 
Heinrich-Heine University Duesseldorf

Sugar not so nice for your child's brain development

Sugar practically screams from the shelves of your grocery store, especially those products marketed to kids.

Children are the highest consumers of added sugar, even as high-sugar diets have been linked to obesity, heart disease and even impaired memory function.

However, less is known about how high sugar consumption during childhood affects the development of the brain, specifically a region known to be critically important for learning and memory called the hippocampus.

New research led by a University of Georgia faculty member in collaboration with a University of Southern California research group has shown in a rodent model that daily consumption of sugar-sweetened beverages during adolescence impairs performance on a learning and memory task during adulthood. The group further showed that changes in the bacteria in the gut may be the key to the sugar-induced memory impairment.

Supporting this possibility, they found that similar memory deficits were observed even when the bacteria, called Parabacteroides, were experimentally enriched in the guts of animals that had never consumed sugar.

"Early life sugar increased Parabacteroides levels, and the higher the levels of Parabacteroides, the worse the animals did in the task," said Emily Noble, assistant professor in the UGA College of Family and Consumer Sciences who served as first author on the paper. "We found that the bacteria alone was sufficient to impair memory in the same way as sugar, but it also impaired other types of memory functions as well."

Guidelines recommend limiting sugar

The Dietary Guidelines for Americans, a joint publication of the U.S. Departments of Agriculture and of Health and Human Services, recommends limiting added sugars to less than 10 percent of calories per day.

Data from the Centers for Disease Control and Prevention show Americans between the ages of 9 and 18 exceed that recommendation, with the bulk of those calories coming from sugar-sweetened beverages.

Considering the role the hippocampus plays in a variety of cognitive functions and the fact the area is still developing into late adolescence, researchers sought to understand more about its vulnerability to a high-sugar diet via gut microbiota.

Juvenile rats were given their normal chow and an 11% sugar solution, which is comparable to commercially available sugar-sweetened beverages.

Researchers then had the rats perform a hippocampus-dependent memory task designed to measure episodic contextual memory, or remembering the context where they had seen a familiar object before.

"We found that rats that consumed sugar in early life had an impaired capacity to discriminate that an object was novel to a specific context, a task the rats that were not given sugar were able to do," Noble said.

A second memory task measured basic recognition memory, a hippocampal-independent memory function that involves the animals' ability to recognize something they had seen previously.

In this task, sugar had no effect on the animals' recognition memory.

"Early life sugar consumption seems to selectively impair their hippocampal learning and memory," Noble said.

Additional analyses determined that high sugar consumption led to elevated levels of Parabacteroides in the gut microbiome, the more than 100 trillion microorganisms in the gastrointestinal tract that play a role in human health and disease.

To better identify the mechanism by which the bacteria impacted memory and learning, researchers experimentally increased levels of Parabacteroides in the microbiome of rats that had never consumed sugar. Those animals showed impairments in both hippocampal dependent and hippocampal-independent memory tasks.

"(The bacteria) induced some cognitive deficits on its own," Noble said.

Noble said future research is needed to better identify specific pathways by which this gut-brain signaling operates.

"The question now is how do these populations of bacteria in the gut alter the development of the brain?" Noble said. "Identifying how the bacteria in the gut are impacting brain development will tell us about what sort of internal environment the brain needs in order to grow in a healthy way."

Credit: 
University of Georgia

The color red influences investor behavior, financial research reveals

LAWRENCE, KANSAS -- The phrase "to see red" means to become angry. But for investors, seeing red takes on a whole different meaning.

That's the premise behind a new article by William Bazley, assistant professor of finance at the University of Kansas.

"Visual Finance: The Pervasive Effects of Red on Investor Behavior" reveals that using the color red to represent financial data influences individuals' risk preferences, expectations of future stock returns and trading decisions. The effects are not present in people who are colorblind, and they're muted in China, where red represents prosperity. Other colors do not generate the same outcomes.

The article appears in the current issue of Management Science.

"Our findings suggest the use of color deserves careful consideration when it's to be used on financial platforms, such as brokerage websites or by retirement service providers," Bazley said. "For instance, the use of color could lead to investors avoiding the platform or delaying important financial decisions, which could have deleterious long-term consequences."

Co-written by Henrik Cronqvist at the University of Miami and Milica Mormann at Southern Methodist University, the article demonstrates how evolutionary biology and social learning combine to create this color-coded behavior. In Western culture, it is possible that social learning has reinforced biological underpinnings. Specifically, the physical and psychological context in which color is perceived influences its meaning and human responses to it.

"In Western cultures, conditioning of red color and experiences start in early schooling as students receive feedback regarding academic errors in red," Bazley said.

Red is associated with alarms and stop signs that convey danger and command enhanced attention. Other examples include when California issues a "Red Flag Warning" that signals imminent danger of extreme fire or when the American Heart Association uses red in its guidelines to indicate hypertensive crisis (a blood pressure reading higher than 180/120) that necessitates medical care. Over time, repeated pairings of a color with negative stimuli can influence subsequent behavior.

In regard to finance, Bazley was most surprised to find that red appears to prolong pessimistic expectations following negative stock returns, while viewing the same information in black or blue leads to expectations of a reversal.

He said, "This suggests the use of color may have broad implications for stock market liquidity during times of crisis and the momentum anomaly."

Their research also drew on examples outside the financial community where colors influence choice. An emerging field called color psychology analyzes how color affects human behavior. Bazley cites a 2005 study in the journal Nature that argued the color of sportswear may influence outcomes in the Olympics.

"Much like our everyday choices, our financial decisions are likely to be shaped by factors which are not specific to the decision at hand. This can be due to a variety of reasons, such as limits to our attention. Ultimately, it suggests that incorporating aspects of psychology when studying financial decision-making is likely to yield insights," said Bazley, whose "Pervasive Effects" research is based on eight experiments with a total of 1,451 individuals.

He emphasized this particular project originated in a neuroscience course during graduate school. The research also benefited from the varied expertise of Mormann, who is a visual scientist, and Cronqvist, a behavioral finance expert.

Bazley's interest in color effects relates to his overall study in the dynamics of financial decision-making.

"Our everyday choices are shaped by a multitude of factors," said Bazley, whose expertise incorporates behavioral and social influences and fintech.

"A similar process plays out when we make our financial choices. We are still at the early stages of understanding these dynamics, but learning about them has the potential to yield insights that could ultimately improve the outcomes individuals realize from their decisions," he said.

So what is Bazley's favorite financial term involving the color red?

"I appreciate the phrase 'red herring,'" he said.

"In finance, it refers to a preliminary prospectus that a company uses when issuing securities to the public. It is an important document for potential investors, but it tends to omit key pieces of information; hence, it usually has a red disclaimer on the front. I also find fish to be delicious."

Credit: 
University of Kansas

Study shows promise of quantum computing using factory-made silicon chips

image: Professor John Morton next to a dilution fridge

Image: 
A. Abrusci / UCL

The qubit is the building block of quantum computing, analogous to the bit in classical computers. To perform error-free calculations, quantum computers of the future are likely to need at least millions of qubits. The latest study, published in the journal PRX Quantum, suggests that these computers could be made with industrial-grade silicon chips using existing manufacturing processes, instead of adopting new manufacturing processes or even newly discovered particles.

For the study, researchers were able to isolate and measure the quantum state of a single electron (the qubit) in a silicon transistor manufactured using a 'CMOS' technology similar to that used to make chips in computer processors.

Furthermore, the spin of the electron was found to remain stable for a period of up to nine seconds. The next step is to use a similar manufacturing technology to show how an array of qubits can interact to perform quantum logic operations.
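Figures like "stable for up to nine seconds" are typically extracted by fitting an exponential decay to repeated spin readouts at increasing wait times. The sketch below shows that fit on synthetic data; it illustrates the general method, not the study's actual measurement.

    # Sketch: extract a spin lifetime from an exponential-decay fit, the usual
    # way "spin stable for up to nine seconds" figures are obtained.
    # Data are synthetic, not the study's measurements.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, amplitude, lifetime):
        return amplitude * np.exp(-t / lifetime)

    t = np.linspace(0, 30, 40)  # wait times in seconds
    rng = np.random.default_rng(0)
    signal = decay(t, 1.0, 9.0) + rng.normal(0, 0.02, t.size)  # simulated readout

    (amplitude, lifetime), _ = curve_fit(decay, t, signal, p0=[1.0, 5.0])
    print(f"Fitted spin lifetime: {lifetime:.1f} s")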

Professor John Morton (London Centre for Nanotechnology at UCL), co-founder of Quantum Motion, said: "We're hacking the process of creating qubits, so the same kind of technology that makes the chip in a smartphone can be used to build quantum computers.

"It has taken 70 years for transistor development to reach where we are today in computing and we can't spend another 70 years trying to invent new manufacturing processes to build quantum computers. We need millions of qubits and an ultra-scalable architecture for building them, our discovery gives us a blueprint to shortcut our way to industrial scale quantum chip production."

The experiments were performed by PhD student Virginia Ciriano Tejel (London Centre for Nanotechnology at UCL) and colleagues working in a low-temperature laboratory. During operation, the chips are kept in a refrigerated state, cooled to a fraction of a degree above absolute zero (-273 degrees Celsius).

Ms Ciriano Tejel said: "Every physics student learns in textbooks that electrons behave like tiny magnets with weird quantum properties, but nothing prepares you for the feeling of wonder in the lab, being able to watch this 'spin' of a single electron with your own eyes, sometimes pointing up, sometimes down. It's thrilling to be a scientist trying to understand the world and at the same time be part of the development of quantum computers."

A quantum computer harnesses laws of physics that are normally seen only at the atomic and subatomic level (for instance, that particles can be in two places simultaneously). Quantum computers could be more powerful than today's supercomputers and capable of performing complex calculations that are otherwise practically impossible.

While the applications of quantum computing differ from those of traditional computers, quantum machines promise greater speed and accuracy in hugely challenging areas such as drug development and tackling climate change, as well as in more everyday problems with huge numbers of variables - just as in nature - such as transport and logistics.

Credit: 
University College London

Study: Female monkeys use males as "hired guns" for defense against predators

image: Female putty-nosed monkey

Image: 
C. Kolopp/WCS

BRAZZAVILLE, Republic of Congo (March 31, 2021) - Researchers with the Wildlife Conservation Society's (WCS) Congo Program and the Nouabalé-Ndoki Foundation found that female putty-nosed monkeys (Cercopithecus nictitans) use males as "hired guns" to defend from predators such as leopards.

Publishing their results in the journal Royal Society Open Science, the team discovered that female monkeys use alarm calls to recruit males to defend them from predators. The researchers conducted the study among 19 different groups of wild putty-nosed monkeys, a type of forest guenon, in Mbeli Bai, a study area within the forests in Nouabalé-Ndoki National Park, Northern Republic of Congo.

The results support the idea that the females' general alarm call requires males to assess the nature of the threat and serves to recruit males to ensure group defense. Females only cease the alarm call when males produce calls associated with anti-predator defense. The results suggest that alarm-calling strategies depend on the sex of the signaler. Females recruit males, who identify themselves while approaching, for protection. Males demonstrate to females their quality in predation defense, probably to secure future reproduction opportunities.

Males advertise their commitment to serve as hired guns by emitting general "pyow" calls while approaching the rest of their group - a call containing little information about ongoing events, but cues to male identity, similar to a signature call. Hearing his "pyow" call as a male approaches enables females to identify high-quality group defenders from a distance. This might contribute to long-term male reputation in groups, which would equip females to choose the males that most reliably ensure their offspring's survival.

Said the study's lead author Frederic Gnepa Mehon of WCS's Congo Program and the Nouabalé-Ndoki Foundation: "Our observations on other forest guenons suggest that if males do not prove to be good group protectors, they likely have to leave groups earlier than good defenders. To date, it remains unclear whether female guenons have a say in mate choice, but our current results strongly suggest this possibility."

In the course of this study, a new call type, named "kek," was consistently recorded. The researchers found that males used the "kek" call when exposed to a moving leopard model created for field experiments. Previous studies of putty-nosed monkeys in Nigeria never reported "keks." This new type of call could thus be population-specific, or it could be uttered toward moving threats. If "kek" calls are population-specific, this could suggest that different "dialects" exist among putty-nosed monkeys - a strong indicator of vocal production learning, whose existence in the animal kingdom is fiercely debated.

Said co-author Claudia Stephan of WCS's Congo Program and the Nouabalé-Ndoki Foundation: "Sexual selection might play a far more important role in the evolution of communication systems than previously thought. In a phylogenetic context, what strategies ultimately drove the evolution of communication in females and in males? Might there even be any parallels to female and male monkeys' different communication strategies in human language?"

The authors say the current results considerably advance the understanding of female and male alarm calling, both in terms of sexual dimorphisms in call production and in call usage. Interestingly, although males have more complex vocal repertoires than females, the cognitive skills necessary to strategically use the simple female repertoire seem to be more complex than those needed to follow male calling strategies. In other words, female putty-nosed monkeys' alarms may contain little information, but deliberately so: their simplicity facilitates the manipulation of male behavior.

Credit: 
Wildlife Conservation Society

Preventive medicine physician supply continues to fall behind population needs in the US

March 31, 2021 - The United States is facing a persistent and worsening shortage of physicians specializing in preventive medicine, reports a study in the Journal of Public Health Management and Practice. The journal is published in the Lippincott portfolio by Wolters Kluwer.

"The number of preventive medicine physicians is not likely to match population needs in the United States in the near term and beyond," according to the new research by Thomas Ricketts, Ph.D., MPH, and colleagues of University of North Carolina at Chapel Hill. The study appears in a supplement to the May/June issue of JPHMP, presenting new research and commentaries on preventive medicine, funded by the US Health Resources and Services Administration (HRSA).

Preventive medicine physician workforce: Getting older, not enough new trainees

Preventive medicine physicians work at the intersection of clinical medicine and public health, using specialized skills in health services planning and evaluation, epidemiology, and public health practice, management, and leadership. The COVID-19 pandemic has brought new attention to preventive medicine's role in bridging the health care and public health sectors.

Previous reports have noted a "real and significant shortage" of preventive medicine specialists. The most recent analysis found that the number of preventive medicine physicians decreased from 2.1 percent of the total workforce in 1970 to 0.9 percent in 1997. Dr. Ricketts and colleagues present an updated analysis of trends in the US supply of preventive medicine physicians.

Based on data from the American Board of Preventive Medicine, the number of Board-certified preventive medicine physicians increased from 6,091 in 1999 to 9,720 in 2018. The increase was largely driven by growth in occupational medicine, as well as the newer specialty of undersea and hyperbaric medicine.

On analysis of American Medical Association data from 2018, 6,866 physicians reported a specialty in preventive medicine. The figure held relatively steady compared with previous years. However, there were only about two preventive medicine specialists per 100,000 population - approaching half the per-capita supply at its peak in the 1970s.
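The "about two per 100,000" figure is straightforward per-capita arithmetic; the sketch below assumes a 2018 US population of roughly 327 million, which is not stated in the release.

    # Per-capita check: 6,866 preventive medicine physicians (AMA, 2018)
    # against an assumed 2018 US population of ~327 million.
    physicians = 6_866
    us_population = 327_000_000

    per_100k = physicians / (us_population / 100_000)
    print(f"{per_100k:.1f} preventive medicine physicians per 100,000 population")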

In an encouraging development, the percentage of women specializing in preventive medicine has increased; women now account for 35 percent of preventive medicine physicians. However, the specialty is aging: in 2017, nearly half of US preventive medicine physicians were aged 60 years or older.

Meanwhile, the number of new trainees in preventive medicine residency programs has decreased: from 420 in 1999 to 313 in 2020. "This decline will contribute to a reduction in preventive medicine physician supply, but the future supply will initially be most affected by the aging of the current workforce," Dr. Ricketts and coauthors write. "Even with the growth of women...the training pipeline is not producing enough new preventive medicine physicians to keep pace with population growth and retirements."

These trends highlight an "urgent need for extension of training programs." Some efforts are already in place, including funding for preventive medicine residency programs through HRSA. Loss of funding would not only exacerbate the ongoing shortage, but "would also result in a leadership gap in public health and prevention in the United States that would last well into the future," Dr. Ricketts and colleagues add.

The new JPHMP supplement highlights HRSA's investment in preventive medicine, focusing on the strategic goal of achieving health equity and enhancing population health, according to an introduction by HRSA officers and guest editors Paul Jung, MD, MPH, FACPM, and Sophia Russell, DM, MBA, RN, NE-BC.

The special issue presents HRSA-funded projects by preventive medicine residents, targeting public health priorities such as vaping/e-cigarettes, the opioid crisis, the COVID-19 pandemic, and the population health needs of rural America. Drs. Jung and Russell conclude: "The future of the specialty of preventive medicine remains strong, and HRSA will continue to manage Congress' valuable financial support for the specialty to ensure that it remains viable and relevant for the future."

Credit: 
Wolters Kluwer Health

Decades of hunting detect footprint of cosmic ray superaccelerators in our galaxy

image: Ultrahigh-energy diffuse gamma rays (yellow points) are distributed along the Milky Way galaxy. The background color contour shows the atomic hydrogen distribution in the galactic coordinates. The gray shaded area indicates what is outside of the field of view.

Image: 
HEASARC / LAMBDA / NASA / GSFC

An enormous telescope complex in Tibet has captured the first evidence of ultrahigh-energy gamma rays spread across the Milky Way. The findings offer proof that undetected starry accelerators churn out cosmic rays, which have floated around our galaxy for millions of years. The research is to be published in the journal Physical Review Letters on Monday, April 5.

"We found 23 ultrahigh-energy cosmic gamma rays along the Milky Way," said Kazumasa Kawata, a coauthor from the University of Tokyo. "The highest energy among them amounts to a world record: nearly one petaelectron volt."

That's three orders of magnitude greater than any known cosmic-ray-induced gamma ray--or any particle humans have accelerated in state-of-the-art laboratories on Earth.

Since 1990, dozens of researchers from China and Japan have hunted for the elusive high-energy cosmic gamma rays. The Tibet ASγ Collaboration made its discovery using nearly 70,000 square meters of ground arrays and underground muon detectors on the Tibetan Plateau, sitting more than 14,000 feet above sea level.

"Scientists believe high energy gamma rays can be produced by the nuclear interaction between high energy cosmic rays escaping from the most powerful galactic sources and interstellar gas in the Milky Way galaxy," said Huang Jing, a coauthor from Institute of High Energy Physics, Chinese Academy of Sciences.

Chen Ding of the National Astronomical Observatories, Chinese Academy of Sciences, another coauthor, added, "The detection of diffuse gamma rays above 100 teraelectron volts is a key to understanding the origin of very-high-energy cosmic rays, which has been a mystery since their discovery in 1912."

Balloon experiments first identified cosmic rays, revealing them to be a key source of radiation on Earth. Cosmic rays are highly energetic particles, mostly protons, that travel across space. Millions of these particles pass through your body every day. (They are believed to be harmless.)

But where do cosmic rays come from?

"We live together with cosmic-ray muons, though we are usually not sensitive to them," said Kawata. "Isn't it a fantasy to think of where and how these cosmic rays are produced and accelerated, traveling all the way to Earth?"

A popular theory argues that accelerators known as "PeVatrons" spew cosmic rays at energies up to one petaelectron volt (PeV). Possible PeVatrons include supernova explosions, star-forming regions, and the supermassive black hole at the center of our galaxy.

So far, no one has detected any such accelerators. If PeVatrons exist, their cosmic rays should leave trails of glowing gamma rays strewn across the galaxy. The new study reports the first evidence of this highly energetic haze.

"These gamma rays did not point back to the most powerful known high-energy gamma-ray sources, but spread out along the Milky Way," said Masato Takita, a coauthor and colleague of Kawata. "Our discovery confirms evidence of the existence of PeVatrons."

The researchers now want to determine if the probable PeVatrons are active or dead.

"From dead PeVatrons, which are extinct like dinosaurs, we can only see the footprint--the cosmic rays they produced over a few million years, spread over the galactic disk," said Takita.

"But if we can locate real, active PeVatrons, we can study many more questions," he said. "What type of star emits our sub-PeV gamma rays and related cosmic rays? How can a star accelerate cosmic rays up to PeV energies? How do the rays propagate inside our galactic disk?"

Other future directions include looking for PeVatron footprints in the southern hemisphere and confirming the gamma-ray results using neutrino detectors in Antarctica and beyond.

The research could also aid in the quest for dark matter. Underground detectors allowed the researchers to cut away cosmic-ray background noise, revealing the kind of pure, diffuse gamma rays predicted to emanate from dark matter.

"We can reduce the cosmic ray background by a factor of one million. Then we see a high-purity gamma ray sky," said Takita.

The experimental achievement moves physicists significantly closer to discovering where cosmic rays are born.

"This pioneering work opens a new window for the exploration of the extreme universe," said Huang. "The observational evidence marks an important milestone toward revealing cosmic ray origins, which have puzzled mankind for more than one century."

Credit: 
American Physical Society

UNH Research: New Hampshire coastal recreationists support offshore wind

DURHAM, N.H.-- As the Biden administration announces a plan to expand offshore wind energy development (OWD) along the East Coast, research from the University of New Hampshire shows significant support from an unlikely group: coastal recreation visitors. From boating enthusiasts to anglers, researchers found surprisingly widespread support, with close to 77% of coastal recreation visitors supporting potential OWD along the N.H. Seacoast.

"This study takes a closer look at the lingering assumption that offshore wind in the United States might hurt coastal recreation and tourism when in fact, we found the opposite," said Michael Ferguson, assistant professor of recreation management and policy. "Our findings suggest that offshore wind energy development will likely have little impact on coastal recreation and tourism, and in some instances, may even help amplify visitation."

In the study, recently published in the journal Energy Research & Social Science, researchers collected data from N.H. coastal recreation visitors using on-site surveys at 18 different locations along the seacoast, including beaches, marinas, boat launches, angling locations and yacht clubs. They surveyed a variety of visitors, from sightseeing and charter operators to beachgoers, surfers and anglers, assessing overall perceptions of acceptance, support, fit, and the recreation impact of OWD. The researchers found that when asked about OWD, 77% of coastal visitors were supportive, 73% were accepting and 58% agreed that OWD would fit the N.H. seascape.

To help coastal recreation visitors visualize what a commercial-scale offshore wind farm would look like on the N.H. Seacoast, the researchers showed half of the respondents a photo simulation of an OWD project, while the other half did not view it. Findings indicated that whether or not respondents saw the photo simulation, their attitudes remained the same: largely positive and supportive. Additionally, most respondents agreed that OWD would not cause them to alter or substitute their recreation activities, behaviors, or experiences.

"Most of these coastal recreation visitors frequented the area, so these are people with strong ties to the N.H. Seacoast," said Ferguson. "And, since offshore wind energy development has had its hurdles gaining traction and acceptance in the United States, our findings suggest that coastal recreation visitors are open and supportive of it and policymakers, natural resource managers, and the OWD industry should recognize coastal recreation and tourism as critical stakeholder sectors."

The researchers note in their paper that coastal recreation along the N.H. Seacoast is an essential sector of the state economy, accounting for more than $1.5 billion in annual economic impact. Across the nation, beaches and their associated coastal recreation activities serve as a leading source of tourism revenue in states with coastlines.

Credit: 
University of New Hampshire

Nursing graduate students report high levels of stress, anxiety, depression

AURORA, Colo. (March 30, 2021) - Researchers at the University of Colorado College of Nursing have found that nearly one-quarter of graduate nursing students reported elevated levels of stress, anxiety and depression, compounded in the past year by the COVID-19 pandemic.

Study findings, published recently in Nurse Educator, also reveal that 23.8% of student respondents scored within the area of clinical concern for PTSD and immune system suppression.

"Professions in healthcare are assumed to be high-stress, but the past year brought challenges so unprecedented that it's critical to understand how our students - who juggle clinical work and studies - are faring," says Laura Rosenthal, DNP, associate professor at the University of Colorado College of Nursing and study author.

Graduate students (n=222) within CU's College of Nursing participated in the study, which utilized a cross-sectional, 149-item electronic survey that included two validated instruments: the Depression, Anxiety and Stress Scale (DASS-21) and the Impact of Event Scale - Revised (IES-R). Nearly 10% of respondents reported severe or extremely severe scores on the depression and anxiety axes, while 14% responded similarly on the stress axis of the DASS-21. The IES-R results found 23.8% of respondents had clinically concerning scores, 9.5% had a possible diagnosis of PTSD and 6.2% scored high enough to possibly suppress immune system function.
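
For readers unfamiliar with the DASS-21, the sketch below maps subscale scores to severity bands using the standard published cutoffs for the instrument (on the doubled-score convention used to compare with the full DASS); the sample scores are invented for illustration and are not data from the CU study.

    # Illustrative DASS-21 severity banding using the standard published
    # cutoffs; the sample scores below are invented, not data from the study.
    CUTOFFS = {
        # axis: [(upper bound inclusive, label), ...]
        "depression": [(9, "normal"), (13, "mild"), (20, "moderate"),
                       (27, "severe"), (float("inf"), "extremely severe")],
        "anxiety":    [(7, "normal"), (9, "mild"), (14, "moderate"),
                       (19, "severe"), (float("inf"), "extremely severe")],
        "stress":     [(14, "normal"), (18, "mild"), (25, "moderate"),
                       (33, "severe"), (float("inf"), "extremely severe")],
    }

    def severity(axis: str, score: int) -> str:
        """Map a doubled DASS-21 subscale score to its severity band."""
        for upper, label in CUTOFFS[axis]:
            if score <= upper:
                return label

    print(severity("depression", 24))  # -> "severe"
    print(severity("stress", 16))      # -> "mild"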

Study findings also suggest that students who experienced abrupt and mandatory changes in clinical work hours - decrease or increase - or had previously worked in a hospital pre-pandemic were at greater risk for clinically concerning scores.

"These study results are crucial in understanding how to take care of our future workforce who are already on the front lines," says Rosenthal. "Controlled and current mental health challenges can be amplified, and students without any underlying mental health concerns can experience trauma during extreme periods of stress and uncertainty as experienced during the COVID-19 pandemic. Gauging anxiety, stress and depression can help educators and administrators provide adequate support to our student population who may be struggling."

Credit: 
University of Colorado Anschutz Medical Campus

Even without a brain, Penn Engineering's metal-eating robots can search for food

image: Penn Engineering's metal-eating robot can follow a metal path without using a computer or needing a battery. With its power-supplying units wired to the wheels on the opposite side, the robot autonomously navigates away from the tape and toward aluminum surfaces.

Image: 
University of Pennsylvania

When it comes to powering mobile robots, batteries present a problematic paradox: the more energy they contain, the more they weigh, and thus the more energy the robot needs to move. Energy harvesters, like solar panels, might work for some applications, but they don't deliver power quickly or consistently enough for sustained travel.

James Pikul, assistant professor in Penn Engineering's Department of Mechanical Engineering and Applied Mechanics, is developing robot-powering technology that has the best of both worlds. His environmentally controlled voltage source, or ECVS, works like a battery, in that the energy is produced by repeatedly breaking and forming chemical bonds, but it escapes the weight paradox by finding those chemical bonds in the robot's environment, like a harvester. While in contact with a metal surface, an ECVS unit catalyzes an oxidation reaction with the surrounding air, powering the robot with the freed electrons.
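
One plausible way to picture that electrochemistry (an illustrative aluminum-air-style cell; the paper's exact electrode reactions may differ) is an anode reaction that oxidizes the aluminum surface the robot drives on, paired with a cathode reaction that reduces oxygen from the surrounding air:

    \text{anode (metal surface):}\quad \mathrm{Al} + 3\,\mathrm{OH}^{-} \rightarrow \mathrm{Al(OH)_{3}} + 3e^{-}

    \text{cathode (air side):}\quad \mathrm{O_{2}} + 2\,\mathrm{H_{2}O} + 4e^{-} \rightarrow 4\,\mathrm{OH}^{-}

The electrons freed at the anode flow through the robot's motors on their way to the cathode, so the robot is, quite literally, powered by consuming the surface beneath it.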

Pikul's approach was inspired by how animals power themselves through foraging for chemical bonds in the form of food. And like a simple organism, these ECVS-powered robots are now capable of searching for their own food sources despite lacking a "brain."

In a new study published as an Editor's Choice article in Advanced Intelligent Systems, Pikul, along with lab members Min Wang and Yue Gao, demonstrates a wheeled robot that can navigate its environment without a computer. By having the left and right wheels of the robot powered by different ECVS units, they show a rudimentary form of navigation and foraging, where the robot automatically steers toward metallic surfaces it can "eat."

Their study also outlines more complicated behavior that can be achieved without a central processor. With different spatial and sequential arrangements of ECVS units, a robot can perform a variety of logical operations based on the presence or absence of its food source.

"Bacteria are able to autonomously navigate toward nutrients through a process called chemotaxis, where they sense and respond to changes in chemical concentrations," Pikul says. "Small robots have similar constraints to microorganisms, since they can't carry big batteries or complicated computers, so we wanted to explore how our ECVS technology could replicate that kind of behavior."

In the researchers' experiments, they placed their robot on aluminum surfaces capable of powering its ECVS units. By adding "hazards" that would prevent the robot from making contact with the metal, they showed how ECVS units could both get the robot moving and navigate it toward more energy-rich sources.

"In some ways," Pikul says, "they are like a tongue in that they both sense and help digest energy."

One type of hazard was a curving path of insulating tape. The researchers showed that the robot would autonomously follow the metal lane between two lines of tape if its ECVS units were wired to the wheels on the opposite side. If the lane curved to the left, for example, the ECVS on the right side of the robot would begin to lose power first, slowing the robot's left wheels and causing it to turn away from the hazard.
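
That steering rule can be captured in a few lines of simulation. This is purely illustrative: the real robot is analog hardware with no processor, and the power values below are invented.

    # Illustrative simulation of the cross-wired ECVS steering described above.
    from typing import Tuple

    def wheel_speeds(left_ecvs_power: float,
                     right_ecvs_power: float) -> Tuple[float, float]:
        """Cross-wiring: each ECVS unit drives the wheels on the OPPOSITE side,
        so losing contact on one side slows the far wheels and turns the robot
        away from the hazard."""
        left_wheels = right_ecvs_power   # right ECVS powers the left wheels
        right_wheels = left_ecvs_power   # left ECVS powers the right wheels
        return left_wheels, right_wheels

    # Lane curving left: the right-side ECVS begins to overlap the insulating
    # tape and loses power first...
    left_w, right_w = wheel_speeds(left_ecvs_power=1.0, right_ecvs_power=0.4)
    # ...so the left wheels slow (0.4) while the right wheels keep full power
    # (1.0), steering the robot left, back into the metal lane.
    print(f"left wheels: {left_w}, right wheels: {right_w}")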

Another hazard took the form of a viscous insulating gel, which the robot could gradually wipe away by driving over it. Since the thickness of the gel was directly related to the amount of power the robot's ECVS units could draw from the metal underneath it, the researchers were able to show that the robot's turning radius was responsive to that sort of environmental signal.

By understanding the types of cues ECVS units can pick up, the researchers can devise different ways of incorporating them into the design of a robot in order to achieve the desired type of navigation.

"Wiring the ECVS units to opposite motors allows the robot to avoid the surfaces they don't like," says Pikul. "But when the ECVS units are in parallel to both motors, they operate like an 'OR' gate, in that they ignore chemical or physical changes that occur under just one power source."

"We can use this sort of wiring to match biological preferences," he says. "It's important to be able to tell the difference between environments that are dangerous and need to be avoided, and ones that are just inconvenient and can be passed through if necessary."

As ECVS technology evolves, these units can be used to program even more complicated and responsive behaviors in autonomous, computerless robots. By matching the ECVS design to the environment that a robot needs to operate in, Pikul envisions tiny robots that crawl through rubble or other hazardous environments, getting sensors to critical locations while preserving themselves.

"If we have different ECVS that are tuned to different chemistries, we can have robots that avoid surfaces that are dangerous, but power through ones that stand in the way of an objective," Pikul says.

Credit: 
University of Pennsylvania

Crnic Institute discovery may explain high risk of leukemia in children with Down syndrome

Denver, CO, March 31, 2020 - Children with Down syndrome are 20 times more likely to develop acute lymphocytic leukemia (ALL) and 150 times more likely to develop acute myeloid leukemia (AML) compared to their typical peers. According to a new study by researchers at the Linda Crnic Institute for Down Syndrome, the reason could be that children with Down syndrome are more likely to present with clonal hematopoiesis (CH), a process in which a blood stem cell acquires a genetic mutation that promotes replication.

The findings, published online by Blood Advances, add to a growing body of evidence, much of which has been established by the Crnic Institute, linking immune dysregulation to a dramatically different disease spectrum, whereby people with Down syndrome are highly predisposed to some diseases (e.g., leukemia, autoimmune disorders, and Alzheimer's disease) and highly protected from others (e.g., solid tumors).

"We found a higher-than expected rate of CH in individuals with Down syndrome between the age of one to 20 years old," says study author Dr. Alexander Liggett, who spearheaded the study as a doctoral candidate in the lab of Dr. James DeGregori, Professor of Biochemistry and Molecular Genetics. "It is a surprising finding, as the phenomenon is typically only observed in elderly people."

To perform the study, Liggett, DeGregori, and colleagues applied an advanced sequencing technique they developed, called FERMI, to blood samples from the Crnic Institute Human Trisome Project Biobank™. Not only were mutations detected more often in young individuals with Down syndrome, but those mutations were more likely to be oncogenic, or potentially cancerous. In the typical elderly population, oncogenic mutations are commonly found in the genes DNMT3A, TET2, ASXL1, TP53, and JAK2. Interestingly, in the Down syndrome cohort, oncogenic CH was dominated by mutations of the TET2 gene.

"Given the increased risk of leukemia that accompanies clonal expansion of blood cells carrying oncogenic mutations, these expansions may become an important biomarker of cancer risk in the future," says Dr. Liggett.

The study also found that CH in Down syndrome is associated with biosignatures of immune dysregulation that are linked to diseases that commonly co-occur with Down syndrome, including thyroiditis, Alzheimer's disease, and leukemia. This discovery opens new lines of investigation to understand how CH impacts a variety of health outcomes in Down syndrome and how to potentially counteract its effects.

"This is truly transformative. This team has identified a new trait of Down syndrome that has strong implications for understanding the appearance of comorbidities more common in this population, such as leukemia and premature ageing," said Dr. Joaquin Espinosa, Executive Director of the Crnic Institute. "The next step is to define the long-term impacts of this precocious clonal hematopoiesis and how to prevent its harmful effects."

Credit: 
University of Colorado Anschutz Medical Campus