
A fast radio burst tracked down to a nearby galaxy

image: Image of SDSS J015800.28+654253.0, the host galaxy of Fast Radio Burst (FRB) 180916.J0158+65 (a.k.a. "R3"), acquired with the 8-meter Gemini-North telescope. Images acquired in SDSS g', r', and z' filters are used for the blue, green, and red colors, respectively. The position of the FRB in the spiral arm of the galaxy is marked by white crosshairs.

Image: 
Danielle Futselaar (artsource.nl)

Astronomers in Europe, working with members of Canada's CHIME Fast Radio Burst collaboration, have pinpointed the location of a repeating fast radio burst (FRB) first detected by the CHIME telescope in British Columbia in 2018. The breakthrough is only the second time that scientists have determined the precise location of a repeating source of these millisecond bursts of radio waves from space.

In results published in the January 9 edition of Nature, the European VLBI Network (EVN) used eight telescopes spanning locations from the United Kingdom to China to simultaneously observe the repeating radio source known as FRB 180916.J0158+65. Using a technique known as Very Long Baseline Interferometry (VLBI), the researchers achieved a level of resolution high enough to localize the FRB to a region approximately seven light years across - a feat comparable to an individual on Earth being able to distinguish a person on the Moon.
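That comparison can be sanity-checked with back-of-the-envelope small-angle arithmetic, using the figures from the article (a roughly seven-light-year region at the source's distance of about half a billion light years, stated later in the piece). The person's height (~2 m) and the Moon's average distance (~384,400 km) are assumed round numbers, not values from the article:

```python
import math

# Small-angle approximation: angle (radians) = size / distance,
# as long as both are in the same unit.
region_ly = 7.0        # localization region, light years (from the article)
distance_ly = 5.0e8    # distance to the host galaxy, light years (from the article)
angle_frb = region_ly / distance_ly

person_m = 2.0         # assumed height of a person, meters
moon_m = 3.844e8       # average Earth-Moon distance, meters (assumed round value)
angle_person = person_m / moon_m

def to_mas(rad):
    """Convert radians to milliarcseconds."""
    return math.degrees(rad) * 3600 * 1000

print(f"FRB localization angle: ~{to_mas(angle_frb):.1f} mas")   # ~2.9 mas
print(f"Person on the Moon:     ~{to_mas(angle_person):.1f} mas") # ~1.1 mas
```

Both angles come out at a few milliarcseconds, so the two feats are indeed of the same order, which is what the comparison in the article claims.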

A 'very different' location for an FRB

With that level of precision, the research team was able to train an optical telescope onto the location to learn more about the environment from which the burst emanated. What they found has added a new chapter to the mystery surrounding the origins of FRBs.

"We used the eight-metre Gemini North telescope in Hawaii to take sensitive images that showed the faint spiral arms of a Milky-Way-like galaxy and showed that the FRB source was in a star-forming region in one of those arms," said co-author Shriharsh Tendulkar, a former McGill University postdoctoral researcher who co-led the optical imaging and spectroscopic analyses of the FRB's location.

"This is a very different environment for a repeating FRB, compared to the dwarf galaxy in which the first repeating FRB 121102 was discovered to reside."

CHIME team's hypotheses in line with observed data

The discovery lined up with a number of ideas CHIME/FRB researchers had put forward following their initial detection of the burst in 2018.

"The FRB is among the closest yet seen and we even speculated that it could be a more conventional object in the outskirts of our own galaxy," said co-author Mohit Bhardwaj, a McGill University doctoral student and CHIME team member.

"However the EVN observation proved that it's in a relatively nearby galaxy, making it still a puzzling FRB, but close enough to now study using many other telescopes."

Zooming in on the radio sky

Since it began operation in the summer of 2018, CHIME has detected dozens of fast radio bursts, greatly accelerating the rate of discovery of these transient astrophysical phenomena. With over 1,000 antennas, CHIME's large field of view gives it a much greater chance of picking up fleeting bursts than conventional radio telescopes that are able to observe only a small area of the sky at a time.

When it came to pinpointing FRB 180916, the CHIME/FRB team worked closely with their EVN colleagues to determine exactly where to point the VLBI telescopes.

"By recording and processing the raw signal from each of the antenna elements that make up CHIME, we were able to refine the source position to a level close enough for EVN to successfully observe and localize multiple bursts from this FRB source," said co-author Daniele Michilli, a McGill University postdoctoral researcher and CHIME/FRB team member.

FRB's proximity opens the way for further study

At half a billion light years from Earth, the source of FRB 180916 is around seven times closer than the only other repeating burst to have been localized, and more than 10 times closer than any of the few non-repeating FRBs scientists have managed to pinpoint. That's exciting for astronomers because it will enable more detailed study that may help narrow down the possible explanations for FRBs.

"We have a new chance to perhaps detect emissions at other wavelengths - x-ray or visible light, for instance," said McGill University astrophysicist Victoria Kaspi, a leading member of the CHIME/FRB collaboration. "And if we did, that would be hugely constraining of the models."

Credit: 
McGill University

Maximizing bike-share ridership: New research says it's all about location

Key takeaways from a new study in the INFORMS journal Management Science:

Nearly 80% of bike-share usage comes from areas within 1,000 feet of the stations.

A 10% increase in bike availability would increase ridership by more than 12%.

System use is more likely to be affected by bike availability than by dock availability.

CATONSVILLE, MD, January 6, 2020 - Bike-share systems have grown in popularity thanks to a younger, more environmentally conscious generation. While they have garnered considerable attention in cities from Paris to Washington, D.C., their promise of urban transformation is far from fully realized.

New research in the INFORMS journal Management Science finds a key reason: while companies have focused on bike design and technology, there has been minimal research on operational aspects such as station density and bike-availability levels.

"Almost 80% of bike-share usage comes from areas within 1,000 feet of the stations, or roughly four city blocks," said Elena Belavina, one of the study authors, from Cornell University. "Anything past 1,000 feet, potential users are almost 60% less likely to use a station."

The study, "Bike-Share Systems: Accessibility and Availability," conducted by Belavina alongside Ashish Kabra of the University of Maryland and Karan Girotra also at Cornell, analyzes the relationship between ridership and operational performance in bike-share design systems to achieve higher ridership.

Using data from the Vélib' system in Paris, this paper estimates the impact of two facets of system performance on bike-share ridership: accessibility, or how far the user must walk to reach stations, and bike availability.

There are two impacts of availability: First, a short-term impact is that if nearby stations do not have bicycles when a user wants to take a trip, users must go to stations farther away or abandon using bike-share. Second, if users typically expect a lower chance of finding a bicycle, they are less likely to even consider bike-share for their commutes and the system will have lower ridership in the long-term.

"Most users choose to abandon using bike-share," said Girotra, a professor at Cornell Tech and the Johnson College of Business at Cornell. "But overall, we find that a 10% increase in bike availability would increase ridership by more than 12%."

Between increasing bike availability and decreasing walking distance, the study finds that the latter has the higher impact. Bike-share operators with limited resources should prioritize building more stations closer to riders.

Where should those stations go? The authors recommend locations where there are many points of interest and locations with lower bike availabilities.

"Among these, adding stations in areas closer to supermarkets provides more bene?ts than adding them closer to public transit and other points of interest," continued Girotra.

Credit: 
Institute for Operations Research and the Management Sciences

Jaguars could prevent a not-so-great American biotic exchange

image: A few coyotes were detected on the western edge of Panama's Darien National Park, the last barrier before they invade South America.

Image: 
Fundacion Yaguara Panama

For the first time, coyotes (Canis latrans) and crab-eating foxes (Cerdocyon thous) are occurring together. According to a recent study by researchers at the Smithsonian Tropical Research Institute (STRI) and collaborating institutions, deforestation along the Mesoamerican Biological Corridor may be the reason why canid species from North and South America ended up living side by side in eastern Panama, far from their original ranges.

When the Panama land-bridge emerged from the sea millions of years ago, mammals like giant sloths and saber-toothed cats dispersed between North and South America across the new corridor linking the continents, a phenomenon known as the Great American Biotic Interchange. Today, urban and agricultural development and deforestation are generating a new passageway of deforested habitats, ideal for invasive species adapted to human disturbance. Coyotes, native to regions spanning Canada to Mexico, and crab-eating foxes, commonly found from Colombia to northern Argentina, are among them.

"We knew the coyotes were moving south and the foxes north, but we didn't know how far they'd gotten, or what would happen when they met up," said Roland Kays, research associate at STRI, scientist at the North Carolina Museum of Natural Sciences and co-author of the new paper published in the Journal of Mammalogy. "Systematic camera trapping across both forests and agricultural land helped us find out."

To understand this phenomenon, scientists combined camera-trap surveys with observations from the literature and roadkill records. Their analysis revealed that coyote and crab-eating fox populations have colonized the agriculture-dominated corridor between Panama City and Lake Bayano. A few coyotes were even detected on the western edge of Panama's Darien National Park, the last barrier before they invade South America.

Jaguars and other tropical forest predators may have formed a barrier, keeping coyotes from moving farther south. "There is information about coyotes in Panama since 1981, and they have made progress across the isthmus thanks to the expansion of the livestock and agricultural frontier and deforestation in some areas of the country," said Ricardo Moreno, STRI research associate, president and researcher at Fundación Yaguará Panamá and co-author of the paper. "If the population of jaguars decreases and deforestation increases in Darien, surely the coyote will soon enter South America."

Despite originating at opposite ends of the Americas, these two canid species evolved comparable traits: they are both nocturnal, have similar diets and use the same types of habitats. They have never been observed together on camera, but the authors suggest that their shared characteristics could lead to competition in their newly shared range.

For the researchers, a surprising revelation of this study was the dog-like appearance of some coyotes captured by camera traps. Many had unusually short tails, hound-like muzzles and variable coat patterns, indicating a possible recent hybridization with dogs. This could benefit coyotes if they inherit dog genes associated with fruit-eating, as they might be better able to exploit tropical fruit.

If deforestation continues in Panama and Central America, crab-eating foxes and coyotes could be among the first mammals in a new 'Not-So-Great American Biotic Interchange' with unknown ecological impacts on native prey or competitors. To address this challenge, the scientists emphasize the urgent need to prioritize conservation research that continues to explore the effects of these invasive species in relation to fragmentation, reforestation and the persistence of native apex predators, like jaguars, in the region.

"We found coyotes using fragmented rainforests, but not the bigger forests where jaguars persist," Kays said. "We think keeping the Darien jaguar-friendly will also make it hostile to coyotes."

Credit: 
Smithsonian Tropical Research Institute

Benefits of integrating cover crop with broiler litter in no-till dryland cotton systems

Although most cotton is grown in floodplain soils in the Mississippi Delta region, a large amount of cotton is also grown under no-till systems on upland soils that are vulnerable to erosion and have reduced organic matter. There are much lower levels of cotton residue in these systems, which limits the effectiveness of the no-till approach to improve soil health.

Repeated applications of broiler litter in these systems expose the litter and its nutrients to risk of loss, reduce its effectiveness as a nutrient source, and can reduce yield. In contrast, integrating a cover crop with broiler litter in no-till dryland cotton offers many benefits, including improved soil health indicators and increased plant residue, cotton yield, infiltration, and water storage.

In the webcast "Manure and Cover Crop Management Practices on Dryland No-Till Cotton System in Mississippi," USDA-ARS research soil scientist Ardeshir Adeli provides a basis for farmers and producers who want to adopt cover crop management practices to maintain the fertilizer value of broiler litter and reduce the use of purchased inorganic fertilizers. In doing so, growers can maximize their net returns and protect the environment. This presentation also serves as a guideline for broiler producers and helps agricultural consultants and the Natural Resources Conservation Service develop plans for nutrient management.

This 23-minute presentation is available through the "Focus on Cotton" resource on the Plant Management Network. This resource contains more than 75 webcasts, along with presentations from six conferences, on a broad range of aspects of cotton crop management: agronomic practices, diseases, harvest and ginning, insects, irrigation, nematodes, precision agriculture, soil health and crop fertility, and weeds. These webcasts are open access (no subscription required).

Credit: 
American Phytopathological Society

Genes controlling mycorrhizal colonization discovered in soybean

image: A University of Illinois/USDA Agricultural Research Service study has identified genes related to mycorrhizal fungus colonization in soybeans.

Image: 
Michelle Pawlowski, University of Illinois

URBANA, Ill. - Like most plants, soybeans pair up with soil fungi in a symbiotic mycorrhizal relationship. In exchange for a bit of sugar, the fungus acts as an extension of the root system to pull in more phosphorus, nitrogen, micronutrients, and water than the plant could on its own.

Mycorrhizal fungi occur naturally in soil and are commercially available as soil inoculants, but new research from the University of Illinois suggests not all soybean genotypes respond the same way to their mycorrhizal relationships.

"In our study, root colonization by one mycorrhizal species differed significantly among genotypes and ranged from 11 to 70%," says Michelle Pawlowski, postdoctoral fellow in the Department of Crop Sciences at Illinois and co-author on a new study in Theoretical and Applied Genetics.

To arrive at that finding, Pawlowski grew 350 diverse soybean genotypes in pots filled with spores of a common mycorrhizal fungus. After six weeks, she looked at the roots under a microscope to evaluate the level of colonization.

"It was a little bit of a gamble because we didn't know much about soybean's relationship with mycorrhizae and did not know if differences in colonization among the soybean genotypes would occur. So when we screened the soybean genotypes and found differences, it was a big relief," Pawlowski says. "That meant there was a potential to find genetic differences, too."

The process of root colonization starts before fungal spores even germinate in the soil. Roots exude chemicals, triggering spores to germinate and grow toward the root. Once the fungus makes contact, there's a complex cascade of reactions in the plant that prevents the usual defensive attack against invading pathogens. Instead, the plant allows the fungus to enter and set up shop inside the root, where it creates tiny tree-like structures known as arbuscules; these are where the fungus and plant trade sugar and nutrients.

The study suggests there is a genetic component to root colonization rates in soybean. To find it, Pawlowski compared the genomes of the 350 genotypes and homed in on six genomic regions associated with differing levels of colonization in soybean.

"We were able to use all the information we have on the soybean genome and gene expression to find possible causal genes within these six regions," she says.

According to the study, the genes control chemical signals and pathways that call fungus toward roots, allow the plant to recognize mycorrhizal fungus as a "good guy," help build arbuscules, and more. "For almost every step in the colonization process, we were finding related genes within those regions," Pawlowski says.

Knowing which genes control root colonization could lead breeders to develop soybean cultivars with a higher affinity for mycorrhizal fungus, which could mean improved nutrient uptake, drought tolerance, and disease resistance.

"This environmentally friendly approach to improving soybean production may also help reduce the overuse of fertilizers and pesticides and promote more holistic crop production systems," says Glen Hartman, plant pathologist in the Department of Crop Sciences and crop pathologist for USDA-ARS.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Story tips: Weather days, grid balance and scaling reactors

image: The microgrid for the Smart Neighborhood in Hoover, Alabama, consists of solar panels and a battery pack and allows homes to disconnect from the main power grid.

Image: 
Southern Company

Energy - Whatever the weather

To better determine the potential energy cost savings among connected homes, researchers at Oak Ridge National Laboratory developed a computer simulation to more accurately compare energy use on similar weather days.

"Since no two weather days are alike, we created a simulated weather identification model that keeps environmental impacts such as temperature changes and sunlight consistent," said ORNL's Supriya Chinthavali. "This will help address the challenge of quantifying energy cost savings, which utility companies and homeowners are most interested in."

The team is analyzing energy use data from Smart Neighborhood®, a neighborhood-level research platform comprising 62 homes powered by traditional electric grid and microgrid sources.

The goal is to co-optimize energy cost, comfort, environment and reliability by controlling the connected homes' devices - particularly the HVAC and water heater, a home's largest energy consumers.

Future analysis by ORNL, Southern Company and university partners will include potential energy cost savings details.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-01/alabama%20power%20smart%20neighborhood%20microgrid.jpg

Caption: The microgrid for the Smart Neighborhood in Hoover, Alabama, consists of solar panels and a battery pack and allows homes to disconnect from the main power grid. Credit: Southern Company

Image: https://www.ornl.gov/sites/default/files/2020-01/04.09.TD-SMartHome.jpg

Caption: The Smart Neighborhood in Hoover, Alabama, a 62-home development, is connected to a microgrid operated by ORNL's open-source controller. The research is sponsored by the DOE Building Technologies Office and supports BTO's Grid-Interactive Efficient Buildings strategy. Credit: Southern Company

Grid - Below-ground balancing

Oak Ridge National Laboratory researchers created a geothermal energy storage system that could reduce peak electricity demand in homes by up to 37% while helping balance grid operations.

The system is installed underground and stores excess electricity from renewable resources like solar power as thermal energy through a heat pump. The system comprises underground tanks containing water and phase change materials that absorb and release energy when transitioning between liquid and solid states.

ORNL's design relies on inexpensive materials and is installed at shallow depths to minimize drilling costs. The stored energy can provide hours of heating in the winter or cooling in the summer, shaving peak demand and helping homeowners avoid buying electricity at peak rates.

"Shifting demand during peak times can help utilities better manage their loads while saving consumers money and encouraging greater use of renewable energy," said ORNL's Xiaobing Liu.

The team published results of the system's performance from a simulation.

Media Contact: Stephanie Seay, 865.576.9894; seaysg@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-01/Geothermal_graphic.jpg

Caption: ORNL researchers have developed a system that stores electricity as thermal energy in underground tanks, allowing homeowners to reduce their electricity purchases during peak periods while helping balance the power grid. Credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Dept. of Energy

Reactors - Quality codes to scale

Nuclear scientists at Oak Ridge National Laboratory have established a Nuclear Quality Assurance-1 program for a software product designed to simulate today's commercial nuclear reactors - removing a significant barrier for industry adoption of the technology.

The suite of tools, called VERA (the Virtual Environment for Reactor Applications), was developed by the Consortium for Advanced Simulation of Light Water Reactors, or CASL. It can be used to solve various challenges in nuclear reactor operations and consists of several physics codes covering neutron transport, thermal hydraulics, fuel performance and coolant chemistry.

The goals of the continued work in improving the simulation software are to help industry by boosting the power output from existing reactors and to improve designs and confidence in current and future reactors.

ORNL's Shane Stimpson co-leads MPACT, the component of VERA responsible for modeling power distribution throughout the reactor core.

"When developing these codes, we're listening to industry's needs to provide reactor simulations with broader appeal and value," he said.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-01/VERA-NQA1.png

Caption: Oak Ridge National Laboratory has established a quality assurance program as part of an effort to highlight software simulation product quality and make the codes more useful to industry. Credit: Benjamin Collins/Oak Ridge National Laboratory, U.S. Dept. of Energy

Credit: 
DOE/Oak Ridge National Laboratory

Progesterone from an unexpected source may affect miscarriage risk

About twenty percent of confirmed pregnancies end in miscarriage, most often in the first trimester, for reasons ranging from infection to chromosomal abnormality. But some women have recurrent miscarriages, a painful pattern that points to underlying issues. Clinical studies have been uneven, but some evidence shows that for women with a history of recurrent miscarriage, taking progesterone early in a pregnancy might moderately improve their chances of carrying the pregnancy to term.

A recent study in the Journal of Lipid Research sheds some light on a new facet of progesterone signaling between maternal and embryonic tissue, and hints at a preliminary link between disruptions to this signaling and recurrent miscarriage.

Progesterone plays an important role in embedding the placenta into the endometrium, the lining of the uterus. The hormone is key for thickening the endometrium, reorganizing blood flow to supply the uterus with oxygen and nutrients, and suppressing the maternal immune system.

Progesterone is made in the ovary as a normal part of the menstrual cycle, and at first, this continues after fertilization. About six weeks into pregnancy, the placenta takes over making progesterone, a critical handoff. (The placenta also makes other hormones, including human chorionic gonadotropin, which is detected in a pregnancy test.) Placental progesterone comes mostly from surface tissue organized into fingerlike projections that integrate into the endometrium and absorb nutrients. Some cells leave those projections and migrate into the endometrium, where they help to direct the reorganization of arteries.

Using cells from terminated pregnancies, Austrian researchers led by Sigrid Vondra and supervised by Jürgen Pollheimer and Clemens Röhrl compared the cells that stay on the placenta's surface with those that migrate into the endometrium. They discovered that the enzymes responsible for progesterone production differ between the two cell types early in pregnancy.

As a steroid hormone, progesterone is derived from cholesterol. Although overall progesterone production appears to be about the same in migratory and surface cells, migratory cells accumulate more cholesterol and express more of a key enzyme for converting cholesterol to progesterone. In women who have had recurrent miscarriages, levels of that enzyme in migratory placental cells are lower than in women with healthy pregnancies. In contrast, enzyme levels in cells from the surface of the placenta don't differ between healthy and miscarried pregnancies.

The team's findings suggest that production of progesterone by the migratory cells may have a specific and necessary role in early pregnancy and that disruption to that process could be linked to miscarriage.

Credit: 
American Society for Biochemistry and Molecular Biology

Ghost worms mostly unchanged since the age of dinosaurs

image: Upper specimen: Stygocapitella josemariobrancoi from a beach close to Plymouth, UK. Lower specimen: Stygocapitella furcata from the 4th of July beach on San Juan Island, WA, USA

Image: 
José Cerca, Christian Meyer, Günter Purschke, Torsten H. Struck

It is well known that the size, shape and structure of organisms can evolve at different speeds, ranging from fast-evolving adaptive radiations such as cichlids to living fossils such as coelacanths.

A team led by biologists at the Natural History Museum (University of Oslo) has uncovered a group of species in which change in appearance seems to have been brought to a complete halt.

The tiny annelid worms belonging to the genus Stygocapitella live in sandy beaches around the world. Over their 275-million-year history, the worms have evolved into ten distinct species.

But what makes the group stand out is that it displays only four different appearances, or morphotypes. Such absence of morphological change has lately proven to be a common feature of many so-called cryptic species complexes, for example in mammals, snails, crustaceans and jellyfishes.

- Cryptic species are species which have already been distinct species for a substantial amount of time, but have accumulated very little or no morphological differences.

- Such species can help us understand how evolution proceeds in the absence of morphological change, and which factors might be important in these cases, explains professor Torsten Struck at the Natural History Museum (University of Oslo).

Two of the Stygocapitella species investigated split apart around the time Stegosaurus and Brachiosaurus roamed the Earth.

But despite 140 million years of evolution, these ghost worms look almost exactly the same today. Looks may be deceiving, however: molecular investigations reveal that they are highly genetically distinct and are considered reproductively isolated species.

In comparison to other cryptic-species complexes that are separated by a maximum of a couple million years, the time span in this complex is ten times longer, which makes the lack of change in ghost worms extreme.

- These species can also be studied to understand how species respond to extreme ecological changes in the long run. Some of these morphotypes have experienced the much warmer conditions of the Cretaceous as well as the changing intervals of the ice ages, says Struck.

What makes the case of Stygocapitella particularly puzzling is that closely related taxa seem to be evolving new morphotypes significantly faster. The findings therefore highlight that evolutionary change in appearance should be viewed as a continuum, ranging from accelerated to decelerated, with the investigated worms standing out as one of the more extreme cases of the latter. The study also points out that species formation is not necessarily accompanied by morphological change.

The researchers suggest that lack of morphological change may be linked to the worms having adapted to an environment that has changed little over time.

- Beaches have always been around and had the same composition then as now. We suspect these worms have remained in the same environment for millions and millions of years, and they are well adapted. We suspect they have become good at moving around, but have not changed much, explains the study's first author, PhD fellow José Cerca.

- Alternatively, it has been suggested that populations regularly crash to only a few surviving individuals, and newly evolved characters get eliminated in the course of these events. Finally, besides or instead of the environment their development may constrain their evolution.

However, the reasons for the slow rate are as yet inconclusive in the current study and remain to be tested by the group in the future.

Credit: 
Natural History Museum, University of Oslo

New study unravels the complexity of childhood obesity

image: Nitesh Chawla, the Frank M. Freimann Professor of Computer Science and Engineering at Notre Dame, director of the Center for Network and Data Science and a lead author of the study.

Image: 
University of Notre Dame

The World Health Organization has estimated more than 340 million children and adolescents ages 5-19 are overweight or obese, and the epidemic has been linked to more deaths worldwide than those caused by being underweight.

The Centers for Disease Control recently reported an estimated 1 in 5 children in the United States, ages 12-18, are living with prediabetes -- increasing their risk of developing type 2 diabetes as well as chronic kidney disease, heart disease and stroke.

Efforts to stem the crisis have led clinicians and health professionals to examine both the nutritional and psychological factors of childhood obesity. In a new study led by the University of Notre Dame, researchers examined how various psychological characteristics of children struggling with their weight, such as loneliness, anxiety and shyness, combined with similar characteristics of their parents or guardians and family dynamics affect outcomes of nutritional intervention.

What they found was a "network effect," suggesting a personalized, comprehensive approach to treatment could improve results of nutritional interventions.

"Psychological characteristics clearly have interactional effects," said Nitesh Chawla, the Frank M. Freimann Professor of Computer Science and Engineering at Notre Dame, director of the Center for Network and Data Science and a lead author of the study. "We can no longer simply view them as individualized risk factors to be assessed. We need to account for the specific characteristics for each child, viewing them as a holistic set for which to plan treatment."

The Notre Dame team collaborated with the Centre for Nutritional Recovery and Education (CREN), a not-for-profit, nongovernmental nutritional clinic in São Paulo, Brazil, where patients participate in a two-year interdisciplinary treatment program including family counseling, nutritional workshops and various physical activities. Researchers analyzed the medical records and psychological assessments of 1,541 children who participated in the program.

The study's key takeaway points to the significant impact parents and guardians have on their child's health when it comes to nutrition. Strong family dynamics, such as concern for behavior and treatment and a sense of protectiveness for the child, led to improved outcomes of nutritional interventions. A lack of authority, however, led to minimal changes in results.

"This is quantitative evidence of the success and failure of interactions as they relate to the characteristics and interactions between the child and the parent or guardian," Chawla said.

The study also highlights the need for clinics to expand their views on patient populations. For example, while treatment programs that incorporate the development of interpersonal relationships -- familial and otherwise -- may improve outcomes of nutritional interventions, the same treatment plan may not have the same result for children experiencing loneliness coupled with anxiety.

"For the group without anxiety, this makes sense when you consider a treatment plan focused on strengthening a child's social circle and addressing issues stemming from loneliness, such as a poor social network, bullying or self-imposed isolation," said Gisela M.B. Solymos, co-author of the study, former general manager of CREN and former guest scholar at the Kellogg Institute for International Studies at Notre Dame and at the Center for Network and Data Science. "But patients feeling loneliness and anxiety actually showed minimal changes in response to nutritional interventions, and may be more likely to benefit from additional services at clinics like CREN."

Co-authors of the study include Keith Feldman, also at Notre Dame, and Maria Paula Albuquerque at CREN.

The National Science Foundation partially funded the study.

Credit: 
University of Notre Dame

Ooh là là! Music evokes 13 key emotions. Scientists have mapped them

image: Scientists mapped music samples according to the 13 key emotions triggered in more than 2,500 people in the United States and China when they listened to the audio clips.

Image: 
Graphic by Alan Cowen

The "Star-Spangled Banner" stirs pride. Ed Sheeran's "Shape of You" sparks joy. And "ooh là là!" best sums up the seductive power of George Michael's "Careless Whisper."

Scientists at the University of California, Berkeley, have surveyed more than 2,500 people in the United States and China about their emotional responses to these and thousands of other songs from genres including rock, folk, jazz, classical, marching band, experimental and heavy metal.

The upshot? The subjective experience of music across cultures can be mapped within at least 13 overarching feelings: amusement, joy, eroticism, beauty, relaxation, sadness, dreaminess, triumph, anxiety, scariness, annoyance, defiance, and feeling pumped up.

"Imagine organizing a massively eclectic music library by emotion and capturing the combination of feelings associated with each track. That's essentially what our study has done," said study lead author Alan Cowen, a UC Berkeley doctoral student in neuroscience.

The findings are set to appear this week in the journal Proceedings of the National Academy of Sciences.

"We have rigorously documented the largest array of emotions that are universally felt through the language of music," said study senior author Dacher Keltner, a UC Berkeley professor of psychology.

Cowen translated the data into an interactive audio map, where visitors can move their cursors to listen to any of thousands of music snippets to find out, among other things, if their emotional reactions match how people from different cultures respond to the music.

Potential applications for these research findings range from informing psychological and psychiatric therapies designed to evoke certain feelings to helping music streaming services like Spotify adjust their algorithms to satisfy their customers' audio cravings or set the mood.

While both U.S. and Chinese study participants identified similar emotions -- such as feeling fear hearing the "Jaws" movie score -- they differed on whether those emotions made them feel good or bad.

"People from different cultures can agree that a song is angry, but can differ on whether that feeling is positive or negative," said Cowen, noting that positive and negative values, known in psychology parlance as "valence," are more culture-specific.

Furthermore, across cultures, study participants mostly agreed on general emotional characterizations of musical sounds, such as angry, joyful and annoying. But their opinions varied on the level of "arousal," which refers in the study to the degree of calmness or stimulation evoked by a piece of music.

For the study, more than 2,500 people in the United States and China were recruited via Amazon Mechanical Turk's crowdsourcing platform.

First, volunteers scanned thousands of videos on YouTube for music evoking a variety of emotions. From those, the researchers built a collection of audio clips to use in their experiments.

Next, nearly 2,000 study participants in the United States and China each rated some 40 music samples based on 28 different categories of emotion, as well as on a scale of positivity and negativity, and for levels of arousal.

Using statistical analyses, the researchers arrived at 13 overall categories of experience that were preserved across cultures and found to correspond to specific feelings, such as being "depressing" or "dreamy."

To ensure the accuracy of these findings in a second experiment, nearly 1,000 people from the United States and China rated over 300 additional Western and traditional Chinese music samples that were specifically intended to evoke variations in valence and arousal. Their responses validated the 13 categories.
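The general idea behind this kind of analysis -- many raters scoring many music samples on 28 emotion categories, then statistics reducing those ratings to a smaller set of shared dimensions -- can be illustrated with a toy dimensionality-reduction sketch. The ratings below are random numbers, not the study's data, and the actual analysis was considerably more sophisticated:

```python
import numpy as np

# Toy illustration: a matrix of (music samples x emotion-category ratings)
# is reduced to a smaller number of dimensions that capture most of the
# rating variance. All values here are random and purely illustrative.

rng = np.random.default_rng(0)
n_samples, n_categories = 200, 28   # samples x emotion categories
ratings = rng.random((n_samples, n_categories))

# Center each category, then use the singular values to measure how much
# variance each successive dimension explains.
centered = ratings - ratings.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values**2 / (singular_values**2).sum()

# Number of dimensions needed to explain 90% of the rating variance.
k = int(np.searchsorted(np.cumsum(variance), 0.90)) + 1
print(k)
```

With real ratings, correlated categories (say, "scary" and "anxious") collapse onto shared dimensions, which is how a 28-category instrument can yield a smaller set of distinct feelings.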

Vivaldi's "Four Seasons" made people feel energized. The Clash's "Rock the Casbah" pumped them up. Al Green's "Let's Stay Together" evoked sensuality, and Israel Kamakawiwo'ole's "Somewhere Over the Rainbow" elicited joy.

Meanwhile, heavy metal was widely viewed as defiant and, just as its composer intended, the shower scene score from the movie "Psycho" triggered fear.

Researchers acknowledge that some of these associations may be based on the context in which the study participants had previously heard a certain piece of music, such as in a movie or YouTube video. But this is less likely the case with traditional Chinese music, with which the findings were validated.

Cowen and Keltner previously conducted a study in which they identified 27 emotions in response to evocative YouTube video clips. For Cowen, who comes from a family of musicians, studying the emotional effects of music seemed like the next logical step.

"Music is a universal language, but we don't always pay enough attention to what it's saying and how it's being understood," Cowen said. "We wanted to take an important first step toward solving the mystery of how music can evoke so many nuanced emotions."

Credit: 
University of California - Berkeley

Biomarker predicts which patients with heart failure have a higher risk of dying

image: A UCLA-led study revealed a new way to predict which patients with 'stable' heart failure -- those who have heart injury but do not require hospitalization -- have a higher risk of dying within one to three years.

Image: 
DEAN ISHIDA

A UCLA-led study revealed a new way to predict which patients with "stable" heart failure -- those who have heart injury but do not require hospitalization -- have a higher risk of dying within one to three years.

Although people with stable heart failure have similar characteristics, some have rapid disease progression while others remain stable. The research shows that patients who have higher levels of neuropeptide Y, a molecule released by the nervous system, are 10 times more likely to die within one to three years than those with lower levels of the molecule.

About half of people who develop heart failure die within five years of their diagnosis, according to an American Heart Association report, but it hasn't been understood why some live longer than others despite receiving the same medications and medical device therapy.

The researchers set out to determine whether a biomarker of the nervous system could help explain the difference.

To date, no other biomarker has been identified that can so specifically predict the risk of death for people with stable heart failure.

The researchers analyzed blood from 105 patients with stable heart failure, searching for a distinct biomarker in the blood that could predict how likely a person would be to die within a few years. They found that neuropeptide Y levels were the clearest and most significant predictor.

The scientists also compared nerve tissue samples from patients with samples from healthy donors and determined that the neurons in the people most at risk of dying from heart failure were likely releasing higher levels of neuropeptide Y.

The results could give scientists a way to distinguish very-high-risk patients with stable heart failure from others with the same condition, which could inform which patients might require more aggressive and targeted therapies. The study also highlights the need for heart failure therapies that target the nervous system.

Further studies could help determine whether a patient's risk for death can be ascertained through less invasive measures, such as a simple blood draw, and whether early aggressive intervention in these people could reduce their risk of death.

Credit: 
University of California - Los Angeles Health Sciences

Some genetic sequencing fails to analyze large segments of DNA

image: A re-analysis of clinical tests from three major U.S. laboratories showed whole exome sequencing routinely failed to adequately analyze large segments of DNA. UT Southwestern experts who conducted the review say the findings are indicative of a widespread issue for clinical laboratories.

Image: 
UTSW

Highlights:

Reanalysis of patient samples from 3 U.S. labs shows most tests didn't adequately analyze more than a quarter of genes.

Chance of detecting a disorder varied widely depending on which genes the lab completely analyzed in a given sample.

DALLAS - Jan. 6, 2020 - Children who undergo expansive genetic sequencing may not be getting the thorough DNA analysis their parents were expecting, say experts at UT Southwestern Medical Center.

A review of clinical tests from three major U.S. laboratories shows whole exome sequencing routinely fails to adequately analyze large segments of DNA, a potentially critical deficiency that can prevent doctors from accurately diagnosing potential genetic disorders, from epilepsy to cancer.

The reanalysis by UT Southwestern shows each lab on average adequately examined less than three-quarters of the genes - 34, 66, and 69 percent coverage - and had startlingly wide gaps in their ability to detect specific disorders.

Researchers say they conducted the study because they believe vast differences in testing quality are endemic in clinical genetic sequencing but have not been well documented or shared with clinicians.

"Many of the physicians who order these tests don't know this is happening," says Jason Park, M.D., Ph.D., associate professor of pathology at UT Southwestern. "Many of their patients are young kids with neurological disorders, and they want to get the most complete diagnostic test. But they don't realize whole exome sequencing may miss something that a more targeted genetic test would find."

Whole exome sequencing, a technique for analyzing protein-producing genes, is increasingly used in health care to identify genetic mutations that cause disease - mostly in children but also in adults with rare or undiagnosed diseases. However, Park says the process of fully analyzing the approximately 18,000 genes in an exome is inherently difficult and prone to oversights. About half the tests do not pinpoint a mutation.

The new study published in Clinical Chemistry gives insight into why some analyses may be coming back negative.

Researchers re-analyzed 36 patients' exome tests conducted between 2012 and 2016 - 12 from each of the three national clinical laboratories - and found starkly contrasting results and inconsistency in which genes were completely analyzed. A gene was not considered completely analyzed unless the lab met an industry-accepted threshold for adequate analysis of all DNA that encodes protein, defined as sequencing each such segment at least 20 times per test.

Notably, less than 1.5 percent of the genes were completely analyzed in all 36 samples. A review of one lab's tests showed 28 percent of the genes were never adequately examined and only 5 percent were always covered. Another lab consistently covered 27 percent of the genes.
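The 20-times threshold can be made concrete with a short sketch. The gene names and per-base depth values below are hypothetical, not data from the study:

```python
# Illustrative sketch of the coverage criterion described above: a gene
# counts as "completely analyzed" only if every protein-coding base was
# sequenced at least 20 times. Gene names and depths are hypothetical.

MIN_DEPTH = 20  # industry-accepted threshold cited in the study

def completely_analyzed(per_base_depths):
    """True if every base of the gene's coding sequence meets 20x."""
    return all(d >= MIN_DEPTH for d in per_base_depths)

def coverage_summary(exome):
    """Fraction of genes in an exome test that were completely analyzed."""
    complete = sum(completely_analyzed(depths) for depths in exome.values())
    return complete / len(exome)

# Hypothetical three-gene example: one gene has a single 15x base, so it
# fails the criterion even though its average depth is high.
exome = {
    "GENE_A": [25, 30, 22, 40],  # complete
    "GENE_B": [90, 15, 88, 95],  # incomplete: one base below 20x
    "GENE_C": [21, 20, 33, 27],  # complete
}
print(coverage_summary(exome))  # 2 of 3 genes pass
```

Under this criterion a single under-sequenced base disqualifies a whole gene, which is why average coverage figures can look reassuring while substantial fractions of genes still go inadequately analyzed.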

"And things really start to fall apart when you start thinking about using these tests to rule out a disease," Park says. "A negative exome result is meaningless when so many of the genes are not thoroughly analyzed."

For example, the chances of detecting an epileptic disorder from any of the 36 tests varied widely depending on which genes were analyzed. One lab conducted several patient tests that fully examined more than three quarters of the genes associated with epilepsy, but the same lab had three other patient samples in which less than 40 percent were completely analyzed.

Three tests from another lab came in at under 20 percent.

"When we saw this data we made it a regular practice to ask the labs about coverage of specific genes," says Garrett Gotway, M.D., Ph.D., a clinical geneticist at UT Southwestern who is the corresponding author of the study. "I don't think you can expect complete coverage of 18,000 genes every time, but it's fair to expect 90 percent or more."

The findings build upon previous research that showed similar gaps and disparities in whole genome sequencing, a technique that examines all of a person's DNA rather than only the protein-coding genes.

Gotway says he hopes the findings will prompt more physicians to ask labs about which genes were covered and push for improved consistency in testing quality. He also encourages physicians - even before ordering the test - to consider whether whole exome sequencing is the best approach for the patient.

"Clinical exomes can be helpful in complex cases, but you probably don't need one if a kid has epilepsy and doesn't have other complicating clinical problems," Gotway says. "There's a decent chance the exome test will come back negative and the parents are still left wondering about the genetic basis for their child's disease."

In those cases, Gotway suggests ordering a smaller genetic test that completely analyzes a panel of genes associated with that disease. He says they're less expensive and just as likely to help physicians find answers.

Credit: 
UT Southwestern Medical Center

Fast action and the right resources are key to treating fulminant myocarditis

DALLAS, Jan. 6, 2020 -- The resources needed to treat fulminant myocarditis - severe inflammation of the heart that develops rapidly - are outlined in a new Scientific Statement (Statement) from the American Heart Association on how best to reduce fatalities from this rare condition. The Statement is published today in the Association's premier cardiovascular journal Circulation.

Fulminant myocarditis, often caused by a viral infection, comes on suddenly and often with significant severity, resulting in an exceptionally high risk of death caused by cardiogenic shock (the heart's inability to pump enough blood), fatal arrhythmias (abnormal heartbeats) and multiorgan failure.

With many of today's technology advances, numerous devices can fully support a patient's circulation and oxygenation/ventilation when necessary. The early recognition of fulminant myocarditis, institution of circulatory support and maintenance of end-organ function (especially avoiding prolonged neurologic hypoxemia) can result in favorable outcomes for this previously almost universally fatal condition.

The new Statement calls for increased awareness and education about fulminant myocarditis among health care providers to speed evaluation, diagnosis and treatment. Treatment options for optimal outcomes include supporting patients through the use of extracorporeal life support (heart lung machine), percutaneous and durable ventricular assist devices (devices to help the heart pump) and heart transplantation.

"It is fortunate that fulminant myocarditis is rare and that it usually presents in typically younger and healthier patients, rather than critically ill patients seen in the office or emergency department," said Leslie T. Cooper, M.D., FAHA, vice chair of the Statement Writing Group. "This is where there are the greatest opportunities: early diagnosis, rapid treatment and the ability of frontline clinicians to detect the subtle signs and symptoms of this serious condition."

The Statement has been endorsed by the Heart Failure Society of America and the Myocarditis Foundation.

Credit: 
American Heart Association

Scientists develop new method to detect oxygen on exoplanets

image: Conceptual image of water-bearing (left) and dry (right) exoplanets with oxygen-rich atmospheres. Crescents are other planets in the system, and the red sphere is the M-dwarf star around which the exoplanets orbit. The dry exoplanet is closer to the star, so the star appears larger.

Image: 
(NASA/GSFC/Friedlander-Griswold)

Scientists have developed a new method for detecting oxygen in exoplanet atmospheres that may accelerate the search for life.

One possible indication of life, or biosignature, is the presence of oxygen in an exoplanet's atmosphere. Oxygen is generated by life on Earth when organisms such as plants, algae, and cyanobacteria use photosynthesis to convert sunlight into chemical energy.

UC Riverside helped develop the new technique, which will use NASA's James Webb Space Telescope to detect a strong signal that oxygen molecules produce when they collide. This signal could help scientists distinguish between living and nonliving planets.

Since exoplanets, which orbit stars other than our sun, are so far away, scientists cannot look for signs of life by visiting these distant worlds. Instead, they must use a cutting-edge telescope like Webb to see what's inside the atmospheres of exoplanets.

"Before our work, oxygen at similar levels as on Earth was thought to be undetectable with Webb," said Thomas Fauchez of NASA's Goddard Space Flight Center and lead author of the study. "This oxygen signal has been known since the early 1980s from studies of Earth's atmosphere but has never been applied to exoplanet research."

UC Riverside astrobiologist Edward Schwieterman originally proposed a similar way of detecting high concentrations of oxygen from nonliving processes and was a member of the team that developed this technique. Their work was published today in the journal Nature Astronomy.

"Oxygen is one of the most exciting molecules to detect because of its link with life, but we don't know if life is the only cause of oxygen in an atmosphere," Schwieterman said. "This technique will allow us to find oxygen in planets both living and dead."

When oxygen molecules collide with each other, they block parts of the infrared light spectrum from being seen by a telescope. By examining patterns in that light, scientists can determine the composition of the planet's atmosphere.

Schwieterman helped the NASA team calculate how much light would be blocked by these oxygen collisions.
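A rough way to see why collisions matter: collision-induced absorption involves pairs of oxygen molecules, so its strength grows with the square of the oxygen density rather than in direct proportion to it. The toy Beer-Lambert sketch below uses made-up numbers purely for illustration; the team's actual radiative-transfer modeling is far more detailed:

```python
import math

# Toy sketch: collision-induced absorption depends on pairs of O2
# molecules, so the optical depth scales with the square of the O2
# number density. All quantities are in arbitrary, made-up units.

def transmitted_fraction(o2_density, path_length, k_cia):
    """Fraction of light transmitted through a column of O2.

    o2_density: O2 number density (arbitrary units)
    path_length: light path through the atmosphere (arbitrary units)
    k_cia: collision-induced absorption coefficient (arbitrary units)
    """
    optical_depth = k_cia * o2_density**2 * path_length
    return math.exp(-optical_depth)

# Doubling the O2 density quadruples the optical depth, so the dip in
# the observed spectrum deepens much faster than for ordinary absorption.
print(transmitted_fraction(1.0, 1.0, 0.1))  # exp(-0.1), about 0.905
print(transmitted_fraction(2.0, 1.0, 0.1))  # exp(-0.4), about 0.670
```

That density-squared dependence is what makes the collisional signal distinctive in an oxygen-rich atmosphere.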

Intriguingly, some researchers propose oxygen can also make an exoplanet appear to host life when it does not, because it can accumulate in a planet's atmosphere without any life activity at all.

If an exoplanet is too close to its host star or receives too much star light, the atmosphere becomes very warm and saturated with water vapor from evaporating oceans. This water could then be broken down by strong ultraviolet radiation into atomic hydrogen and oxygen. Hydrogen, which is a light atom, escapes to space very easily, leaving the oxygen behind.

Over time, this process may cause entire oceans to be lost while building up a thick oxygen atmosphere -- even more than could be made by life. So, abundant oxygen in an exoplanet's atmosphere may not necessarily mean abundant life but may instead indicate a history of water loss.

Schwieterman cautions that astronomers are not yet sure how widespread this process may be on exoplanets.

"It is important to know whether and how much dead planets generate atmospheric oxygen, so that we can better recognize when a planet is alive or not," he said.

Schwieterman is a visiting postdoctoral fellow at UCR who will soon start as assistant professor of astrobiology in the Department of Earth and Planetary Sciences.

The research received funding from Goddard's Sellers Exoplanet Environments Collaboration, which is funded in part by the NASA Planetary Science Division's Internal Scientist Funding Model. This project has also received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant, the NASA Astrobiology Institute Alternative Earths team, and the NExSS Virtual Planetary Laboratory.

Webb will be the world's premier space science observatory when it launches in 2021. It will allow scientists to solve mysteries in our solar system, look to distant worlds around other stars, and probe the mysterious structures and origins of our universe and our place in it.

Credit: 
University of California - Riverside

Finding a new way to fight late-stage sepsis

COLUMBUS, Ohio - Researchers have developed a way to prop up a struggling immune system to enable its fight against sepsis, a deadly condition resulting from the body's extreme reaction to infection.

The scientists used nanotechnology to transform donated healthy immune cells into a drug with enhanced power to kill bacteria.

In experiments treating mice with sepsis, the engineered immune cells eliminated bacteria in blood and major organs, dramatically improving survival rates.

This work focuses on a treatment for late-stage sepsis, when the immune system is compromised and unable to clear invading bacteria. The scientists are collaborating with clinicians specializing in sepsis treatment to accelerate the drug-development process.

"Sepsis remains the leading cause of death in hospitals. There hasn't been an effective treatment for late-stage sepsis for a long time. We're thinking this cell therapy can help patients who get to the late stage of sepsis," said Yizhou Dong, senior author and associate professor of pharmaceutics and pharmacology at The Ohio State University. "For translation in the clinic, we believe this could be used in combination with current intensive-care treatment for sepsis patients."

The study is published today (Jan. 6, 2020) in Nature Nanotechnology.

Sepsis itself is not an infection - it's a life-threatening systemic response to infection that can lead to tissue damage, organ failure and death, according to the Centers for Disease Control and Prevention. The CDC estimates that 1.7 million adults in the United States develop sepsis each year, and that one in three patients who die in a hospital had sepsis.

This work combined two primary types of technology: using vitamins as the main component in making lipid nanoparticles, and using those nanoparticles to capitalize on natural cell processes in the creation of a new antibacterial drug.

Cells called macrophages are among the first responders in the immune system, with the job of "eating" invading pathogens. However, patients with sepsis have fewer macrophages and other immune cells than normal, and the cells they do have don't function as they should.

In this study, Dong and colleagues collected monocytes from the bone marrow of healthy mice and cultured them in conditions that transformed them into macrophages. (Monocytes are white blood cells that are able to differentiate into other types of immune cells.)

The lab also developed vitamin-based nanoparticles that were especially good at delivering messenger RNA, molecules that carry the genetic instructions cells translate into functional proteins.

The scientists, who specialize in messenger RNA for therapeutic purposes, constructed a messenger RNA encoding an antimicrobial peptide and a signal protein. The signal protein enabled the specific accumulation of the antimicrobial peptide in internal macrophage structures called lysosomes, the key location for bacteria-killing activities.

From here, researchers delivered the nanoparticles loaded with that messenger RNA into the macrophages they had produced with donor monocytes, and let the cells take it from there to "manufacture" a new therapy.

"Macrophages have antibacterial activity naturally. So if we add the additional antibacterial peptide into the cell, those antibacterial peptides can further enhance the antibacterial activity and help the whole macrophage clear bacteria," Dong said.

After seeing promising results in cell tests, the researchers administered the cell therapy to mice. The mouse models of sepsis in this study were infected with multidrug-resistant Staphylococcus aureus and E. coli and their immune systems were suppressed.

Each treatment consisted of about 4 million engineered macrophages. Controls for comparison included ordinary macrophages and a placebo. Compared to controls, the treatment resulted in a significant reduction in bacteria in the blood after 24 hours - and for those with lingering bacteria in the blood, a second treatment cleared them away.

Dong considers the lipid nanoparticle delivery of messenger RNA into certain kinds of immune cells applicable to other diseases, and his lab is currently working on development of cancer immunotherapy using this technology.

Credit: 
Ohio State University