Culture

How the urban environment affects the diet of its citizens

image: The UPV/EHU's Nursing and Health Promotion research group has published a study using photovoice methodology that qualitatively compares citizens' perceptions of the food environment in three Bilbao neighbourhoods with different socioeconomic levels.

Image: 
Zelai R. García

Previous studies have revealed the influence of unhealthy food environments on the population's eating behaviour. Yet the socioeconomic differences in these environments and their influence on diet had not been studied before. Leyre Gravina, a doctor in the UPV/EHU's Faculty of Medicine and Nursing, said: "For the first time we have qualitatively compared the perception of the food environment in three neighbourhoods with differing socioeconomic contexts --Deusto (high socioeconomic level), Uribarri (medium) and San Francisco (low)-- using photovoice methodology. We have thus managed to analyse the whole spectrum of Bilbao, making it possible to explain how neighbourhoods may affect their residents, particularly in terms of food."

In the photovoice study, the participants (a total of 23 residents of the above-mentioned neighbourhoods) "analysed the environment closest to them, their neighbourhood, by means of photos that they themselves took and discussed in groups", explained Doctor Gravina, lead researcher in the UPV/EHU's Nursing and Health Promotion research group. "The research group then identified six emerging themes that describe the food environment of Bilbao: unhealthy eating behaviours, cultural diversity, retail transformation, social relations, precariousness and healthy eating."

A book, policy proposals for action and a new project

Although the availability of high-quality food and fresh products in all three neighbourhoods was stressed, the participants discussed the reasons for unhealthy eating, characterised by the excessive consumption of alcohol (high-level neighbourhood), sugary foods (high- and medium-level neighbourhoods) and fast food (medium- and low-level neighbourhoods). "We saw that both the environment and the needs of each neighbourhood were in fact different, but we detected similarities in all of them," added the researcher. "They are: the great diversity and accessibility of international food in all of them; the tolerance in our culture towards consuming large amounts of alcohol and sugary beverages at celebrations; the limited involvement of citizens and the authorities in improving the neighbourhoods; and, finally, the promotion of healthy eating by small retail outlets and public markets that offer good-quality food, close contact and fresh products."

The differences found between neighbourhoods highlight the availability of food items, the diversity of food outlets, and social and cultural factors as determinants of the population's eating behaviour. "Our results provide new information to better understand how urban environments may affect how the population eats from an equity perspective", she stressed. Gravina remarked that the value of this citizen-based participatory project lies "in being able to turn the results of the research into policy proposals for action to improve the area studied, and in the opportunity to transmit to the rest of the community (including people in positions of political responsibility) residents' needs and proposals for improvement. This strengthens the creation of more effective policies that emanate from citizens to improve their own environment, which will in turn have an effect on their health".

The research group has passed these improvement actions on to the municipal authorities in the form of a report on recommended policies. In addition, the study has been reflected in the photo-book Proyecto Fotovoz Bilbao, which can be downloaded from the Mi Barrio Saludable website. At the same time, Gravina has announced the setting up of the Photovoice 2 project: "We have given it the name MugiBiL, and it will be analysing the same neighbourhoods from the perspective of physical activity. We are sure we will be touching on various subjects such as town planning, transport, mobility, accessibility in the neighbourhoods, ageing, gender differences, cleanliness, social determinants, etc. This new study is going to provide us with a more complete picture of how two crucially important factors like diet and physical activity can exert an influence on obesity in the population, and, at the end of the day, on health," she concluded.

Credit: 
University of the Basque Country

The seismicity of Mars

image: Mars is shaking.

Image: 
NASA / JPL - Caltech

On 26 November 2018, the NASA InSight lander successfully set down on Mars in the Elysium Planitia region. Seventy Martian days later, the mission's seismometer SEIS began recording the planet's vibrations. A team of researchers and engineers at ETH Zurich, led by ETH Professor Domenico Giardini, had delivered the SEIS control electronics and is responsible for the Marsquake Service, which is in charge of the daily interpretation of the data transmitted from Mars, in collaboration with the Swiss Seismological Service at ETH Zurich. The journal Nature Geoscience has now published a series of articles on the results of the mission's first months of operation on Mars.

As reported in these articles, InSight had recorded 174 events by the end of September 2019. Since then, the measurements have continued, yielding more than 450 observed marsquakes to date that have not yet been analysed in detail. This works out to roughly one event per day on average.

The data allow researchers to observe how seismic waves travel through the planet and to unveil its internal characteristics, similar to how X-rays are used in medical tomography. Before InSight landed, researchers had developed a wide range of possible models to represent the internal structure of the red planet. After only a few months, the recorded marsquakes have already made it possible to refine the understanding of the planet's structure and to reduce the uncertainties.

Interpreting marsquake data is challenging  

Marsquakes are similar to the seismic events we see on Earth, although they are generally of smaller magnitude. The 174 registered marsquakes can be categorized into two families: one includes 24 low-frequency events with magnitudes between 3 and 4, as documented in the papers, with waves propagating through the Martian mantle. A second family of marsquakes comprises 150 events with smaller magnitudes, shallower hypocentral depths and high-frequency waves trapped in the Martian crust.

"Marsquakes have characteristics already observed on the Moon during the Apollo era, with a long signal duration (10 to 20 minutes) due to the scattering properties of the Martian crust", explains ETH Professor Giardini. In general, however, he says, interpreting marsquake data is very challenging and in most cases, it is only possible to identify the distance but not the direction from which the waves are arriving.

InSight landed on a thin, sandy layer  

InSight opens a new era for planetary seismology. SEIS's performance has so far exceeded expectations, considering the harsh conditions on Mars, characterized by temperatures ranging from minus 80 to 0 degrees Celsius every day and by strong wind oscillations. Indeed, wind shakes the InSight lander and its instrumentation during the day, leading to a high level of ambient noise. At sunset, however, the winds calm down, allowing SEIS to record the quietest seismic data ever collected in the solar system. As a result, most seismic events detected on Mars by SEIS occurred in the quiet night hours. The challenging environment also requires carefully distinguishing between seismic events and signals originating from movements of the lander, other instruments or atmospherically induced disturbances.

The hammering by the HP3 instrument (another InSight experiment) and the close passage of whirlwinds (dust devils), recorded by SEIS, made it possible to map the physical properties of the shallow soil layers just below the station. We now know that InSight landed on a thin, sandy layer reaching a few meters deep, in the middle of a 20-meter-wide ancient impact crater. At greater depths, the Martian crust has properties comparable to Earth's crystalline massifs but appears to be more fractured. The propagation of the seismic waves suggests that the upper mantle attenuates them more strongly than the lower mantle.

Seismic activity also induced by tectonic stress

InSight landed in a rather quiet region of Mars, as no events near the station have been recorded so far. The three biggest events were located in the Cerberus Fossae region, about 1,500 km away. It is a tectonic graben system caused by the weight of Elysium Mons, the biggest volcano in the Elysium Planitia area. This provides strong evidence that seismic activity on Mars is not only a consequence of the cooling, and thus shrinking, of the planet but is also induced by tectonic stress. The total seismic energy released on Mars lies between that of the Earth and that of the Moon.

SEIS, complementary to other InSight measurements, has also contributed meaningful data to better understanding the meteorological processes on Mars. The instrument's sensitivity to both wind and atmospheric pressure made it possible to identify meteorological phenomena characteristic of Mars, including the many dust devils that pass by the spacecraft every afternoon.

Credit: 
ETH Zurich

How sleep helps teens deal with social stress

image: Wang's study is the first to identify the timing by which sleep helps adolescents cope with stress.

Image: 
Via Pexels

A new Michigan State University study found that a good night's sleep does adolescents good - beyond helping them stay awake in class. Adequate sleep can help teens navigate challenging social situations.

The study, which focused on ninth-grade students, found that adequate sleep allowed students to cope with discrimination and challenges associated with ethnic or racial bias. It also helped them problem-solve more effectively and seek peer support when faced with hardships.

"Findings of this study have important implications," said Yijie Wang, assistant professor of human development and family studies at MSU. "Understanding how sleep helps adolescents negotiate social challenges may consequently elucidate how promoting sleep may improve adolescent adjustment during high school and beyond."

Published in Child Development, this is the first study to identify the timing by which sleep helps adolescents cope with stress.

Compared to adults and children, high school students are particularly at risk for insufficient sleep due to early school times, busy schedules and increased social stressors. The transition to high school also introduces more diversity to their social environment and relationships.

Through this study, Wang and co-author Tiffany Yip of Fordham University wanted to pinpoint the effect sleep has on coping with discrimination. They found that teens who get a good night's sleep are better able to cope with harsh experiences like discrimination.

"This study did not treat sleep as a consequence of discrimination," Wang said. "However, our team did identify the influence of discrimination on same-day sleep in other studies. These studies showed that, on days when adolescents experienced ethnic or racial discrimination, they slept less and also took longer to actually fall asleep."

Participants in the study wore an actigraphy watch, which tracked physical activities in one-minute intervals and determined their sleep-wake state, every day for two weeks. The students were also asked to complete a survey each day before bed, reporting their daytime experiences such as ethnic or racial discrimination, how they responded to stress and their psychological well-being.

A surprising finding of the study was that peers, not parents, were the immediate support that helped adolescents cope with discrimination.

"Compared to parents, peers are likely to be witnessing and involved in adolescents' experiences of ethnic or racial discrimination on a daily basis," Wang said. "As such, they're more of an immediate support that backs up adolescents and comforts them when discrimination occurs."

Still, parents have an important role in helping their children cope with both sleep and social situations. Beyond getting the recommended eight hours, the quality of sleep is just as important. That includes having a regular bedtime, limiting media use and providing a quiet, less crowded sleep environment.

While encouraging good sleep habits in adolescents can be a struggle, Wang said, the benefits of a routine help them cope with the challenges of life in high school and beyond.

"The promotive effect of sleep is so consistent," Wang said. "It reduces how much adolescents ruminate, it promotes their problem solving and it also helps them to better seek support from their peers."

The research was first published online Oct. 28, 2019 in the journal Child Development, and will appear in the forthcoming Sept.-Oct. 2020 print edition.

Credit: 
Michigan State University

Epidemiologists conclude alcohol ads lead to youth drinking, want more regulations

image: Alcohol advertising photo

Image: 
Junjie Xu

PISCATAWAY, NJ - The marketing of alcoholic beverages is one cause of underage drinking, public health experts conclude. Because of this, countries should abandon what are often piecemeal and voluntary codes to restrict alcohol marketing and construct government-enforced laws designed to limit alcohol-marketing exposure and message appeal to youth.

These conclusions stem from a series of eight review articles published as a supplement to the Journal of Studies on Alcohol and Drugs, which synthesized the results of 163 studies on alcohol advertising and youth alcohol consumption.

"[T]here is persuasive evidence that exposure to alcohol marketing is one cause of drinking onset during adolescence and also one cause of binge drinking," write James D. Sargent, M.D., of the C. Everett Koop Institute at Dartmouth, and Thomas F. Babor, Ph.D., M.P.H., of the University of Connecticut, in a conclusion to the supplement.

Each of the eight review articles in the supplement evaluated a different aspect of alcohol marketing and drinking among young people. The reviews covered hundreds of studies that used different research designs and measurement techniques, and the data came from a variety of countries and scientific disciplines.

The authors of the reviews used the Bradford Hill criteria--a well-known framework for determining causal links between environmental exposures and disease--to determine whether marketing is a cause of youth alcohol use. The same criteria have been used to establish that smoking is a cause of cancer and that tobacco marketing is one cause of youth smoking. Hill's causality criteria involve determining the strength of association, consistency of the link, specificity of the association, temporal precedence of the advertising exposure, biological and psychological plausibility, experimental evidence and analogy to similar health risk exposures (e.g., tobacco advertising).

Sargent and Babor note that each of the Bradford Hill criteria was met within the eight reviews, supporting a modest but meaningful association between alcohol advertising and youth drinking.

Although such an association had been documented previously, this is the first time public health experts have explicitly concluded that advertising causes drinking among adolescents. As a result, the authors recommend the following:

Government agencies--independent from the alcohol industry--should restrict alcohol marketing exposures in the adolescent population. Often, advertising restrictions apply to only certain hours of television broadcasting, to certain types of drinks, or to certain formats. Further, new promotional methods on social media are even less regulated. "Although statutory bans can be circumvented," the authors write, "research suggests they are far more effective than voluntary codes."

The Centers for Disease Control and Prevention or the Office of the Surgeon General should sponsor a series of reports on alcohol and health, similar to the ones that have been published on tobacco. Reports from these government agencies can serve as a guide to public health policy, but there have been few reports on underage drinking and none on alcohol marketing and its effects.

The U.S. National Institute on Alcohol Abuse and Alcoholism should resurrect its program to fund research on alcohol marketing and vulnerable populations. There is need for continued study on this topic, and funding has recently been directed to other priorities. "[I]t is disappointing that alcohol marketing research is no longer a programmatic priority at NIAAA," the authors write.

A larger international panel of public health experts should be convened in order to reach a broader consensus, particularly in relation to digital marketing. There is also a need for an international agreement to restrict alcohol marketing along the lines of the United Nations' Framework Convention on Tobacco Control.

To the extent that modest causal evidence has been found in a range of countries, and plausible mechanisms have been identified as possible mediating factors, Sargent and Babor expressed the hope that the findings will promote "thoughtful discourse among researchers, effective prevention measures among policymakers, and an effort to reach consensus on this issue among a larger and more representative body of scientists."

Credit: 
Journal of Studies on Alcohol and Drugs

Just as tobacco advertising causes teen smoking, exposure to alcohol ads causes teens to drink

Exposure to alcohol advertising changes teens' attitudes about alcohol and can cause them to start drinking, finds a new analysis led by NYU School of Global Public Health and NYU Grossman School of Medicine. The study, which appears in a special supplement of the Journal of Studies on Alcohol and Drugs funded by the National Institute on Alcohol Abuse and Alcoholism, uses a framework developed to show causality between tobacco advertising and youth smoking and applies it to alcohol advertising.

Advertising has long influenced how people purchase and consume goods. Youth are particularly vulnerable to the influence of advertising due to their potential for forming brand loyalties at an early age, limited skepticism, and high use of social media--where alcohol marketing is increasingly found.

Teen alcohol use is a major public health problem, with negative consequences ranging from injuries, including those from car crashes, to risky sexual behavior, to damage to the developing brain. Research shows that teen exposure to advertising is associated with drinking attitudes and behavior, but it has been unclear if these associations are causal.

There is scientific consensus that advertising by the tobacco industry--which has had a long history of marketing directly to youth--causes teen smoking. The National Cancer Institute, Master Settlement Agreement, and Surgeon General's 2012 Report on Preventing Tobacco Use Among Youth and Young Adults all agree that the evidence is strong enough to say that there is a causal relationship; the Surgeon General used a four-level hierarchy system to classify the strength of causal inferences based on available evidence, as well as statistical estimation and hypothesis testing of association.

"The conclusion that the association between exposure to tobacco advertising and adolescent tobacco use is causal allowed for policy development that justified further regulation of tobacco advertising aimed at youth," said Michael Weitzman, MD, professor of pediatrics and environmental health at NYU Grossman School of Medicine and NYU School of Global Public Health. "The conclusion also set the framework to investigate a potentially analogous relationship with alcohol."

In this study, Weitzman and his coauthor Lily Lee of SUNY Downstate Medical Center used one of the key elements of the Bradford Hill criteria--a well-known framework for determining causal links between environmental exposures and disease--to determine whether marketing is a cause of youth alcohol use, focusing on the criterion that relies on analogous relationships already established as causal. The same criteria have been used to establish that smoking is a cause of cancer and that tobacco marketing is one cause of youth smoking. The researchers compared the same categories the Surgeon General used to deem a causal relationship between tobacco advertising and youth smoking--including marketing strategies, frequency and density of ads, and teens' attitudes toward and use of cigarettes--to the case of alcohol.

They found that, in every aspect studied, the influence of tobacco and alcohol advertising on teens was analogous. For instance, both tobacco and alcohol companies have used mascots in advertisements (e.g., Joe Camel, the Budweiser frogs), which research shows are easily recognized and trusted by children. In addition, both tobacco and alcohol companies use or have used movies, television, and sporting events as opportunities for advertising and product placement, with studies showing that exposure to smoking and drinking increases the risk for youth initiation.

The researchers also found that neighborhoods with large numbers of tobacco retailers expose youth to more tobacco advertising and make it easier to buy cigarettes, a finding that held true for alcohol retailer density as well. Troublingly, tobacco and alcohol retailers are often near schools.

Finally, the researchers found that the links between exposure to tobacco and alcohol advertising and teens' knowledge, attitudes, initiation, and continued use of the products are extraordinarily similar. Many studies show that advertising is a risk factor for both smoking and drinking, and several show a dose-dependent relationship in which more exposure to advertising increases consumption.

These findings, when taken in the context of the Bradford Hill Criteria, indicate that exposure to alcohol advertising causes increased teen alcohol use.

"The association of alcohol and tobacco advertising exposure and adolescent perceptions, knowledge of, and use of these substances are remarkably similar, adding to the much-needed evidence that the association between alcohol advertising and teen alcohol use is causal in nature," concluded Weitzman.

Credit: 
New York University

Anonymous no more: combining genetics with genealogy to identify the dead in unmarked graves

image: To empirically test their method's identification potential, researchers selected six unidentified male skeletons that had been exhumed over the years at four historical cemeteries in Quebec.

Image: 
Isabelle Ribot, Université de Montréal

In Quebec, gravestones did not come into common use until the second half of the 19th century, so historical cemeteries contain many unmarked graves. Inspired by colleagues at Barcelona's Pompeu Fabra University, a team of researchers in genetics, archaeology and demography from three Quebec universities (Université de Montréal, Université du Québec à Chicoutimi and Université du Québec à Trois-Rivières) conducted a study in which they combined genealogical information from BALSAC (a Quebec database that is the only one of its kind in the world) with genetic information from more than 960 modern Quebecers in order to access the genetic profile of Quebec's historical population. The results, published in the American Journal of Physical Anthropology, suggest the capabilities that this method may offer in the near future.

The BALSAC database contains the genealogical relationships linking five million individuals, the vast majority of whom married in Quebec, over the past four centuries. Work on developing this database began in 1972 at Université du Québec à Chicoutimi under the direction of historian Gérard Bouchard.

The first author of this study is Tommy Harding, a postdoctoral researcher at Université de Montréal who specializes in DNA sequencing. BALSAC, he said, "is a fabulous database for researchers, because both the quantity and the quality of the data that it contains are truly exceptional. The parish records meticulously kept by Catholic priests have been very well preserved so that today, thanks to advances in technology, it is possible to use this data to identify the bones from unmarked graves."

Using the Y chromosome and mitochondrial DNA

This study was directed by Damian Labuda, an expert in genetic structure and diversity who is a professor in the Department of Pediatrics at Université de Montréal and its affiliated Sainte-Justine Hospital Research Centre. "Genetics," he said, "has of course been used many times to identify the remains of historical figures, such as the members of the Romanov Russian imperial family who were killed by the Bolsheviks and buried in a common grave, or the English king Richard III, who died in 1483 and whose remains were discovered in 2012.

"What is different about our research team's genetic method," Dr. Labuda added, "is that we use the information contained in two genetic markers that are transmitted to children by only one parent: the Y chromosome, which is passed from fathers to their sons, and mitochondrial DNA, which is passed from mothers both to their daughters and to their sons. These two genetic molecules are inherited with few modifications (that is, mutations), so that individuals today have the same, or almost the same, DNA sequence as their ancestors who lived more than 10 generations earlier."

Making old bones tell their tales

Added Harding: "To empirically test our method's identification potential, we selected six unidentified male skeletons that had been exhumed over the years at four historical cemeteries in Quebec. Two of these cemeteries were in Montreal (Notre Dame cemetery, active from 1691 to 1791, and Saint Antoine cemetery, active from 1799 to 1855). The two others were those of the former municipality of Pointe-aux-Trembles (active from 1709 to 1843) and the city of Sainte-Marie-de-Beauce (active from 1748 to 1878). We sent these bones to the Genomics Core Facility, a laboratory at Pompeu Fabra University in Barcelona that specializes in analyzing historical DNA. This laboratory extracted DNA from these remains and analyzed them to reveal their mitochondrial and Y chromosome genetic markers."

The Quebec researchers then compared the genetic markers from these historical remains with the same genetic markers from over 960 modern Quebecers who had volunteered to be genotyped in an earlier research project and whose genealogy had been established using population data from the BALSAC database. Through this process, the researchers were able to deduce the genetic profiles of approximately 1.7 million individuals from historical Quebec.
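The propagation step described above can be sketched in a few lines: because the Y chromosome follows the paternal line and mitochondrial DNA the maternal line, one genotyped modern donor pins down the uniparental profiles of every ancestor along those two lines of a genealogy. The pedigree structure and all names below are illustrative assumptions for the sketch, not the BALSAC schema or the study's actual data.

```python
# Minimal sketch of uniparental-marker propagation through a pedigree.
# pedigree maps child -> (father, mother); None marks an unknown parent.
pedigree = {
    "donor": ("f1", "m1"),
    "f1": ("f2", None),
    "m1": (None, "m2"),
}

def paternal_line(person, pedigree):
    """Ancestors who must share the donor's Y-chromosome profile."""
    line = []
    while person is not None:
        line.append(person)
        father, _ = pedigree.get(person, (None, None))
        person = father
    return line

def maternal_line(person, pedigree):
    """Ancestors who must share the donor's mitochondrial DNA profile."""
    line = []
    while person is not None:
        line.append(person)
        _, mother = pedigree.get(person, (None, None))
        person = mother
    return line

# Profiles measured in the donor are projected onto these ancestors,
# and unidentified remains are then matched against the projections.
y_carriers = paternal_line("donor", pedigree)    # ["donor", "f1", "f2"]
mt_carriers = maternal_line("donor", pedigree)   # ["donor", "m1", "m2"]
```

Repeating this projection for each of the 960 genotyped volunteers is what lets a small modern sample cover a much larger historical population.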

"However," Dr. Harding acknowledged, "only 12 per cent of the men married before 1850 who are included in the BALSAC database shared a mitochondrial profile and a Y chromosome with the 960 Quebecers from the modern sample. Because of this limited genetic coverage, none of the men among these 12 per cent had the same genetic profile as any of the unidentified remains."

Some remains still cannot be identified

Harding continued: "Presumably, the individuals whose remains we analyzed were not related maternally or paternally to any of the individuals in the modern sample. But if we could increase the number of genotyped modern individuals considerably - by hundreds of thousands - then we could identify up to 87 per cent of the men married before 1850."

Harding sees two possible sources from which more genetic profiles of modern Quebecers could be obtained to compare with the BALSAC database. "Thousands of genetic profiles of Quebecers have already been gathered by certain population genetics research platforms as well as by 'acts of citizen participation', meaning that many people who have their genetic profile drawn up for personal reasons agree to allow it to be used for research purposes."

Other uses of this method

He added: "In addition to being used to identify historical remains here in Quebec so that they can be laid to rest again in marked graves, our method might be used to identify the remains of Canadian soldiers who died and were buried overseas during the two world wars."

He also believes that this method has potential applications in public health. "Studying the genetic baggage of the founders of the French-Canadian population can help us not only to calibrate other methods, such as the reconstruction of historical genomes using bioinformatic models, but also to advance knowledge of the epidemiology of genetic diseases by identifying the historical sources of their genetic determinants, thus opening the door to easier screening for some of these diseases."

Credit: 
University of Montreal

Cook County's short-lived 'soda' tax worked, revenue for store owners went down

image: Lisa Powell, UIC distinguished professor and director of health policy and administration

Image: 
UIC

A study of beverage sales in Cook County, Illinois, shows that for four months in 2017 -- when the county implemented a penny-per-ounce tax on both sugar-sweetened and artificially sweetened drinks -- purchases of the taxed beverages decreased by 21%, even after an adjustment for cross-border shopping.

The findings of the study, which was conducted by researchers at the University of Illinois at Chicago School of Public Health, are published today in the Annals of Internal Medicine.

"This study comprehensively assessed the impact, both intended and unintended, of Cook County's 2017 sweetened beverage tax, and it showed that the tax was an effective method for reducing consumption of many beverages known to contribute to chronic health conditions, like Type II diabetes and obesity," said UIC's Lisa Powell, lead author of the study. "It also showed that the potential impact of the county's tax on public health was dampened by cross-border shopping, an important potential unintended consequence of any local-level tax policy."

Sometimes referred to as a "soda tax," the tax was positioned by county officials as a policy instrument to both raise revenue for the county and improve population health by reducing sweetened beverage consumption.

To study beverage purchasing patterns, Powell and her colleagues tracked the quantity, by volume, of all beverages sold in and around Cook County using Universal Product Codes, or UPCs. The study included sales at supermarkets and grocery, convenience and other stores before and after the tax, which began on Aug. 2, 2017, and ended four months later on Nov. 30, when the tax was repealed. The post-tax data were compared with beverage purchases made during the same period in 2016. The researchers also compared the data with beverage purchases in Missouri's St. Louis County, which did not implement a similar tax.

In addition to a 21% net reduction in purchases of the taxed beverages -- which was adjusted from 27% to account for the increase in cross-border shopping that was observed -- the researchers found that, for untaxed beverages, there was no change in purchasing behavior in Cook County or in nearby communities.

"To see no change in purchases of untaxed beverages in the border area tells us that the observed increase in cross-border shopping was a tax avoidance strategy, not a shift that impacted general purchases," said Powell, UIC distinguished professor and director of health policy and administration.

"The data also showed that the tax was most effective when it came to larger-volume purchases, like cases or liters of soda, where the relative price increases faced by consumers were the greatest based on their low price per ounce," Powell said.

In the study, the researchers reference "price elasticity," which is an economics measurement of how responsive consumers are to changes in price alone.

The price elasticity of sweetened beverages in Cook County was -0.8, which Powell said is a bit lower than in other cities, like Seattle.
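Price elasticity is simply the ratio of the percentage change in quantity purchased to the percentage change in price. A minimal sketch, using made-up numbers rather than the study's data, shows how a value like -0.8 arises:

```python
# Price elasticity of demand: % change in quantity / % change in price.
# The input values below are illustrative, not the study's data.

def price_elasticity(q_before, q_after, p_before, p_after):
    pct_quantity = (q_after - q_before) / q_before
    pct_price = (p_after - p_before) / p_before
    return pct_quantity / pct_price

# E.g. a 20% price rise that cuts purchases by 16% gives an elasticity of
# -0.8, the value reported for Cook County.
print(round(price_elasticity(100.0, 84.0, 1.00, 1.20), 2))  # prints -0.8
```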

In another paper, which analyzed data from a sweetened beverage tax in Seattle -- which notably only taxed sugar-sweetened beverages, not artificially sweetened beverages -- Powell found that while sales of taxed beverages were reduced by a similar 22%, price elasticity was -1.1.

The data from Seattle, which were published last week in Economics & Human Biology, also diverged from the Cook County data in two areas: there was no notable cross-border shopping in Seattle, and there was an increase (4%) in the purchase of untaxed beverages.

"These differences in cross-border shopping patterns demonstrate that local geographic context and the proximity with which the population lives to the border communities are important considerations and must be accounted for when assessing the overall impact of a given tax," Powell said. "Both studies contribute to the growing evidence that a sweetened beverage tax can lead to lower sales of sweetened beverages and hence may be an effective policy tool for reducing the harms associated with consumption of sugary beverages."

Credit: 
University of Illinois Chicago

Predicting persistent cold pool events

image: Wind turbines along the Columbia River Gorge.

Image: 
Paytsar Muradyan / Argonne National Laboratory

Hot air rises, cold air sinks. It’s a basic tenet of nature.

Because it sinks, cold air often finds depressions or low-lying terrain, like a valley or basin, in which to collect, particularly at night as temperatures decrease. As the sun rises and temperatures rise, the cold air warms and mixes with the surrounding air. But during winter, and even into spring, this cold air can linger — often for several days — in a phenomenon known as a “cold pool event.”

Cold pools can trap pollutants that would normally mix and disperse with larger air currents, causing serious health risks in heavily populated urban areas. Known to reduce wind speeds and produce freezing rain, they also can negatively impact wind turbines in the area, diminishing electricity production in the short term and potentially damaging turbines.

Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory recently collaborated in an 18-month, multi-institutional field campaign with the National Oceanic and Atmospheric Administration and other DOE-sponsored laboratories to study cold pool events in the Columbia River Gorge, along the Oregon-Washington border. The goal of the research is to better understand and forecast cold pool events, as part of DOE’s Wind Forecast Improvement Project. Their findings were recently published in the Journal of Applied Meteorology and Climatology.

“If you cannot predict these events, you can’t plan for and accommodate changes in your production of electricity. You think you will be producing a certain amount of electricity one day, but with sudden low winds caused by a cold pool event, you won’t be.” — Paytsar Muradyan, assistant atmospheric scientist

“As it stands, cold pool events are not well characterized for numerical weather prediction (NWP) models,” explained assistant atmospheric scientist Paytsar Muradyan of Argonne’s Environmental Science division. “Without accurate forecasting of these events, it becomes very difficult to prepare for them, particularly for energy producers.”

Inclement weather caused by cold pool events can decrease the longevity of wind turbines, particularly if the turbines are still active during these poor conditions. Freezing rain, for example, will still damage turbines at rest, but will cause more damage the faster they are moving, leading to issues with the overall production of electricity and the stability of the electrical grid.

“If you cannot predict these events, you can’t plan for and accommodate changes in your production of electricity,” said Muradyan. “You think you will be producing a certain amount of electricity one day, but with sudden low winds caused by a cold pool event, you won’t be.”

The unpredictable effects of cold pool events on electricity generation can continue even after the event has subsided. When the cold air finally does mix with and disperse into warmer air, it can cause sudden and dramatic shifts in wind speed and direction, referred to as “ramp-ups” or “wind ramps.”

“Improving the prediction of these wind ramps can lead to a more stable electrical grid and an overall lower cost of electricity,” Muradyan explained.

To get to those predictions, the researchers collected large amounts of data to characterize cold pool events. These data can then be used to improve parametrization in NWP models. The researchers were primarily interested in collocated vertical profiles of wind speed, wind direction, temperature and humidity to develop criteria for cold pool identification.

Argonne provided two radar wind profilers and two sodar wind profilers, which were used to analyze the depth of the cold pools and the wind speed distribution. Two radio acoustic sounding systems were used for temperature profiling.

“The idea was to use these measurements, gathered in a complex terrain like the Columbia River Gorge, to develop criteria to determine whether a cold pool event is taking place,” said Muradyan. “Factoring in temporal continuity, or the length of the events, we developed an algorithm to identify all of the cold pool events during the 18 months of the study.”
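The study's actual criteria are considerably more involved, but the core idea, flagging temperature inversions and keeping only runs that persist long enough, can be sketched as follows. Everything here (the threshold rule, the function name, the sample data) is a hypothetical simplification, not the published algorithm.

```python
# Toy sketch of cold-pool event identification (not the study's algorithm):
# flag times when the valley floor is colder than the air aloft (an
# inversion), then keep only runs lasting at least a minimum duration.

def find_cold_pool_events(surface_temps, aloft_temps, min_hours=6):
    """Return (start, end) index pairs of persistent inversion periods.

    surface_temps / aloft_temps: hourly temperatures (deg C) at the surface
    and at some height above the basin; an inversion exists whenever the
    surface is colder than the air aloft.
    """
    events, start = [], None
    for i, (ts, ta) in enumerate(zip(surface_temps, aloft_temps)):
        if ts < ta:                      # inversion present this hour
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_hours:
                events.append((start, i))
            start = None
    if start is not None and len(surface_temps) - start >= min_hours:
        events.append((start, len(surface_temps)))
    return events

# 12 hours of inversion followed by 6 hours of mixing:
surface = [-2.0] * 12 + [5.0] * 6
aloft = [3.0] * 18
print(find_cold_pool_events(surface, aloft))  # prints [(0, 12)]
```

The "temporal continuity" factor Muradyan mentions corresponds here to the `min_hours` filter, which discards brief inversions that do not constitute a multi-day cold pool event.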

The algorithm, Muradyan continued, could potentially be applied to other locations to improve NWP forecasting of inclement weather caused by cold pool events.

“Getting advance notice to the average citizen, as well as energy companies, is what we’re striving for,” she added, “because this research has the potential to improve health and reduce energy costs.”

Credit: 
DOE/Argonne National Laboratory

Experts map future of family caregiving research

A new supplemental issue of the journal The Gerontologist from The Gerontological Society of America shares 10 research priorities to better support the needs of family caregivers.

The contents of the journal supplement are the result of the Research Priorities in Caregiving Summit, an expert gathering hosted in March 2018 by the Family Caregiving Institute (FCI) at the University of California, Davis. Attendees included representatives from service agencies, funding organizations, and academia.

The supplement -- titled "Advancing Family Caregiving Research" -- and the summit were sponsored and funded by the Gordon and Betty Moore Foundation.

"The research priorities and research statements that emerged from the summit offer concrete directions for
novice and well-established researchers to design family caregiving intervention research that addresses the most urgent gaps in the literature," wrote supplement associate editors Kenneth Hepburn, PhD, FGSA, and Elena O. Siegel, PhD, RN. "These 10 research priorities offer a roadmap for future research that will address gaps in the vast literature currently available."

The identified research priorities:

Evaluate technologies that facilitate choice and shared decision-making.

Determine where technology is best integrated across the trajectory of caregiving.

Evaluate family-centered adaptive interventions across conditions, situations, stages, needs, preferences, and resources.

Examine the heterogeneity of attitudes, values and preferences toward caregiving, services and supports.

Evaluate family caregiver interventions in ways that address real world complexity, translation, scalability, and sustainability.

Develop a conceptual framework and typology of the trajectory of caregiving for new interventions and outcomes.

Conduct risk/needs assessment of the changing needs of family caregivers over the trajectory of caregiving.

Conduct implementation research on evidence-based caregiving programs for diverse populations.

Develop outcome measures that are relevant to family caregivers from diverse social and cultural groups.

Develop research methodologies that account for the complex structures of family caregiving.

"This supplement stands as an acknowledgement of the FCI for convening the summit and for their well-thought out approach that achieved both breadth and depth of directions for next steps in family caregiving research -- identifying and gaining consensus for intervention research priorities, stemming from four broad topics: heterogeneity, trajectory, technology, and multicultural needs related to caregiving," Hepburn and Siegel stated.

Credit: 
The Gerontological Society of America

UK special school pupils 'treated differently' following removal of standardized assessments

Following the recent withdrawal of standardized assessments, children with intellectual disabilities at special schools in the UK are again being treated differently to children at mainstream schools, says a new study from researchers at The Open University.

Published in Disability & Society, the peer-reviewed research shows there are currently no national progress levels for children with severe or profound intellectual disabilities, meaning teachers have no standardized way of tracking the development of students in both academic and non-academic learning areas.

As in many countries, the intellectual and academic progress of pupils at UK schools is assessed using standardized, nationwide tests given at specific ages (for primary school pupils in the UK, these tests are given at age seven and 11). But these tests are not suitable for pupils with severe intellectual disabilities being taught in special schools, as these pupils will be operating far below the levels being tested.

Because pupils with severe intellectual disabilities should still make progress over time, their progress needs to be assessed just like pupils in mainstream schools, both to determine what kind of continuing support they need and to show at what level a pupil is working if they change schools. Until recently, the progress of children with severe intellectual disabilities was determined via standardized assessments known as Pre-National Curriculum Performance Levels, or P-levels, which were specially designed for pupils working below the level of the standard tests and assessments.

In 2016, however, a review of P-levels concluded they were no longer fit for purpose, because they were too restrictive and limited to assess the complex difficulties experienced by many children in special schools. As a result, the UK government discontinued P-levels for all but pupils with the most profound intellectual disabilities and instead asked special schools to develop their own assessment programmes.

According to lead researcher Elizabeth Smith, although this move allowed schools to tailor their assessments to the specific needs and abilities of their pupils, it has also created significant problems and placed extra burdens on the schools. She and her colleagues argue that these downsides have not been properly considered.

"While some teachers welcome the chance to re-organise or design a new curriculum and associated assessments, many teachers are left perplexed and exasperated by the fact that they have no statutory guidelines or framework to work with and are expected to create their own," says Smith.

"And if schools are creating their own assessments, how can they ensure these systems are not just viewpoints or opinions but are valid assessment frameworks grounded in theory? With each school creating their own assessments, it will also be difficult for them to know at what level a pupil joining from a different school is working at."

Smith and her colleagues further argue that the abandonment of P-levels shows that special schools and their pupils are still viewed and treated very differently.

"This would never happen in mainstream schools, so why are special schoolteachers being left to cope with all this extra work without the time and resources to do so?" she says.

"Despite governments' policies promoting equality amongst all children and the need for inclusion of all, children in special schools are again being treated as 'other'."

Credit: 
Taylor & Francis Group

Half of Australian young women are unhappy, stressed about their sex lives

Half of young Australian women experience sexually-related personal distress, with one in five women having at least one female sexual dysfunction (FSD), new research by Monash University shows.

A study conducted by the Women's Health Research Program at Monash University has reported, for the first time, an overall picture of the sexual wellbeing of Australian women between the ages of 18 and 39. The findings have been published today (Monday 24 February 2020) in the international journal, Fertility and Sterility.

Results showed 50.2 per cent of young Australian women experienced some form of sexually-related personal distress. This relates to the degree of feeling guilty, embarrassed, stressed or unhappy about their sex lives.

A concerning 29.6 per cent of women experienced sexually-related personal distress without dysfunction, and 20.6 per cent had at least one FSD.

The most common FSD was low sexual self-image, which caused distress for 11 per cent of study participants. Arousal, desire, orgasm and responsiveness dysfunction affected 9 per cent, 8 per cent, 7.9 per cent and 3.4 per cent of the study cohort respectively.

Sexual self-image dysfunction was associated with being overweight or obese, living with a partner (whether married or not) and breastfeeding.

Taking psychotropic medication (such as antidepressants), reported by 20 per cent of surveyed women, had the most pervasive impact on sexual function. The use of the combined oral contraceptive pill was not associated with any sexual dysfunction.

"Sexual wellbeing is recognised as a fundamental human right. It is of great concern that one in five young women have an apparent sexual dysfunction and half of all women within this age group experience sexually-related personal distress," senior author and Professor of Women's Health at Monash University, Susan Davis, said.

"This is a wake-up call to the community and signals the importance of health professionals being open and adequately prepared to discuss young women's sexual health concerns."

The Grollo-Ruzzene Foundation Younger Women's Health Study, funded by Grollo Ruzzene Foundation, recruited 6986 women aged 18-39 years, living in Victoria, New South Wales and Queensland, to take part in the study.

All women completed a questionnaire that assessed their sexual wellbeing in terms of desire, arousal, responsiveness, orgasm, and self-image. Participants also evaluated whether they had sexually-associated personal distress and provided extensive demographic information.

Almost one-third of participants described themselves as single, 47 per cent had a body mass index within the normal range, and nearly 70 per cent had reported being sexually active in the 30 days preceding the study.

Women who habitually monitored their appearance, and for whom appearance determined their level of physical self-worth, reported being less sexually assertive and more self-conscious during intimacy, and experienced lower sexual satisfaction.

Professor Davis said if untreated, sexually-related personal distress and FSD could impact relationships and overall quality of life as women aged.

"The high prevalence of sexually-related personal distress signals the importance of health professionals, particularly those working in the fields of gynaecology and fertility, being adequately prepared to routinely ask young women about any sexual health concerns, and to have an appropriate management or referral pathway in place," Professor Davis said.

Credit: 
Monash University

Surgeons successfully treat brain aneurysms using a robot

LOS ANGELES, Feb. 21, 2020 -- Using a robot to treat brain aneurysms is feasible and could allow for improved precision when placing stents, coils and other devices, according to late breaking science presented today at the American Stroke Association's International Stroke Conference 2020. The conference, Feb. 19-21 in Los Angeles, is a world premier meeting for researchers and clinicians dedicated to the science of stroke and brain health.

Robotic technology is used in surgery and cardiology, but until now it had not been used for brain vascular procedures. In this study, Canadian researchers report the results of the first robotic brain vascular procedures. They used a robotic system specifically adapted for neurovascular procedures. Software and hardware adaptations enable it to accommodate microcatheters, guidewires and the other devices used for endovascular procedures in the brain. These modifications also provide the operator additional precise fine-motor control compared to previous system models.

"This experience is the first step towards achieving our vision of remote neurovascular procedures," said lead researcher Vitor Mendes Pereira, M.D., M.Sc., a neurosurgeon and neuroradiologist at the Toronto Western Hospital, and professor of medical imaging and surgery at the University of Toronto in Canada. "The ability to robotically perform intracranial aneurysm treatment is a major step forward in neuro-endovascular intervention."

In the first case, a 64-year-old female patient presented with an unruptured aneurysm at the base of her skull. The surgical team successfully used the robot to place a stent and then, using the same microcatheter, entered the aneurysm sac and secured the aneurysm by placing various coils. All intracranial steps were performed with the robotic arm. Since this first case, the team has successfully performed five additional aneurysm treatments using the robot, which included deploying various devices such as flow-diverting stents.

"The expectation is that future robotic systems will be able to be controlled remotely. For example, I could be at my hospital and deliver therapy to a patient hundreds or even thousands of kilometers away," Mendes Pereira said. "The ability to deliver rapid care through remote robotics for time-critical procedures such as stroke could have a huge impact on improving patient outcomes and allow us to deliver cutting-edge care to patients everywhere, regardless of geography."

"Our experience, and that of future operators of this technology, will help develop the workflows and processes necessary to implement successful robotic programs, which will ultimately help establish remote care networks in the future," Mendes Pereira said.

Credit: 
American Heart Association

Where is the greatest risk to our mineral resource supplies?

image: Bastnaesite (the reddish parts) in Carbonatite. Bastnaesite is an important ore for rare earth elements, one of the mineral commodities identified as most at-risk of supply disruption by the USGS in a new methodology.

Image: 
Scott Horvath, USGS

Policymakers and the U.S. manufacturing sector now have a powerful tool to help them identify which mineral commodities they rely on that are most at risk to supply disruptions, thanks to a new methodology by the U.S. Geological Survey and its partners.

"This methodology is an important part of how we're meeting our goals in the President Trump's Strategy to ensure a reliable supply of critical minerals," said USGS director Jim Reilly. "It provides information supporting American manufacturers' planning and sound supply-chain management decisions."

The methodology evaluated the global supply of and U.S. demand for 52 mineral commodities for the years 2007 to 2016. It identified 23 mineral commodities, including some rare earth elements, cobalt, niobium and tungsten, as posing the greatest supply risk for the U.S. manufacturing sector. These commodities are vital for mobile devices, renewable energy, aerospace and defense applications, among others.

"Manufacturers of new and emerging technologies depend on mineral commodities that are currently sourced largely from other countries," said USGS scientist Nedal Nassar, lead author of the methodology. "It's important to understand which commodities pose the greatest risks for which industries within the manufacturing sector."

The supply risk of mineral commodities to U.S. manufacturers is greatest under three circumstances: when U.S. manufacturers rely primarily on foreign countries for the commodities; when those countries might be unable or unwilling to continue supplying U.S. manufacturers with the minerals; and when U.S. manufacturers are less able to absorb a price shock or a disruption in supply.

"Supply chains can be interrupted for any number of reasons," said Nassar. "International trade tensions and conflict are well-known reasons, but there are many other possibilities. Disease outbreaks, natural disasters, and even domestic civil strife can affect a country's mineral industry and its ability to export mineral commodities to the U.S."

Risk is not set in stone; it changes based on global market conditions that are specific to each individual mineral commodity and to the industries that use them. However, the analysis indicates that risk typically does not change drastically over short periods, but instead remains relatively constant or changes steadily.

"One thing that struck us as we were evaluating the results was how consistent the mineral commodities with the highest risk of supply disruption have been over the past decade," said Nassar. "This is important for policymakers and industries whose plans extend beyond year-to-year changes."

For instance, between 2007 and 2016, the risk for rare earth elements peaked in 2011 and 2012, when China halted exports during a dispute with Japan. Even so, rare earth elements consistently remained among the highest-risk commodities throughout the entire study period.

In 2019, the U.S. Department of Commerce, in coordination with the Department of the Interior and other federal agencies, published the interagency report entitled "A Federal Strategy to Ensure a Reliable Supply of Critical Minerals," in response to President Trump's Executive Order 13817. Among other things, the strategy commits the U.S. Department of the Interior to improve the geophysical, geologic and topographic mapping of the U.S.; make the resulting data and metadata electronically accessible; support private mineral exploration of critical minerals; and make recommendations to streamline permitting and review processes, enhancing access to critical mineral resources.

Credit: 
U.S. Geological Survey

Lipid signaling from beta cells can potentiate an inflammatory macrophage polarization

image: Sasanka Ramanadham

Image: 
UAB

BIRMINGHAM, Ala. - Do the insulin-producing beta cells in the pancreas unwittingly produce a signal that aids their own demise in Type 1 diabetes?

That appears to be the case, according to lipid signaling research co-led by Sasanka Ramanadham, Ph.D., professor of cell, developmental and integrative biology at the University of Alabama at Birmingham, and Charles Chalfant, Ph.D., professor of cell biology, microbiology and molecular biology, University of South Florida.

The research studied the signals that drive macrophage cells in the body to two different phenotypes of activated immune cells. The M1 type attacks infections by phagocytosis, and by the secretion of signals that increase inflammation and molecules that kill microbes. The M2 type acts to resolve inflammation and repair damaged tissues.

Autoimmune diseases result from sustained inflammation, where immune cells attack one's own body. In Type 1 diabetes, macrophages and T cells infiltrate the pancreas and attack beta cells. As they die, insulin production drops or vanishes.

UAB researcher Ramanadham has spent decades studying lipid signaling in Type 1 diabetes. At South Florida, Chalfant studies lipid signaling in cancer, and his lab provides mass spectrometry expertise for both studies. Mass spectrometry is a very sensitive approach to identify different classes of lipids and quantify the abundances of such lipids.

The researchers focused on a particular enzyme, called Ca2+-independent phospholipase A2beta, or iPLA2beta. This enzyme hydrolyzes membrane glycerophospholipids in the cell membrane to release a fatty acid and a lysophospholipid, which by themselves can modulate cellular responses. Other enzymes can then convert those fatty acids into bioactive lipids, which the Ramanadham lab has designated as iPLA2beta-derived lipids, or iDLs. The iDLs can be either pro-inflammatory lipids that promote the M1 macrophage phenotype or pro-resolving lipids that promote the M2 macrophage phenotype, depending on which pathways are most active. Furthermore, the iDLs are released by the cell, so they could participate in cell-to-cell signaling.

Both beta cells and macrophages express iPLA2beta activity.

To look at how the abundances of iDLs could affect inflammation, the researchers studied iPLA2beta-knockout mice and mice with beta cells in the pancreas engineered to overexpress iPLA2beta.

Macrophages from the iPLA2beta-knockout mice were isolated and then classically activated to induce the M1 phenotype. The researchers then measured the iDL eicosanoids produced by the macrophages that lacked iPLA2beta. Compared to wild type activated macrophages, the knockout macrophages produced less pro-inflammatory prostaglandins and more of a specialized pro-resolving lipid mediator called resolvin D2. Both changes were consistent with polarization to the M2 phenotype, a reduced inflammatory state.

Conversely, macrophages from the mice with beta cells that overexpressed iPLA2beta showed increased production of pro-inflammatory eicosanoids and reduced resolvin D2, as compared to wild type mice, reflecting an enhanced inflammatory state and polarization to the M1 phenotype.

"These findings, for the first time, reveal an association between selective changes in eicosanoids and specialized pro-resolving lipid mediators with macrophage polarization and, further, that the relevant lipid species are modulated by iPLA2beta activity," Ramanadham said. "Most importantly, our findings unveil the possibility that events triggered in beta cells can modulate macrophage -- and likely other islet-infiltrating immune cell -- responses."

"To our knowledge, this is the first demonstration of lipid signaling generated by beta cells having an impact on an immune cell that elicits inflammatory consequences," he said. "We think lipids generated by beta cells can cause the cells' own death."

Furthermore, "while the major focus of Type 1 diabetes studies is on deciphering the immunology components of the disease process, our studies bring to the forefront other equally important factors, such as locally generated lipid-signaling, that should be considered in the search for effective strategies to prevent or delay the onset and progression of Type 1 diabetes."

Credit: 
University of Alabama at Birmingham

Leg pain medication may prevent re-blockage of neck arteries after a stent

LOS ANGELES, Feb. 21, 2020 -- Adding cilostazol - an antiplatelet medication used to treat leg pain - tended to prevent re-blockage of carotid artery stents within two years, according to late breaking science presented today at the American Stroke Association's International Stroke Conference 2020. The conference, Feb. 19-21 in Los Angeles, is a world premier meeting for researchers and clinicians dedicated to the science of stroke and brain health.

Blockage of a neck (carotid) artery is a major cause of stroke. Opening the carotid artery with a mesh tube known as a stent is an effective treatment; however, patients can develop a re-blockage, known as in-stent restenosis, which may increase the risk of recurrent stroke.

Cilostazol is a unique antiplatelet agent. As a phosphodiesterase III inhibitor, it improves endothelial function, inhibits the clumping of blood cells (platelet aggregation), widens blood vessels (vasodilation) and mildly inhibits cell growth. It is FDA-approved to treat leg pain in people with peripheral vascular disease.

"This is the first trial to show potential effectiveness of medical management for the prevention of in-stent restenosis after carotid artery stenting," said Hiroshi Yamagami, M.D., Ph.D., lead study author and director of the Department of Stroke Neurology at National Hospital Organization Osaka National Hospital, Japan.

The Carotid Artery Stenting with Cilostazol Addition for Restenosis (CAS-CARE) study is a multi-center, prospective, randomized, open-label trial evaluating the inhibitory effect of cilostazol on in-stent restenosis, compared to other antiplatelet medications in patients scheduled to undergo carotid artery stenting.

Eligible patients were randomly assigned to receive cilostazol (50 mg or 100 mg, twice per day), or any antiplatelet agents other than cilostazol, starting three days before stenting and continuing for two years. A total of 631 patients (average age 70, 88% men) were included in the full study analysis. In-stent restenosis occurred in 9.5% of patients in the cilostazol group and 15% of patients in the non-cilostazol group during two years of follow-up. The rate of cardiovascular event occurrence was about 6% in both groups. Bleeding events were also similar for both groups, at 1.1% in those treated with cilostazol vs. 0.3% among those not treated with cilostazol.

Credit: 
American Heart Association