
Financial stress linked to heart disease risk among African Americans

Boston, MA -- Coronary heart disease (CHD) is the leading cause of death in the U.S., and African Americans are disproportionately affected. Prior studies have investigated how limited access to material resources due to financial hardship may influence health, but the association between the stress caused by financial hardship and coronary heart disease in African Americans had not previously been examined.

In a new study, researchers examined data from 2,256 participants in the Jackson Heart Study, a longitudinal cohort study of cardiovascular disease risk in African-American men and women living in the Jackson, Miss., area, to investigate the association between the psychological stress of financial hardship and CHD in this population. They found that African Americans who experienced moderate to high financial stress had an increased risk of developing heart disease compared with those who did not report such stress.

The study authors concluded that the psychological toll of financial hardship may influence the development of heart disease in combination with stress-related behaviors, health conditions and emotions that contribute to heart disease. Results are published online January 17 in the American Journal of Preventive Medicine.

"Stress is known to contribute to disease risk, but the data from our study suggest a possible relationship between financial stress and heart disease that clinicians should be aware of as we research and develop interventions to address social determinants of health disparities," said senior author Cheryl Clark, MD, ScD, a hospitalist and researcher in the Division of General Medicine and Primary Care at Brigham and Women's Hospital, where she is also the director of Health Equity Research and Intervention in the Center for Community Health and Health Equity.

Researchers analyzed data collected from 2000 to 2012 from participants who had no evidence of heart disease at the beginning of the study. Participants were asked to rate the stress they experienced in several areas, including financial hardship, such as having problems paying bills or running out of pocket money. Participants rated the severity of each experience of financial stress on a 7-point scale, which the researchers then used to categorize the total level of financial stress participants reported at the onset of the study.
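The scoring step described above can be sketched as a small data-processing routine. Note this is a hypothetical illustration: the category cutoffs below are invented for the example and are not the study's actual thresholds.

```python
# Hypothetical sketch of the stress-scoring step: each participant rates
# several financial-stress experiences on a 7-point scale (0 = none), the
# ratings are summed, and the total is bucketed into a stress category.
# The cutoffs (0 / up to 13 / 14 and above) are illustrative assumptions.

def categorize_financial_stress(ratings):
    """Sum the per-experience ratings and map the total to a category."""
    total = sum(ratings)
    if total == 0:
        return "none"
    elif total <= 13:
        return "mild"
    return "moderate-to-high"

print(categorize_financial_stress([0, 0, 0]))  # none
print(categorize_financial_stress([3, 4]))     # mild
print(categorize_financial_stress([7, 7, 7]))  # moderate-to-high
```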

Researchers simultaneously analyzed other participant characteristics and behaviors thought to lead to heart disease, including physical activity and smoking; the presence of chronic conditions including hypertension, high cholesterol, diabetes, and depression; whether participants had access to health care; and social factors such as education and income. After accounting for each of these factors, the researchers found that African-American men and women who experienced moderate-to-high financial stress had almost three times the risk of heart disease events - including heart attacks and procedures to investigate or treat heart disease - compared with those who did not experience financial stress. Individuals with mild financial stress had nearly twice the risk of developing heart disease as those unaffected by stress. The combination of three key factors - depression, smoking, and diabetes - appeared to explain some of the connection between financial stress and heart disease risk.

The study was limited to drawing associations in the data and did not prove a causal connection between stress and heart disease risk. The authors were also not able to determine whether short-term or long-term exposures to stress were enough to raise heart disease risk. Importantly, the findings were limited to those who were willing to report their stress to researchers.

Still, researchers conclude the results should prompt deeper investigation into the role of economic stress on disease risk and encourage policies to reduce these stressors.

"The information from this study covered experiences men and women had during the recession of 2007 and beyond," Clark said. "As we think about policies to prevent heart disease, we need to know a lot more about how economic volatility and financial stress may be connected to heart disease so that we can prevent unnecessary stress that may affect heart health."

Credit: 
Brigham and Women's Hospital

Bee surveys in newest US national park could aid pollinator studies elsewhere

image: Utah State University researcher Joan Meiners, pictured in California's Pinnacles National Park, is lead author of a new study on bee surveys in the Jan. 17, 2019 edition of 'PLOS One.'

Image: 
Therese Lamperty

LOGAN, UTAH, USA - Declines in native bee populations are widely reported, but can existing data really support analysis of these trends? In the Jan. 17, 2019, online edition of PLOS One, Utah State University and USDA researchers report findings about pollinator biodiversity in California's Pinnacles National Park derived from data collected in three separate surveys spanning 17 years. Their results documented 450 species of wild, native bees at Pinnacles, including 48 new to the area since 2002; 95 species detected at the site in the 1990s are now missing.

"This number of species marks the park as a national biodiversity hotspot for bees," says lead author Joan Meiners, who completed a master's degree from Utah State in 2016.

In drafting the report, Meiners and co-researchers Terry Griswold, a USDA-ARS entomologist and USU adjunct faculty member, and Olivia Messinger Carril, a USU alum and independent scientist who conducted the original bee surveys at Pinnacles in the 1990s, also completed a literature review of similar studies. Their efforts were supported by the National Park Service and USDA-ARS.

"We found that only 23 natural areas across the country have been systematically and exhaustively surveyed for bee biodiversity, and no others have been later replicated to compare changes over time," says Meiners, currently a doctoral student at the University of Florida-Gainesville.

She says many estimations of bee biodiversity may not be reliable given limited records from source habitats and the inherently high natural variability of wild bee species across space and time. Over seven years of collecting, no two years had more than 81 percent of species in common, suggesting limited occurrence or detectability of species year-to-year, even in the best of collecting circumstances.

Accurate inventory is critical, she says, as native bees are important agricultural pollinators, contributing some $3 billion in pollination services in the U.S. each year.

"Increased land conservation and systematic, replicated monitoring efforts, similar to those we present from Pinnacles, will be essential to accurately track the extent and nature of widespread declines in our most important pollinators," Meiners says.

Credit: 
Utah State University

Despite concerns about anti-Semitism, the 2019 Women's March may still draw 100 times the national protest average

image: The 2017 Women's March in St. Paul, Minn., was one of many sister marches.

Image: 
Photo by Fibonacci Blue.

As the next Women's March approaches, a new study of the 2017 Women's March solidarity events led by University of Notre Dame Associate Professor of Sociology Kraig Beyerlein is likely a good predictor of what to expect. Based on a survey of sister marches across the United States, key characteristics of the events were massive turnout, majority female leadership, low rate of counterdemonstrators, substantial grassroots mobilization and strong support from faith-based groups.

The study, published in the December 2018 issue of the journal Mobilization, reveals that sister marches drew nearly a hundredfold more participants than an average U.S. protest as measured by Beyerlein and colleagues' National Study of Protest Events (NSPE).

"This participation blew that of a 'typical' recent protest in the United States out of the water. For example, while the mean number of protesters in the NSPE was 61, it was nearly 6,000 for the sister marches," Beyerlein and his co-authors write. "Turnout figures for the solidarity events were also considerably higher than protests from prior decades reported in the New York Times -- which is particularly impressive given that newspapers heavily skew toward large demonstrations -- and the April 15, 2009, Tea Party rallies."

The NSPE shows that, on a national scale, roughly one-third of all U.S. protests feature counterdemonstrators. However, only about 20 percent of the 2017 sister marches encountered counterdemonstrators. This is notable given that conservative groups criticized the Women's March in the days and weeks leading up to it; they generally stayed home on Jan. 21, 2017, keeping their opposition to themselves.

As expected, participants were largely female, but turnout at three-fourths of the sister marches included 25 percent or more men. For all but a fourth of solidarity events, organizing committees and volunteers were overwhelmingly female as well, at 92 percent and 96 percent, respectively. In addition, the vast majority of speakers at sister marches were women. While men had a notable presence on the day of the event, the study shows that women were primarily responsible both for the organizational "heavy lifting" and for serving as the "voice" of the marches. Strong female leadership and the inclusive nature of the sister marches were likely two reasons for their numerical strength, Beyerlein said.

Grassroots efforts also likely contributed to the success of the solidarity events. Among the 86 percent of events with speakers or organizational sponsors, three-fourths or more of them had roots in the local community. Collaboration between different state marches (versus partnership with the national march) was most frequent, occurring 70 percent of the time.

While it is not surprising that the marches received strong support from women's rights and LGBTQIA groups -- both of which the Trump campaign targeted -- the level of sponsorship from religious-based groups is notable, and likely unexpected given popular perception of the right having a monopoly on faith.

"Faith communities' resources are rarely directed toward protest action, and when they are it tends to be for movements opposed to issues central to the Women's March, such as reproductive or LGBTQIA rights," Beyerlein and co-authors note. "Supporting this view, Trump received considerable support among certain religious circles, garnering 81 percent of the evangelical vote. An approach that emphasizes the politically conservative nature of religion would not have predicted the former to sponsor, participate in or provide material support to nearly 60 percent of all sister marches." In other research, Beyerlein and Notre Dame graduate student Peter Ryan demonstrated the dynamics of faith in the 2017 Women's March on Chicago.

Recruiting participants for sister marches was done almost exclusively through social media. This seems like a given in the internet age -- however, solidarity events also relied, in considerable numbers, on conventional mobilizing tools including traditional media, advertisements, flyers and posters. The combination of methods is likely another reason for the considerable turnout at sister marches across the United States.

Beyerlein and other members of the research team plan to continue to examine the solidarity events. "Studying change in the sister marches over time provides the opportunity to document continuity or discontinuity in gender dynamics, organizing strategies and the presence of counterdemonstrators, among other factors," Beyerlein and co-authors state at the end of the article. "Moreover, given that the 2017 Women's Marches were the first mass mobilizations of his presidency, our research can identify how they fit into the broader trajectory of the Trump resistance."

Credit: 
University of Notre Dame

From emergence to eruption: Comprehensive model captures life of a solar flare

video: This visualization is an animation of the solar flare modeled in the new study. The violet color represents plasma with temperature less than 1 million Kelvin. Red represents temperatures between 1 million and 10 million Kelvin, and green represents temperatures above 10 million Kelvin.

Image: 
Video courtesy Mark Cheung, Lockheed Martin, and Matthias Rempel, NCAR

A team of scientists has, for the first time, used a single, cohesive computer model to simulate the entire life cycle of a solar flare: from the buildup of energy thousands of kilometers below the solar surface, to the emergence of tangled magnetic field lines, to the explosive release of energy in a brilliant flash.

The accomplishment, detailed in the journal Nature Astronomy, sets the stage for future solar models to realistically simulate the Sun's own weather as it unfolds in real time, including the appearance of roiling sunspots, which sometimes produce flares and coronal mass ejections. These eruptions can have widespread impacts on Earth, from disrupting power grids and communications networks, to damaging satellites and endangering astronauts.

Scientists at the National Center for Atmospheric Research (NCAR) and the Lockheed Martin Solar and Astrophysics Laboratory led the research. The comprehensive new simulation captures the formation of a solar flare in a more realistic way than previous efforts, and it includes the spectrum of light emissions known to be associated with flares.

"This work allows us to provide an explanation for why flares look like the way they do, not just at a single wavelength, but in visible wavelengths, in ultraviolet and extreme ultraviolet wavelengths, and in X-rays," said Mark Cheung, a staff physicist at Lockheed Martin Solar and Astrophysics Laboratory and a visiting scholar at Stanford University. "We are explaining the many colors of solar flares."

The research was funded largely by NASA and by the National Science Foundation (NSF), which is NCAR's sponsor.

Bridging the scales

For the new study, the scientists had to build a solar model that could stretch across multiple regions of the Sun, capturing the complex and unique physical behavior of each one.

The resulting model begins in the upper part of the convection zone -- about 10,000 kilometers below the Sun's surface -- rises through the solar surface, and pushes out 40,000 kilometers into the solar atmosphere, known as the corona. The differences in gas density, pressure, and other characteristics of the Sun represented across the model are vast.

To successfully simulate a solar flare from emergence to energy release, the scientists needed to add detailed equations to the model that could allow each region to contribute to the solar flare evolution in a realistic way. But they also had to be careful not to make the model so complicated that it would no longer be practical to run with available supercomputing resources.

"We have a model that covers a big range of physical conditions, which makes it very challenging," said NCAR scientist Matthias Rempel. "This kind of realism requires innovative solutions."

To address the challenges, Rempel borrowed a mathematical technique historically used by researchers studying the magnetospheres of Earth and other planets. The technique, which allowed the scientists to compress the difference in time scales between the layers without losing accuracy, enabled the research team to create a model that was both realistic and computationally efficient.

The next step was to set up a scenario on the simulated Sun. In previous research using less complex models, scientists have needed to initiate the models nearly at the moment when the flare would erupt to be able to get a flare to form at all.

In the new study, the team wanted to see if their model could generate a flare on its own. They started by setting up a scenario with conditions inspired by a particularly active sunspot observed in March 2014. The actual sunspot spawned dozens of flares during the time it was visible, including one very powerful X-class and three moderately powerful M-class flares. The scientists did not try to mimic the 2014 sunspot accurately; instead they roughly approximated the same solar ingredients that were present at the time -- and that were so effective at producing flares.

Then they let the model go, watching to see if it would generate a flare on its own.

"Our model was able to capture the entire process, from the buildup of energy to emergence at the surface to rising into the corona, energizing the corona, and then getting to the point when the energy is released in a solar flare," Rempel said.

Now that the model has shown it is capable of realistically simulating a flare's entire life cycle, the scientists are going to test it with real-world observations of the Sun and see if it can successfully simulate what actually occurs on the solar surface.

"This was a stand-alone simulation that was inspired by observed data," Rempel said. "The next step is to directly input observed data into the model and let it drive what's happening. It's an important way to validate the model, and the model can also help us better understand what it is we're observing on the Sun."

Credit: 
National Center for Atmospheric Research/University Corporation for Atmospheric Research

Proteins use a lock and key system to bind to DNA

image: A new study by Gladstone scientists challenges a fundamental assumption of how proteins interact with the human genome.

Image: 
Gladstone Institutes

SAN FRANCISCO, CA--January 16, 2019--You can think of DNA as a string of letters--As, Cs, Ts, and Gs--that together spell out the information needed for the construction and function of cells. Each cell in your body shares the same DNA. So, for cells to take on their differing roles, they must be able to turn on and off specific genes with precise control. The genes active in a brain cell, for instance, are different than those active in a skin cell.

This is achieved in part by the action of "DNA binding proteins" that latch onto the human genome at particular places to turn genes on or off. Now, researchers at the Gladstone Institutes led by Katherine Pollard, PhD, made a major discovery about how these proteins bind to DNA.

Scientists have traditionally thought that DNA binding proteins use patterns in the genome's code of As, Cs, Ts, and Gs to guide them to the right location, with a given protein only binding to a specific sequence of letters. However, many proteins bind to several different letter combinations, and two different proteins may recognize the same pattern.

Despite this multitude of overlapping patterns, proteins never seem confused about where they're supposed to bind. In the new study, published in Cell Systems, the Gladstone scientists discovered that proteins must rely on another clue to know where to bind: the DNA's three-dimensional shape.

"For decades, we've had difficulty explaining how proteins find the correct places to bind in the DNA, and how they do that in a specific way and without binding to the wrong places," said Pollard, a senior investigator and director of the Gladstone Institute of Data Science and Biotechnology. "We hypothesized this could be explained by the structural aspect of the genome."

That's because DNA's string of letters is also a physical, three-dimensional structure, twisted into the famous double-helix shape and wrapped up into a microscopic package. Within its ladder-like structure, a variety of twists, grooves, and gaps can be found between the rungs and sides. Pollard and her team realized these variations create a type of keyhole that select proteins slot into. If the grooves on the protein don't match those on the genome, the key won't fit.

"There's a rich scientific literature on how proteins interact with each other or bind to chemicals, and it's always through a kind of lock and key mechanism; why would proteins binding to DNA be any different?" said Md. Abul Hassan Samee, PhD, a postdoctoral fellow at Gladstone who is the first author of the study. "We think the proteins dock onto DNA as a 3D structure, just like when they interact with other proteins or with chemicals."

Earlier work had raised the possibility that DNA shape provides additional information to proteins on where to bind, but it was unclear how influential these shapes were. To test their theory, the researchers adapted a common machine learning algorithm typically used to identify the letter sequences proteins bind to, except now they were looking for patterns in shape. They discovered that over 80 percent of proteins bind to a specific shape pattern in the genome.

The researchers say that although the proteins are frequently not reading the alphabetical code of the genome, the sequence of letters is still vital in dictating where these proteins bind, because it defines the genome's shape. Curiously, very different letter sequences can designate the same structure, while slightly different letter sequences can result in wildly different structures.
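The sequence-versus-shape distinction described above can be made concrete with a toy sketch. The per-base "shape" values below are invented for illustration (a crude stand-in for real structural features such as minor-groove width), and the matching function is hypothetical; the study derived real structural features and fed them to a machine learning model.

```python
# Toy illustration: DNA "shape", not the raw letters, can be the signal
# a protein reads. Hypothetical per-base shape contribution (not real
# biophysics) -- A/T and C/G pairs are given the same value here so that
# different letter sequences can present the same shape profile.
SHAPE = {"A": 1.0, "T": 1.0, "C": 2.0, "G": 2.0}

def shape_profile(seq):
    """Map a DNA sequence to a crude per-position shape vector."""
    return [SHAPE[base] for base in seq]

def matches_shape(seq, target, tol=0.5):
    """Check whether a sequence's shape profile fits a target 'keyhole'."""
    profile = shape_profile(seq)
    return len(profile) == len(target) and all(
        abs(p - t) <= tol for p, t in zip(profile, target)
    )

# Two very different letter sequences can fit the same shape keyhole:
keyhole = [1.0, 2.0, 2.0, 1.0]
print(matches_shape("ACGT", keyhole))  # True
print(matches_shape("TGCA", keyhole))  # True
print(matches_shape("AAAA", keyhole))  # False
```

In this toy model, "ACGT" and "TGCA" share no letters at any position yet present the same shape profile, mirroring the paper's observation that proteins binding several different letter sequences can be homing in on one spatial pattern.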

This fact helps explain the two biggest mysteries in protein binding to DNA. First, proteins that bind to multiple different letter sequences turn out to be homing in on the same spatial pattern, and second, proteins that appear to share letter sequences are in fact attaching to very different shapes. What's more, proteins that frequently bind to the genome as a pair are attracted to specific shapes that can differ from the shapes they recognize when they bind alone.

The current work was all done with computer modeling, so the researchers' next step is to prove their theory using molecular experiments.

"It was accepted that a pattern of As, Cs, Ts, and Gs where a protein bound to DNA had a particular shape," said Pollard, who is also a professor at UC San Francisco and a Chan Zuckerberg Biohub investigator. "But nobody had looked to see whether other binding locations that couldn't be explained with that pattern of letters might have the same shape. If we can show in a dish that proteins can recognize a DNA location because of its shape, even when it doesn't contain the established letter sequence, I think it would be game changing."

In recent years, scientists have discovered that most genetic mutations that result in disease are not in the genes themselves. Instead, they occur in so-called "dark DNA"--the 99 percent of the human genome that influences how, when, and where genes are turned on or off. With their recent discovery, the researchers have opened the door to understanding a new way that mutations could affect gene expression and, as a result, the functioning of cells.

"There's a huge effort right now to understand how mutations in this dark DNA cause disease, and that's important because for most complex diseases, the majority of the genetic mutations are outside of genes," explains Samee. "Everyone has been looking at the letter sequences and asking whether the mutations disrupt those sequences, but our work shows that you also need to ask whether the mutation is changing the shape of the DNA. You could have a mutation that changes the letter sequence, but if it doesn't change the shape, it may not always change the protein binding."

Credit: 
Gladstone Institutes

Artificial intelligence applied to the genome identifies an unknown human ancestor

image: Jaume Bertranpetit, researcher at the Institute of Evolutionary Biology, and Oscar Lao, researcher at the Centre for Genomic Regulation, co-led the study.

Image: 
Pilar Rodriguez

By combining deep learning algorithms and statistical methods, investigators from the Institute of Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) of the Centre for Genomic Regulation (CRG) and the Institute of Genomics at the University of Tartu have identified, in the genomes of Asian individuals, the footprint of a previously unknown hominid that interbred with their ancestors tens of thousands of years ago.

Computational analysis of modern human DNA suggests that the extinct species was a hybrid of Neanderthals and Denisovans that interbred with Out of Africa modern humans in Asia. This finding would suggest that the hybrid found this summer in the caves of Denisova - the offspring of a Neanderthal mother and a Denisovan father - was not an isolated case, but rather part of a more general introgression process.

The study, published in Nature Communications, uses deep learning for the first time ever to account for human evolution, paving the way for the application of this technology in other questions in biology, genomics and evolution.

Humans had descendants with a species that is unknown to us

One of the ways of distinguishing between two species is that, while both of them may interbreed, they do not generally produce fertile descendants. However, this concept is much more complex when extinct species are involved. In fact, the story told by current human DNA blurs these lines, preserving fragments of hominids from other species, such as the Neanderthals and the Denisovans, who coexisted with modern humans more than 40,000 years ago in Eurasia.

Now, investigators of the Institute of Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) of the Centre for Genomic Regulation (CRG), and the University of Tartu have used deep learning algorithms to identify a new and hitherto-unknown ancestor of humans that would have interbred with modern humans tens of thousands of years ago. "About 80,000 years ago, the so-called Out of Africa occurred, when part of the human population, which already consisted of modern humans, abandoned the African continent and migrated to other continents, giving rise to all the current populations", explained Jaume Bertranpetit, principal investigator at the IBE and head of Department at the UPF. "We know that from that time onwards, modern humans cross bred with Neanderthals in all the continents, except Africa, and with the Denisovans in Oceania and probably in South-East Asia, although the evidence of cross-breeding with a third extinct species had not been confirmed with any certainty".

Deep learning: deciphering the keys to human evolution in ancient DNA

Hitherto, the existence of the third ancestor was only a theory that would explain the origin of some fragments of the current human genome (part of the team involved in this study had already posited the existence of the extinct hominid in a previous study). However, deep learning has made it possible to make the transition from DNA to the demographics of ancestral populations.

The problem the investigators had to contend with is that the demographic models they analysed are much more complex than anything considered to date, and no statistical tools were available to analyse them. Deep learning "is an algorithm that imitates the way in which the nervous system of mammals works, with different artificial neurons that specialise and learn to detect, in data, patterns that are important for performing a given task", stated Òscar Lao, principal investigator at the CNAG-CRG and an expert in this type of simulation. "We have used this property to get the algorithm to learn to predict human demographics using genomes obtained through hundreds of thousands of simulations. Whenever we run a simulation we are travelling along a possible path in the history of humankind. Of all simulations, deep learning allows us to observe what makes the ancestral puzzle fit together".
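The simulate-and-compare logic Lao describes can be sketched in miniature. In the study, a deep neural network was trained on hundreds of thousands of simulated genomes; in the sketch below, a simple nearest-simulation (rejection-style) step stands in for the network, and the "statistic" is a toy stand-in for real genomic summary data. All numbers are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of simulation-based demographic inference: simulate a
# summary statistic under many candidate parameter values, then keep the
# value whose simulation best matches the "observed" data. The study's
# deep learning model replaces this crude comparison step with a trained
# network, but the underlying logic is the same.
import random

random.seed(0)

def simulate_statistic(admixture, n_loci=2000):
    """Toy simulation: fraction of loci carrying an archaic variant."""
    return sum(random.random() < admixture for _ in range(n_loci)) / n_loci

observed = 0.04  # pretend this statistic was measured from real genomes

candidates = [i / 100 for i in range(0, 21)]  # admixture 0% .. 20%
best = min(
    ((p, abs(simulate_statistic(p) - observed)) for p in candidates),
    key=lambda pair: pair[1],
)
print(f"best-fitting admixture proportion: {best[0]:.2f}")
```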


An extinct hominid could explain the history of humankind

The deep learning analysis has revealed that the extinct hominid is probably a descendant of the Neanderthal and Denisovan populations. The discovery of a fossil with these characteristics this summer would seem to endorse the study finding, consolidating the hypothesis of this third species or population that coexisted with modern human beings and mated with them. "Our theory coincides with the hybrid specimen discovered recently in Denisova, although as yet we cannot rule out other possibilities", said Mayukh Mondal, an investigator of the University of Tartu and former investigator at the IBE.

Credit: 
Centre for Genomic Regulation

Kathmandu: Waiting for the complete rupture

image: Members of Nepal's army are helping to remove rubble after the devastating 2015 earthquake.

Image: 
Colourbox

In April 2015, Nepal - and especially the region around the capital city, Kathmandu - was struck by a powerful tremor. An earthquake with a magnitude of 7.8 destroyed entire villages, traffic routes and cultural monuments, with a death toll of some 9,000.

However, the country may still face the threat of much stronger earthquakes with a magnitude of 8 or more. This is the conclusion reached by a group of earth scientists from ETH Zurich based on a new model of the collision zone between the Indian and Eurasian Plates in the vicinity of the Himalayas.

Using this model, the team of ETH researchers working with doctoral student Luca Dal Zilio, from the group led by Professor Taras Gerya at the Institute of Geophysics, has now performed the first high-resolution simulations of earthquake cycles in a cross-section of the rupture zone.

"In the 2015 quake, there was only a partial rupture of the major Himalayan fault separating the two continental plates. The frontal, near-surface section of the rupture zone, where the Indian Plate subducts beneath the Eurasian Plate, did not slip and remains under stress," explains Dal Zilio, lead author of the study, which was recently published in the journal Nature Communications.

Normally, a major earthquake releases almost all the stress that has built up in the vicinity of the focus as a result of displacement of the plates. "Our model shows that, although the Gorkha earthquake reduced the stress level in part of the rupture zone, tension actually increased in the frontal section close to the foot of the Himalayas. The apparent paradox is that 'medium-sized' earthquakes such as Gorkha can create the conditions for an even larger earthquake," says Dal Zilio.

Tremors of the magnitude of the Gorkha earthquake release stress only in the deeper subsections of the fault system over lengths of 100 kilometres. In turn, new and even greater stress builds up in the near-surface sections of the rupture zone.

According to the simulations performed by Dal Zilio and his colleagues, two or three further Gorkha quakes would be needed to build up sufficient stress for an earthquake with a magnitude of 8.1 or more. In a quake of this kind, the rupture zone breaks over the entire depth range, extending up to the Earth's surface and laterally -- along the Himalayan arc -- for hundreds of kilometres. This ultimately leads to a complete stress release in this segment of the fault system, which extends to some 2,000 kilometres in total.
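The stress bookkeeping described above amounts to a simple accumulation argument, sketched here with back-of-envelope numbers. The threshold, per-cycle increment, and leftover stress below are illustrative assumptions, not values from the ETH model.

```python
# Back-of-envelope illustration: each Gorkha-sized event leaves most of
# the frontal section's stress budget in place while adding to it, so a
# few such cycles are needed before a full-depth M8+ rupture becomes
# possible. All values are illustrative (arbitrary stress units).

threshold = 1.0            # stress needed for a complete, full-depth rupture
per_cycle_increment = 0.3  # net frontal-stress gain per Gorkha-type cycle
stress = 0.2               # leftover frontal stress after the partial rupture

cycles = 0
while stress < threshold:
    stress += per_cycle_increment
    cycles += 1

print(cycles)  # 3 further Gorkha-type cycles in this toy setup
```

With these toy numbers the loop reaches the threshold after three cycles, consistent with the "two or three further Gorkha quakes" figure from the simulations.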

Historical data shows that mega events of this kind have also occurred in the past. For example, the Assam earthquake in 1950 had a magnitude of 8.6, with the rupture zone breaking over a length of several hundred kilometres and across the entire depth range. In 1505, a giant earthquake struck with sufficient power to produce an approximately 800-kilometre rupture on the major Himalayan fault. "The new model reveals that powerful earthquakes in the Himalayas have not just one form but at least two, and that their cycles partially overlap," says Edi Kissling, Professor of Seismology and Geodynamics. Super earthquakes might occur with a periodicity of 400 to 600 years, whereas "medium-sized" quakes such as Gorkha have a recurrence time of up to a few hundred years. As the cycles overlap, the researchers expect powerful and dangerous earthquakes to occur at irregular intervals.

However, they cannot predict when another extremely large quake will next take place. "No one can predict earthquakes, not even with the new model. However, we can improve our understanding of the seismic hazard in a specific area and take appropriate precautions," says Kissling.

The two-dimensional and high-resolution model also includes some research findings that were published after the Gorkha earthquake. To generate the simulations, the researchers used the Euler mainframe computer at ETH Zurich. "A three-dimensional model would be more accurate and would also allow us to make statements about the western and eastern fringes of the Himalayas. However, modelling the entire 2,000 kilometres of the rupture zone would require enormous computational power," says Dal Zilio.

Credit: 
ETH Zurich

Fiery sighting: A new physics of eruptions that damage fusion experiments

image: This photo shows physicists Ahmed Diallo, front, and Julien Dominski.

Image: 
Elle Starkman/PPPL Office of Communications.

Sudden bursts of heat that can damage the inner walls of tokamak fusion experiments are a hurdle that operators of the facilities must overcome. Such bursts, called "edge localized modes (ELMs)," occur in doughnut-shaped tokamak devices that house the hot, charged plasma that is used to replicate on Earth the power that drives the sun and other stars. Now researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) have directly observed a possible and previously unknown process that can trigger damaging ELMs.

Working together, physicists Ahmed Diallo, an experimentalist, and Julien Dominski, a theorist, pieced together data from the DIII-D National Fusion Facility that General Atomics operates for the DOE in San Diego, to uncover a trigger for a particular type of ELM that does not fit into present models of ELM plasma destabilization. Their findings could shed light on the variety of mechanisms leading to the onset of ELMs and could broaden the portfolio of ELM suppression tools. Understanding ELM physics is crucial to developing fusion facilities that can fuse light elements in the form of plasma -- the state of matter composed of free electrons and atomic nuclei -- to produce a virtually inexhaustible supply of energy to generate electricity.

Puzzling data

The new observations, reported in Physical Review Letters, began as an effort to unravel puzzling data detected by probes of magnetic field and plasma density fluctuations during DIII-D experiments. The data showed the eruption of ELMs following periods of unusual quiescence. "These were special cases that didn't follow a standard model," said Diallo. "We started digging into this together," Dominski said. "It was a most interesting collaboration."

In roughly six months of joint research, the physicists uncovered previously unseen correlations of fluctuations in the DIII-D experiments. These correlations revealed the formation of two modes -- or waves -- at the edge of the plasma that coupled together to generate a third mode. This newcomer then moved toward the wall of the tokamak -- creating what is technically called a radial distortion -- and triggered bursts of low-frequency ELMs.

The ELMs were a type also seen on the Joint European Torus (JET) in the United Kingdom, the ASDEX Upgrade in Germany and other fusion devices following periods of quiescence. In principle, the results could also apply to systems such as solar flares and geomagnetic storms that are suddenly unleashed, according to the paper.

Opening a door

While the findings open a door on a method for triggering ELMs, they do not fully explain the process. The two physicists thus seek to analyze more data sets. "If we can fully understand how the triggering works we can block and reverse it," Diallo said.

Credit: 
DOE/Princeton Plasma Physics Laboratory

Study: Despite progress, gay fathers and their children still structurally stigmatized

image: A study published in the February 2019 "Pediatrics" journal suggests the majority of gay fathers and their children continue to experience stigma with potentially harmful physical and psychological effects, despite legal, media and social advances. Study participants specifically cited structural stigma, such as state laws and beliefs of religious communities, as affecting their experiences in multiple social contexts.

Image: 
Benson Kau

A study published in the February 2019 "Pediatrics" journal suggests the majority of gay fathers and their children continue to experience stigma with potentially harmful physical and psychological effects, despite legal, media and social advances. Study participants specifically cited structural stigma, such as state laws and beliefs of religious communities, as affecting their experiences in multiple social contexts.

The study's researchers, including Sean Hurley, an associate professor in the University of Vermont's College of Education and Social Services, analyzed anonymous, online survey responses from 732 gay fathers of 1,316 children, from 47 states. Among the questions asked of participants was whether the fathers and/or their children had been "made to feel uncomfortable, excluded, shamed, hurt, or unwelcome" in various social contexts. Almost two-thirds of fathers responding (63.5 percent) reported they had experienced stigma over the past year based on being a gay father.

Most stigma occurred in religious environments (reported by 34.8 percent of fathers), while about one-quarter of respondents reported experiencing stigma in the past year from family members, neighbors, gay friends and/or service providers such as waiters and salespeople. Nearly 19 percent of fathers reported that their children had avoided activities with friends for fear of encountering stigma.

"The results of the study are important because they highlight that while much progress has been made in terms of the experiences of gay men parenting, we find that they and their children are still experiencing potentially harmful stigma in a variety of social contexts," says Hurley, who served as the study's methodologist.

For these families, the presence of laws and policies supportive of LGBT populations in states where they live reduced the experience of stigma. Prior research has shown that the amount of community support provided to members of sexual minorities is related to the well-being of lesbian and gay adolescents, adults, and children with lesbian or gay parents, and impacts rates of suicidality and psychiatric disorders.

This study's authors encouraged pediatricians caring for children and their gay fathers to have discussions with these families about potentially stigmatizing experiences to help them learn strategies to counteract their harmful effects. Researchers agree that pediatricians, as leaders in their communities, also have an opportunity to help oppose discrimination in religious and other community institutions.

Credit: 
University of Vermont

Researchers develop new zoning tool that provides global topographic datasets in minutes

image: The "GFPLAIN250m" dataset depicting Earth's floodplain boundaries

Image: 
WRRDC

Fluvial landscapes and the availability of water are of paramount importance for human safety and socioeconomic growth. Hydrologists know that identifying the boundaries of floodplains is often the first crucial step for any urban development or environmental protection plan.

Floodplain zoning is usually performed using complex hydrodynamic models, but modeling results can vary widely across methods and until now there has been no available unifying framework for global floodplain mapping.

With the increased availability of remote sensing technologies, however, scientists now have access to high-resolution datasets on Earth's surface properties at the global scale.

As a result, an international team of scientists, including ASU professor and hydrologist Enrique Vivoni of the School of Earth and Space Exploration, has published the first comprehensive high resolution map of Earth's floodplains in the Nature journal Scientific Data.

"Progress made in remote sensing has truly revolutionized our capacity to monitor the Earth," says Vivoni, who also holds a joint appointment at ASU's School of Sustainable Engineering and the Built Environment. "Since floodplains are so important to population centers, economic activities and transportation, it is indeed critical to be able to identify their extents. With this new view of Earth's floodplains, we can now characterize the human footprint on these globally-significant environments."

The international research project team, which includes ASU's Vivoni, was led by hydrologist Fernando Nardi of the Water Resources Research and Documentation Centre of the University for Foreigners in Perugia (Italy). Additional hydrologists on the team include Antonio Annis also of the University for Foreigners, Salvatore Grimaldi of the Tuscia University of Viterbo (Italy), and Giuliano Di Baldassarre of Uppsala University (Sweden).

The geomorphic floodplain zoning tool, known as GFPLAIN (for Global Floodplain), is an open-source program that can be shared with scientists and professionals around the world. It allows them to identify floodplain boundaries, characterize morphology and landscape patterns, and process regional topographic datasets in minutes or even seconds on a continental scale.

"Observing any aerial image of fluvial corridors, one can clearly distinguish floodplain boundaries by their unique shapes and colors," explains lead author Fernando Nardi, associate professor and director of the Water Resources Research and Documentation Centre at University for Foreigners of Perugia.

"These unique floodplain properties are linked to water-driven erosion and deposition processes, mainly associated with historical flood events, that give shape to fluvial landforms," Nardi says. "We found and exploited the principle that global topographic datasets implicitly contain the floodplain extent information, and we have released the first global geomorphic model of Earth's floodplains together with an easy-to-use tool that both researchers and professionals can use for their floodplain mapping projects."

By sharing the dataset of global floodplains as open data, the research team has provided novel opportunities for scientists and professionals worldwide to develop sustainable water management plans and to gain a better understanding of complex floodplain-urban interactions, especially in data-poor river basins that are stressed by growing human populations.

Credit: 
Arizona State University

Why haven't cancer cells undergone genetic meltdowns?

Cancer first develops as a single cell going rogue, with mutations that trigger aggressive growth at all costs to the health of the organism. But if cancer cells were accumulating harmful mutations faster than they could be purged, wouldn't the population eventually die out?

How do cancer cells avoid complete genetic meltdown?

To get at the heart of the matter, a team of scientists from Beijing and Taipei sought fresh insight into cancer vulnerability from a mutational perspective by probing the most famous cultured cancer cells: HeLa cells.

Famously isolated in 1951 from Henrietta Lacks, a cervical cancer patient, the cells became the first immortalized human cell line, helped in the development of the polio vaccine, and are now a foundational biotechnology resource for in vitro drug development and cancer studies.

And they are still providing ample opportunities to further our understanding of cancer.

"In this study, HeLa cells are not used to reveal the process of tumorigenesis but mainly a model for addressing the underlying evolutionary forces, which need to be powerful enough to measure in laboratory settings. We examined variation in growth rate among individual HeLa cells by monitoring clones from a common ancestral HeLa cell population," said corresponding author Xuemei Lu.

They first established a HeLa cell line (E6) derived from an ancestral cell line. When the population size of E6 reached approximately 5 × 10⁴ cells (15-16 divisions), five single-cell clones were generated and established in culture. The team then sequenced the DNA of these clones to catalog their mutations. They focused on copy number variation (CNV) rather than single-nucleotide changes because single-nucleotide mutation rates are too low to produce significant sequence variation during the short-duration culturing experiments.
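
As a quick sanity check on those figures, exponential doubling from a single cell reaches roughly 5 × 10⁴ cells after about 15-16 divisions (a simplified sketch that ignores cell death):

```python
import math

target_cells = 5e4
divisions = math.log2(target_cells)  # doublings needed starting from one cell
print(f"~{divisions:.1f} divisions to reach {target_cells:.0f} cells")  # ~15.6
```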

"We then estimated the deleterious mutation rate and the average fitness decrease per mutation by performing computer simulations of cell growth," said author Hurng-Yi Wang.

Overall, they found that the main mutations affect the copy number of genes, with an average of 0.29 deleterious events per cell division. Each of these events reduces fitness by 18 percent.

Their results indicate that heterogeneity in cell growth can be generated in a very short period of time in cancer cells and is heritable and genetically determined.

"Our estimates indicate that the HeLa cells experience a 5 percent reduction (0.29 × 0.18 ≈ 5%) in fitness for every generation. Our observations suggest that human cells that have been cultured for a sufficiently long period still generate deleterious mutations in the form of CNVs at a high rate and with a high intensity. For such systems, a mutational meltdown might be plausible."
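
The arithmetic in the quote can be reproduced in a few lines. Treating deleterious CNV events as a Poisson process with a mean of 0.29 events per division gives both the ~5 percent per-generation fitness cost and a rough estimate of the mutation-free fraction of lineages (a simplified sketch under a Poisson assumption, not the authors' actual simulation code):

```python
import math

MU = 0.29    # mean deleterious CNV events per cell division (from the study)
COST = 0.18  # fitness reduction per deleterious event (from the study)

# Expected fitness loss per generation
per_gen_loss = MU * COST
print(f"~{per_gen_loss:.1%} fitness loss per generation")  # ~5.2%

def mutation_free_fraction(n_divisions: int) -> float:
    """Probability that a lineage accumulates zero deleterious events
    over n divisions (Poisson, ignoring selection -- a simplification)."""
    return math.exp(-MU * n_divisions)

print(f"{mutation_free_fraction(7):.1%} of lineages mutation-free after 7 divisions")
```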

For example, when they isolated 39 cells from B8 (a fast-growing clone) and 40 cells from E3 (slow growing clone), and monitored their growth from a single cell for seven days, approximately 23 percent of B8 and 50 percent of E3 cells died out within seven days, due to either damage caused during cell isolation or genetic defects.

Next, they picked about 20 cells from each of the single cell originated clones from B8 and counted their chromosome numbers.

Chromosome counts deviated widely from the normal human number of 46, ranging from 38 to 113, with most cells (72 percent) harboring between 55 and 70 chromosomes, indicating that they are roughly triploid. Therefore, despite their single-cell origin, the progeny quickly generated aneuploidy within only 20-30 cell divisions, again illustrating frequent cytogenetic change in cancer cells.

Despite the accumulating mutations, the reductions in growth rate, and chromosome numbers that no longer resemble those of normal human cells, these cancer cells still find a way to survive.

So how do HeLa cells persist?

"High deleterious mutation rate would raise an impression that the HeLa cell lines may have gone extinct long ago," said Lu.

Their simulation results indicated that although most of the cells accumulated deleterious mutations and were less fit than the ancestral cells, 13.1 percent of cells remained mutation-free.

"These mutation-free cells can save the population from extinction."

It also explains why, even if chemotherapy treatment successfully killed 90 percent of a cancer cell population, it may still not be enough.

The new study advances understanding not only of the evolution of HeLa cells and of tumors in general, but also of cultured cells from multicellular organisms more broadly. In future work, the scientists want to use their findings on cancer cell fitness and growth rates to understand how cancer cells can be made even more vulnerable to recent breakthrough checkpoint inhibitor drugs.

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Cop voice: Jay-Z, Public Enemy songs highlight police tactic to frighten people of color

image: Jennifer Lynn Stoever is an associate professor of English at Binghamton University, State University of New York.

Image: 
Binghamton University, State University of New York

BINGHAMTON, N.Y. - What do songs by artists like Jay-Z and Public Enemy have in common? They feature representations of 'cop voice,' a racialized way of speaking that police use to weaponize their voices around people of color, according to faculty at Binghamton University, State University of New York.

Jennifer Lynn Stoever, associate professor of English at Binghamton University, studies what she refers to as the "sonic color line," the learned cultural mechanism that establishes racial difference through listening habits and uses sound to communicate one's position vis-à-vis white citizenship.

"In the United States," said Stoever, "the ideology of the sonic color line operates as an aural boundary: sounds are racialized, naturalized and then policed as 'black' or 'white'."

According to Stoever, police use a racialized and gendered way of speaking known as 'cop voice' to provoke fear and extreme forms of compliance from people of color. In her new paper, Stoever identifies the phenomenon of the 'cop voice' and analyzes how three hip-hop artists have deployed it as a trope in their songs to interrogate police violence in black communities.

"I define 'cop voice' as the way in which police wield a vocal cadence and tone structured by and vested with white masculine authority, a sound that exerts a forceful, unearned racial authority via the sonic color line to terrorize people of color," wrote Stoever. "Intentionally wielded, although allegedly 'inaudible' to its users, cop voice almost immediately escalates routine police interactions with people of color..."

Stoever argues that hip-hop artists like Jay-Z, Public Enemy and KRS-One represent 'cop voice' through shifts in their rapping flow or by using white guest rappers.

"When rappers re-enact the cadence of white supremacy in their songs, I argue, they use their vocal tone, cadence and timbre to share embodied listening experiences as black men and women," wrote Stoever. "By re-enacting these everyday moments, rappers verbally cite the violence inherent in the masculinist sound of the cop voice itself: the confident, assured violence propelling those aspirant 't's and rounded, hyper-pronounced 'r's."

Jay-Z's "99 Problems," features an interaction between a white police officer and the black man he has pulled over. According to Stoever, Jay-Z changes his cadence in the song to take on the sound of state-sanctioned white supremacy that he hears in the cop's voice.

"The contrast in the interplay between the white cop and the black driver highlights the racial scripting inherent in the cop's rhythmic vocal aggression," wrote Stoever. "Jay-Z's performance of this cop marshals the sound of whiteness, and involves accent, tone and grain - but it is more than these things, and yet all of these things at once. It is a cadence, an ideologically rhythmic iteration of white supremacy in the voice, one that surrounds, animates and shapes speech. Jay-Z's lyrical and vocal performance of cop voice embodies and deliberately grinds against the edge of the sonic color line, calling attention to it and enacting its relations of power by inhabiting whiteness with audible masculine swagger and expectation of immediate obedience."

Identifying and listening closely to these examples of cop voice reveals how people who are raced as 'white' in the United States mobilize this subject position in their voices through particular cadences that audibly signify racial authority while, at the same time, never hearing themselves as doing so, wrote Stoever.

"In each of these songs, male rappers vocally emphasize how cops sound to them; parroting this speech amplifies how white people weaponize their voices in these semiprivate encounters to exert unearned racial authority via the sonic color line," she wrote.

Credit: 
Binghamton University

Unraveling threads of bizarre hagfish's explosive slime

video: This video shows hagfish thread unspooling from a skein. It's also available at https://youtu.be/X4aXJ6G-M40.

Image: 
Image courtesy Jean-Luc Thiffeault.

MADISON -- Hundreds of meters deep in the dark of the ocean, a shark glides toward what seems like a meal. It's kind of ugly, eel-like and not particularly meaty, but still probably food. So the shark strikes.

This is where the interaction of biology and physics gets mysterious -- just as the shark finds its dinner interrupted by a cloud of protective slime that appeared out of nowhere around an otherwise placid hagfish.

Jean-Luc Thiffeault, a University of Wisconsin-Madison math professor, and collaborators Randy Ewoldt and Gaurav Chaudhary of the University of Illinois have modeled the hagfish's gag-inducing defense mechanism mathematically, publishing their work today in the Journal of the Royal Society Interface.

The ocean-dwelling hagfish is unique for all the strangest reasons. It has a skull, but no spine or jaw. Its skin hangs loose on its body, attached only along the back. Its teeth and fins are primitive, underdeveloped structures best described with qualifiers -- "tooth-like" and "fin-like."

But it has an amazing trick up that creepy, loose sleeve of skin: In the blink of an eye (or flash of attacking tail and teeth) the hagfish can produce many times its own body's volume in slime. The goop is so thick and fibrous, predators have little choice but to spit out the hagfish and try to clear their mouths.

"The mouth of the shark is immediately chock full of this gel," Thiffeault says. "In fact, it often kills them, because it clogs their gills."

The gel is a tangled network of microscopic, seawater-trapping threads unspooled from balls of the stuff ejected from glands along the hagfish's skin. These "skeins" are just 100 millionths of a meter in diameter (twice the width of a human hair), but so densely coiled that they can contain as much as 15 centimeters of thread.

Curious scientists have looked at the unraveling before, putting the skeins in salt water to see how long it took them to come apart.

"The hagfish does it in less than half a second, but it took hours of soaking for the threads to loosen up in experiments," says Thiffeault, whose research is focused on fluid dynamics and mixing. "Until they stirred the water, and it happened faster. The stirring was the thing."

The slime modelers set out to see if math could tell them whether the forces of the turbulent water of a bite-and-shake attack were enough to unspool the skeins and make the slime, or if another mechanism -- like a chemical reaction providing some pop to the skein -- was required.

Ewoldt, a mechanical engineering professor, and his graduate student Chaudhary began unraveling skeins under microscopes, watching the process as loose ends of thread stuck to the tip of a moving syringe and trailing lengths spun out from the ball.

"Our model hinges on an idea of a small piece that's initially dangling out, and then a piece that's being pulled away," says Thiffeault. "Think of it as a roll of tape. To start pulling tape from a new roll, you may have to hunt for the end and pick it loose with your fingernail. But if there's already a free end, it's easy to catch it with something and get going."

Unrolling requires a big enough difference between the drag on the free end and an opposing push on the skein -- a ratio larger than a tipping point the researchers refer to informally as the "peeling number" -- to free more thread.
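
That tipping-point condition can be sketched as a simple ratio test: thread pays out only while the drag on the free end exceeds the resistance holding the skein by more than the threshold "peeling number" (an illustrative toy model; the function, variable names, and threshold value are assumptions, not the paper's formulation):

```python
def unspools(drag_free_end: float, drag_skein: float,
             peeling_number: float = 1.0) -> bool:
    """Toy criterion: thread pays out only if the force on the free end,
    relative to the opposing push on the skein, exceeds the peeling number."""
    if drag_skein == 0:
        return True  # a pinned skein unrolls under any pull on the free end
    return drag_free_end / drag_skein > peeling_number

# Freely tumbling in open water: forces comparable, no unspooling
print(unspools(drag_free_end=1.0, drag_skein=1.0))  # False
# Thread snagged on a surface: large force imbalance, explosive unspooling
print(unspools(drag_free_end=5.0, drag_skein=1.0))  # True
```

This captures the paper's main conclusion qualitatively: a free-floating skein rarely crosses the threshold, while a snag creates the force imbalance that makes unspooling fast.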

"That's unlikely to happen if the whole thing is moving freely in water," says Thiffeault. "The main conclusion of our model is we think the mechanism relies on the threads getting caught on something else -- other threads, all the surfaces on the inside of a predator's mouth, pretty much anything -- and it's from there it can really be explosive."

It doesn't even have to be a single snag.

"Biology being the way it is, it doesn't have to be exact. Things get to be messy," says Thiffeault. "That leading bit of thread can get caught a little bit, then slip, then get caught again. As long as it's happening to enough skeins, it's pretty fast that you're in the slime."

The skeins may get a boost from mucins, proteins found in mucus that could speed the breakup of packed thread, "but those kinds of things would just help the hydrodynamics," says Thiffeault, who once calculated the extent to which swimming marine life mix entire oceans with their fins and flippers.

"It's just hard to imagine there's another process other than hydrodynamic flow that can lead to these timescales, that burst of slime," he says. "When the shark bites down, that does create turbulence. That creates faster flows, the sorts of things that provide the seed for these things to happen. Nothing is going to happen as nicely as in our model -- which is more of a good start for anyone who wants to take more measurements -- but our model shows the physical forces play the biggest role."

The hydrodynamics of hagfish slime is not just a curiosity. Understanding the formation and behavior of gels is a standing issue in many biological processes and similar industrial and medical applications.

"One of the things we'd love to work on in the future is the network of threads. I love thinking about modeling materials as big random collections of threads," Thiffeault says. "A simple model of entangled threads may help us see how that network determines the macroscopic properties of a lot of different, interesting materials."

Credit: 
University of Wisconsin-Madison

Food ads targeting black and Hispanic youth almost exclusively promote unhealthy products

image: Food-related advertising to Hispanic consumers almost exclusively promotes unhealthy brands.

Image: 
Bill Kelly, Kelly Design Company

Hartford, Conn. - Restaurant, food, and beverage companies (food companies) target Hispanic and Black children and teens with ads almost exclusively for fast food, candy, sugary drinks, and unhealthy snacks, according to a new report from the Rudd Center for Food Policy & Obesity at the University of Connecticut, the Council on Black Health at Drexel University, and Salud America! at UT Health San Antonio.

The new report finds that fast food, candy, sugary drinks, and unhealthy snacks represented 86 percent of food ad spending on Black-targeted TV programming, where Black consumers comprise the majority of viewers, and 82 percent of ad spending on Spanish-language TV, in 2017. According to researchers, food companies spent almost $11 billion in total TV advertising in 2017, including $1.1 billion on advertising in Black-targeted and Spanish-language TV programming.

"Food companies have introduced healthier products and established corporate responsibility programs to support health and wellness among their customers, but this study shows that they continue to spend 8 of 10 TV advertising dollars on fast food, candy, sugary drinks, and unhealthy snacks, with even more advertising for these products targeted to Black and Hispanic youth," said Jennifer Harris, PhD, the report's lead author and the Rudd Center's director of Marketing Initiatives.

Researchers also found that food companies increased their Black-targeted TV ad spending by more than 50 percent from 2013 to 2017, even though their total advertising spending on all TV programming declined by 4 percent. Black teens saw more than twice as many ads for unhealthy products compared to White teens in 2017.

The report, "Increasing disparities in unhealthy food advertising targeted to Hispanic and Black youth," analyzed advertising by 32 major restaurant, food, and beverage companies that spent at least $100 million on food advertising to children (age 2-11) and teens (age 12-17) in 2017 and/or participated in the Children's Food and Beverage Advertising Initiative (CFBAI). The CFBAI is a voluntary, self-regulatory program that sets standards for food advertising directed to children under age 12.

Researchers examined TV ad spending by food companies, as well as young people's exposure to this advertising, and identified brands targeting all children and teens and Hispanic and Black consumers on Spanish-language and Black-targeted TV programming. They compared these 2017 findings with data collected in 2013 from an earlier Rudd Center report on this topic. Researchers also examined companies' public statements about their targeted marketing.

Companies Rarely Advertise Healthy Products:

The report also finds that advertising for healthier product categories--including 100 percent juice, water, nuts, and fruit--totaled only $195 million on all TV programming in 2017, a figure that represented 3 percent of their overall ad spending. Companies were even less likely to advertise these products to Black consumers (representing just 1 percent of ad spending on Black-targeted TV), and they were not advertised at all on Spanish-language TV.

"At best, these advertising patterns imply that food companies view Black consumers as interested in candy, sugary drinks, fast food, and snacks with a lot of salt, fat, or sugar, but not in healthier foods," said Shiriki Kumanyika, PhD, MPH, study author and chair of the Council on Black Health at Drexel University, Dornsife School of Public Health. "Not only are these companies missing out on a marketing opportunity, but they are inadvertently contributing to poor health in Black communities by heavily promoting products linked to an increased risk of obesity, diabetes, and high blood pressure," she said.

Study authors call on food manufacturers to stop disproportionately targeting Black and Hispanic youth with ads for unhealthy food, expand corporate health and wellness commitments to promote marketing of healthier products to communities of color, and strengthen CFBAI standards to address targeted marketing of unhealthy products to all children and teens, including Black and Hispanic youth.

"This report shows just how much the food and beverage industry values Hispanic consumers when it comes to encouraging them to buy unhealthy products. But if the industry really values these consumers, companies will take responsibility for advertising that encourages poor diet and related diseases. They can start by eliminating the marketing of unhealthy products to Hispanic youth and families," said Amelie G. Ramirez, DrPH, MPH, study author and director of Salud America!, a national program to promote health equity based at the Institute for Health Promotion Research at UT Health San Antonio.

Other findings in the report include:

Black children and teens each viewed an average of more than 16 food-related ads per day in 2017, compared to 8.8 ads-per-day for White children and 7.8 ads for White teens.

Disparities in how many food-related TV ads Black and White youth view are increasing. In 2013, Black children and teens viewed 70 percent more food-related ads than their White peers. In 2017, these disparities grew to 86 percent more ads viewed by Black children and 119 percent more for Black teens compared to White children and teens.
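
The disparity figures above are simple ratios. For example (taking 16.4 ads per day as an assumed illustrative value for Black children, since the report says only "more than 16"):

```python
def percent_more(a: float, b: float) -> float:
    """How much larger a is than b, expressed as a percentage."""
    return (a / b - 1) * 100

# White children saw 8.8 food-related ads/day in 2017 (from the report);
# 16.4 ads/day for Black children is an assumed illustrative value.
print(f"{percent_more(16.4, 8.8):.0f}% more ads seen by Black children")  # ~86%

# Inverting the reported 119% teen disparity implies Black teens saw about
# 7.8 * 2.19, i.e. roughly 17 ads per day.
implied_black_teen_ads = 7.8 * (1 + 119 / 100)
print(f"~{implied_black_teen_ads:.0f} ads/day implied for Black teens")
```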

Candy brands, in particular, disproportionately advertised to Hispanic and Black youth. Candy represented almost 20 percent of food-related TV ads viewed by Hispanic children and teens on Spanish-language TV. Black children and teens saw approximately 2.5 times as many candy ads as White children and teens.

Companies with the most brands targeted to all youth and to Black and/or Hispanic consumers of all ages included Mars (candy and gum brands), PepsiCo (snack and sugary drink brands), and Coca-Cola (sugary drink, diet soda, and drink mix brands).

Fast food restaurants represented approximately one-half of all food-related TV advertising in 2017 (almost $4 billion), including advertising on Black-targeted and Spanish-language TV programming.

Credit: 
UConn Rudd Center for Food Policy and Obesity

New insights into what Neolithic people ate in southeastern Europe

image: This image shows the Iron Gates gorges (image C. Bonsall) and, inset, a reconstructed Starčevo pot (image M. Todera).

Image: 
The Iron Gates gorges (image C. Bonsall) and, inset, a reconstructed Starčevo pot (image M. Todera)

New research, led by the University of Bristol, has shed new light on the eating habits of Neolithic people living in southeastern Europe using food residues from pottery extracts dating back more than 8,000 years.

With the dawn of the Neolithic age, farming became established across Europe and people turned their backs on aquatic resources, a food source more typical of the earlier Mesolithic period, preferring instead to eat meat and dairy products from domesticated animals.

The research, published today in the journal Royal Society Proceedings B, reveals that people living in the Iron Gates region of the Danube continued regular fish-processing, whereas pottery extracts previously examined from hundreds of sherds elsewhere in Europe show that meat and dairy were the main foods prepared in pots.

This region is archaeologically very important because its sites document Late Mesolithic forager settlements and the first appearance of Neolithic culture as it spread through Europe, illustrated by the first appearances of pottery, domesticated plants and animals, and different burial styles.

The Iron Gates is a unique landscape on the border between modern-day Romania and Serbia where the Danube cuts through the junction of the Balkan and Carpathian mountain chains. It provided a rich wild aquatic resource base for prehistoric hunter-fisher-foragers during the Late Glacial and early Holocene.

As farming spread from southwest Asia into Europe, prehistoric diets ultimately shifted towards one based upon domesticated plants and animals. However, in this region, evidence has suggested that wild resources may have continued to be important well into the early Neolithic.

This research involved analysis of organic residues surviving in the fabric of 8,000-year-old Neolithic pottery excavated from sites on the banks of the Danube.

Chemical analyses allowed scientists to directly see what kinds of resources were being prepared in these newly-appearing pots and compare this with the way the same type of pottery was being used by farmers in the wider Balkans region.

Dr Lucy Cramp from the University of Bristol's Department of Anthropology and Archaeology led the research. She said: "The findings revealed that the majority of Neolithic pots analysed here were being used for processing fish or other aquatic resources.

"This is a significant contrast with an earlier study showing the same type of pottery in the surrounding region was being used for cattle, sheep or goat meat and dairy products.

"It is also completely different to nearly all other assemblages of Neolithic farmer-type pottery previously analysed from across Europe (nearly 1,000 residues) which also show predominantly terrestrial-based resources being prepared in cooking pots (cattle/sheep/goat, possibly also deer), even from locations near major rivers or the coast."

The research team suggest that this unusual dietary and subsistence pattern may have arisen for several reasons.

It is possible that farmers were attracted to this location by its impressive aquatic resources, including huge sturgeon which swam up the river from the Black Sea.

It may also be that Late Mesolithic dietary practices continued here, but now using the new Neolithic pottery as a result of early interactions between Mesolithic and Neolithic communities.

Credit: 
University of Bristol