Earth

Living in greener neighborhoods may postpone the natural onset of menopause

Barcelona, 13 February 2020. Living near green spaces is associated with a wide variety of benefits, including a lower risk of obesity, improved attention capacity in children and slower physical decline in old age. Now, for the first time, a study led by the University of Bergen and the Barcelona Institute for Global Health (ISGlobal), a centre supported by "la Caixa", has found that living in a greener neighbourhood is also associated with older age at the onset of menopause.

The study, published in the November issue of Environment International, analysed data on 1,955 women from nine countries (Spain, France, Germany, Belgium, United Kingdom, Sweden, Estonia, Iceland and Norway) who took part in the European Community Respiratory Health Survey (ECRHS). Over a 20-year period, participants completed questionnaires on their health and lifestyle factors and underwent blood sampling. The availability and extent of green space in their neighbourhoods was also calculated.

The study found that women living in neighbourhoods with little green space became menopausal 1.4 years earlier than those living in the greenest areas. On average, age at menopause was 51.7 years for women living in the greenest areas, compared with 50.3 years for women living in areas with little green space.

In addition to genetic factors, age at menopause is influenced by lifestyle factors such as smoking, obesity, physical activity and the use of oral contraceptives. A number of biological processes could explain the association between green space and older age at menopause. "We know that stress increases the level of cortisol in the blood, and numerous studies have shown that exposure to green spaces reduces it," explained Kai Triebner, postdoctoral visiting researcher at ISGlobal and lead author of the study. "Low cortisol levels have been associated with increased levels of estradiol, an important female sex hormone. Perhaps women who live near green space have lower cortisol levels, which would allow them to maintain higher levels of estradiol, which may in turn delay the onset of menopause." He added: "Exposure to green space is also associated with a lower risk of certain mental health conditions, such as depression, which is also associated with younger age at menopause."

Credit: 
Barcelona Institute for Global Health (ISGlobal)

Fewer liquor stores may lead to less homicide

PISCATAWAY, NJ - Reducing the number of businesses in Baltimore that sell alcohol in urban residential areas may lower the homicide rate, according to new research.

As cities contemplate new zoning regulations for alcohol sales, those policies can have life-or-death consequences.

"There is an ongoing violence epidemic in Baltimore, with recent years breaking records for number of homicides," write the authors, led by Pamela J. Trangenstein, Ph.D., M.P.H., of the University of North Carolina at Chapel Hill. "This study suggests that there is potential to prevent violent crimes by reducing alcohol outlet density in Baltimore City."

The results are published in the latest issue of the Journal of Studies on Alcohol and Drugs.

Baltimore is in the process of rewriting its zoning laws, and Trangenstein and colleagues patterned their research after the proposed zoning changes in that city as they relate to alcohol. Using a computer model that took into account homicide rates in Baltimore and previous research that shows 50 percent of violent crime can be attributed to access to alcohol, the researchers analyzed three main policy changes.

The first would reduce by 20 percent all outlets that sell alcohol. The second proposal would close liquor stores only in residential areas. The third would close outlets licensed as bars or taverns that were really operating as liquor stores. (In Baltimore, bars and taverns are allowed longer operating hours than liquor stores, which allows these "sham" bars and taverns to act as extended-hours outlets.)

After factoring in additional data related to homicide--such as socioeconomic status, population density, and drug arrests--the researchers' computer modeling predicted that an overall reduction of alcohol outlets by about 20 percent would cut homicides by 51 a year and save $63.7 million. Closing liquor stores in residential areas would eliminate 22 homicides a year, saving $27.5 million. But closing sham bars/taverns operating as liquor stores would reduce homicides by only 1 annually, saving $1.2 million.

Although the 20 percent reduction would curtail the homicide rate the most, the authors determined that Baltimore would need to close such a large number of alcohol outlets that the policy would likely be considered "anti-business" and politically unfeasible.

Therefore, the authors concluded, the best option would be to close the 80 liquor stores found in residential zones. Because Baltimore has over 1,200 licensed alcohol outlets, this means that closing only 1 of every 15 outlets would likely save 22 lives from among the more than 300 homicides the city sees annually.
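Putting the article's figures side by side clarifies the trade-off the authors weighed: each policy's predicted savings work out to roughly $1.2-1.25 million per homicide averted, consistent with the model applying a fixed cost per homicide. A minimal sketch using only the numbers quoted above (variable names are ours, for illustration):

```python
# Predicted annual effects of each policy, as reported in the article.
policies = {
    "cut all outlets 20%":      {"homicides_averted": 51, "savings_musd": 63.7},
    "close residential stores": {"homicides_averted": 22, "savings_musd": 27.5},
    "close sham bars/taverns":  {"homicides_averted": 1,  "savings_musd": 1.2},
}

# The residential-store option touches only 80 of the >1,200 licensed outlets.
fraction_closed = 80 / 1200  # about 1 in 15 outlets

# Implied savings per homicide averted, in millions of USD.
savings_per_homicide_musd = {
    name: p["savings_musd"] / p["homicides_averted"]
    for name, p in policies.items()
}
```

The near-constant per-homicide figure is why the comparison reduces to homicides averted per outlet closed, which is where the residential-store option stands out.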

"Alcohol outlets tend to cluster in low-income and minority neighborhoods," the authors write, "and alcohol outlet density zoning would ideally aim to reduce the concentration of outlets in these neighborhoods."

The authors note three main reasons alcohol access is linked to violence. First, more outlets means people can get alcohol more easily--they simply don't have to travel far to get it.

Second, a large concentration of businesses that sell alcohol can create "an atmosphere of immoral or illegal behavior," according to the researchers, and likely will attract young men, who themselves are more prone to violence, even if they aren't drinking.

Last, a high concentration of alcohol outlets brings more high-risk drinkers together in a smaller area, "fostering opportunities for violence," the authors write.

Trangenstein and colleagues note there is a recent trend in which some states and cities have adopted increasingly relaxed policies regarding the density of alcohol outlets. However, studies such as the current one may help policymakers make more evidence-based decisions.

Credit: 
Journal of Studies on Alcohol and Drugs

Fewer veterans dying or requiring amputations for critically blocked leg arteries

DALLAS, February 13, 2019 -- Between 2005 and 2014, the number of veterans who were hospitalized, required amputation or died due to critical blockages in leg arteries declined, according to new research published today in Circulation: Cardiovascular Interventions, a journal of the American Heart Association.

Critical narrowing of leg arteries, called critical limb ischemia (CLI), is an advanced state of disease in arteries that can lead to severe leg pain even at rest, wounds that don't heal and a very poor quality of life. Without proper treatment, CLI can lead to amputation, which further decreases mobility for daily living and severely impacts quality of life. Patients with CLI are also at high risk to have a heart attack or stroke.

The improvements in patient outcomes paralleled an increase in the number of veterans with CLI who underwent procedures to restore blood flow, either via surgical bypass or a less-invasive endovascular procedure to insert a stent to hold the artery open. These revascularization procedures can be effective in alleviating pain, improving wound healing and avoiding amputation.

In this retrospective analysis, researchers found overall positive trends among veterans treated for CLI at Veterans Affairs (VA) medical centers, yet there were potential areas for improvement. Many veterans were not taking recommended statin medications, and almost half of those who underwent amputation did not first receive a procedure to try to restore blood flow to the impacted limb.

"All patients with CLI should be evaluated to determine if they could benefit from a procedure to restore blood flow," said Saket Girotra, M.D., S.M., senior author of the study and assistant professor of cardiovascular medicine at the University of Iowa Carver College of Medicine in Iowa City. "In addition, patients with CLI should be aggressively treated with medications, including statins, blood pressure medications if they are hypertensive, and drugs to reduce platelet stickiness in order to reduce the risk of heart attack and stroke."

To identify trends in treatments and outcomes, the researchers examined nationwide data from all VA facilities for nearly 21,000 patients (average age 68) who were hospitalized for CLI between 2005 and 2014. Researchers found:

Mortality decreased from 12% to 10% (after adjusting for other risk factors);

Amputation decreased from 20% to 13% (after adjusting for other risk factors);

Patients who underwent procedures to restore blood flow were 55% less likely to die and 77% less likely to undergo amputation; and

There were sharp differences among VA hospitals in the proportion of patients receiving revascularization procedures, ranging from 13% to 53%, with little of the variation easily explained by differences in patients.

Although the researchers had access to information on patients' other medical conditions that might influence the decision to undergo an invasive procedure, they did not have information on the extent of disease in the arteries.

"The disease may have been too advanced in some patients, making surgery or stenting not feasible; in those cases, the only option was an amputation to limit the spread of infection and gangrene. A more in-depth study is needed to determine if revascularization was not offered to some patients who may have benefited," Girotra said.

Because the study was conducted in a predominantly male, veteran population, the results may not be generalizable to female veterans, or non-VA healthcare settings in the general population.

Credit: 
American Heart Association

Making the internet more energy efficient through systemic optimization

image: Through optimizing the system, researchers from Chalmers University of Technology, Sweden, have helped create a model for a more energy efficient internet system. Algorithms for managing data center traffic, smart, error-correcting data chips, and optical frequency combs can all contribute to reducing energy consumption.

Image: 
Yen Strandqvist/Chalmers

 

Researchers at Chalmers University of Technology, Sweden, recently completed a 5-year research project looking at how to make fibre optic communications systems more energy efficient. Among their proposals are smart, error-correcting data chip circuits, which they refined to consume one-tenth the energy of conventional designs. The project has yielded several scientific articles, in publications including Nature Communications.

Streaming films and music, scrolling through social media, and using cloud-based storage services are everyday activities now. But to accommodate this digital lifestyle, a huge amount of data needs to be transmitted through fibre optic cables - and that amount is increasing at an almost unimaginable rate, consuming an enormous amount of electricity. This is completely unsustainable - at the current rate of increase, if no energy efficiency gains were made, within ten years the internet alone would consume more electricity than is currently generated worldwide. Electricity production cannot be increased at the same rate without massively increasing the usage of fossil fuels for electricity generation, in turn leading to a significant increase in carbon dioxide emissions.

"The challenge lies in meeting that inevitable demand for capacity and performance, while keeping costs at a reasonable level and minimising the environmental impacts," says Peter Andrekson, Professor of Photonics at the Department of Microtechnology and Nanoscience at Chalmers.

Peter Andrekson was the leader of the 5-year research project 'Energy-efficient optical fibre communication', which has contributed significant advances to the field.

In the early phase of the project, the Chalmers researchers identified the biggest energy drains in today's fibre optic systems. With this knowledge, they then designed and built a concept system for data transmission that consumes as little energy as possible. Optimising the system's components against one another results in significant energy savings.

Currently, some of the most energy-intensive components are error-correction data chips, which are used in optical systems to compensate for noise and interference. The Chalmers researchers have now succeeded in designing these data chips with optimised circuits.

"Our measurements show that the energy consumption of our refined chips is around 10 times less than conventional error-correcting chips," says Per Larsson-Edefors, Professor in Computer Engineering at the Department of Computer Science and Engineering at Chalmers.

At a systemic level, the researchers also demonstrated the advantages of using 'optical frequency combs' instead of having separate laser transmitters for each frequency channel. An optical frequency comb emits light at all wavelengths simultaneously, making the transmitter very frequency-stable. This makes reception of the signals much easier - and thus more energy efficient.

Energy savings can also be made through controlling fibre optic communications at the network level. By mathematically modelling the energy consumption in different network resources, data traffic can be controlled and directed so that the resources are utilised optimally. This is especially valuable if traffic varies over time, as is the case in most networks. For this, the researchers developed an optimisation algorithm which can reduce network energy consumption by up to 70%.

The recipe for these successes has been the broad approach of the project, with scientists from three different research areas collaborating to find the most energy-saving overall solution possible, without sacrificing system performance.

These research breakthroughs offer great potential for making the internet of the future considerably more energy-efficient. Several scientific articles have been published in the three research disciplines of optical hardware, electronics systems and communication networks.

"Improving the energy efficiency of data transmission requires multidisciplinary competence. The challenges lie at the meeting points between optical hardware, communications science, electronic engineering and more. That's why this project has been so successful," says Erik Agrell, Professor in Communications Systems at the Department of Electrical Engineering at Chalmers.

More on the research

The 5-year research project 'Energy-efficient optical fibre communication' ran from 2014 to 2019 and was financed by the Knut and Alice Wallenberg Foundation. The research could have huge potential to make future internet usage significantly more energy efficient. It has resulted in several research publications within the three scientific disciplines of optical hardware, electronics systems and communications networks, including the following three:

Energy-Efficient High-Throughput VLSI Architectures for Product-Like Codes in the Journal of Lightwave Technology

Phase-coherent lightwave communications with frequency combs, in the journal Nature Communications

Joint power-efficient traffic shaping and service provisioning for metro elastic optical networks, in the journal IEEE/OSA Journal of Optical Communications and Networking

Some more information on the smart, error-correcting data chips, or integrated circuits:

The data chips, or integrated circuits, were designed by Chalmers and manufactured in Grenoble, France. The Chalmers researchers subsequently verified the chips' performance and measured the energy usage, which was just a tenth of that of current error-correcting chips. At a data transfer speed of 1 terabit per second (1 terabit = 1 trillion bits), the Chalmers error-correcting designs have been shown to draw around 2 picojoules (1 picojoule = 1 trillionth of a joule) per bit. This equates to a power consumption of 2 watts at this data rate. Comparatively, the energy usage of current chips at such high transfer speeds is around 50 picojoules per bit, or around 50 watts.
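The power figures above follow directly from unit arithmetic: energy per bit times bit rate gives power, and the 10^-12 (pico) and 10^12 (tera) prefixes cancel exactly. A minimal sketch (function name is ours, for illustration):

```python
def power_watts(energy_pj_per_bit, bitrate_tbit_per_s):
    # (1e-12 J/bit) * (1e12 bit/s) = 1 J/s = 1 W, so the SI prefixes
    # cancel and the product is already in watts.
    return energy_pj_per_bit * bitrate_tbit_per_s

power_watts(2, 1)   # Chalmers chip: 2 pJ/bit at 1 Tbit/s -> 2 W
power_watts(50, 1)  # conventional chips: ~50 pJ/bit -> ~50 W
```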

Credit: 
Chalmers University of Technology

Hydropower dams cool rivers in the Mekong River basin, satellites show

image: Using 30 years of satellite data to track changes in surface water temperature for the Sekong, Sesan and Srepok rivers, University of Washington researchers discovered that within one year of the opening of a major dam, below-dam river temperatures during the dry season dropped by up to 3.6 degrees Fahrenheit (2 degrees C). Interactive map available here: https://public.tableau.com/views/MekongRiverdams/Mapwithminimap?:embed=y...

Image: 
Rebecca Gourley/University of Washington

Hydropower dams, which use flowing water to turn a series of turbines to generate electricity, provide a source of energy that doesn't rely on fossil fuels. But they also disrupt the flow of rivers, affecting the fish and the people who live along them.

Scientists have been monitoring many environmental effects of dams, including how they affect a river's temperature -- and could potentially threaten the fish downstream.

Researchers at the University of Washington were interested in studying how several hydropower dams affected the temperature of three major rivers in Southeast Asia's Mekong River basin. Since 2001, each river has seen the construction of at least one major dam, with more planned. The three rivers converge into the Mekong River, which people rely on for fish and irrigation for rice and other crops.

Using 30 years of satellite data, the team discovered that within one year of the opening of a major dam, downstream river temperatures during the dry season dropped by up to 3.6 degrees F (2 degrees C). The cooling persisted where the rivers meet the Mekong River, which showed, at most, a 1.4 F (0.8 C) cooling. The researchers published their findings Feb. 13 in the journal Environmental Research Letters. The team is also speaking about related research Feb. 15 at the American Association for the Advancement of Science annual meeting in Seattle.
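The paired Fahrenheit and Celsius figures quoted throughout are temperature differences, which convert with the 9/5 scale factor alone; the 32-degree offset applies only to absolute temperature readings. A minimal sketch (function name is ours, for illustration):

```python
def delta_c_to_delta_f(delta_c):
    """Convert a temperature *difference* from Celsius to Fahrenheit.

    Differences use only the 9/5 scale factor; the +32 offset applies
    only to absolute temperatures, not to changes.
    """
    return delta_c * 9 / 5

delta_c_to_delta_f(2.0)  # 3.6 F, the largest below-dam drop reported
delta_c_to_delta_f(0.8)  # ~1.4 F, the cooling seen on the Mekong itself
```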

"People have modeled how far they could see a cooling effect after a hydropower dam goes in. In the U.S., that cooling tends to be localized around the dam. But what we see in the Mekong is like, 'Wow!'" said senior author Faisal Hossain, a civil and environmental engineering professor at the UW. "Everything has happened very dramatically in the last 20 years. Lots and lots of dams were just suddenly coming on, left and right. And now we can see this cooling effect that is no longer localized, but continuing into the river system. We've never seen anything like it, to the best of our knowledge."

The researchers used Landsat satellites to track changes in surface water temperature for the Sekong, Sesan and Srepok rivers. The satellites capture the heat, or infrared radiation, from the rivers.

"With these data, we're looking at the temperature emissions from the rivers. It's like night vision: Warmer things give off more emissions, colder things give off less," said lead author Matthew Bonnema, a postdoctoral researcher at NASA's Jet Propulsion Laboratory, who completed this research as a UW doctoral student in civil and environmental engineering. "These satellites have been predominantly used over land, not water, because you need to be looking at a big enough area. But there's almost 40 years of Landsat data that works great for large rivers that people are only recently starting to take advantage of."

Using satellite data to monitor river temperature has a caveat: clouds block the satellites' view of the Earth. So the team could only monitor changes during the region's dry season. Still, the researchers were able to detect decreases in river temperature within a year after major dams on all three rivers came online.

During the dry season of 2001, the Sesan River had a 1.8 F (1 C) temperature drop, which corresponded with the completion of the Yali dam. Then, between 2008 and 2009, the temperature dropped by another 3.6 F (2 C) after two more dams -- the Sesan 4 and the Plei Krong -- were completed.

Similarly, in 2009, the Srepok River cooled by 2.5 F (1.4 C) in the dry season after a network of four dams came online.

And in 2015, the Sekong River temperature dropped 1.3 F (0.7 C) the year after the Xe Kaman dam was completed on the Xe Kaman River, a tributary to the Sekong.

These rivers also had sensors that monitored river temperature year-round between 2004 and 2011. Before 2009, all three rivers had a similar temperature pattern: The water started to warm up at the beginning of the dry season, around November or October, and then cooled off once the wet season started in April or May.

But after 2009, the Sesan and the Srepok rivers, which had major dams built during that time, stayed cool year-round.

"At the beginning of the wet season, the dams start to have more water than they can store, so they're letting it go in a controlled way," Bonnema said. "As the wet season goes on they're like, 'OK, let's fill up the reservoir' and hold the water. Then when dry season comes, they have this big water supply that they let out over the course of the dry season.

"If you look at the river flows after a dam goes in, you end up with more water in the dry season and less water in the wet season than before. The dry-season water also happens to be colder because it's pulled from deep within the reservoir. That brings the river temperature down closer to what it is in the wet season."

The team investigated whether anything else might be driving these temperature drops, such as air temperature, precipitation or land use in the surrounding region. Precipitation stayed mostly the same over the 30-year period. The air temperature showed a slight warming trend. The land around the rivers had been deforested during that period, but researchers said that is often linked to water warming, not cooling. That points to the role of the dams.

The Sekong, Sesan and Srepok rivers combine into one river, which eventually enters the Mekong River, a central feature of the Southeast Asian ecosystem. The team found that this infusion once warmed the Mekong so that the river was, at most, 0.72 F (0.4 C) warmer downstream of the confluence than it was upstream. But after 2001, the trend reversed, with the rivers now slightly cooling the Mekong River. The river is now up to 1.4 F (0.8 C) cooler -- not warmer -- downstream of the confluence.

The cooler water could have an effect on the fish that live downstream, the researchers said.

"They're going to keep building these dams," Bonnema said. "If you look at where new dams are planned in the 3S Basin, they're building closer and closer to the Mekong. These are also big dams, which means the impacts on the Mekong will likely be more significant -- these temperature changes are going to get more dramatic. So the question is how do we work with these dams to minimize their effect? My recommendation is that we slow down and think things through."

Credit: 
University of Washington

Mechanism of controlling autophagy by liquid-liquid phase separation revealed

image: Model of PAS organization via liquid-liquid phase separation
Under nutrient-rich conditions, Atg proteins are dispersed and mixed with various proteins in the cytoplasm. Upon starvation, Atg13 is dephosphorylated, which triggers the phase separation of Atg13 with other Atg proteins to form a liquid droplet on the vacuolar membrane, where autophagosome formation proceeds.

Image: 
Institute of Microbial Chemistry

Under JST's Strategic Basic Research Programs, Noda Nobuo (Laboratory Head) and Fujioka Yuko (Senior Researcher) of the Institute of Microbial Chemistry, in collaboration with other researchers, discovered that a liquid-like condensate (a liquid droplet) in which Atg proteins cluster through liquid-liquid phase separation is the structure responsible for the progression of autophagy.

Autophagy is one of the mechanisms through which cellular proteins are degraded. Previously, it was known that Atg proteins assemble to form a structure called the PAS. However, the mechanism through which Atg proteins assemble and the physicochemical properties of the resulting structure had been unclear.

The research team elucidated characteristics of PAS through observing the Atg protein using a fluorescence microscope and successfully reconstituted PAS in vitro. The team revealed, for the first time, that PAS is in the state of liquid droplets formed by liquid-liquid phase separation of Atg13 together with other Atg proteins and that this liquid droplet is responsible for autophagy.

The finding that liquid-liquid phase separation directly controls autophagy suggests its involvement in a wide range of intracellular phenomena and is expected to prompt reconsideration of the molecular mechanisms underlying them. Moreover, the work may guide the development of agents that specifically control autophagy in autophagy-related diseases by targeting the regulation of liquid-liquid phase separation.

Credit: 
Japan Science and Technology Agency

When frogs die off, snake diversity plummets

image: Frogs and their eggs are an important source of nutrition for many snakes. This tiny blunt-headed tree snake (Imantodes) snags a meal of frog eggs in the Panamanian forest.

Image: 
Karen Warkentin

Since 1998, scientists have documented the global loss of amphibians. More than 500 amphibian species have declined in numbers, including 90 that have gone extinct, due to the fungal pathogen Batrachochytrium, commonly known as chytrid.

A new study by researchers from the University of Maryland and Michigan State University shows, for the first time, the ripple effects of amphibian losses on snakes. The results, published in the February 14, 2020, issue of the journal Science, reveal that after chytrid swept through a remote forest in Panama, decimating frog populations, the number of snake species scientists detected declined dramatically, causing the snake community to become more homogenized.

"This study highlights the invisibility of other changes that are occurring as a result of losing amphibians," said Karen Lips, a professor of biology at UMD and a co-author of the study.

Many snakes rely on frogs and frog eggs as part of their diet, so the researchers expected a decline in frogs to impact snake populations. But the slithery reptiles are notoriously cryptic and difficult to study in the wild. How snakes fare following a chytrid epidemic was mostly a matter of conjecture before this study.

Lips and her colleagues compared seven years of survey data collected in a national park near El Copé, Panama, before the 2004 chytrid outbreak caused mass amphibian die-off, with six years of survey data collected after the die-off.

"Comparing the after with the before, there was a huge shift in the snake community," Lips said. "The community became more homogeneous. The number of species declined, with many species going down in their occurrence rates, while a few species increased. Body condition of many snakes was also worse right after the frog decline. Many were thinner, and it looked like they were starving."

The researchers cannot say exactly how many snake species declined because snake sightings are rare in general. Some species were only seen once in the pre-chytrid surveys. The researchers could not confirm that a species had disappeared just because it was absent in the post-chytrid surveys. However, over half of the most common snakes (those observed more than five times throughout the total study) had declined in occurrence rates after the frog die-off. Further statistical analysis of the data confirmed a considerable drop in species diversity.

Researchers are confident the changes they observed in the snake community were due to the loss of amphibians and not some other environmental factor. The study area is in a national park with limited impacts from habitat loss, development, pollution or other phenomena that might affect snake populations directly. The remoteness of the El Copé research site and the fact that Lips had been conducting annual surveys there in the years prior to the chytrid epidemic combined to provide a rare window into the rapid changes in an ecosystem following the catastrophic loss of amphibians.

"This work emphasizes the importance of long-term studies to our understanding of the invisible, cascading effects of species extinctions," Lips said. "Everything we watched changed after the frogs declined. We have to know what we are losing, or we run the risk of undermining effective conservation."

Credit: 
University of Maryland

Oceans: particle fragmentation plays a major role in carbon sequestration

image: A BGC-Argo profiling float equipped with biological and chemical sensors, which can take measurements between the surface of the ocean and a depth of 2,000 metres.

Image: 
D. Luquet, IMEV

A French-British team led by the Laboratoire d'océanographie de Villefranche-sur-Mer (CNRS/Sorbonne Université) has discovered that a little-known process regulates the capacity of oceans to sequester carbon dioxide (CO2). Photosynthesis by phytoplankton at the ocean's surface transforms atmospheric CO2 into organic particles, some of which later sink into the depths. This essential mechanism sequesters part of oceanic carbon. However, approximately 70% of this particle flux is lost between depths of 100 and 1,000 metres. Earlier studies had shown that small animals consume half of it, but no measurements explained what happened to the other half. Using a fleet of robots deployed in different oceans, the scientists revealed that approximately 35% of this flux is fragmented into smaller particles. The results were published on 14 February 2020 in Science.

Credit: 
CNRS

E-cigarette use among teens may be higher than previously thought, study finds

Juul, the popular e-cigarette brand that is being sued for fueling the youth e-cigarette epidemic, may have influenced high school students' perception of vaping such that some Juul users do not consider themselves e-cigarette users, a Rutgers-led study finds.

The ubiquity of the term "Juuling" has created challenges for measuring e-cigarette use, so in a 2018 tobacco-focused survey of 4,183 public high school students in New Jersey, researchers added Juul-specific questions to assess e-cigarette use and found that high school students reported higher use when Juul was included in the measure. In some cases, the addition of the Juul-specific question resulted in dramatic increases in youth e-cigarette estimates, particularly for female students and black students. For example, e-cigarette prevalence nearly doubled among black students when Juul use was included.

The study, published in JAMA Network Open, suggests that health officials might be underestimating the prevalence of teen e-cigarette use.

"We've suspected that the brand Juul contributed to the increase of e-cigarette use among teens, but I think we were surprised at the extent of the brand's popularity among young people," said Mary Hrywna, an assistant professor at the Center for Tobacco Studies and the Rutgers School of Public Health, who co-authored the study with Michelle B. Manderski, also from the Center and School of Public Health, and Cristine Delnevo, director of the Rutgers Center for Tobacco Studies. Hrywna added that "almost half of current e-cigarette users said Juul was the first e-cigarette product they tried, and more than half of the high school students reported seeing people use Juul on school grounds."

Researchers found that current and frequent e-cigarette use was highest among 12th graders; in fact, one in ten high school seniors reported using e-cigarettes on 20 or more days in the 30 days preceding the survey.

"This pattern of heavy use is consistent with nicotine addiction," Delnevo said. "It is, however, not surprising given the high nicotine delivery of Juul."

"We need to think more carefully about how future questions are constructed when assessing e-cigarette use among teens," Hrywna said. "Policymakers must understand how certain brands have driven e-cigarette use and carve out policies that address restrictions by age and location, as well as the high nicotine concentrations in these products, if we hope to reduce these prevalence rates."

Credit: 
Rutgers University

New potential cause of Minamata mercury poisoning identified

SASKATOON - One of the world's most horrific environmental disasters--the mercury poisoning in Minamata, Japan, during the 1950s and 1960s--may have been caused by a previously unstudied form of mercury discharged directly from a chemical factory, research by the University of Saskatchewan (USask) has found.

"By using state-of-the-art techniques to re-investigate a historic animal brain tissue sample, our research helps to shed new light on this tragic mass poisoning," said USask professor Ingrid Pickering, Canada Research Chair in Molecular Environmental Science. "Mercury persists for a long time in nature and travels long distances. Our research helps with understanding how mercury acts in the environment and how it affects people."

The study examining which mercury species could be responsible for the Minamata poisoning was published Feb. 12 in the journal Environmental Science & Technology. It is expected to prompt a wider re-assessment of the mercury species responsible not only for the Minamata tragedy but perhaps also for other organic mercury poisoning incidents, such as the one in Grassy Narrows, Ontario.

Mercury-containing industrial waste from the Chisso Corporation's chemical factory was dumped into Minamata Bay until 1968. Thousands of people who ingested the mercury by eating local fish and shellfish died, and many more displayed symptoms of mercury poisoning, including convulsions and paralysis.

"Something that was unknown at that time was that unborn children would also suffer the devastating effects of mercury poisoning, with many being born with severe neurological conditions," said USask PhD toxicology student Ashley James, the first author of the paper. "A mother may be essentially unaffected by the poisoning because the mercury within her body was absorbed by the unborn child."

The Minamata poisoning has been considered a textbook example of how inorganic mercury turns into organic mercury, and how a toxic substance propagates up the food chain to humans. For decades, it has been assumed that micro-organisms in the muds and sediments of Minamata Bay had converted the toxic inorganic mercury from the factory wastewater into a much more lethal organic form called methyl mercury, which targets the brain and other nervous tissue. This compound was thought to spread to humans from eating contaminated seafood.

Recent studies have suggested that methyl mercury itself may have been discharged directly from the Minamata plant.

But USask research--involving 60-year-old Minamata feline tissue samples--has found these assumptions may be misplaced.

Using a new type of spectroscopy and sophisticated computational methods, the USask researchers have found that the cat brain tissue contained predominantly organic mercury, contradicting previous findings and assumptions. The team's computer modelling was also able to predict which kinds of mercury waste compounds the chemical plant would be likely to produce.

"The most probable neurotoxic chemical form of mercury discharged from the factory was neither methyl mercury nor inorganic mercury," said Graham George, Canada Research Chair in X-ray Absorption Spectroscopy and an expert in spectroscopy of toxic heavy elements at USask's Toxicology Centre and geological sciences department.

"We think that it was caused by an entirely different type of organic mercury discharged directly from the Chisso factory at Minamata in an already deadly chemical form."

The cat brain samples from the USask study come from an experiment conducted by the Chisso company doctor in 1959 to determine the causes of the sickness, which was not at first connected to the industrial dumping. The doctor fed cats the industrial waste and they soon showed symptoms similar to the sick villagers. While the doctor was ordered to stop his experiments, he kept samples of brain tissue from one of the cats.

The USask team has found that the likely culprit of the poisoning is alpha-mercuri-acetaldehyde, a previously unidentified mercury waste product of acetaldehyde production.

"It was this species that very likely contaminated Minamata Bay and subsequently gave rise to the tragedy of Minamata disease. We think that this was the dominant mercury species in the acetaldehyde plant waste. More work is needed to explore the molecular toxicology of these compounds, to understand the ways they could be toxic to humans, animals and the environment," said George.

The 12-member research team included researchers from USask, Stanford Synchrotron Radiation Lightsource at the SLAC National Accelerator Laboratory, Japanese National Institute for Minamata Disease, and the environmental medicine department of the University of Rochester.

While USask is home to the Canadian Light Source synchrotron, there are only two synchrotrons in the world set up with the specialized equipment needed for the advanced work that the team does with these precious samples--one in Grenoble, France and the other at Stanford.

The USask research was funded by the Natural Sciences and Engineering Research Council, the Canadian Institutes of Health Research, and the Canada Foundation for Innovation.

The new findings coincide with renewed public interest in the tragedy due to the much-anticipated premiere on Feb. 21st at the Berlin International Film Festival of a new movie "Minamata" which stars Johnny Depp as photojournalist W. Eugene Smith whose work publicized the devastating effects of the mercury poisoning.

Credit: 
University of Saskatchewan

Autophagy genes act as tumor suppressors in ovarian cancer

image: Joseph Delaney, Ph.D., collaborated with MUSC researchers and others to examine the role of autophagy genes in ovarian cancer.

Image: 
PHOTO BY VAGNEY BRADLEY OF MUSC HOLLINGS CANCER CENTER

Shedding light on a decades-old controversy, scientists at the Medical University of South Carolina (MUSC) and University of California at San Diego (UCSD) published findings in PLOS Genetics this month showing that autophagy or "self-eating" genes work against tumors in certain types of ovarian cancer.

Autophagy is a cellular recycling pathway that scientists believe plays a role in cancer resistance to stresses such as chemotherapy. The scientists found that autophagy genes also act to prevent tumor formation. This finding, validated in mice, helps to resolve a decades-old controversy based mostly on cells grown in plastic dishes, said Joe Delaney, Ph.D., a researcher at Hollings Cancer Center, who was the MUSC faculty lead on the study. Delaney teamed up with colleague Dwayne Stupack, Ph.D., in the division of gynecologic oncology at UCSD.

Delaney, who uses bioinformatic studies and wet-lab research, studies how aneuploidy contributes to disease, particularly in ovarian cancer. Aneuploidy is the presence of an abnormal number of chromosomes in a cell, which leads to diseases like cancer or conditions like Down syndrome. High-grade serous carcinoma is the most malignant form of ovarian cancer and accounts for up to 70% of all ovarian cancer cases.

The research centered on two autophagy genes: BECN1 and LC3B. Every person has two copies of these genes: one coming from each of their parents. One copy of BECN1 is lost in one of four breast cancers and three of five serous ovarian cancers. This is unusually frequent for a tumor-suppressor gene. BECN1 has been controversial, since it is rarely mutated, and even when it is lost, it is often lost with its neighbor, BRCA1. Some scientists believe that BECN1 deletion is a coincidental event, Delaney said.

However, previous studies in cell culture hinted at tumor-initiating characteristics when BECN1 was suppressed, prompting the current study.

Stupack said, "There were strong messages regarding the role of BECN1 in ovarian cancer that hinted that the situation was more complicated. We were working then with a mouse model of ovarian cancer that we could modify at the genetic level. It was time to put the question to rest and measure for tumor suppression directly."

Delaney and Stupack performed the same partial deletion found in patients, one of two BECN1 alleles in mammals, in this mouse model. Delaney said the results were immediate and conclusive. The very first BECN1-suppressed mouse enrolled in the study had a sizeable tumor at 3 months of age. "Tumors with the model would not normally form until 4 to 6 months. We knew we had discovered something right away," he said.

Years of research then went into the investigation of why BECN1 single-allele deletion acted as a tumor suppressor. Using the mouse tumors and human ovarian cancer cells within the lab, the team of 15 researchers tested multiple phenotypes to determine why autophagy gene deletion leads to cancer.

The authors suspected cellular metabolism to be one of the strongest effects following the disruption of this recycling pathway. Ultra-performance liquid chromatography mass-spectrometry metabolomics was performed on human ovarian cancer cells with reduced autophagy genes. Surprisingly, metabolism was largely unchanged. Instead, ovarian cancer cells seemed to delete BECN1 to increase the tumor's ability to evolve.

"Cancer has always been hard to fight because of its ability to evolve in response to treatment. It was clear that removing these genes increased the rate of genetic evolution. Worse yet, many of these changes were random, allowing for the cancer to potentially evolve many different types of resistance," Delaney said.

While this tumor-suppressor role of autophagy genes is frightening, given how many patients have lost an allele of either autophagy gene, the tumor's reliance on low autophagy may actually enable the next generation of treatment for ovarian cancer, he said.

Stupack agreed. "We knew for a long time that autophagy was very different in ovarian cancer. Normal cells in the fallopian tube and on the ovary express high levels of autophagy proteins. Yet dozens of autophagy genes are found to be deleted in a single tumor. This lack of expression helps the tumor form but may also create a weakness for new drugs to target," Stupack said.

"Before this tumor-suppression study, we had already shown that further clogging the cells' recycling system killed more advanced forms of serous ovarian cancer. Now we understand why the tumors would have this vulnerability in the first place."

In their prior study, the authors found that autophagy drugs killed the most difficult form of ovarian cancer. The authors used tumor cells from a cancer patient that were grown in mice.

"Chloroquine, which targets autophagy, has been tried in cancer trials. Ironically, it was used in tumors with very high autophagy levels, the exact opposite of ovarian cancer. We think that, in combination with other autophagy disruptors, chloroquine may have its best effect in a serous ovarian cancer trial because these cells are already compromised. This has yet to be attempted," Delaney said.

Delaney and Stupack both added that the PLOS Genetics publication provides further rationale for why autophagy-targeting drugs should be tested in clinical trials. Nonprofit donors looking for new cures were essential to the completion of the autophagy studies, including Nine Girls Ask. Future studies to test preclinical models of these autophagy drugs will be funded in part by the Sheryl Prisby Research Scholarship and funds generated through the bike ride and fundraiser LOWVELO at Hollings Cancer Center.

Given that multiple autophagy gene losses are present in nine of 10 serous ovarian cancer patients, Delaney said he hopes to see more research funded. Ovarian cancer ranks fifth in cancer deaths among women, accounting for more deaths than any other cancer of the female reproductive system.

"Clinical trials are expensive, and these drugs are not profit-generating because they are inexpensive and usually off-patent. We will continue to work to convince the cancer research community that autophagy disruption is a worthwhile strategy. We hope to save lives."

Credit: 
Medical University of South Carolina

University of Notre Dame-developed home lead screening kits shown to be highly accurate

image: Testing for lead

Image: 
Photo by Barbara Johnston/University of Notre Dame.

An inexpensive lead sample collection kit distributed to homes in St. Joseph County is comparable in accuracy and sensitivity to more costly in-home analysis, according to research published this month in the Journal of Environmental Research.

The Notre Dame Lead Innovation Team (ND LIT), which carried out the research, was formed in 2016 to uncover hidden lead threats in homes before children are poisoned by their environments. The team is ready to take the next step toward distributing the screening kits for eventual nationwide use.

"Folks don't have an inexpensive or quick method to test for lead," said Heidi Beidinger-Burnett, assistant professor in the Department of Biological Sciences. "You have to call the health department, and it takes weeks to complete the entire process. Or if you have some money, it's $200 to $300 to have a private risk assessor come out to your house."

With the kit developed at Notre Dame -- which costs about $10 to manufacture -- people can have results within a week and are given do-it-yourself strategies to mitigate lead risks in their homes, Beidinger-Burnett said. The kit contains tools to collect samples from paint, soil and dust, and was distributed to 45 households during the summer of 2018 to screen homes for lead exposure risks.

Researchers observed how homeowners used the kit to collect three soil samples, two paint samples and three dust samples from inside and around their homes, according to the study. The team analyzed data in the laboratory of Marya Lieberman, professor in the Department of Chemistry and Biochemistry, using a portable X-ray fluorescence (XRF) spectrometer and compared those results with results from in-home XRF analysis.

The study showed the kits to be accurate about 96 percent of the time. Results also showed that the kit was both sensitive and specific, meaning that it identified samples that were above EPA thresholds as leaded, and didn't "cry wolf" by identifying non-leaded samples as leaded.
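The sensitivity/specificity framing above can be made concrete with a small calculation: sensitivity is the fraction of truly leaded (above-threshold) samples the kit flags, and specificity is the fraction of non-leaded samples it correctly passes. The confusion counts below are invented for illustration only; they are not the study's data.

```python
# Minimal sketch of the sensitivity/specificity metrics described above,
# using invented (hypothetical) kit-vs-reference confusion counts.

def sensitivity(tp, fn):
    """True-positive rate: above-threshold samples correctly flagged as leaded."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: below-threshold samples correctly passed."""
    return tn / (tn + fp)

# Hypothetical counts: 100 samples screened against a reference method.
tp, fn = 46, 2   # leaded samples flagged / missed by the kit
tn, fp = 50, 2   # non-leaded samples passed / falsely flagged ("cry wolf")

print(round(sensitivity(tp, fn), 2))  # 0.96
print(round(specificity(tn, fp), 2))  # 0.96
```

A kit can only avoid both missed hazards and false alarms when both numbers are high; accuracy alone (correct calls out of all calls) can hide a poor score on one side.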

Paint chips were the only samples in which the lead testing kit was less sensitive than the in-home analysis. Because samples were collected using a card with double-sided tape, the tape sometimes retrieved only a small sample, or an oddly shaped paint chip, that didn't encompass the whole beam area of the lab's instrument. The team has since solved the problem by reporting the result as an "insufficient sample" if the paint chip is not large enough.

"The benefits to this country -- and globally -- of an inexpensive lead screening kit are huge," said Meghanne Tighe, lead author on the study and a third-year graduate student in the Lieberman and Peaslee laboratories.

In addition to Lieberman and Beidinger-Burnett, who are both affiliated with the Eck Institute for Global Health, and several graduate students, other members of ND LIT include Graham Peaslee, professor in the Department of Physics, Matthew Sisk, assistant librarian and Geographic Information Systems specialist for Hesburgh Libraries and Chris Knaub, Project Manager.

With plans to scale up the number of kits in use, the team is seeking manufacturing partners and exploring distribution methods. The researchers have built an inexpensive automated system that can analyze the samples. The automated XRF system, which can run hundreds of samples per day, can be replicated and installed in labs throughout the country, Tighe said.

Credit: 
University of Notre Dame

How a tiny and strange marine animal produces unlimited eggs and sperm over its lifetime

image: Piwi1-positive spermatogonia are shown in yellow; cell nuclei are in turquoise. Germ cell induction and all stages of gametogenesis can be visualized in these clonal animals.

Image: 
Timothy DuBuc, Ph.D. Swarthmore College

A little-known ocean-dwelling creature most commonly found growing on dead hermit crab shells may sound like an unlikely study subject for researchers, but this animal has a rare ability -- it can make eggs and sperm for the duration of its lifetime. This animal, called Hydractinia, does so because it produces germ cells, which are precursors to eggs and sperm, nonstop throughout its life. Studying this unique ability could provide insight into the development of the human reproductive system and the formation of reproductive-based conditions and diseases in humans.

"By sequencing and studying the genomes of simpler organisms that are easier to manipulate in the lab, we have been able to tease out important insights regarding the biology underlying germ cell fate determination -- knowledge that may ultimately help us better understand the processes underlying reproductive disorders in humans," said Dr. Andy Baxevanis, director of the National Human Genome Research Institute's (NHGRI) Computational Genomics Unit and co-author of the paper. NHGRI is part of the National Institutes of Health.

In a study published in the journal Science, collaborators at NHGRI, the National University of Ireland, Galway, and the Whitney Laboratory for Marine Bioscience at the University of Florida in St. Augustine reported that activation of the gene Tfap2 in adult stem cells in Hydractinia can turn those cells into germ cells in a cycle that can repeat endlessly.

In comparison, humans and most other mammals generate a specific number of germ cells only once in their lifetime. Therefore, for such species, eggs and sperm from the predetermined number of germ cells may be formed over a long period of time, but their amount is restricted. An international team of researchers has been studying Hydractinia's genome to understand how it comes by this special reproductive ability.

Hydractinia lives in colonies and is closely related to jellyfish and corals. Although Hydractinia is dissimilar to humans physiologically, its genome contains a surprisingly large number of genes that are like human disease genes, making it a useful animal model for studying questions related to human biology and health.

Hydractinia colonies possess feeding polyps and sexual polyps as a part of their anatomy. The specialized sexual polyps produce eggs and sperm, making them functionally similar to gonads in species like humans.

During human embryonic development, a small pool of germ cells that will eventually become gametes is set aside, and all sperm or eggs that humans produce during their lives are the descendants of those original few germ cells. Loss of these germ cells for any reason results in sterility, as humans do not have the ability to replenish their original pool of germ cells.

In a separate study, Dr. Baxevanis at NHGRI and Dr. Christine Schnitzler at the Whitney Lab have completed the first-ever sequencing of the Hydractinia genome. In this study, researchers used this information to scrutinize the organism's genome for clues as to why there are such marked differences in reproductive capacity between one of our most distant animal relatives and ourselves.

"Having this kind of high-quality, whole-genome sequence data in hand allowed us to quickly narrow down the search for the specific gene or genes that tell Hydractinia's stem cells to become germ cells," said Dr. Baxevanis.

The researchers compared the behavior of genes in the feeding and sexual structures of Hydractinia. They found that the Tfap2 gene was much more active in the sexual polyps than in the feeding polyps in both males and females. This was a clue that the gene might be important in generating germ cells.

The scientists next confirmed that Tfap2 was indeed the switch that controls the process of perpetual germ cell production. The researchers used the CRISPR-Cas9 gene-editing technique to remove Tfap2 from Hydractinia and measured the resulting effects on germ cell production. They found that removing Tfap2 from Hydractinia stops germ cells from forming, bolstering the theory that Tfap2 controls the process.

The researchers also wanted to know if Tfap2 was influencing specific cells to turn into germ cells. Their analysis revealed that Tfap2 only causes adult stem cells in Hydractinia to turn into germ cells.

Interestingly, the Tfap2 gene also regulates germ cell production in humans, in addition to its involvement in myriad other processes. However, in humans, the germ cells are separated from non-germ cells early in development. Still, despite the vast evolutionary distance between Hydractinia and humans, both share a key gene that changes stem cells into germ cells.

Credit: 
NIH/National Human Genome Research Institute

Forests bouncing back from beetles, but elk and deer slowing recovery

image: Trees killed by bark beetles remain standing in the southern Rocky Mountains.

Image: 
Robert Andrus

Two words, and a tiny little creature, strike fear in the hearts of many Colorado outdoor enthusiasts: bark beetle. But new research from University of Colorado Boulder reveals that even simultaneous bark beetle outbreaks are not a death sentence to the state's beloved forests.

The study, published this month in the journal Ecology, found that high-elevation forests in the southern Rocky Mountains actually have a good chance of recovery, even after overlapping outbreaks with different kinds of beetles. One thing that is slowing their recovery down: Foraging elk and deer.

"This is actually a bright point, at least for the next several decades," said Robert Andrus, lead author of the study and recent PhD graduate in physical geography. "Even though we had multiple bark beetle outbreaks, we found that 86 percent of the stands of trees that we surveyed are currently on a trajectory for recovery."

Between 2005 and 2017, a severe outbreak of spruce bark beetles swept through more than 741,000 acres of high-elevation forest in the southern Rocky Mountains near Wolf Creek Pass -- killing more than 90 percent of Engelmann spruce trees in many stands. At the same time, the western balsam bark beetle infested subalpine fir trees across almost 124,000 acres within the same area.

If you go skiing in Colorado, you're usually in a high-elevation, Engelmann spruce and subalpine fir forest, said Andrus.

The researchers wanted to know if these overlapping events, caused by two different types of bark beetles, would limit the ability of the forest to recover. So they measured more than 14,000 trees in 105 stands in the eastern San Juan Mountains, tallying the surviving species and the number of deaths. They had expected that the combined effects of two bark beetle outbreaks would prevent forest recovery, but they found that the forests were quite resilient.

That's an important contrast from what happens following a severe fire, which can cause forests to convert to grasslands, according to previous research by Thomas Veblen, coauthor of the study and Distinguished Professor of Geography.

"It's important that we perform these sorts of studies, because we need different management responses depending on the forest type and the kind of disturbance," said Veblen.

They also found that greater tree species diversity prior to the bark beetle outbreaks was a key component of resilient forests.

Tens of millions of acres across western North America have been affected by bark beetle outbreaks in the past two decades, and Colorado has not been spared. A severe mountain pine beetle outbreak began in 1996, easily visible along I-70 and in Rocky Mountain National Park. Since 2000, more than 1.8 million acres of Engelmann spruce statewide have been affected by spruce beetles in high-elevation forests.

With continued warming there will come a time where conditions caused by climate change exceed the forests' ability to recover, said Veblen.

Impacts of Ungulates

The study is the first to consider the effects of two different types of beetles that affect two different dominant tree species, as well as the effects of browsing elk and deer in the same area.

Bark beetles prefer bigger, mature trees with thicker bark, which offer more nutrients and better protection in the wintertime. They typically leave the younger, juvenile trees alone--allowing the next generation to recover and repopulate the forest.

But while in the field, researchers noticed many smaller trees were being munched on by elk and deer. Known as "ungulates," these animals like to nibble the top of young trees, which can stunt the trees' vertical growth. They found more than half of the tops of all smaller trees had been browsed.

That doesn't mean that those trees are going to die--ungulates are just more likely to slow the rate of forest recovery.

Avid Colorado skiers and mountaineers looking forward to typical, green forests, however, will have to be patient.

"We don't expect full forest recovery for decades," said Andrus.

Credit: 
University of Colorado at Boulder

Study: Bariatric surgeries can double peak blood alcohol levels, patients may be unaware

image: Some bariatric surgery patients don’t sense heightened blood alcohol levels, according to research led by professor of food science and human nutrition M. Yanina Pepino, left. Maria Belen Acevedo, a postdoctoral research associate in the department, was the first author of the study.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- A new study of 55 women found that two of the most popular forms of bariatric surgery - Roux-en-Y gastric bypass and laparoscopic sleeve gastrectomy - may dramatically change patients' sensitivity to and absorption of alcohol.

Some women's sensitivity to alcohol increased so much after bariatric surgery that the amount they could consume before feeling its effects was half of what it had been before surgery, while others' sensitivity decreased, researchers at the University of Illinois at Urbana-Champaign found.

After consuming an alcoholic beverage equivalent to two standard drinks, women who had undergone gastric bypass or sleeve gastrectomy surgery reached peak blood alcohol concentrations sooner, and those peaks were about twice as high - 50% above the .08% blood alcohol content that is the legal threshold for drunk driving in many states - compared with gastric band patients.

The findings were in line with previous studies that showed Roux-en-Y gastric bypass and sleeve gastrectomy cause a twofold increase in peak blood alcohol levels.

However, some women in the U. of I. study who reached this heightened peak were less sensitive to the effects of alcohol and reported almost no sedative effects from it, said M. Yanina Pepino, a professor of food science and human nutrition who led the study.

"About a third of women in the study felt almost no sedative effects, even when they reached peak blood alcohol concentrations that were comparable to those of women in the general population consuming four standard drinks," Pepino said. "People who have not had bariatric surgery and are less sensitive to the sedative or impairing effects of alcohol, and those who are more sensitive to its stimulant effects, are generally at greater risks for developing alcohol problems, even decades later."

The findings, which were based on the women's responses on several surveys about how alcohol affected them and analyses of their blood alcohol concentrations after drinking an alcoholic beverage, help shed light on why postoperative gastric bypass and sleeve gastrectomy patients may be at increased risks of developing alcohol problems after having weight-loss surgery.

The women in the study had undergone bariatric surgery at medical centers in Illinois and Missouri within the previous five years.

Of them, 16 had received Roux-en-Y gastric bypass surgery, which reduces the stomach to the size of an egg and reroutes the small intestine; 28 had undergone sleeve gastrectomy, which removes a majority of the stomach and reduces the remainder to a slender banana shape; and 11 had an adjustable gastric band placed around the top of the stomach to reduce it to a small pouch.

At the beginning of the study, participants filled out the Alcohol Sensitivity Questionnaire, which asked about the number of drinks they needed to consume to experience various effects, such as becoming more talkative or flirtatious, or experiencing hangovers.

"These and other effects such as feeling sedated can be signals to stop drinking, and being insensitive to them increases one's chances of consuming greater amounts of alcohol and the risk for an alcohol-use disorder," said Maria Belen Acevedo, a postdoctoral research associate at the U. of I. and the study's first author.

Participants completed the questionnaire twice - recalling how alcohol affected them before and after the surgery on separate questionnaires.

Of the women who completed the questionnaires, 45 also participated in tests that assessed their individual response to alcohol. The tests consisted of drinking a nonalcoholic juice beverage on one day and drinking the same beverage mixed with alcohol on another day, and having their blood alcohol concentrations measured at numerous points on both days.

The smell and flavor of the alcohol was masked so that participants could not tell if the drink contained it on their first sip. Before drinking either beverage and at several time points afterward, the participants completed surveys about any effects they were feeling, while the researchers collected multiple blood samples.

Screening post-bariatric surgery patients with the Alcohol Sensitivity Questionnaire could help identify people who might be at increased risk of alcohol problems after the surgery and enable clinicians to deliver more effective prevention programs for these patients, according to the researchers.

The study, currently in press with the journal Surgery for Obesity and Related Diseases, was supported by grants from the National Institutes of Health and the U.S. Department of Agriculture National Institute of Food and Agriculture's Hatch Project.

Credit: 
University of Illinois at Urbana-Champaign, News Bureau