Earth

New study solves mystery of salt buildup on bottom of Dead Sea

image: An aerial view of the Dead Sea. New research explains why salt crystals are piling up on the deepest parts of the Dead Sea's floor, a finding that could help scientists understand how large salt deposits formed in Earth's geologic past.

Image: 
NASA/Hubble

WASHINGTON--New research explains why salt crystals are piling up on the deepest parts of the Dead Sea's floor, a finding that could help scientists understand how large salt deposits formed in Earth's geologic past.

The Dead Sea, a salt lake bordered by Jordan, Israel and the West Bank, is nearly 10 times as salty as the ocean. Humans have visited the Dead Sea for thousands of years to experience its purported healing properties and to float in its extremely dense, buoyant waters, and mention of the sea goes back to biblical times.

Much of the freshwater feeding the Dead Sea has been diverted in recent decades, lowering the sea's water levels and making it saltier than before. Scientists first noticed in 1979, after this process had started, that salt crystals were precipitating out of the top layer of water, "snowing" down and piling up on the lakebed. The salt layer on the lake floor has been growing about 10 centimeters (4 inches) thicker every year.

The process driving this salt crystal "snow" and buildup of salt layers on the lakebed has puzzled scientists because it seems, at first glance, to defy the laws of physics. Now, a new study in AGU's journal Water Resources Research proposes that tiny disturbances in the lake, caused by waves or other motion, create "salt fingers" that slowly funnel salt down to the lakebed.

"Initially you form these tiny fingers that are too small to observe... but quickly they interact with each other as they move down, and form larger and larger structures," said Raphael Ouillon, a mechanical engineer at the University of California Santa Barbara and lead author of the new study.

"The initial fingers might only be a few millimeters or a couple of centimeters thick, but they're everywhere across the entire surface of the lake," said Eckart Meiburg, also a mechanical engineer at UC Santa Barbara and co-author of the new study. "Together these small fingers generate a tremendous amount of salt flux."

The new finding not only helps researchers better understand the physics of the Dead Sea, but also helps explain the formation of massive salt deposits found within Earth's crust.

The Dead Sea is the only hypersaline water body on Earth today where this salt fingering process is happening, so it represents a unique laboratory for researchers to study the mechanisms by which these thick salt deposits have formed, according to the authors.

"Altogether this makes the Dead Sea a unique system," said Nadav Lensky, a geologist with the Geological Survey of Israel and co-author of the new study. "Basically, we have here a new finding that we think is very relevant to the understanding of the arrangement of these basins that were so common in Earth's history."

A salty mystery

As the Dead Sea has become saltier in recent decades, much of that salt has become concentrated near its surface. During the summer, extra heat from the Sun warms the surface of the Dead Sea and divides it into two distinct layers: a warm top layer sitting atop a colder lower layer. As water evaporates from the top layer in the summer heat, it becomes saltier than the cooler layer below.

Researchers realized the salt snow they observed was originating in this top salty layer, but this warm water doesn't mix with the cooler water below because it's so much warmer and less dense. So they were puzzled as to how salt from the surface was entering the cooler layer and plummeting to the bottom of the lake.

Lensky and his colleagues proposed an explanation in 2016, and the new research tests this theory for the first time.

They propose that when the top layer of the lake is disturbed by waves or other motion, tiny parcels of warm water enter the cooler pool of water below. Heat diffuses more rapidly than salt, so this warm water parcel rapidly cools. But as it cools, it can hold less salt, so the salt precipitates out and forms crystals that sink to the bottom.
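This mechanism is a form of what fluid dynamicists call double-diffusive convection. As a rough illustration of the standard criterion involved (not the study's own model), the Python sketch below checks whether a warm, salty parcel sitting over colder, fresher water falls into the salt-fingering regime; all coefficients and parcel values here are assumed, illustrative numbers.

```python
# A minimal sketch of the "salt finger" criterion from double-diffusive
# convection, with illustrative values -- not the study's actual model.
# Fingering is possible when warm, salty water sits over colder, fresher
# water and the density ratio falls between 1 and kappa_T / kappa_S.

ALPHA = 2e-4      # thermal expansion coefficient (1/K), assumed
BETA = 8e-4       # haline contraction coefficient (kg/g), assumed
KAPPA_T = 1.4e-7  # molecular diffusivity of heat (m^2/s)
KAPPA_S = 1.4e-9  # molecular diffusivity of salt (m^2/s), ~100x slower

def in_fingering_regime(d_temp_k, d_salt_g_per_kg):
    """Return the density ratio and whether salt fingers can grow."""
    r_rho = (ALPHA * d_temp_k) / (BETA * d_salt_g_per_kg)
    return r_rho, 1.0 < r_rho < KAPPA_T / KAPPA_S

# Hypothetical parcel: 10 K warmer and 2 g/kg saltier than the layer below.
r_rho, fingers = in_fingering_regime(10.0, 2.0)
print(f"R_rho = {r_rho:.2f}, salt fingers possible: {fingers}")  # 1.25, True
```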

In the new study, researchers created a computer simulation of how water and salt would flow in the Dead Sea if the salt fingers theory were correct. They found the theory correctly predicted the downward flow of salt snow and the buildup of salt layers in the middle of the lake's floor. Because the level of the lake is declining, due to pumping of freshwater from the nearby Jordan River, the salt layers are concentrated in the central part of the lake, according to the authors.

Understanding salt deposits elsewhere

The new finding also helps explain the formation of massive salt deposits found within Earth's crust.

"We know that many places around the world have thick salt deposits in the Earth's crust, and these deposits can be up to a kilometer thick," Meiburg said. "But we're uncertain how these salt deposits were generated throughout geological history."

One notable example is the thick salt layer underneath the Mediterranean Sea. Researchers know that about six million years ago, the Strait of Gibraltar closed off because of the movements of Earth's tectonic plates. This cut off the supply of water from the Atlantic Ocean to the Mediterranean, creating a giant shallow inland sea.

After several hundred thousand years, the Mediterranean's water levels dropped so much that the sea partly or nearly dried out, leaving behind thick deposits of salt. The new finding suggests these deposits formed during this time in a similar manner to what is happening right now in the Dead Sea. When the Strait of Gibraltar opened up again, water flooded the basin and the salt deposits were buried under new layers of sediment, where they remain today.

Credit: 
American Geophysical Union

Two-degree climate goal attainable without early infrastructure retirement

image: If the world is to achieve the 1.5-degree Celsius goal, existing fossil-fuel-burning power plants and industrial equipment will need to be retired early unless they can be feasibly retrofitted with carbon capture and storage technologies or their emissions offset by negative emissions.

Image: 
Public domain

Washington, DC--If power plants, boilers, furnaces, vehicles, and other energy infrastructure are not marked for early retirement, the world will fail to meet the 1.5-degree Celsius climate-stabilizing goal set out by the Paris Agreement, but could still reach the 2-degree Celsius goal, says the latest paper from the ongoing collaboration between the University of California Irvine's Steven Davis and Carnegie's Ken Caldeira.

To achieve the objective of limiting warming to no greater than 2 degrees Celsius--or, more optimistically, to less than 1.5 degrees Celsius--it will be necessary to reach net-zero emissions by mid-century.

In this new paper, published in Nature with lead author Dan Tong of UCI, the team calculates that if used at the current rate until they age out of functionality, existing power plants and other fossil-fuel-burning equipment will release about 658 gigatons of carbon dioxide into the atmosphere--more than half of it from the electricity sector. China is predicted to produce the largest share--41 percent--with the United States and European Union contributing 9 percent and 7 percent, respectively.
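For a sense of scale, those percentages can be turned into absolute amounts with a one-line calculation; this is simple arithmetic on the figures quoted above, not additional data from the paper.

```python
# Back-of-the-envelope conversion of the quoted regional shares into
# absolute amounts, using the 658-gigaton committed-emissions total.

TOTAL_GT = 658  # committed CO2 from existing infrastructure (Gt)

for region, share in {"China": 0.41, "United States": 0.09,
                      "European Union": 0.07}.items():
    print(f"{region}: ~{share * TOTAL_GT:.0f} Gt CO2")
# China: ~270 Gt CO2, United States: ~59 Gt CO2, European Union: ~46 Gt CO2
```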

According to the authors, future emissions from these existing facilities would take up the entire carbon budget needed to limit mean warming to 1.5 degrees Celsius and close to two-thirds of the budget needed to constrain warming to below 2 degrees Celsius over the next three decades.

Caldeira says: "The good news is that society still has the ability to avoid 2 degrees Celsius of warming without having to retire power plants early. But we would have to stop building things with smokestacks and tailpipes that dump CO2 pollution into the sky. If the Earth warms beyond 2 degrees Celsius, it will be the result of emissions from infrastructure we have not yet built."

However, the number of fossil fuel-burning power plants and vehicles in the world has increased dramatically in the past decade, spurred by rapid economic and industrial development in China and India. Meanwhile, efforts such as those in the U.S. to replace old coal power plants with new natural gas ones have decreased the average age of fossil fuel-burning infrastructure in the West.

"Our results show that there's basically no room for new CO2-emitting infrastructure under the international climate goals. And if the world is to achieve the 1.5-degree Celsius goal, existing fossil fuel-burning power plants and industrial equipment will need to be retired early unless they can be feasibly retrofitted with carbon capture and storage technologies or their emissions offset by negative emissions," explains Davis. "Without such radical changes, we fear the aspirations of the Paris Agreement are already at risk."

Credit: 
Carnegie Institution for Science

Research questions link between unconscious bias and behavior

image: Patrick Forscher, assistant professor of psychology, University of Arkansas.

Image: 
Russell Cothren

FAYETTEVILLE, Ark. - A new study calls into question the effectiveness of a popular concept for addressing social problems such as discrimination.

Implicit bias, a term for automatically activated associations, is often perceived to be a primary cause of discrimination against social groups such as women and racial minorities. Identifying and understanding implicit bias and modifying behavior that's based on it has long been a goal of those who seek to address such problems.

Patrick Forscher, assistant professor of psychology at the University of Arkansas, along with Calvin Lai, assistant professor of psychology at Washington University in St. Louis, and five other co-authors, reviewed 492 studies on changing implicit bias, involving 87,418 participants. The researchers' goal was to investigate procedures that attempted to change implicit bias. They found that while implicit bias can be changed, only a small percentage of the studies they looked at examined changes over time, or whether the changes affected behavior. The study was published in the Journal of Personality and Social Psychology.

"When you hear people talk about implicit bias in popular media, there is often this assumption that you do this implicit bias training and the effects stick around for a long time," Forscher said. "What we found is that barely any of the studies that we captured in our analysis even attempted to assess changes over time."

Millions of people interested in assessing their own implicit bias have taken self-administered tests such as the Implicit Association Test, available on the internet, and many companies have created programs to address issues that stem from implicit bias, such as gender pay gaps and hiring discrimination.

"The promise is that if I can change what produces the score I am solving this big problem," Forscher said. "It offers an individualistic, easy solution."

But the researchers found little evidence that implicit bias can be changed long term, and even less evidence that such changes lead to changes in behavior.

"I don't think this research is ready for application," he said. "It could even be true that implicit bias doesn't have a strong impact on behavior. Even if this is not true we should not be using this body of research in its current state to inform public policy."

Credit: 
University of Arkansas

Transformer cells: Shaping cellular 'behaviour'

Scientists from Sechenov University, together with their Chinese and American colleagues, have examined the latest advances in the use of skeletal muscle progenitor cells (MPCs), specifying the core challenges to the applicability of MPCs in cell therapy and outlining the most promising breakthrough technologies. The outcomes of this research were reported in Applied Physics Reviews, where the article was roundly praised by the editorial board.

Progenitor cells are cells that have the capacity to develop (or differentiate) into a specific type of cell, for instance, muscle tissue cells. This ability makes them key candidates for cell therapy to treat muscle tissue damaged by injury, disease, or age-associated dysfunction. The technique could be described as follows: progenitor cells are harvested from a sample of the patient's healthy muscle tissue, cultivated in vitro and then grafted onto the patient's damaged tissues. The method requires an appropriate environment (similar to that in the human body) to enable the differentiation of progenitor cells under laboratory conditions. However, being highly sensitive to the subtlest changes in the growth-supporting microenvironment, progenitor cells may alter their behavioural patterns ex vivo and lose the ability to differentiate into target cell types.

The research demonstrates that proper management of progenitor cell behaviour requires both a suitable scaffold (a 'backbone' on which the tissue is cultivated) and an extracellular matrix that interconnects the surrounding cells and regulates intracellular processes.

The extracellular matrix that provides the microenvironment for progenitor cells in vivo contains hundreds of different proteins, lipids, and carbohydrates, which play a crucial role in tissue regeneration. This microenvironment is extremely active, and its internal processes are essential for cell growth and migration. Despite the existing multitude of artificial extracellular matrices, including those derived from animal tissues, native human tissues remain the most favourable environment for cell cultivation.

Prior to publishing their report, the authors had designed extracellular matrix-derived scaffolds for biofabricating skin, skeletal muscle and kidney tissues, which demonstrated excellent viability results owing to tissue-specific differentiation. To engineer functional matrices, any cells and cell components that might trigger an immune reaction during grafting are mechanically removed from the target tissue sample or washed out with a processing solution. The scientists designed and tested a tissue decellularisation method that efficiently removes the cell components while preserving the tissue's structural support - the matrix - and its active compounds (cytokines, growth factors), which essentially control cell behaviour. This was made possible by accelerating the decellularisation process: the solution remains in contact with critical compounds for a shorter period of time, ensuring their integrity and viability. A number of extracellular matrix hydrogels have also proven reasonably effective in tissue construction and nutrient supply.

As Peter Timashev, a contributing author and Director of the Institute for Regenerative Medicine of the Sechenov University, remarked, "When engineering tissues or body organs in vitro, we always aim to create the sort of environment that would be as identical to the human body as practically possible. That being said, the sheer complexity of extracellular matrix makeup makes the fabrication of fully sustainable artificial matrices unachievable at this point in time. Therefore, our goal is to try to extract the matrix very carefully and use it in engineering target tissues - this technique will enable an accurate reproduction of living tissues in the future and facilitate their application in clinical settings."

Credit: 
Sechenov University

Cancer cell's 'self eating' tactic may be its weakness

image: Evidence of activity involving the NIX protein (brown staining) is seen in common precursor lesions (PanIN) for pancreatic cancer. NIX activity escalates in full pancreatic ductal adenocarcinoma (PDAC), the most common of pancreas cancers. NIX is directly responsible for triggering mitophagy, or "self eating" of mitochondria.

Image: 
Tuveson lab/CSHL

Cold Spring Harbor, NY -- Cancer cells use a bizarre strategy to reproduce in a tumor's low-energy environment; they mutilate their own mitochondria! Researchers at Cold Spring Harbor Laboratory (CSHL) now also know how this occurs, offering a promising new target for pancreatic cancer therapies.

Why would a cancer cell want to destroy its own functioning mitochondria? "It may seem pretty counterintuitive," admits M.D.-Ph.D. student Brinda Alagesan, a member of Dr. David Tuveson's lab at CSHL.

https://www.youtube.com/watch?v=jZZ8lGZf7G0

According to Alagesan, the easiest way to think about why cancer cells may do this is to think of the mitochondria as a power plant. "The mitochondria is the powerhouse of the cell," she recites, recalling the common grade school lesson. And just like a traditional power plant, mitochondria create their own pollution.

"These harmful byproducts, or pollutants, are called reactive oxygen species, or ROS," Alagesan adds. "A lot of it can be damaging to cells. We believe that [by eating their own mitochondria] the pancreatic cancer cells are reducing the production of these damaging ROS while still making enough energy to proliferate."

This is still a hypothesis but it could explain why pancreatic cancer cells become prone to mitophagy, a form of autophagy or 'self eating' of their mitochondria.

In the journal Cancer Discovery, Alagesan and co-lead author Dr. Timothy Humpton describe what happens when a protein called KRAS becomes active in the uniquely nutrient-depleted environment of a pancreas tumor. KRAS starts a "signaling cascade" that results in the cell eating its own mitochondria and diverting glucose and glutamine away from the remaining mitochondria. These diverted nutrients are used to support cell division.

"Ideally, we would want to inhibit the cancer promoting KRAS protein directly, but unfortunately so far no one has been able to do that in a clinically relevant way," Alagesan explains.

Instead of stopping KRAS directly, the Tuveson team traced the cascade of protein signals that follows KRAS activation. They found one pathway which leads to an increase in the protein NIX. NIX is directly responsible for triggering that mitophagy stage which appears to be so crucial for cancer cell proliferation.

"Results in mice are showing us that, by inhibiting the NIX pathway, we might prevent cancer cells from using energy the way they need to in order to proliferate," Alagesan says.

The Tuveson team is now turning its attention to disrupting this same NIX pathway in human pancreatic cancer cells, and applying this to the design of clinical trials.

Credit: 
Cold Spring Harbor Laboratory

Recycling plastic: Vinyl polymer broken down to aspirin components

image: Not a day goes by without news of microplastics in our oceans, and there are not many efficient methods of recycling plastics without compromising quality. A beacon of hope was recently lit at Shinshu University, where researchers discovered that acid hydrolysis breaks a vinyl polymer down into salicylic acid and acetic acid, precursors to dehydroaspirin, which in theory can be made into vinyl polymers again.

Image: 
Yasuhiro Kohsaka, Ph.D., Research Initiative for Supra-Materials, Shinshu University

Before you read this, look around your room. How much of your surroundings are made of plastic? The chair that you sit on, the desk, the casing on your computer and monitor, the pen you use, the carpet, the shoes you wear, your clothes, your bag, the soda bottle you sip from, the furniture, the walls and even the plumbing -- how many items can you identify that are plastic? Depending on where you reside, the majority of the things around you might be made of different types of plastic. And if you're outside, various parts of your car, of buses and trains, and even the interiors of airplanes are mostly plastic. Currently there are not many methods to recycle plastics efficiently without compromising quality. So it shouldn't come as a surprise that not a day goes by without news of microplastics in our oceans and possibly in our food supply.

A beacon of hope was recently lit at Shinshu University, where Professor Yasuhiro Kohsaka and his graduate student Akane Kazama discovered that acid hydrolysis breaks a vinyl polymer down into salicylic acid and acetic acid. Through further reactions, these acids form aspirin. Vinyl is the second most common plastic in the world today. Previous recyclable vinyl polymers had been too unstable to handle at room temperature and were not suitable for practical use.

The team at Shinshu plans to study the reaction mechanism in depth, which they hope will provide insight into real-world applications for recyclable vinyl polymers. If recycling vinyl on an industrial scale can become cost-effective, we will be one step closer to solving the global plastic waste problem.

Credit: 
Shinshu University

New measurements shed light on the impact of water temperatures on glacier calving

image: Glacier calving.

Image: 
Nina Kirchner/Stockholm University

With the help of new temperature sensors, developed in collaboration with KTH, the Royal Institute of Technology, researchers at Stockholm University have collected continuous time series of water temperatures from locations in close proximity to the Svalbard glaciers Tunabreen and Kronebreen. The results show that subsurface water temperature exerts the greatest influence on the mass loss of the glaciers - but it is not as significant as previously thought.

"One of the greatest uncertainties surrounding future sea level rise is how glacier dynamics change when glaciers come into contact with warming waters. Our measurements and results can be used to improve numerical models which estimate future sea level rise" says Felicity Holmes, a PhD student at the Department of Physical Geography, Stockholm University, and lead author of the study.

Many glaciers in the polar regions are shrinking due to global warming, contributing to sea level rise. Glaciers which extend into water don't only lose mass through melting on the surface, but also through the loss of icebergs in a process called calving.

"Calving is a process which is not completely understood, but with the measurement technology that we used in Svalbard, we have a good opportunity to increase our knowledge of which factors interact when glaciers calve. A better understanding of calving processes also benefits prognoses of how glaciers in West Antarctica will react to warming waters" says Nina Kirchner, Associate professor in glaciology at the Department of Physical Geography, Stockholm University, and director of the Bolin Centre for climate research.

When warm water from the Atlantic intrudes into fjords where glaciers meet the ocean, calving rates increase. This is seen along the west coast of Svalbard. But the lack of data in close proximity to glacier fronts has made it hard to clearly identify warm water as the cause, because measurements taken further away often give an incorrect picture of the water masses that actually reach the glacier fronts. This can lead to an over- or underestimation of how much the ocean actually impacts calving glaciers.

The new datasets are the first taken within just a kilometre of the glacier fronts and therefore play an important role in increasing our understanding of the impact of subsurface temperatures along Svalbard's west coast.

"It is exciting to develop measurement technology in close co-operation with climate scientists - together we work to make the technology cope with the tough challenges that the polar environment places on underwater instruments" says Jakob Kuttenkeuler, Professor at the Maritime Robotics Laboratory at the Royal Institute of Technology (KTH).

"We are proud of the unique measurements that we could collect in close proximity to the glacier fronts over the course of a whole year - the measurement series is now openly available to other researchers to use" concludes Holmes.

Credit: 
Stockholm University

A cold-tolerant electrolyte for lithium-metal batteries emerges in San Diego

image: Improvements to a class of battery electrolyte first introduced in 2017 -- liquefied gas electrolytes -- could pave the way to a high-impact and long-sought advance for rechargeable batteries: replacing the graphite anode with a lithium-metal anode.

Image: 
UC San Diego Jacobs School of Engineering

Improvements to a class of battery electrolyte first introduced in 2017 - liquefied gas electrolytes - could pave the way to a high-impact and long-sought advance for rechargeable batteries: replacing the graphite anode with a lithium-metal anode.

The research, published July 1, 2019 by the journal Joule, builds on innovations first reported in Science in 2017 by the same research group at the University of California San Diego and the university spinout South 8 Technologies.

2017 press release: http://jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=2235

2017 Science paper: https://science.sciencemag.org/content/356/6345/eaal4263

Finding cost-effective ways to replace the graphite anode in commercial lithium-ion batteries is of great interest because it could lead to lighter batteries capable of storing more charge, via a 50 percent increase in energy density at the cell level. The increased energy density would come from a combination of factors including the lithium-metal anode's high specific capacity, low electrochemical potential, and light weight (low density).

As a result, switching to lithium-metal anodes would significantly extend the range of electric vehicles and lower the cost of batteries used for grid storage, explained UC San Diego nanoengineering professor Shirley Meng, a corresponding author on the new paper in Joule.

However, making the switch comes with technical challenges. The main hurdle is that lithium metal anodes are not compatible with conventional electrolytes. Two long-standing problems arise when these anodes are paired with conventional electrolytes: low cycling efficiency and dendrite growth.

So Meng and colleagues' approach was to switch to a more compatible electrolyte, called liquefied gas electrolytes.

Liquefied gas electrolytes in action

One of the tantalizing aspects of these liquefied gas electrolytes is that they function both at room temperature and at extremely low temperatures, down to minus 60 C. These electrolytes are made from liquefied gas solvents -- gases that are liquefied under moderate pressures -- which are far more resistant to freezing than standard liquid electrolytes.

In the 2019 paper in Joule, the researchers report on how, through both experimental and computational studies, they improve their understanding on some of the shortcomings of the liquefied gas electrolyte chemistry. With this knowledge, they were able to tailor their liquefied gas electrolytes for improved performance in key metrics for lithium-metal anodes, both at room temperature and minus 60 C.

In lithium-metal half-cell tests, the team reports that the anode's cycling efficiency (Coulombic efficiency) was 99.6 percent for 500 charge cycles at room temperature. This is up from the 97.5 percent cycling efficiency reported in the 2017 Science paper, and well above the 85 percent cycling efficiency of lithium-metal anodes with a conventional (liquid) electrolyte.
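To see why that jump in Coulombic efficiency matters so much, consider a simplified compounding model: if a fixed lithium inventory loses a fraction (1 - CE) on every cycle, roughly CE^n of it survives n cycles. This is an illustrative calculation, not the paper's half-cell test protocol.

```python
# Compounding effect of Coulombic efficiency (CE) over 500 cycles,
# assuming a fixed lithium inventory loses a fraction (1 - CE) per cycle.

for ce in (0.996, 0.975, 0.85):
    surviving = ce ** 500
    print(f"CE = {ce:.1%}: lithium fraction left after 500 cycles ~ {surviving:.2e}")

# CE = 99.6%: ~1.35e-01 (about 13% remains)
# CE = 97.5%: ~3.18e-06 (essentially depleted)
# CE = 85.0%: ~4.6e-36  (nothing left)
```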

At minus 60 C, the team demonstrated lithium-metal anode cycling efficiency of 98.4 percent. In contrast, most conventional electrolytes fail to work below minus 20 C.

The UC San Diego team's simulation and characterization tools, many developed in the Laboratory for Energy Storage and Conversion led by Shirley Meng, allow the researchers to explain why lithium metal anodes perform better with liquefied gas electrolytes. At least part of the answer has to do with how the lithium particles deposit on the metal anode surface.

The researchers report the smooth and compact deposition of lithium particles on lithium-metal anodes when liquefied gas electrolytes are used. In contrast, when conventional electrolytes are used, needle-like dendrites form on the lithium metal anode. These dendrites can degrade the efficiency, cause short circuits, and lead to serious safety threats.

One measure of how densely lithium particles deposit on anode surfaces is porosity; the lower the porosity, the better. The research team reports in Joule that the porosity of lithium deposited on a metal anode at room temperature is 0.90 percent with liquefied gas electrolytes. With conventional electrolytes, the porosity jumps to 16.8 percent.

The race for the right electrolyte

There is currently a big push to find or improve electrolytes that are compatible with the lithium metal anode and are competitive in terms of cost, safety, and temperature range. Research groups have mainly been looking at highly-concentrated solvents (liquid) or solid-state electrolytes, but there is currently no silver bullet.

"As part of the battery research community, I am confident that we are going to develop the electrolytes that we need for lithium-metal anodes. I hope that this research inspires more research groups to take a serious look at liquefied gas electrolytes," said Meng.

Credit: 
University of California - San Diego

NLST follow-up reaffirms that low-dose CT reduces lung cancer mortality

Denver--July 1, 2019--Early detection and treatment through screening with low-dose computed tomography (LDCT) has been investigated as a potential means of reducing lung cancer deaths for more than two decades. In 2011, a large U.S. study, the randomized National Lung Screening Trial (NLST), reported a significant 20% reduction in lung cancer mortality in high-risk current and former smokers screened annually for three years with LDCT as compared to chest x-rays. The NLST study included 26,722 patients in the LDCT arm and 26,730 in the x-ray arm at 33 medical institutions in the United States.

Now, in the Journal of Thoracic Oncology, the authors of the NLST report an extended analysis of the patient cohort, which was followed up after the 2011 study was published. The authors report that their original findings have been sustained.

This follow-up study also supports and reaffirms findings from the NELSON trial, which found a 26% reduction in lung cancer mortality in men and a 39% reduction in women. The NELSON research was reported at the International Association for the Study of Lung Cancer's 2018 World Conference on Lung Cancer in Toronto.

The NLST study randomized high-risk current and former smokers to three annual screens with either low-dose computed tomography (LDCT) or chest radiographs (CXR) and demonstrated a significant reduction in lung cancer mortality in the LDCT arm after a median follow-up of 6.5 years. In this latest report, lead researcher Paul Pinsky, Ph.D., from the National Cancer Institute, part of the National Institutes of Health, in Bethesda, Md., and his team extended the follow-up to 11.3 years for incidence and 12.3 years for mortality.

The study authors wrote that with an additional six years of mortality follow-up, researchers could better understand whether low-dose CT screening prevented deaths from lung cancer or merely delayed them. They report that the extended follow-up allowed them to determine that LDCT did, in fact, prevent lung cancer deaths, or at least delayed them for more than a decade.

"Lung cancer is the leading cause of cancer death worldwide and early detection and treatment through screening with low-dose computed tomography has been investigated as a potential means of reducing lung cancer deaths for more than two decades. This study adds further weight to the notion that CT screening is effective," said Dr. Pinsky.

The original 2011 report found that 320 patients would have to be screened to prevent one death from lung cancer, while the current follow-up research found that 303 patients would have to be screened to prevent one lung cancer death.
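Figures like 303 and 320 come from a standard epidemiological formula: the number needed to screen is the reciprocal of the absolute risk reduction between the trial arms. The sketch below shows the arithmetic; the mortality rates in it are hypothetical, chosen only so the result lands near the reported value.

```python
# Number needed to screen (NNS) = 1 / absolute risk reduction.
# The rates below are hypothetical, for illustration only.

def number_needed_to_screen(control_death_rate, screened_death_rate):
    absolute_risk_reduction = control_death_rate - screened_death_rate
    return 1.0 / absolute_risk_reduction

# e.g. 2.10% vs 1.77% lung cancer mortality over follow-up (made-up rates)
print(round(number_needed_to_screen(0.0210, 0.0177)))  # -> 303
```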

This study reaffirms previously published research that shows screening patients at high risk for lung cancer can reduce lung cancer mortality.

Credit: 
International Association for the Study of Lung Cancer

'Committed' CO2 emissions jeopardize international climate goals, UCI-led study finds

Irvine, Calif. - The nations that have signed agreements to stabilize the global mean temperature by 2050 will fail to meet their goals unless existing fossil fuel-burning infrastructure around the world is retired early, according to a study - published today in Nature - by researchers at the University of California, Irvine and other institutions.

"We need to reach net-zero carbon dioxide emissions by midcentury to achieve stabilization of global temperatures as called for in international agreements such as the Paris accords," said lead author Dan Tong, a UCI postdoctoral scholar in Earth system science. "But that won't happen unless we get rid of the long-lasting power plants, boilers, furnaces and vehicles before the end of their useful life and replace them with non-emitting energy technologies."

The number of fossil fuel-burning power plants and vehicles in the world has increased dramatically in the past decade, spurred by rapid economic and industrial development in places such as China and India. Meanwhile, the average age of infrastructure in developed countries has decreased. For example, old coal power plants in the U.S. have been supplanted by new natural gas ones.

According to the study, emissions from existing energy infrastructure would take up the entire carbon budget for limiting mean warming to 1.5 degrees Celsius and close to two-thirds of the budget for keeping warming under 2 C over the next three decades.

Although the pace of growth has slowed in recent years, a significant amount of new electricity-generating capacity has been proposed globally; some of it is already under construction. If this prospective infrastructure is built, total future emissions would take up three-quarters of the budget to constrain warming to below 2 C.

Tong and her colleagues used detailed data sets of existing fossil fuel-burning infrastructure in 2018 to estimate "committed" carbon dioxide emissions. They assumed that power plants and industrial boilers will operate for about 40 years and that light-duty vehicles will be on the road for 15 years, with some regional variation in fuel economy and annual miles traveled.

The researchers also tested different lifetime assumptions to see how early CO2-emitting infrastructure might need to be retired to meet international climate goals. For example, a 1.5 C boost in mean temperature might still be avoided if current power plants were shuttered after 25, rather than 40, years of operation.
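The logic of that sensitivity test can be sketched in a few lines: a plant's committed emissions scale with its remaining operating years, so capping lifetimes at 25 rather than 40 years shrinks the total roughly in proportion. The toy fleet below is entirely hypothetical and only illustrates the bookkeeping, not the study's data.

```python
# Toy version of the lifetime-sensitivity test: committed emissions are
# annual emissions times the years a plant has left under an assumed
# lifetime cap. The fleet is made up for illustration.

def committed(annual_gt, age_years, assumed_lifetime):
    """Gt of CO2 a plant would still emit before retirement."""
    return annual_gt * max(assumed_lifetime - age_years, 0)

fleet = [(0.5, 10), (0.3, 30), (0.8, 5)]  # (annual Gt CO2, age in years)

for lifetime in (40, 25):
    total = sum(committed(e, a, lifetime) for e, a in fleet)
    print(f"{lifetime}-year lifetime cap: {total:.1f} Gt committed")
# 40-year cap: 46.0 Gt; 25-year cap: 23.5 Gt
```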

If existing infrastructure operates as usual, though, it will emit about 658 gigatons of CO2 during its operational lifetime, the scientists found. More than half of these emissions are projected to come from the electricity sector, with China producing the largest share, 41 percent, the U.S. 9 percent and the European Union 7 percent. If built, power plants being planned, permitted or under construction would emit approximately 188 additional gigatons of CO2, according to the study.

"Our results show that there's basically no room for new CO2-emitting infrastructure under the international climate goals," said co-author Steven Davis, a UCI associate professor of Earth system science. "Rather, existing fossil fuel-burning power plants and industrial equipment will need to be retired early unless they can be feasibly retrofitted with carbon capture and storage technologies or their emissions are offset by negative emissions. Without such radical changes, we fear the aspirations of the Paris agreement are already at risk."

Credit: 
University of California - Irvine

Evolution of life in the ocean changed 170 million years ago

video: Kilian Eichenseer and Dr Uwe Balthasar explain research suggesting evolution of life in the ocean changed 170 million years ago.

Image: 
University of Plymouth

The ocean as we understand it today was shaped by a global evolutionary regime shift around 170 million years ago, according to new research.

Until that point, the success of organisms living within the marine environment had been strongly controlled by non-biological factors, including ocean chemistry and climate.

However, from the middle of the Jurassic period onwards (some 170 million years ago), biological factors such as predator-prey relationships became increasingly important.

Writing in Nature Geoscience, scientists say this change coincided with the proliferation of calcium carbonate-secreting plankton and their subsequent deposition on the ocean floor.

They believe the rise of this plankton stabilised the chemical composition of the ocean and provided the conditions for one of the most prominent diversifications of marine life in Earth's history.

The research was led by academics from the University of Plymouth's School of Geography, Earth and Environmental Sciences and School of Computing, Electronics and Mathematics, in cooperation with scientists from the University of Bergen in Norway, and the University of Erlangen-Nuremberg in Germany.

PhD candidate Kilian Eichenseer, the study's lead author, explained the impact of calcifying plankton: "Today, huge areas of the ocean floor are covered with the equivalent of chalk, made up of microscopic organisms that rose to dominance in the middle of the Jurassic period. The chalky mass helps to balance out the acidity of the ocean and, with that balance in place, organisms are less at the mercy of short-term perturbations of ocean chemistry than they might have been previously. It is easier to secrete a shell, regardless of its mineralogy, if the ocean chemistry is stable."

The aim of the research was to test the hypothesis that the evolutionary importance of the non-biological environment had declined through geological time.

Since its emergence more than 540 million years ago, multicellular life has evolved under the influence of both the non-biological and the biological environment, but how the balance between these factors has changed remained largely unknown.

Calcified seashells provide an ideal test to answer this question, as aragonite and calcite - the minerals making up seashells - also form non-biologically in the ocean.

In their study, the authors used the vast global fossil record of marine organisms that secreted calcium carbonate, which encompasses more than 400,000 samples dating from around 500 million years ago up to 10,000 years BC.

Using reconstructions of the temperature and the ocean water composition of the past, the authors estimated the proportion of aragonite and calcite that formed inorganically in the ocean in 85 geological stages across 500 million years.

Through a series of specially developed statistical analyses, this inorganic pattern of aragonite-calcite seas was then compared with seashell mineral composition over the same time.
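A rough intuition for the "inorganic pattern" step: which polymorph seawater favours depends largely on its magnesium-to-calcium ratio. The toy classifier below encodes the textbook rule of thumb (molar Mg/Ca above about 2 favours aragonite, below it calcite); the study's actual reconstruction also folds in temperature and is considerably more detailed.

```python
# Toy "aragonite sea vs calcite sea" classifier based on the textbook
# Mg/Ca rule of thumb -- a simplification of the reconstruction above.

def favoured_polymorph(mg_ca_molar_ratio):
    """Which calcium carbonate polymorph seawater chemistry favours."""
    return "aragonite" if mg_ca_molar_ratio > 2.0 else "calcite"

print(favoured_polymorph(5.2))  # roughly modern seawater -> aragonite
print(favoured_polymorph(1.0))  # a Cretaceous-like value -> calcite
```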

The results show that up until the middle of the Jurassic period, around 170 million years ago, the ecological success of shell-secreting marine organisms was tightly coupled to their shell composition: organisms that secreted the mineral that was environmentally favoured had an evolutionary advantage.

However, the Earth-Life system was revolutionised forever by the rise of calcifying plankton, which expanded the production of calcium carbonate from continental shelves to the open ocean.

This ensured that the evolutionary impact of episodes of severe climate changes, and resulting ocean acidification, was less severe than comparable events earlier in Earth history.

Dr Uwe Balthasar, Lecturer in Palaeontology, first published research exploring the dominance of aragonite and calcite in the marine environment in 2015. He said: "During the Earth's history there have been several major events that shaped the evolution of life on our planet, such as the five big mass extinctions or the radiation of complex animals during the 'Cambrian Explosion'. Our research identifies a previously overlooked event of this magnitude around 170 million years ago when the emergence of calcium carbonate-secreting plankton lifted constraints on the evolution of other marine organisms that we did not know existed. As a result, life in the ocean has diversified to levels far beyond what existed before."

Credit: 
University of Plymouth

PBS restrictions result in outdated and unsafe care

Prescribing restrictions for anti-epileptic drugs expose flaws in the review process of the Pharmaceutical Benefits Scheme (PBS), a University of Queensland researcher proposes.

UQ neurologist Professor Christian Gericke said the PBS needed to implement an effective review process of its own restrictions, which would allow Australian doctors to prescribe in accordance with American and European clinical guidelines that facilitate safe and up-to-date clinical practice.

"The PBS urgently needs to update anti-epileptic drug restrictions that put patients and prescribers at risk," Professor Gericke said.

"When a new medication is first listed on the PBS, prescribing restrictions are put in place to protect the taxpayer from unreasonably high costs.

"The current system for listing drugs is well organised and provides a good balance between taxpayer and patient interests.

"But there is currently no adequate and effective mechanism to review PBS restrictions once they are in place.

"In the long term, this fosters outdated prescribing practice and suboptimal care."

Professor Gericke said the problem was not limited to anti-epileptic drugs but they were a glaring example of it.

In 2018, the Therapeutic Goods Administration Advisory Committee on Medicines advised doctors to avoid prescribing the common anti-epileptic drug valproate to pregnant women, due to the risk of birth defects and reduced intelligence in children exposed to valproate during pregnancy.

"However the TGA recommendations cannot be implemented in daily prescribing practice in Australia unless the PBS lifts its restrictions on other anti-epileptic drugs," Professor Gericke said.

"In practice, most epileptologists and many physicians and general practitioners ignore the PBS restrictions and follow the international prescribing guidelines in order to provide safe care for their patients.

"These medical practitioners are at risk of legal and financial sanctions.

"It's not fair on doctors, and it's not fair on our patients.

"Doctors should not be forced to choose between safe patient care and complying with outdated government regulation."

Professor Gericke called on the PBS to urgently update its restrictions on the use of anti-epileptic drugs and to create a regular review mechanism of its own prescribing restrictions for all medications on the PBS.

"This will allow Australian doctors to prescribe in line with international best practice without imposing a financial burden on patients or contravening PBS regulations," he said.

Credit: 
University of Queensland

New strategies and approaches needed to cope with growing burden of brain diseases

image: Professor Anne Hege Aamodt, President of the Norwegian Neurological Association, presented The Norwegian Brain Health Strategy 2018-2024 to attendees at the congress. Norway is the first country in Europe to launch a national brain health strategy.

Image: 
European Academy of Neurology (EAN)

(Oslo, Monday, 1 July, 2019) New strategies for preventing and coping with the growing burden of brain diseases were outlined today at the 5th European Academy of Neurology (EAN) Congress in Oslo, Norway.

Professor Anne Hege Aamodt, President of the Norwegian Neurological Association, presented The Norwegian Brain Health Strategy 2018-2024 to attendees at the congress. Norway is the first country in Europe to launch a national brain health strategy, which has four overarching aims:

1. Good lifelong brain health, prevention and quality of life

2. The provision of user-centred care, as well as support for relatives

3. The organisation of holistic care from multi-disciplinary teams

4. Ensuring adequate knowledge and quality through research and innovation

Brain diseases now account for 10% of the global burden of disease. Dementia, one of the most common brain diseases, now affects around 50 million people worldwide, with 10 million new cases every year. The number of people living with dementia is estimated to reach 82 million by 2030 and 152 million by 2050.

"Brain diseases affect a wide range of people in all stages of life and, as people are living for longer, greater numbers now live with a range of brain diseases", explained Professor Aamodt. "Prevention of brain diseases, the provision of equal treatment, follow-up and rehabilitation, as well as increased research and expertise, is absolutely vital in providing patients with optimal outcomes. This strategy will help to facilitate this for a number of brain diseases, including dementia, multiple sclerosis, Parkinson's and stroke-related conditions."

Initiatives outlined in the plan are now underway, including the funding of a €20 million National Clinical Research Centre devoted to the clinical treatment of severe diseases such as MS, dementia and amyotrophic lateral sclerosis (ALS). The Norwegian Research Council will also receive an additional €5 million to strengthen research and innovation in neurological conditions.

The Norwegian Neurological Association and the Norwegian Directorate of Health are working to action further objectives outlined in the plan, which is being seen as a model template for other European countries to follow.

Professor Aamodt adds, "We believe that this national strategy should be replicated and implemented across Europe, tailored for each country. The continent will undergo major societal transformations, such as the ageing population, that will impact on brain diseases and health services must adapt to these changes."

Following the launch of the Norwegian Brain Health Strategy, EAN and the European Federation of Neurological Associations (EFNA) are also calling for a European Brain Health Plan to raise public awareness of brain diseases, lobby governments and integrate the best science to improve outcomes for both patients and society.

Also at EAN 2019: Focusing on the burden of stroke and dementia:

Professor Vladimir Hachinski, a world-renowned stroke expert, stressed during the EAN Congress that stroke accounts for 42% of the neurological disease burden, compared to 10% for dementia, and that many cases of dementia could be prevented by preventing stroke.

Professor Hachinski said the quest to find a cure for Alzheimer's in the past 40 years had focused on the amyloid/tau plaque hypothesis, but although this research had improved understanding of the dementia process, this 'monorail' approach had so far failed to yield a single disease-modifying drug.

Professor Hachinski stated: "The good news is that stroke is 90% potentially preventable through the control of risk factors. Stroke and dementia share the same treatable risk factors and their control is associated with a decrease in stroke and some dementias. Additionally, intensive control of risk factors and enhancement of protective factors improve cognition.

"Anticoagulation treatment of atrial fibrillation patients decreases their chance of developing dementia by 48%. Preliminary data suggests that treating blood pressure to a target of 120mmHg systolic, compared to a target of 140mmHg, decreases the chances of mild cognitive impairment by 19%."

Professor Hachinski said neurological disorders are now responsible for the largest number of disability-adjusted life years (DALYs - a combined index of early mortality and years spent in disability). He added that the introduction of a stroke strategy in Ontario, Canada, which included building stroke units, stroke prevention clinics and campaigns to control risk factors, helped decrease the number of strokes by 32% over 12 years, with a 7% reduction in the incidence of dementia.

He stressed that whilst advancing age, genetic factors and family history couldn't be changed, many other risk factors for stroke could be modified through physical activity and an active lifestyle, antihypertensive drugs, a Mediterranean diet, and statins to lower cholesterol.

Professor Hachinski concluded: "Neurological disorders represent the leading cause of DALYs. More than half result from stroke and dementia, which are both preventable to different degrees. We need new vistas and approaches to grasp the opportunity of preventing stroke and some dementias, beginning now."

Credit: 
Spink Health

East Asia's unusually warm spring linked to North Atlantic sea surface temperature anomaly

image: Deviations of 2m air temperatures in spring 2018 from the climatology.

Image: 
Kaiqiang Deng

Changes in spring surface air temperature can exert significant impacts on human health and lead to considerable socioeconomic consequences. It is therefore of great interest to understand and predict the variations of spring temperatures. However, the dynamics and predictability of East Asian temperatures are more challenging during boreal spring than in the other seasons. Part of the difficulty is due to the existence of the so-called spring predictability barrier--a phenomenon whereby predictive skill based on the El Niño-Southern Oscillation (ENSO) decreases rapidly during boreal spring.

"East Asia experienced an unusually warm spring in 2018, when exceptionally high surface air temperatures were recorded in large areas of Asia, such as northern China, southern China, and Japan," says Dr Kaiqiang Deng, a climate researcher who works with Prof. Song Yang in the School of Atmospheric Sciences, Sun Yat-sen University, and the first author of a paper recently published in Atmospheric and Oceanic Science Letters.

"The intensity of ENSO usually peaks during boreal autumn and winter, and tends to decay remarkably during boreal spring, leading to a reduced predictability of climate and weather in East Asia during this period. However, the ENSO signal during 2017/18 was weak, and so it is interesting to explore whether other antecedent signals (in addition to ENSO) existed that could have been applied to predict the warm East Asia in spring 2018," explains Dr Deng.

Dr Deng and his collaborators investigated the spatiotemporal patterns of the record-breaking temperatures in East Asia in spring 2018 based on ERA-Interim reanalysis data. Their research linked East Asian extreme heat in boreal spring to North Atlantic sea surface temperature (SST) anomalies.

The results indicated that the tripole mode of North Atlantic SST anomalies can trigger anomalous Rossby wave trains over the North Atlantic and Eurasia by modulating North Atlantic baroclinic instability. These wave trains propagate eastwards and induce anomalously high pressure and anticyclonic circulation over East Asia, leading to descending motion, reduced precipitation, and increased surface solar radiation, conditions that favored the record-breaking warmth in East Asia during spring 2018.
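For readers wondering how a "tripole mode" is quantified in practice: one common approach is to average SST anomalies over the three centres of action and take the central box minus the mean of the flanking boxes. The sketch below illustrates that idea; the box boundaries are illustrative guesses, not the definitions used in this paper.

```python
# Illustrative North Atlantic SST tripole index: central box anomaly
# minus the mean of the two flanking boxes. Box limits are assumed
# for illustration, not taken from the paper.

import numpy as np

def box_mean(sst_anom, lats, lons, lat_range, lon_range):
    """Average an SST anomaly field over a latitude/longitude box."""
    sel_lat = (lats >= lat_range[0]) & (lats <= lat_range[1])
    sel_lon = (lons >= lon_range[0]) & (lons <= lon_range[1])
    return sst_anom[np.ix_(sel_lat, sel_lon)].mean()

def tripole_index(sst_anom, lats, lons):
    """Positive when the mid-latitude centre is anomalously warm and the
    tropical and subpolar centres are cold (longitudes in degrees east,
    negative for west)."""
    tropical = box_mean(sst_anom, lats, lons, (0, 20), (-60, -20))
    midlat = box_mean(sst_anom, lats, lons, (25, 45), (-70, -30))
    subpolar = box_mean(sst_anom, lats, lons, (50, 65), (-60, -20))
    return midlat - 0.5 * (tropical + subpolar)

# Synthetic example on a 1-degree grid (random field, purely illustrative):
lats = np.arange(-90.0, 91.0)
lons = np.arange(-180.0, 180.0)
field = np.random.randn(lats.size, lons.size)
print(tripole_index(field, lats, lons))
```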

"The seasonal memory of the North Atlantic tripole SST mode from the previous winter to the following spring may provide useful implications for the seasonal prediction of East Asian weather and climate," says Dr Deng. "In the future, we would like to construct a statistical prediction model to improve the sub-seasonal to seasonal predictions for East Asian spring climate."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Moments of clarity in dementia patients at end of life: Glimmers of hope?

It happens unexpectedly: a person long thought lost to the ravages of dementia, unable to recall the events of their lives or even recognize those closest to them, will suddenly wake up and exhibit surprisingly normal behavior, only to pass away shortly thereafter. This phenomenon, which experts refer to as terminal or paradoxical lucidity, has been reported since antiquity, yet there have been very few scientific studies of it. That may be about to change.

In an article published in the August issue of Alzheimer's & Dementia, an interdisciplinary workgroup convened by the National Institutes of Health's (NIH) National Institute on Aging and led by Michigan Medicine's George A. Mashour, M.D., Ph.D., outlines what is known and unknown about paradoxical lucidity, considers its potential mechanisms, and details how a thorough scientific analysis could help shed light on the pathophysiology of dementia.

"We've assumed that advanced dementia is an irreversible neurodegenerative process with irreversible functional limitations," says Mashour, professor in the department of anesthesiology, faculty in the neuroscience graduate program, and director of the Center for Consciousness Science. "But if the brain is able to access some sort of functional network configuration during paradoxical lucidity, even in severe dementia, this suggests a reversible component of the disease."

The paper describes earlier work documenting case studies of individuals with advanced dementia, including Alzheimer's disease, appearing to be able to communicate and recall in a seemingly normal fashion at the end of life, to the astonishment of their caregivers.

"The accumulation of anecdotal reports about paradoxical lucidity in the scientific literature prompts several important research questions," says NIA medical officer Basil Eldadah, M.D., Ph.D. "We look forward to additional research in this area, such as better characterization of lucidity in its varying presentations, new instruments or methods to assess episodes of lucidity retrospectively or in real-time, tools to analyze speech patterns or other behavioral manifestations of lucidity, and evidence to inform decision-making challenges and opportunities prompted by unexpected lucidity."

One precedent for investigating such events exists in the study of so-called near-death experiences. In 2013, Mashour and his collaborators at Michigan Medicine published a basic science study showing evidence of electrical brain features indicative of a conscious state following cardiac arrest. "We don't know that the same thing is occurring with paradoxical lucidity, but the fact that this is usually happening around the time of death suggests there could be some common neural network mechanism," he says.

Mashour admits that studying paradoxical lucidity will be a challenge, given the fleeting nature of the event. Case studies report episodes lasting from mere seconds to at most several days for a small minority of cases. The workgroup also outlines important ethical implications of this work, including the ability of vulnerable patients to participate in research and how the observation of paradoxical lucidity might change the way caregivers interact with people with dementia.

"Would research that might identify a systematically observable paradoxical lucidity provide comfort, for example, by offering loved ones a potential channel for closure, or might it induce worry if loved ones are left to wonder if a reversible cause of the dementia could have been found? We do not know the answers but these could be important research questions in their own right," says co-first author Lori Frank, Ph.D., of the RAND Corporation and former Health and Aging Congressional fellow with the National Institute on Aging.

The workgroup hopes their paper will help raise awareness within the scientific community to advance paradoxical lucidity research, and help validate the experiences of a multitude of caregivers.

Says Mashour, "Science is now trying to be thoughtful and attentive to something that has long been reported."

Credit: 
Michigan Medicine - University of Michigan