Tech

Oregon scientists decipher the magma bodies under Yellowstone

image: Graphic by University of Oregon scientists provides new structural information, based on supercomputer modeling, about the location of a mid-crustal sill that separates magma under Yellowstone.

Image: 
Courtesy of Dylan Colon

EUGENE, Ore. - April 16, 2018 - Using supercomputer modeling, University of Oregon scientists have unveiled a new explanation for the geology underlying recent seismic imaging of magma bodies below Yellowstone National Park.

Yellowstone, a supervolcano famous for explosive eruptions, large calderas and extensive lava flows, has for years attracted the attention of scientists trying to understand the location and size of magma chambers below it. The last caldera-forming eruption occurred 630,000 years ago; the last large volume of lava surfaced 70,000 years ago.

Crust below the park is heated and softened by continuous infusions of magma that rise from an anomaly called a mantle plume, similar to the source of the magma at Hawaii's Kilauea volcano. Huge amounts of water that fuel the dramatic geysers and hot springs at Yellowstone cool the crust and prevent it from becoming too hot.

With computer modeling, a team led by UO doctoral student Dylan P. Colón has shed light on what's going on below. At depths of 5-10 kilometers (3-6 miles), opposing forces counter each other, forming a transition zone where the cold and rigid rocks of the upper crust give way to hot, ductile and even partially molten rock below, the team reports in a paper in Geophysical Research Letters.

This transition traps rising magmas and causes them to accumulate and solidify in a large horizontal body called a sill, which can be up to 15 kilometers (9 miles) thick, according to the team's computer modeling.
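To make the idea of that rigid-to-ductile transition concrete, here is a minimal, purely illustrative sketch (not the UO team's code): it assumes a simple linear geotherm and a rough temperature threshold for the onset of ductile behaviour in quartz-rich crust, and reports the depth at which the assumed crust stops behaving rigidly. All parameter values are placeholders, chosen only so the toy example lands in the 5-10 kilometer range discussed above.

# Toy illustration of the rigid-to-ductile transition in the crust: given an
# assumed geotherm, find the depth at which temperature exceeds a threshold
# above which quartz-rich crust deforms ductilely rather than rigidly.
# All numbers are illustrative placeholders, not values from the UO model.
import numpy as np

SURFACE_T_C = 10.0        # surface temperature, deg C (assumed)
GEOTHERM_C_PER_KM = 50.0  # steep assumed gradient for a hotspot setting
DUCTILE_T_C = 400.0       # rough onset of ductile behaviour in quartz-rich crust

depth_km = np.linspace(0.0, 20.0, 2001)
temperature_C = SURFACE_T_C + GEOTHERM_C_PER_KM * depth_km

transition_depth = depth_km[temperature_C >= DUCTILE_T_C][0]
print(f"rigid-to-ductile transition at ~{transition_depth:.1f} km in this toy model")
# With these placeholder numbers the transition lands near 8 km, i.e. within
# the 5-10 km depth range where the study places the magma-trapping sill.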

"The results of the modeling matches observations done by sending seismic waves through the area," said co-author Ilya Bindeman, a professor in the UO's Department of Earth Sciences. "This work appears to validate initial assumptions and gives us more information about Yellowstone's magma locations."

This mid-crustal sill consists mostly of solidified gabbro, a rock formed from cooled magma. Above and below it lie separate magma bodies. The upper one contains the sticky and gas-rich rhyolitic magma that occasionally erupts in explosions that dwarf the 1980 eruption of Mount St. Helens in Washington state.

Similar structures may exist under supervolcanoes around the world, Colón said. The geometry of the sill also may explain differing chemical signatures in eruptive materials, he said.

Colón's project to model what's below the nation's first national park, which was sculpted 2 million years ago by volcanic activity, began soon after a 2014 paper in Geophysical Research Letters by a University of Utah-led team revealed evidence from seismic waves of a large magma body in the upper crust.

Scientists had suspected, however, that huge amounts of carbon dioxide and helium escaping from the ground indicated that more magma is located farther down. That mystery was solved in May 2015, when a second University of Utah-led study, published in the journal Science, identified by way of seismic waves a second, larger body of magma at depths of 20 to 45 kilometers (12-27 miles).

However, Colón said, the seismic-imaging studies could not identify the composition, state and amount of magma in these magma bodies, or how and why they formed there.

To understand the two structures, UO researchers wrote new computer codes to model where magma is likely to accumulate in the crust. The work was done in collaboration with researchers at the Swiss Federal Institute of Technology, also known as ETH Zurich.

The researchers repeatedly got results indicating that a large layer of cooled magma with a high melting point forms the mid-crustal sill, separating two magma bodies containing magma with a lower melting point, much of which is derived from melting of the crust.

"We think that this structure is what causes the rhyolite-basalt volcanism throughout the Yellowstone hotspot, including supervolcanic eruptions," Bindeman said. "This is the nursery, a geological and petrological match with eruptive products. Our modeling helps to identify the geologic structure of where the rhyolitic material is located."

The new research, for now, does not help to predict the timing of future eruptions. Instead, it provides a never-before-seen look that helps explain the structure of the magmatic plumbing system that fuels these eruptions, Colón said. It shows where the eruptible magma originates and accumulates, which could help with prediction efforts further down the line.

"This research also helps to explain some of the chemical signatures that are seen in eruptive materials," Colón said. "We can also use it to explore how hot the mantle plume is by comparing models of different plumes to the actual situation at Yellowstone that we understand from the geologic record."

Colón is now exploring what influences the chemical composition of magmas that erupt at volcanoes like Yellowstone.

Studying the interaction of rising magmas with the crustal transition zone, and how this influences the properties of the magma bodies that form both above and below it, the scientists wrote, should boost scientific understanding of how mantle plumes influence the evolution and structure of continental crust.

Credit: 
University of Oregon

Spikes of graphene can kill bacteria on implants

image: Vertical graphene flakes form a protective surface that makes it impossible for bacteria to attach. Instead, bacteria are sliced apart by the sharp graphene flakes and killed. A human cell's volume is typically around 15,000 times larger, so what constitutes a deadly knife attack for a bacterium is only a tiny scratch for a human cell. Coating implants with a layer of graphene flakes can therefore help protect the patient against infection, eliminate the need for antibiotic treatment, and reduce the risk of implant rejection. The osseointegration - the process by which the bone structure grows to attach the implant - is not disturbed. In fact, the graphene has been shown to benefit the bone cells.

Image: 
Yen Strandqvist/Chalmers University of Technology

A tiny layer of graphene flakes becomes a deadly weapon and kills bacteria, stopping infections during procedures such as implant surgery. These are the findings of new research from Chalmers University of Technology, Sweden, recently published in the scientific journal Advanced Materials Interfaces.

Operations for surgical implants, such as hip and knee replacements or dental implants, have increased in recent years. However, in such procedures, there is always a risk of bacterial infection. In the worst-case scenario, this can prevent the implant from attaching to the skeleton, meaning it must be removed.

Bacteria travel around in fluids, such as blood, looking for a surface to cling on to. Once in place, they start to grow and propagate, forming a protective layer, known as a biofilm.

A research team at Chalmers has now shown that a layer of vertical graphene flakes forms a protective surface that makes it impossible for bacteria to attach. Instead, bacteria are sliced apart by the sharp graphene flakes and killed. Coating implants with a layer of graphene flakes can therefore help protect the patient against infection, eliminate the need for antibiotic treatment, and reduce the risk of implant rejection. The osseointegration - the process by which the bone structure grows to attach the implant - is not disturbed. In fact, the graphene has been shown to benefit the bone cells.

Chalmers University is a leader in the area of graphene research, but the biological applications did not begin to materialise until a few years ago. The researchers saw conflicting results in earlier studies. Some showed that graphene damaged the bacteria, others that they were not affected.

"We discovered that the key parameter is to orient the graphene vertically. If it is horizontal, the bacteria are not harmed" says Ivan Mijakovic, Professor at the Department of Biology and Biological Engineering.

The sharp flakes do not damage human cells. The reason is simple: one bacterium is about one micrometer - one thousandth of a millimeter - in diameter, while a human cell is about 25 micrometers across. So what constitutes a deadly knife attack for a bacterium is only a tiny scratch for a human cell.
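As a back-of-the-envelope check on the "roughly 15,000 times larger" volume figure quoted in the image caption (treating both the bacterium and the human cell as spheres, an assumption made only for illustration), the 25-fold difference in diameter cubes to:

\[
\frac{V_\text{cell}}{V_\text{bacterium}}
  = \left(\frac{25\ \mu\text{m}}{1\ \mu\text{m}}\right)^{3}
  \approx 1.6 \times 10^{4}
\]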

"Graphene has high potential for health applications. But more research is needed before we can claim it is entirely safe. Among other things, we know that graphene does not degrade easily" says Jie Sun, Associate Professor at the Department of Micro Technology and Nanoscience.

Good bacteria are also killed by the graphene. But that's not a problem, as the effect is localised and the balance of microflora in the body remains undisturbed.

"We want to prevent bacteria from creating an infection. Otherwise, you may need antibiotics, which could disrupt the balance of normal bacteria and also enhance the risk of antimicrobial resistance by pathogens" says Santosh Pandit, postdoc at Biology and Biological Engineering.

Vertical flakes of graphene are not a new invention, having existed for a few years. But the Chalmers research teams are the first to use the vertical graphene in this way. The next step for the research team will be to test the graphene flakes further, by coating implant surfaces and studying the effect on animal cells.

Chalmers cooperated with Wellspect Healthcare, a company which makes catheters and other medical instruments, in this research. They will now continue with a second study. The projects are funded by Vinnova (a Swedish government agency).

The making of vertical graphene

Graphene is made of carbon atoms. It is only a single atomic layer thick, and therefore the world's thinnest material. Graphene is made in flakes or films. It is 200 times stronger than steel and has very good conductivity thanks to its rapid electron mobility. Graphene is also extremely sensitive to molecules, which allows it to be used in sensors.

Graphene can be made by CVD, or Chemical Vapor Deposition. The method is used to create a thin surface coating on a sample. The sample is placed in a vacuum chamber and heated to a high temperature at the same time as three gases - usually hydrogen, methane and argon - are released into the chamber. The high heat causes gas molecules to react with each other, and a thin layer of carbon atoms is created.

To produce vertical graphene, a process known as plasma-enhanced chemical vapor deposition, or PECVD, is used. An electric field is applied over the sample, generating a plasma that ionizes the gas near the surface. With the plasma, the layer of carbon grows vertically from the surface, instead of horizontally as with CVD.

Credit: 
Chalmers University of Technology

Depression study pinpoints genes that may trigger the condition

Nearly 80 genes that could be linked to depression have been discovered by scientists.

The findings could help explain why some people may be at a higher risk of developing the condition, researchers say.

The study could also help researchers develop drugs to tackle mental ill-health, experts say.

Depression affects one in five people in the UK every year and is the leading cause of disability worldwide. Life events - such as trauma or stress - can contribute to its onset, but it is not clear why some people are more likely to develop the condition than others.

Scientists led by the University of Edinburgh analysed data from UK Biobank - a research resource containing health and genetic information for half a million people.

They scanned the genetic code of 300,000 people to identify areas of DNA that could be linked to depression.
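For readers unfamiliar with this kind of genome-wide scan, the sketch below is a minimal, simulated illustration of a per-variant association test; the genotypes, case labels and sample sizes are made up, and the actual study used far larger cohorts and more sophisticated statistical models with covariate adjustment.

# Minimal sketch of a per-variant (SNP) association test of the kind used in
# genome-wide scans. All data here are simulated; the real study used large
# cohorts (UK Biobank, 23andMe) and more sophisticated methods.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_people, n_snps = 5000, 100

# Simulated genotypes (0, 1 or 2 copies of the minor allele) and case status.
genotypes = rng.integers(0, 3, size=(n_people, n_snps))
is_case = rng.random(n_people) < 0.2   # hypothetical depression cases

p_values = []
for snp in range(n_snps):
    # 2 x 3 contingency table: case/control counts for each genotype (0, 1, 2).
    table = np.array([
        [np.sum((genotypes[:, snp] == g) & is_case) for g in range(3)],
        [np.sum((genotypes[:, snp] == g) & ~is_case) for g in range(3)],
    ])
    chi2, p, dof, _ = chi2_contingency(table)
    p_values.append(p)

# Genome-wide scans typically require p < 5e-8 to call a variant significant.
print(f"smallest p-value across {n_snps} simulated variants: {min(p_values):.3g}")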

Some of the pinpointed genes are known to be involved in the function of synapses, tiny connectors that allow brain cells to communicate with each other through electrical and chemical signals.

The scientists then confirmed their findings by examining anonymised data held by the personal genetics and research company 23andMe, used with the donors' consent.

The study, published in Nature Communications, was funded by Wellcome as part of Stratifying Resilience and Depression Longitudinally, a £4.7 million project to better understand the condition.

Professor Andrew McIntosh of the University of Edinburgh's Centre for Clinical Brain Sciences, who leads the Edinburgh-based research group, said: "Depression is a common and often severe condition that affects millions of people worldwide. These new findings help us better understand the causes of depression and show how the UK Biobank study and big data research has helped advance mental health research.

"We hope that the UK's growing health data research capacity will help us to make major advances in our understanding of depression in coming years."

Dr David Howard, Research Fellow at the University of Edinburgh's Centre for Clinical Brain Sciences and lead author of the study, said: "This study identifies genes that potentially increase our risk of depression, adding to the evidence that it is partly a genetic disorder. The findings also provide new clues to the causes of depression and we hope it will narrow down the search for therapies that could help people living with the condition."

Credit: 
University of Edinburgh

Drinking up to 3 cups of coffee per day may be safe, protective

Many clinicians advise patients with atrial or ventricular arrhythmias to avoid caffeinated beverages, but recent research has shown that coffee and tea are safe and can reduce the frequency of arrhythmias, according to a review published today in JACC: Clinical Electrophysiology.

Arrhythmias, or abnormal heart rhythms, cause the heart to beat too fast, slow or unevenly. While some arrhythmias may be harmless or even go unnoticed in patients, others can increase risk for sudden cardiac arrest. Atrial fibrillation (AFib), the most common heart rhythm disorder, causes the heart to beat rapidly and skip beats, and if left untreated, can cause strokes.

A single cup of coffee contains about 95 mg of caffeine and acts as a stimulant to the central nervous system. Once in the body, caffeine blocks the effects of adenosine, a chemical that can facilitate AFib.

The authors analyzed multiple population-based studies to determine an association between caffeine intake and its effects on atrial and ventricular arrhythmias. These studies have consistently shown a decrease in AFib with an increase in caffeine ingestion, with one meta-analysis of 228,465 participants showing AFib frequency decreasing by 6 percent in regular coffee drinkers, and a further analysis of 115,993 patients showing a 13 percent risk reduction.

"There is a public perception, often based on anecdotal experience, that caffeine is a common acute trigger for heart rhythm problems," said Peter Kistler, MBBS, PhD, director of electrophysiology at Alfred Hospital and Baker Heart and Diabetes Institute, and the review's lead author. "Our extensive review of the medical literature suggests this is not the case."

The authors also determined that caffeine has no effect on ventricular arrhythmias (VAs). Caffeine doses up to 500 mg daily (equivalent to six cups of coffee) did not increase the severity or rate of VAs. In a randomized study of 103 post-heart attack patients who received an average of 353 mg/day, heart rate improved and there were no significant arrhythmias. Only two studies showed an increased risk for VAs, in patients who ingested at least 10 cups and nine cups/day, respectively.

"Caffeinated beverages such as coffee and tea may have long term anti-arrhythmic properties mediated by antioxidant effects and antagonism of adenosine," Kistler said. "In numerous population-based studies, patients who regularly consume coffee and tea at moderate levels have a lower lifetime risk of developing heart rhythm problems and possibly improved survival."

The authors determined that energy drinks should be avoided by patients with pre-existing heart conditions. One energy drink can contain anywhere from 160-500 mg of concentrated caffeine. Three quarters of patients with pre-existing heart conditions who consumed two or more energy drinks/day reported palpitations within 24 hours.

Both large population studies and randomized control trials suggest caffeine intake of up to 300 mg/day may be safe for arrhythmic patients. However, there may be individual differences in susceptibility to the effects of caffeine on the factors which trigger arrhythmias in some, and up to 25 percent of patients report coffee as an AFib trigger. Patients with a clear temporal association between coffee intake and documented AFib episodes should accordingly be counseled to abstain. Future research looking at the relationship between heart rhythm patients and the impact of caffeine abstinence may be useful to further clarify this topic.
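Tying the headline figure to the numbers above (a rough conversion based on the quoted caffeine content per cup, not a clinical recommendation), the 300 mg/day threshold works out to roughly three cups of coffee:

\[
\frac{300\ \text{mg/day}}{\approx 95\ \text{mg per cup}} \approx 3\ \text{cups per day}
\]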

Credit: 
American College of Cardiology

New liquid biopsy-based cancer model reveals data on deadly lung cancer

image: Allison Stewart, Ph.D.

Image: 
MD Anderson Cancer Center

Small cell lung cancer (SCLC) accounts for 14 percent of all lung cancers and often rapidly becomes resistant to chemotherapy, resulting in poor clinical outcomes. Treatment has changed little for decades, but a study at The University of Texas MD Anderson Cancer Center offers a potential explanation for why the disease becomes chemoresistant, and a possible avenue to explore new diagnostic approaches.

Findings from the study were presented today at the American Association for Cancer Research Annual Meeting 2018 in Chicago by Allison Stewart, Ph.D., Research Scientist in Thoracic Head & Neck Medical Oncology.

"There have been few therapeutic advances in the past 30 years and platinum-based chemotherapy remains the standard of care. As a result, five-year survival is less than 7 percent across all stages," said Lauren Byers, M.D., associate professor of Thoracic Head & Neck Medical Oncology, and the study's principle investigator. "Most patients respond well to platinum chemotherapy initially, but relapse within a few months. There are no highly effective second-line therapies."

The challenge in studying why and how SCLC chemoresistance occurs is that most patients do not undergo another biopsy or surgery at the time of cancer recurrence. This leaves investigators like Byers and Stewart with few SCLC samples with which to conduct genomic and biomarker analyses of drug-resistant tumors.

To overcome the lack of SCLC samples, the team developed novel disease models by isolating circulating tumor cells from a simple blood draw. The cells, placed under the skin of a mouse, develop into tumors representative of the patient from whom they were derived. These SCLC models, called circulating tumor cell-derived xenografts (CDX), are unique to each patient and provide an opportunity to assess treatment response to novel targeted therapies, as well as changes that may occur in response to therapy.

"We hypothesize that differences in gene and protein expression between tumor cells, called intratumoral heterogeneity, contribute to the rapid development of platinum chemotherapy resistance," said Stewart. "This means that there are likely multiple cell populations in SCLC patients who have not yet been treated. Some of those cells may be killed by chemotherapy but others will not. These resistant cells then continue to grow and prevent further response to treatment."

To study intratumoral heterogeneity (ITH) in SCLC, the investigators performed single-cell sequencing of CDX models to identify gene expression differences between individual cells from chemotherapy-sensitive CDX tumors compared to those that remain resistant.

"We conducted single-cell RNA sequencing to determine if ITH exists and to compare response to chemotherapy in the CDX and the patient," said Stewart. "We found several distinctions between sensitive and resistant models detected at the single-cell level, which testified to single cell sequencing's potential usefulness for understanding how these cancers may develop resistance."

SCLC has a variety of differences at the cellular and genetic level, from the way genes are expressed to which cell-signaling pathways are involved. These differences between tumor cells result in ITH. A more thorough understanding of ITH is important to identify populations of cells that may drive certain pathways associated with aggressive resistance to chemotherapy.

The team also found that SCLC models sensitive to chemotherapy had more cells that expressed two genes, ASCL1 and DLL3, while those that were chemoresistant had fewer cells expressing those genes or had undergone a process called epithelial-to-mesenchymal transition (EMT), which also has been shown to play a role in therapy resistance in other cancers.
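As a rough illustration of the kind of per-cell summary described above - the fraction of cells in each model that express a marker gene such as ASCL1 or DLL3 - the sketch below uses simulated expression values; the model names, threshold and numbers are hypothetical stand-ins, not the study's data or pipeline.

# Illustrative sketch only: summarising single-cell expression of two marker
# genes (ASCL1, DLL3) across models. The values below are simulated; the
# detection threshold and model labels are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def simulate_model(n_cells, mean_expression):
    """Fake log-normalised expression for ASCL1 and DLL3 in one CDX model."""
    return pd.DataFrame({
        "ASCL1": rng.normal(mean_expression, 1.0, n_cells),
        "DLL3": rng.normal(mean_expression, 1.0, n_cells),
    })

models = {
    "chemo_sensitive_CDX": simulate_model(500, mean_expression=2.5),
    "chemo_resistant_CDX": simulate_model(500, mean_expression=0.5),
}

DETECTION_THRESHOLD = 1.0  # arbitrary cutoff for calling a gene "expressed"

for name, cells in models.items():
    expressed = (cells > DETECTION_THRESHOLD).mean()  # fraction of cells per gene
    print(name, expressed.round(2).to_dict())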

"Cells expressing each of these characteristics were identified across all tumors, suggesting cells sensitive or resistant to chemotherapy are both present in the same tumor," said Stewart. "However, even subtle shifts in the distribution of these genes can exert significant impact on response to treatment."

Stewart adds that the team's data support further use of single-cell analysis to explore the role of ITH in SCLC, including effects of treatment on cell populations.

"Through use of these new mouse models, we report data that supports use of single-cell analysis to explore the role of ITH as a driver of drug resistance," said Stewart.

Credit: 
University of Texas M. D. Anderson Cancer Center

Australia to join global health and climate change initiative

The Lancet Countdown report on health and climate change was published in October 2017 by The Lancet and will be updated annually through to 2030.

It tracks progress on health and climate change across 40 indicators divided into five categories: climate change impacts, exposures and vulnerability; adaptation planning and resilience for health; mitigation actions and health co-benefits; economics and finance; and public and political engagement.

Dr Ying Zhang, a senior lecturer in the University of Sydney's School of Public Health, and Associate Professor Paul Beggs, from Macquarie University, wrote in the MJA that, from an Australian perspective, "with our high level of carbon emissions per capita, it will be important to reflect on our progress and how it compares with that of other countries, especially high-income countries".

"A group of Australian experts from multiple disciplines is commencing work on our first national countdown report," Zhang and Beggs wrote.

"The project recognises the importance of the climate change challenge in Australia, including its relevance to human health, and also the unique breadth and depth of the Australian expertise in climate change and human health.

"The Australian countdown will mirror the five domain sections of the Lancet Countdown, adopt the indicators used--where feasible and relevant to Australia--and include any useful additional indicators.

"The inaugural Australian report is planned for release in late 2018 and is expected to be updated annually. We hope to raise awareness of health issues related to climate change among Australian medical professionals, who play a key role in reducing their risks," the authors concluded.

"The Australian countdown is also envisioned as a timely endeavour that will accelerate the Australian government response to climate change and its recognition of the health benefits of urgent climate action."

Credit: 
University of Sydney

Updates on new therapies in development for rare liver diseases

14 April 2018, Paris, France: Promising results for three drugs for the treatment of three rare liver diseases were presented today at The International Liver Congress™ 2018 in Paris, France. Sebelipase alfa, approved for the treatment of lysosomal acid lipase (LAL) deficiency in 2015, showed sustained improvements and long-term tolerability in a diverse patient population. Preliminary findings with two investigational RNA interference (RNAi) therapeutics were also positive: givosiran substantially reduced the annualized attack rate in patients with acute intermittent porphyria (AIP), and ARO-AAT demonstrated positive preclinical safety and efficacy in alpha-1 antitrypsin (AAT) deficiency - pointing to the developing potential of this new therapeutic strategy in patients with few treatment options.

LAL deficiency, an underappreciated cause of cirrhosis and severe dyslipidaemia, is a rare autosomal recessive disorder characterized by accumulation of cholesteryl esters and triglycerides in the liver. The age at onset and rate of progression vary greatly. Sebelipase alfa is a recombinant human LAL enzyme indicated for the treatment of LAL deficiency; it was approved in 2015 following successful Phase 2/3 trials.

'It is exciting to see clinical benefits and good tolerability confirmed in this long-term follow-up across a diverse population of adult and paediatric patients with LAL deficiency', said Dr Florian Abel from Alexion Pharmaceuticals, Inc., New Haven, CT, USA. 'This population included patients who would have been ineligible to participate in previous clinical studies because of their age or prior transplant status'.

Data were presented today for 31 patients who were enrolled in a multicentre, open-label study of sebelipase alfa 1 mg/kg by intravenous (IV) infusion every other week for up to 96 weeks. Permitted dose escalation/reduction was from a maximum of 3 mg/kg weekly to a minimum of 0.35 mg/kg every other week.

There were marked reductions from baseline in alanine aminotransferase (ALT; -44.4%) and aspartate aminotransferase (AST; -38.4%). There were also reductions from baseline in liver volume (-17.6%), liver fat content (-14.9%), and spleen volume (-16.5%). In the 7/13 patients with data available, liver fibrosis improved or did not progress. Most adverse events were mild to moderate in severity; three patients experienced infusion-associated reactions. Two patients were positive for anti-drug antibodies, on one occasion each, but neither developed neutralizing antibodies.

'We were pleased to see that long-term treatment with sebelipase alfa was well tolerated and that improvements in markers of liver injury were sustained', said Dr Abel.

AIP is the most common of the acute hepatic porphyrias (AHPs), a family of rare, inherited metabolic diseases resulting in deficiencies in the liver enzymes responsible for haem biosynthesis. Central to the pathophysiology of all AHPs is the induction of aminolevulinic acid synthase 1 (ALAS1), which can lead to accumulation of the neurotoxic haem intermediates aminolevulinic acid (ALA) and porphobilinogen (PBG), which are causal for potentially life-threatening disease manifestations. RNAi is a naturally occurring cellular mechanism mediated by small interfering RNA (siRNA) that allows for the inhibition of protein synthesis through the cleavage and degradation of a specific mRNA. Givosiran is an investigational, subcutaneously administered RNAi therapeutic targeting liver ALAS1 to reduce ALA and PBG accumulation in patients with AHPs.

A Phase 1, multinational, randomized, placebo-controlled study of givosiran has been conducted in three parts - Part A (single ascending dose), Part B (multiple ascending dose), and Part C (multiple dose; four cohorts of four to five patients each) - to evaluate the safety, tolerability, pharmacokinetics, and pharmacodynamics of givosiran in patients with AIP (ClinicalTrials.gov Identifier: NCT02452372). The study has now been completed, and givosiran was generally well tolerated, with no serious adverse events or clinically significant laboratory abnormalities related to the study drug.

Monthly dosing of givosiran led to rapid, dose-dependent, and durable silencing of induced ALAS1 mRNA of approximately 60%, with concomitant lowering of ALA and PBG by >80% in patients with recurrent attacks. Patients treated with a monthly dose of 2.5 mg/kg of givosiran had an 83% mean decrease in the annualized attack rate (requiring hospitalization, urgent care, or haemin) compared with placebo, and an 88% decrease in the number of haemin doses. Patients completing the Phase 1 study were eligible to enrol in an open-label extension study (NCT02949830). As of February 2018, the safety profile in patients in the open-label extension (n=16) was consistent with that observed in Part C. Patients that had received givosiran in Part C (n=12) had further reductions in annualized attack rate of 93%, relative to the 3-month run-in period.

'Givosiran has the potential to significantly lower liver ALAS1 levels in a sustained manner and to thereby decrease the accumulation of neurotoxic intermediates that potentially lead to severe or life-threatening neurovisceral attacks. We're very encouraged by our results, as treatment was associated with marked reductions in both annualized attack rate and haemin use', said Dr Eliane Sardh from the Karolinska University Hospital, Stockholm, Sweden. 'These results suggest that givosiran, which is currently being studied in a Phase 3 trial (NCT03338816), has the potential to become a transformative treatment option for patients with hepatic porphyrias, a debilitating and potentially life-threatening disease'.

AAT deficiency is an autosomal, co-dominant genetic disorder in which the PiZ mutation results in a misfolded protein (Z-AAT) that accumulates in hepatocytes and can lead to fibrosis, cirrhosis and hepatocellular carcinoma. The only current treatment option for AAT deficiency-related liver disease is liver transplant. ARO-AAT is a second-generation, subcutaneously administered RNAi therapeutic that replaces ARC-AAT, a first-generation intravenously administered RNAi therapeutic that previously demonstrated proof of concept in the PiZ mouse model expressing human Z-AAT and achieved deep knockdown in healthy volunteers and patients.

'ARO-AAT is a liver-targeted RNAi therapeutic that durably reduced Z-AAT liver mRNA and serum protein in PiZ mice. The degree of mRNA reduction correlated with the amount of siRNA in the liver', said Dr Christine Wooddell of Arrowhead Pharmaceuticals, Madison, WI, USA. 'We have also assessed the pharmacokinetics and biodistribution of ARO-AAT in rats, efficacy in PiZ mice, and pharmacological activity in non-human primates'.

In the studies presented today, ARO-AAT in rats demonstrated high tissue distribution, with the highest exposure in the liver through Day 16, peaking at 4 hours. Repeat dosing (4 mg/kg once every 2 weeks, four times) of young PiZ mice reduced Z-AAT liver mRNA by 95%, plasma Z-AAT by 96%, monomeric liver Z-AAT by 98%, and polymeric Z-AAT by 41%. ARO-AAT prevented the increases in Z-AAT polymer globules observed in untreated controls of the same age (a 2.6-fold increase in number, an 8-fold increase in affected liver area, and a 3.3-fold increase in globule size). Non-human primates given two doses of 3 mg/kg, 4 weeks apart, had a mean reduction in serum AAT of 89-91% that was sustained for more than 7 weeks after the second dose. These results are supportive of monthly or less frequent dosing for ARO-AAT.

'We believe that the results from these studies strongly support advancement of ARO-AAT into the clinic', said Dr Wooddell. 'A Phase 1 single- and multiple-ascending dose study to evaluate the safety, tolerability, pharmacokinetics, and effect of ARO-AAT on serum alpha-1 antitrypsin levels in healthy adult volunteers started administering doses to subjects on 12 March 2018'.

'Rare diseases are a greater challenge than you might expect, as apart from the difficulties in reaching a full diagnosis, there are often no effective treatments available', said Prof. Marco Marzioni from the University Hospital of Ancona, Italy, and EASL Governing Board Member. 'For instance, the study investigating a treatment for LAL deficiency is important, as this is a disease that we only recently learned to identify'.

Credit: 
European Association for the Study of the Liver

Artificial intelligence accelerates discovery of metallic glass

image: With a new artificial intelligence approach, scientists discovered metallic glass 200 times faster than with an Edisonian approach.

Image: 
SLAC National Accelerator Laboratory

EVANSTON, Ill. -- If you combine two or three metals, you will get an alloy that usually looks and acts like a metal, with its atoms arranged in rigid geometric patterns.

But once in a while, under just the right conditions, you get something entirely new: a futuristic alloy called metallic glass. The amorphous material's atoms are arranged every which way, much like the atoms of the glass in a window. Its glassy nature makes it stronger and lighter than today's best steel, and it stands up better to corrosion and wear.

Although metallic glass shows a lot of promise as a protective coating and alternative to steel, only a few thousand of the millions of possible combinations of ingredients have been evaluated over the past 50 years, and only a handful developed to the point that they may become useful.

Now a group led by scientists at the Department of Energy's SLAC National Accelerator Laboratory, the National Institute of Standards and Technology (NIST) and Northwestern University has reported a shortcut for discovering and improving metallic glass -- and, by extension, other elusive materials -- at a fraction of the time and cost.

The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning -- a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data -- with experiments that quickly make and screen hundreds of sample materials at a time. This allowed the team to discover three new blends of ingredients that form metallic glass, and to do it 200 times faster than it could be done before.

The study was published today, April 13, in Science Advances.

"It typically takes a decade or two to get a material from discovery to commercial use," said Chris Wolverton, the Jerome B. Cohen Professor of Materials Science and Engineering in Northwestern's McCormick School of Engineering, who is an early pioneer in using computation and AI to predict new materials. "This is a big step in trying to squeeze that time down. You could start out with nothing more than a list of properties you want in a material and, using AI, quickly narrow the huge field of potential materials to a few good candidates."

The ultimate goal, said Wolverton, who led the paper's machine learning work, is to get to the point where a scientist can scan hundreds of sample materials, get almost immediate feedback from machine learning models and have another set of samples ready to test the next day -- or even within the hour.

Over the past half century, scientists have investigated about 6,000 combinations of ingredients that form metallic glass, added paper co-author Apurva Mehta, a staff scientist at SSRL: "We were able to make and screen 20,000 in a single year."

Just getting started

While other groups have used machine learning to come up with predictions about where different kinds of metallic glass can be found, Mehta said, "The unique thing we have done is to rapidly verify our predictions with experimental measurements and then repeatedly cycle the results back into the next round of machine learning and experiments."

There's plenty of room to make the process even speedier, he added, and eventually automate it to take people out of the loop altogether so scientists can concentrate on other aspects of their work that require human intuition and creativity. "This will have an impact not just on synchrotron users, but on the whole materials science and chemistry community," Mehta said.

The team said the method will be useful in all kinds of experiments, especially in searches for materials like metallic glass and catalysts whose performance is strongly influenced by the way they're manufactured, and those where scientists don't have theories to guide their search. With machine learning, no previous understanding is needed. The algorithms make connections and draw conclusions on their own, which can steer research in unexpected directions.

"One of the more exciting aspects of this is that we can make predictions so quickly and turn experiments around so rapidly that we can afford to investigate materials that don't follow our normal rules of thumb about whether a material will form a glass or not," said paper co-author Jason Hattrick-Simpers, a materials research engineer at NIST. "AI is going to shift the landscape of how materials science is done, and this is the first step."

Experimenting with data

In the metallic glass study, the research team investigated thousands of alloys that each contain three cheap, nontoxic metals.

They started with a trove of materials data dating back more than 50 years, including the results of 6,000 experiments that searched for metallic glass. The team combed through the data with advanced machine learning algorithms developed by Wolverton and Logan Ward, a graduate student in Wolverton's laboratory who served as co-first author of the paper.

Based on what the algorithms learned in this first round, the scientists crafted two sets of sample alloys using two different methods, allowing them to test how manufacturing methods affect whether an alloy morphs into a glass. An SSRL x-ray beam scanned both sets of alloys, then researchers fed the results into a database to generate new machine learning results, which were used to prepare new samples that underwent another round of scanning and machine learning.

By the experiment's third and final round, Mehta said, the group's success rate for finding metallic glass had increased from one out of 300 or 400 samples tested to one out of two or three samples tested. The metallic glass samples they identified represented three different combinations of ingredients, two of which had never been used to make metallic glass before.
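The predict-make-measure loop described in the last two paragraphs can be pictured as a simple active-learning cycle. The sketch below is schematic only: the composition features and the "synthesize and measure" step are invented stand-ins, and a generic random-forest classifier is used in place of the team's actual models and pipeline.

# Schematic active-learning loop mirroring the predict -> make/measure -> retrain
# cycle described above. Everything here is illustrative: features, labels and
# the "measure" step are stand-ins, not the study's actual pipeline or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def measure_glass_forming(compositions):
    """Placeholder for making the alloys and scanning them with X-rays.
    Here: a made-up rule just so the loop runs end to end."""
    return (compositions.std(axis=1) > 0.18).astype(int)

# Start from a "historical" dataset of ternary compositions (fractions sum to 1).
X = rng.dirichlet(np.ones(3), size=600)
y = measure_glass_forming(X)

model = RandomForestClassifier(n_estimators=200, random_state=0)

for round_idx in range(3):
    model.fit(X, y)

    # Propose a large pool of candidate compositions and rank them by the
    # model's predicted probability of forming a glass.
    candidates = rng.dirichlet(np.ones(3), size=5000)
    glass_prob = model.predict_proba(candidates)[:, 1]
    best = candidates[np.argsort(glass_prob)[-50:]]  # top 50 candidates

    # "Synthesize and measure" the selected candidates, then fold the new
    # results back into the training set for the next round.
    new_labels = measure_glass_forming(best)
    X = np.vstack([X, best])
    y = np.concatenate([y, new_labels])
    print(f"round {round_idx + 1}: hit rate among selected = {new_labels.mean():.2f}")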

Credit: 
Northwestern University

Individual impurity atoms detectable in graphene

image: Using the atomic force microscope's carbon monoxide functionalized tip (red/silver), researchers can measure the forces between the tip and the various atoms in the graphene ribbon.

Image: 
University of Basel, Department of Physics

A team including physicists from the University of Basel has succeeded in using atomic force microscopy to clearly obtain images of individual impurity atoms in graphene ribbons. Thanks to the forces measured in the graphene's two-dimensional carbon lattice, they were able to identify boron and nitrogen for the first time, as the researchers report in the journal Science Advances.

Graphene is made of a two-dimensional layer of carbon atoms arranged in a hexagonal lattice. The strong bonds between the carbon atoms make graphene extremely stable yet flexible. It is also an excellent electrical conductor through which electricity can flow with almost no loss.

Graphene's distinctive properties can be further expanded by incorporating impurity atoms in a process known as "doping". The impurity atoms cause local changes in conductivity that, for example, allow graphene to be used as a tiny transistor and enable the construction of circuits.

Targeted incorporation

In a collaboration between scientists from the University of Basel and the National Institute for Material Science in Tsukuba in Japan, Kanazawa University and Kwansei Gakuin University in Japan, and Aalto University in Finland, the researchers specifically created and examined graphene ribbons containing impurity atoms.

They replaced particular carbon atoms in the hexagonal lattice with boron and nitrogen atoms using surface chemistry, by placing suitable organic precursor compounds on a gold surface. When heated to temperatures of up to 400°C, tiny graphene ribbons formed on the gold surface from the precursors, with impurity atoms incorporated at specific sites.

Measuring the strength of the atoms

Scientists from the team led by Professor Ernst Meyer from the Swiss Nanoscience Institute and the University of Basel's Department of Physics examined these graphene ribbons using atomic force microscopy (AFM). They used a carbon monoxide functionalized tip and measured the tiny forces that act between the tip and the individual atoms.

This method allows even the smallest differences in forces to be detected. By looking at the different forces, the researchers were able to map and identify the different atoms. "The forces measured for nitrogen atoms are greater than for a carbon atom," explains Dr. Shigeki Kawai, lead author of the study and former postdoc in Meyer's team. "We measured the smallest forces for the boron atoms." The different forces can be explained by the different proportion of repulsive forces, which is due to the different atomic radii.

Computer simulations confirmed the readings, proving that AFM technology is well-suited to conducting chemical analyses of impurity atoms in the promising two-dimensional carbon compounds.

Credit: 
University of Basel

Scientists use machine learning to speed discovery of metallic glass

image: Fang Ren, who developed algorithms to analyze data on the fly while a postdoctoral scholar at SLAC, at a Stanford Synchrotron Radiation Lightsource beamline where the system has been put to use.

Image: 
Dawn Harmer/SLAC National Accelerator Laboratory

Blend two or three metals together and you get an alloy that usually looks and acts like a metal, with its atoms arranged in rigid geometric patterns.

But once in a while, under just the right conditions, you get something entirely new: a futuristic alloy called metallic glass that's amorphous, with its atoms arranged every which way, much like the atoms of the glass in a window. Its glassy nature makes it stronger and lighter than today's best steel, plus it stands up better to corrosion and wear.

Even though metallic glass shows a lot of promise as a protective coating and alternative to steel, only a few thousand of the millions of possible combinations of ingredients have been evaluated over the past 50 years, and only a handful developed to the point that they may become useful.

Now a group led by scientists at the Department of Energy's SLAC National Accelerator Laboratory, the National Institute of Standards and Technology (NIST) and Northwestern University has reported a shortcut for discovering and improving metallic glass -- and, by extension, other elusive materials -- at a fraction of the time and cost.

The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning -- a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data -- with experiments that quickly make and screen hundreds of sample materials at a time. This allowed the team to discover three new blends of ingredients that form metallic glass, and to do this 200 times faster than it could be done before, they reported today in Science Advances.

"It typically takes a decade or two to get a material from discovery to commercial use," said Northwestern Professor Chris Wolverton, an early pioneer in using computation and AI to predict new materials and a co-author of the paper. "This is a big step in trying to squeeze that time down. You could start out with nothing more than a list of properties you want in a material and, using AI, quickly narrow the huge field of potential materials to a few good candidates."

The ultimate goal, he said, is to get to the point where a scientist could scan hundreds of sample materials, get almost immediate feedback from machine learning models and have another set of samples ready to test the next day -- or even within the hour.

Over the past half century, scientists have investigated about 6,000 combinations of ingredients that form metallic glass, added paper co-author Apurva Mehta, a staff scientist at SSRL: "We were able to make and screen 20,000 in a single year."

Just Getting Started

While other groups have used machine learning to come up with predictions about where different kinds of metallic glass can be found, Mehta said, "The unique thing we have done is to rapidly verify our predictions with experimental measurements and then repeatedly cycle the results back into the next round of machine learning and experiments."

There's plenty of room to make the process even speedier, he added, and eventually automate it to take people out of the loop altogether so scientists can concentrate on other aspects of their work that require human intuition and creativity. "This will have an impact not just on synchrotron users, but on the whole materials science and chemistry community," Mehta said.

The team said the method will be useful in all kinds of experiments, especially in searches for materials like metallic glass and catalysts whose performance is strongly influenced by the way they're manufactured, and those where scientists don't have theories to guide their search. With machine learning, no previous understanding is needed. The algorithms make connections and draw conclusions on their own, and this can steer research in unexpected directions.

"One of the more exciting aspects of this is that we can make predictions so quickly and turn experiments around so rapidly that we can afford to investigate materials that don't follow our normal rules of thumb about whether a material will form a glass or not," said paper co-author Jason Hattrick-Simpers, a materials research engineer at NIST. "AI is going to shift the landscape of how materials science is done, and this is the first step."

Strength in Numbers

The paper is the first scientific result associated with a DOE-funded pilot project where SLAC is working with a Silicon Valley AI company, Citrine Informatics, to transform the way new materials are discovered and make the tools for doing that available to scientists everywhere.

Founded by former graduate students from Stanford and Northwestern universities, Citrine has created a materials science data platform where data that had been locked away in published papers, spreadsheets and lab notebooks is stored in a consistent format so it can be analyzed with AI specifically designed for materials.

"We want to take materials and chemical data and use them effectively to design new materials and optimize manufacturing," said Greg Mulholland, founder and CEO of the company. "This is the power of artificial intelligence: As scientists generate more data, it learns alongside them, bringing hidden trends to the surface and allowing scientists to identify high-performance materials much faster and more effectively than relying on traditional, purely human-driven materials development."

Until recently, thinking up, making and assessing new materials was painfully slow. For instance, the authors of the metallic glass paper calculated that even if you could cook up and examine five potential types of metallic glass a day, every day of the year, it would take more than a thousand years to plow through every possible combination of metals. When they do discover a metallic glass, researchers struggle to overcome problems that hold these materials back. Some have toxic or expensive ingredients, and all of them share glass's brittle, shatter-prone nature.

Over the past decade, scientists at SSRL and elsewhere have developed ways to automate experiments so they can create and study more novel materials in less time. Today, some SSRL users can get a preliminary analysis of their data almost as soon as it comes out with AI software developed by SSRL in conjunction with Citrine and the CAMERA project at DOE's Lawrence Berkeley National Laboratory.

"With these automated systems we can analyze more than 2,000 samples per day," said Fang Ren, the paper's lead author, who developed algorithms to analyze data on the fly and coordinated their integration into the system while a postdoctoral scholar at SLAC.

Experimenting with Data

In the metallic glass study, the research team investigated thousands of alloys that each contain three cheap, nontoxic metals.

They started with a trove of materials data dating back more than 50 years, including the results of 6,000 experiments that searched for metallic glass. The team combed through the data with advanced machine learning algorithms developed by Wolverton and graduate student Logan Ward at Northwestern.

Based on what the algorithms learned in this first round, the scientists crafted two sets of sample alloys using two different methods, allowing them to test how manufacturing methods affect whether an alloy morphs into a glass.

Both sets of alloys were scanned by an SSRL X-ray beam, the data fed into the Citrine database, and new machine learning results generated, which were used to prepare new samples that underwent another round of scanning and machine learning.

By the experiment's third and final round, Mehta said, the group's success rate for finding metallic glass had increased from one out of 300 or 400 samples tested to one out of two or three samples tested. The metallic glass samples they identified represented three different combinations of ingredients, two of which had never been used to make metallic glass before.

Credit: 
DOE/SLAC National Accelerator Laboratory

Keeping an eye on the health of structures

image: Lake Urmia (LU), its water level over time, and the survey area of the four satellites are shown, along with a topographic map of Iran and a sketch of the Lake Urmia Causeway.

Image: 
Scientific Reports

Scientists at Tokyo Institute of Technology (Tokyo Tech) used synthetic-aperture radar data from four different satellites, combined with statistical methods, to determine the structural deformation patterns of the largest bridge in Iran.

The importance of roads and bridges for humans, in both ancient and contemporary times, is clearly evident. The structural health and integrity of such large structures are, however, not nearly as evident, mainly because structures tend to deteriorate over a long time. Determining the amount of deformation a structure has undergone (and how much it will undergo in the future) is crucial for ensuring the safety of the people in or near that structure and for minimizing repair costs and potential damage.

One large structure that has raised concerns over the last decade is the Lake Urmia Causeway (LUC), a series of roads and a bridge that cross Lake Urmia, located in northwest Iran (see Figure 1). Iran, an arid to semi-arid country, has serious problems with land subsidence due to excessive underground water extraction. Recognizing the problem facing the LUC, Dr. Sadra Karimzadeh joined a team of scientists at Tokyo Tech, led by Professor Masashi Matsuoka, to analyze the deformation patterns that the LUC underwent from 2004 to 2017, using datasets obtained from four satellites equipped with synthetic-aperture radars (see Figure 2). As expected, these datasets required sophisticated mathematical and statistical analyses before the deformation rates (related to the natural settlement of the east embankment and the artificial uplift at the beginning of the west embankment) could be more accurately determined.

Using the small baseline subset (SBAS) technique on the satellites' data, the uncertainty in the obtained vertical displacement rates of the LUC was reduced. The research team also performed a field survey of the lake in 2017 to observe the physical conditions of the LUC and to investigate the most likely causes of the accelerated deformation affecting the structure.

In order to verify their assumptions about the causes of the accelerated deformation, the team conducted a principal component analysis (PCA) on the data and then used a hydro-thermal model to compare results. PCA is a technique that takes multi-dimensional data as input and reduces them to (usually) two or three dimensions, referred to as the "principal components" or "PCs", which can then be used to reveal new and valuable comparative information. Only three principal components accounted for almost all the variability in the data: the first (and most significant) revealed an overall downward trend in the structure caused by soil consolidation, while the second and third were associated with both seasonal changes and human activity affecting the lake (see Figure 3). The team also predicted how much deformation can be expected over the following 365 days.
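A minimal sketch of the PCA step, assuming a simulated displacement matrix (pixels x acquisition dates) containing a subsidence trend and a seasonal signal; it is meant only to illustrate how principal components separate such patterns, not to reproduce the team's SBAS/InSAR processing chain.

# Minimal sketch of PCA applied to a displacement time series (pixels x dates),
# in the spirit of the analysis described above. The data are simulated; the
# team's actual SBAS/InSAR processing chain is far more involved.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_pixels, n_dates = 400, 120
t = np.arange(n_dates)

# Simulated vertical displacement: a slow subsidence trend plus a seasonal
# signal, with pixel-dependent amplitudes and measurement noise (in mm).
trend = -0.05 * t                              # gradual settlement
seasonal = 2.0 * np.sin(2 * np.pi * t / 12.0)  # yearly cycle (monthly sampling)
amplitudes = rng.uniform(0.5, 1.5, size=(n_pixels, 2))
displacement = (amplitudes[:, [0]] * trend
                + amplitudes[:, [1]] * seasonal
                + rng.normal(0, 0.5, size=(n_pixels, n_dates)))

pca = PCA(n_components=3)
scores = pca.fit_transform(displacement)   # per-pixel weights of each component
print("variance explained by first three PCs:",
      np.round(pca.explained_variance_ratio_, 3))
# pca.components_ holds the temporal patterns (e.g. trend-like, seasonal-like).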

Dr. Sadra Karimzadeh said, "Space-based monitoring of critical structures is quite useful in developing countries. It must be continuously utilized at affordable costs."

With this study, the research team demonstrated how PCA can be effectively employed to accommodate data from different datasets over multiple timescales. The combination of these techniques shows how data from current and previous satellite missions can serve as an efficient mechanism for determining the current and future health of structures, so that preventive actions can be taken to minimize potential damage and reduce costs.

Credit: 
Tokyo Institute of Technology

Lavas in the lab could lead miners to new iron ore deposits

image: The samples were placed in small golden capsules -- with a melting point of 1,064°C -- and subjected to temperatures of 1,000-1,040°C and 1,000 times the atmospheric pressure of Earth.

Image: 
© Lennart A. Fischer

Geologists have discovered that some magmas split into two separate liquids, one of which is very rich in iron. Their findings can help to discover new iron ore deposits for mining.

Iron ore is mined in about 50 countries, with Australia, Brazil and China as the largest producers. It is mostly used to produce the steel objects that are all around us - from paper clips to kitchen appliances and the supporting beams in skyscrapers.

Most iron ore deposits are found in sedimentary rocks. Others are mined in volcanic complexes such as El Laco in Chile and Kiruna in Sweden. These iron ore deposits, called Kiruna-type deposits, account for about 10% of the global production of iron, yet nobody knows how they are formed.

In Nature Communications, an international team of researchers from institutions including KU Leuven, Leibniz University Hannover, and ULiège present the first evidence that these iron ore deposits are formed when magma splits into two separate liquids.

"Previous studies have always focused on the texture or the composition of natural rocks. We were the first to actually reproduce magmas in the lab such as the ones found in El Laco," says last author Olivier Namur from the Department of Earth and Environmental Sciences at KU Leuven, Belgium.

"We wanted to reproduce the conditions found in magma chambers, where molten rock accumulates when it cannot rise to the surface of the Earth. This is also where the iron ore deposits beneath volcanoes are formed, so reproducing the temperature and pressure of the magma chambers seemed well worth examining."

"That's why we produced a mixture of iron-rich ore samples and typical lavas surrounding Kiruna-type deposits. This created a bulk magma composition that we believe exists in the deep magma chamber beneath volcanoes. We placed the mixture in a furnace and raised the temperature to 1,000-1,040°C. We also increased the pressure to about 1000 times the atmospheric pressure of Earth. These are the conditions of a magma chamber."

"We were surprised to find that, under these conditions, the magma split into two separate liquids. This process is known as immiscibility. Just think of what happens when oil spills into the ocean: the water becomes streaked with oil because oil and water cannot mix."

"One of these liquids contained a lot of silica, whereas the other was extremely rich in iron - up to 40% - and phosphorus. When this iron-rich liquid starts to cool down, you get iron-phosphorous Kiruna-type ore deposits."

"This is the first evidence that immiscibility is key to the formation of iron ore deposits such as the ones mined in El Laco. If we're right, these findings may help to find new iron ore deposits. This is necessary to keep up with the global demand for iron: recycling alone is not enough yet. And if you want to know where to look for iron ore, you have the understand how the deposits are formed."

Credit: 
KU Leuven

New insight into how Giant's Causeway and Devils Postpile were formed

image: This is the Giant's Causeway.

Image: 
University of Liverpool

A new study by geoscientists at the University of Liverpool has identified the temperature at which cooling magma cracks to form geometric columns such as those found at the Giant's Causeway in Northern Ireland and Devils Postpile in the USA.

Geometric columns occur in many types of volcanic rocks and form as the rock cools and contracts, resulting in a regular array of polygonal prisms or columns.

Columnar joints are amongst the most amazing geological features on Earth and in many areas, including the Giant's Causeway, they have inspired mythologies and legends.

One of the most enduring and intriguing questions facing geologists is the temperature at which cooling magma forms these columnar joints.

Liverpool geoscientists undertook a research study to find out how hot the rocks were when they cracked open to form these spectacular stepping stones.

In a paper published in Nature Communications, researchers and students at the University's School of Environmental Sciences designed a new type of experiment to show how as magma cools, it contracts and accumulates stress, until it cracks. The study was performed on basaltic columns from Eyjafjallajökull volcano, Iceland.

They designed a novel apparatus to permit cooling lava, gripped in a press, to contract and crack to form a column. These new experiments demonstrated that the rocks fracture when they cool about 90 to 140°C below the temperature at which magma crystallises into a rock, which is about 980°C for basalts.

This means that columnar joints exposed in basaltic rocks, as observed at the Giant's Causeway and Devils Postpile (USA) amongst others, formed at around 840-890°C.
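
The quoted range follows directly from the numbers above; a trivial check:

    # Fracture temperatures implied by the experiments: basalt crystallises at
    # roughly 980 °C and cracks after cooling a further 90-140 °C below that.
    T_crystallisation = 980                       # °C, approximate for basalt
    undercooling_min, undercooling_max = 90, 140  # °C of cooling before fracture

    print(f"fracture window: {T_crystallisation - undercooling_max}-"
          f"{T_crystallisation - undercooling_min} °C")    # 840-890 °C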

Yan Lavallée, Liverpool Professor of Volcanology who headed the research, said: "The temperature at which magma cools to form these columnar joints is a question that has fascinated the world of geology for a very long time. We wanted to know whether the lava was hot, warm or cold when the fractures formed.

"I have spent over a decade pondering how to address this question and construct the right experiment to find the answer to this question. Now, with this study, we have found that the answer is hot, but after it solidified."

Dr Anthony Lamur, whose doctoral research included this work, added: "These experiments were technically very challenging, but they clearly demonstrate the power and significance of thermal contraction in the evolution of cooling rocks and the development of fractures."

Dr Jackie Kendrick, a post-doctoral researcher in the Liverpool group, said: "Knowing the point at which cooling magma fractures is critical because, beyond carving out this stunning geometrical feature, it initiates fluid circulation in the fracture network. Fluid flow controls heat transfer in volcanic systems, which can be harnessed for geothermal energy production. So the findings have tremendous applications for both volcanology and geothermal research."

Understanding how cooling magma and rocks contract and fracture is central to understanding the stability of volcanic edifices, as well as how heat is transferred in the Earth.

Professor Lavallée added: "The findings shed light on the enigmatic observations of coolant loss made by Icelandic engineers as they drilled into hot volcanic rocks in excess of 800°C; the loss of coolant in this environment was not anticipated, but our study suggests that substantial contraction of such hot rocks would have opened wide fractures that drained away the cooling slurry from the borehole.

"Now that we know this, we can revisit our drilling strategy and further our quest for the new development of magma energy sources."

Credit: 
University of Liverpool

Immunotherapy provides long-term survival benefit: Further evidence in lung cancer

image: This table is part of the abstract presented in the press release.

Image: 
© ELCC 2018

Lugano-Geneva, 12 April 2018 - Further evidence that immunotherapy provides long-term survival benefit for patients with lung cancer was presented today at ELCC 2018 (European Lung Cancer Congress) in Geneva, Switzerland. (1)

Researchers presented the three-year survival results of the randomised phase 2 POPLAR trial in the second-line setting, (2) which is the longest follow-up reported to date with anti-programmed death ligand 1 (PD-L1) immunotherapy in patients with previously treated, advanced non-small-cell lung cancer (NSCLC). The trial randomised 287 patients with advanced NSCLC from 61 sites across 13 countries to the anti-PD-L1 antibody atezolizumab or docetaxel (chemotherapy).

Overall survival was significantly higher with atezolizumab at two and three years compared with docetaxel. Nearly one-third of patients (32.2%) in the atezolizumab treatment group were alive at two years compared with 16.6% in the docetaxel group. Additionally, at three years, almost twice as many patients (18.7%) were alive in the atezolizumab group compared to the docetaxel group (10.0%). The long-term overall survival benefit with atezolizumab over docetaxel was observed across histology (squamous and non-squamous) and regardless of PD-L1 expression. Even patients with PD-L1 expression in less than 1% of tumour cells and less than 1% of immune cells had a promising rate of long-term survival.

The median duration of response was three times longer with atezolizumab (22.3 months) compared to docetaxel (7.2 months). Atezolizumab led to fewer adverse events than docetaxel.

Lead author Dr Julien Mazières of Toulouse University Hospital, Toulouse, France, said: "Nearly one in five patients treated with atezolizumab was alive at three years. This places atezolizumab among the drugs with the highest landmark overall survival in previously treated lung cancer patients."

"The fact that all subgroups of patients benefitted to a similar degree is good in the sense that atezolizumab can be tried in all advanced NSCLC patients," he continued. "On the other hand, it means that we cannot predict which patients are most likely to live for three years. We need to find biomarkers to help us identify the long-term survivors with the drug."

Mazières said the drug was well tolerated which meant that patients can keep taking it for several years. He said: "Some of my patients who were in the atezolizumab treatment group are now long-term survivors with lung cancer. They are not cured, but they have survived, have a good quality of life, and have returned to work. With immunotherapy we now have a new type of patient: long-term survivors with lung cancer who can go back to a normal life."

Commenting on the study, Prof. Solange Peters, Head of Medical Oncology, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland, ESMO President-Elect said: "Before immunotherapy, the long-term survival of non-small-cell lung cancer patients was close to 0%. POPLAR supports the concept that long-term survival is possible with immunotherapy. The three-year survival results of POPLAR are consistent with the three- and five-year survival with the anti-PD-1 antibodies pembrolizumab and nivolumab, respectively, in phase 1 trials. These latest results are exciting because unlike the previous two trials, POPLAR was a large, randomised trial and provides convincing proof that long-term survival now exists in lung cancer."

Peters said there was now a strong argument that every patient with advanced NSCLC should receive immunotherapy. She said: "In the nivolumab phase 1 trial, 15% of patients were alive at five years, which in cancer is usually considered to be a cure. We should offer all patients this one in six chance of five-year survival. However, this poses a financial challenge for healthcare systems."

To make this strategy sustainable, Peters said a method was needed to identify the patients who will not benefit from immunotherapy. She said: "That would enable us to treat only the patients with a high chance of long-term survival with immunotherapy. POPLAR shows that PD-L1 is not a useful biomarker to exclude patients from immunotherapy, since some patients with very low expression had an overall survival benefit. Rather than a single biomarker, I think it will be a signature of many biomarkers including tumour mutation burden that identifies the patients who should not be treated."

Peters said trials are needed to assess the ability of a combination of biomarkers to predict which patients with advanced NSCLC do, and do not, survive long-term with immunotherapy: "These trials should be conducted in patients with similar characteristics to the long-term survivors in the phase 1 and 2 trials with atezolizumab, pembrolizumab, and nivolumab. So the first step will be to describe these patients in terms of demographics, smoking history, tumour mutation burden, expression of immune genes, and PD-L1 expression. Focusing future studies on these patients will help us to discover a biomarker signature for use in clinical practice."

Credit: 
European Society for Medical Oncology

Wiggling atoms switch the electric polarization of crystals

image: This is a crystal lattice of ferroelectric ammonium sulfate [(NH4)2SO4] with tilted ammonium (NH4+) tetrahedra (nitrogen: blue, hydrogen: white) and sulfate (SO42-) tetrahedra (sulfur: yellow, oxygen: red). The green arrow shows the direction of the macroscopic polarization P. Blue arrows: local dipoles between sulfur and oxygen atoms. The electron density maps shown in the bottom left panel, in Fig. 2, and in the movie are taken in the plane shown in grey. Bottom left: stationary electron density of the sulfur and oxygen atoms, displaying high values on the sulfur (red) and smaller values on the oxygen atoms (yellow). Bottom right: change of the local dipoles at a delay time of 2.8 picoseconds (ps) after excitation of the ammonium sulfate crystallites. An anisotropic shift of charge reduces the dipole pointing to the right and increases the other three dipoles.

Image: 
MBI Berlin

Ferroelectric crystals display a macroscopic electric polarization, a superposition of many atomic-scale dipoles that originate from spatially separated electrons and atomic nuclei. The macroscopic polarization is expected to change when the atoms are set in motion, but the connection between polarization and atomic motions has remained unknown. A time-resolved x-ray experiment now shows that tiny atomic vibrations shift negative charges between atoms over distances a thousand times larger than the vibrations themselves, switching the macroscopic polarization on a time scale of a millionth of a millionth of a second.

Ferroelectric materials have received strong interest for applications in electronic sensors, memories, and switching devices. In this context, fast and controlled changes of their electric properties are essential for implementing specific functions efficiently. This calls for understanding the connection between atomic structure and macroscopic electric properties, including the physical mechanisms governing the fastest possible dynamics of macroscopic electric polarizations.

Researchers from the Max Born Institute in Berlin have now demonstrated how atomic vibrations modulate the macroscopic electric polarization of the prototype ferroelectric ammonium sulfate [Fig. 1] on a time scale of a few picoseconds (1 picosecond (ps) = 1 millionth of a millionth of a second). In the current issue of the journal Structural Dynamics [5, 024501 (2018)], they report an ultrafast x-ray experiment which allows for mapping the motion of charges over distances on the order of the diameter of an atom (10^-10 m = 100 picometers) in a quantitative way. In the measurements, an ultrashort excitation pulse sets the atoms of the material, a powder of small crystallites, into vibration. A time-delayed hard x-ray pulse is diffracted from the excited sample and measures the momentary atomic arrangement in the form of an x-ray powder diffraction pattern. The sequence of such snapshots represents a movie of the so-called electron-density map, from which the spatial distribution of electrons and the atomic vibrations are derived for each instant in time ([Fig. 2], [Movie on MBI website, https://www.mbi-berlin.de/en/current/index.html#2018_04_12 ]).

The electron density maps show that electrons move between atoms over distances of about 10^-10 m, more than a thousand times larger than the atomic displacements during the vibrations [Fig. 3]. This behavior is due to the complex interplay of local electric fields with the polarizable electron clouds around the atoms, and it determines the momentary electric dipole at the atomic scale. Applying a novel theoretical concept, the time-dependent charge distribution in the atomic world is linked to the macroscopic electric polarization [Fig. 3]. The latter is strongly modulated by the tiny atomic vibrations and fully reverses its sign in time with the atomic motions. The modulation frequency of 300 GHz is set by the frequency of the atomic vibrations and corresponds to a full reversal of the polarization within 1.5 ps, much faster than any existing ferroelectric switching device. At the surface of a crystallite, the maximum electric polarization generates an electric field of approximately 700 million volts per meter.
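
As a quick consistency check (illustrative only, not from the paper), the 300 GHz modulation frequency and the roughly 1.5 ps reversal time fit together if the polarization changes sign about twice per vibrational cycle:

    # A 300 GHz modulation has a period of ~3.3 ps; a sign reversal of the
    # polarization every ~1.5-1.7 ps is therefore roughly half a period,
    # i.e. the polarization flips twice per oscillation cycle.
    f_mod = 300e9                      # modulation frequency, Hz
    period_ps = 1e12 / f_mod           # ~3.33 ps
    print(f"period: {period_ps:.2f} ps, half-period: {period_ps / 2:.2f} ps")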

The results establish time-resolved ultrafast x-ray diffraction as a method for linking atomic-scale charge dynamics to macroscopic electric properties. This novel strategy allows for testing quantum-mechanical calculations of electric properties and for characterizing a large class of polar and/or ionic materials in view of their potential for high-speed electronics.

Credit: 
Forschungsverbund Berlin