Tech

Clinical trial of selpercatinib shows strong response for patients with non-small cell lung cancer

Singapore, 28 September 2020 - An international team of researchers has found that selpercatinib, a drug that precisely targets cancers driven by mutations or alterations in the gene RET, was effective at shrinking tumours in patients with non-small cell lung cancer (NSCLC), with a majority of patients living for more than a year without disease progression. Activity was also observed in thyroid cancer, and the findings were recently published in two back-to-back papers in the high-impact New England Journal of Medicine.

Non-small-cell lung cancer accounts for more than 80 percent of all lung cancers. Cases of lung cancer in people who have never smoked are usually non-small-cell lung cancer. The disease affects more women than men. Standard treatment for non-small-cell lung cancer is a combination of surgery, chemotherapy and radiotherapy, with no targeted therapy option. Patients with advanced non-small-cell lung cancer have a poor prognosis, with a median overall survival of 12 months (Toh et al., Annals of the Academy of Medicine, Singapore, 2017).

Selpercatinib was effective both in patients with no prior treatment with anti-cancer drugs and in those who had disease progression after treatment with other drug therapy. Results of the Phase 1-2 trial formed the basis of approval of selpercatinib in May 2020 by the US FDA for a) adults with metastatic RET-driven non-small cell lung cancer, b) adults and children 12 and older, with advanced or metastatic RET-mutated medullary thyroid cancer who require systemic therapy, and c) patients 12 and older with advanced or metastatic RET-fusion positive thyroid cancer resistant to radioactive iodine who require systemic therapy. Selpercatinib is the first approved drug of its kind that specifically targets cancers driven by mutations or alterations in the gene RET.

Patients with RET-associated cancers are typically treated with drugs that target RET and multiple other enzymes commonly found in many different types of cancer. However, the multi-kinase inhibitors currently approved for treatment have side effects that limit their use in patients with RET-driven cancers. The most common side effects with selpercatinib were high blood pressure, increased liver enzyme levels, decrease in sodium levels and low white blood cell count, all of which were manageable. Only 12 out of 531 patients on the trial had to stop because of side effects.

Clinical Associate Professor Daniel Tan, Senior Consultant, Medical Oncology, Deputy Head of the Division of Clinical Trials and Epidemiological Sciences, National Cancer Centre Singapore and co-first author of the study said, "The trial showed the targeted therapy to have good efficacy, strong, sustained response rates and fewer side effects. It has also demonstrated the importance of molecular profiling, and National Cancer Centre Singapore has implemented routine testing of the gene RET for all lung cancer patients, to enable this group of patients to benefit from the targeted treatment."

In the trial, 64 percent of previously treated patients achieved an objective response and 63 percent continued to respond after one year. Among patients who had not previously received treatment, 85 percent achieved an objective response, with a median duration of response of 17.5 months.

Professor William Hwang, Medical Director, National Cancer Centre Singapore said, "This targeted therapy will provide patients with non-small-cell lung cancer with significantly improved health outcomes. The National Cancer Centre of Singapore is pleased to have participated in this trial to find precise oncology treatment options for patients. This underscores the important role the Experimental Cancer Therapeutic Unit plays in running impactful early phase clinical trials that can define new standards of care."

The trial involved 65 leading cancer centres from 12 countries around the world, including the USA, Canada, France, Switzerland, Germany, Spain, Australia and Singapore. The trial was supported by Loxo Oncology, a wholly owned subsidiary of Eli Lilly and Company.

In Singapore, the trial was led by the Experimental Cancer Therapeutics Unit (ECRU) at the National Cancer Centre Singapore, which supports early phase clinical trials from Phase 0 to Phase 2. This inter-disciplinary team is engaged in the preclinical and clinical development of new anti-cancer drugs for all solid tumours and lymphomas to offer patients new therapeutic options, and has spearheaded the Precision Oncology programme IMPACT (Individualised Molecular Profiling for Allocation to Clinical Trials) at NCCS, where more than 1,500 patients have been recruited to date.

Credit: 
SingHealth

Recording thousands of nerve cell impulses at high resolution

image: Close-up of the new dual-mode chip. The measurement area in the centre of the image (green) is 2 x 4 millimetres.

Image: 
ETH Zurich / Xinyue Yuan

For over 15 years, ETH Professor Andreas Hierlemann and his group have been developing microelectrode-array chips that can be used to precisely excite nerve cells in cell cultures and to measure electrical cell activity. These developments make it possible to grow nerve cells in cell-culture dishes and use chips located at the bottom of the dish to examine each individual cell in a connected nerve tissue in detail. Alternative methods for conducting such measurements have clear limitations. They are either very time-consuming - because contact with each cell has to be established individually - or they require the use of fluorescent dyes, which influence the behaviour of the cells and hence the outcome of the experiments.

Now, researchers from Hierlemann's group at the Department of Biosystems Science and Engineering of ETH Zurich in Basel, together with Urs Frey and his colleagues from the ETH spin-off MaxWell Biosystems, have developed a new generation of microelectrode-array chips. These chips enable detailed recordings from considerably more electrodes than previous systems, which opens up new applications.

Stronger signal required

As with previous chip generations, the new chips have around 20,000 microelectrodes in an area measuring 2 by 4 millimetres. To ensure that these electrodes pick up the relatively weak nerve impulses, the signals need to be amplified. Examples of weak signals that the scientists want to detect include those of nerve cells derived from human induced pluripotent stem cells (iPS cells), which are currently used in many cell-culture disease models. Another reason to significantly amplify the signals is if the researchers want to track nerve impulses in axons, the fine, thread-like extensions of a nerve cell.

However, high-performance amplification electronics take up space, which is why the previous chip was able to simultaneously amplify and read out signals from only 1,000 of the 20,000 electrodes. Although the 1,000 electrodes could be arbitrarily selected, they had to be determined prior to every measurement. This meant that it was possible to make detailed recordings over only a fraction of the chip area during a measurement.

Background noise reduced

In the new chip, the amplifiers are smaller, permitting the signals of all 20,000 electrodes to be amplified and measured at the same time. However, the smaller amplifiers have higher noise levels. So, to make sure they capture even the weakest nerve impulses, the researchers incorporated some of the larger, more powerful amplifiers into the new chips and employed a nifty trick: they use these powerful amplifiers to identify the time points at which nerve impulses occur in the cell-culture dish. At these time points, they can then search for signals on the other electrodes, and by averaging several successive signals, they can reduce the background noise. This procedure yields a clear image of the signal activity over the entire area being measured.
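
A rough sketch of that averaging idea, assuming the recordings are available as a NumPy array of electrode traces and that spike times have already been detected on the high-power channels (the array layout, variable names and window length below are illustrative assumptions, not details from the paper):

```python
import numpy as np

def spike_triggered_average(traces, spike_samples, window=30):
    """Average short snippets of every electrode trace around detected spike times.

    traces        : array of shape (n_electrodes, n_samples), raw voltage recordings
    spike_samples : sample indices at which the powerful amplifiers detected spikes
    window        : number of samples to keep on each side of a spike

    Averaging N aligned snippets suppresses uncorrelated background noise by roughly
    a factor of sqrt(N), which is the trick described above for the smaller,
    noisier amplifiers.
    """
    snippets = [
        traces[:, t - window:t + window]
        for t in spike_samples
        if t - window >= 0 and t + window < traces.shape[1]
    ]
    return np.mean(snippets, axis=0)  # shape: (n_electrodes, 2 * window)
```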

In initial experiments, which the researchers published in the journal Nature Communications, they demonstrated their method on human iPS-derived neuronal cells as well as on brain sections, retina pieces, cardiac cells and neuronal spheroids.

Application in drug development

With the new chip, the scientists can produce electrical images of not only the cells but also the extension of their axons, and they can determine how fast a nerve impulse is transmitted to the farthest reaches of the axons. "The previous generations of microelectrode array chips let us measure up to 50 nerve cells. With the new chip, we can perform detailed measurements of more than 1,000 cells in a culture all at once," Hierlemann says.

Such comprehensive measurements are suitable for testing the effects of drugs, meaning that scientists can now conduct research and experiments with human cell cultures instead of relying on lab animals. The technology thus also helps to reduce the number of animal experiments.

The ETH spin-off MaxWell Biosystems is already marketing the existing microelectrode technology, which is now in use around the world by over a hundred research groups at universities and in industry. At present, the company is looking into a potential commercialisation of the new chip.

Credit: 
ETH Zurich

Discovery of large family of two-dimensional ferroelectric metals

image: Schematic structures of 2D bimetal phosphates (MIMIIP2X6, MI and MII atoms are different metal elements, X=O, S, Se, Te). (a) Top view. (b) Side view of paraelectric phase. (c) Side view of ferroelectric phases with opposite polarization (P). (d)(e) Energy versus polarization of two ferroelectric metals (AuHfP2O6 and AuZrP2O6), showing the possible ferroelectric-paraelectric transition.

Image: 
©Science China Press

It is usually believed that ferroelectricity can appear in insulating or semiconducting materials rather than in metals, because the conducting electrons of metals screen out the internal static electric field arising from the dipole moments. In 1965, Anderson and Blount proposed the concept of a 'ferroelectric metal', pointing out that electric polarization may appear in certain martensitic transitions due to inversion symmetry breaking [Anderson et al., Phys Rev Lett 1965, 14, 217-219]. However, after more than half a century of exploration, only a very few ferroelectric metals have been reported. In 2018, two- to three-layer WTe2 was reported to have switchable spontaneous out-of-plane polarization, which might be the first experimental observation of the coexistence of ferroelectricity and metallicity in a two-dimensional (2D) material [Fei et al., Nature 560, 336 (2018)].

Recently, Gang Su and coworkers systematically investigated a large family (2,964 compounds) of 2D bimetal phosphates using data-driven machine learning with novel electronic-orbital-based descriptors and high-throughput first-principles calculations. They discovered 60 stable 2D ferroelectric materials with out-of-plane polarization, comprising 16 novel ferroelectric metals and 44 ferroelectric semiconductors. Among the latter are seven multiferroics, in which two or three types of ferroic orderings (ferromagnetic, antiferromagnetic, ferroelectric and ferroelastic) coexist, and seven ferroelectrics suitable as water-splitting photocatalysts. These multiferroic materials may have potential applications in magnetoelectric, magnetostrictive, or mechanic-electric nanodevices.

By performing a detailed charge analysis, they found that the conducting electrons move mainly on the upper surface of these 2D ferroelectric metals, whereas the electric polarization perpendicular to that surface arises from the lower surface. The polarization originates from spontaneous inversion symmetry breaking induced by opposite displacements of the two metal atoms, thereby giving rise to the coexistence of metallicity and ferroelectricity. The full-d-orbital coinage metal elements produce larger displacements and polarization than the other elements. The ferroelectric-paraelectric phase transition and polarization reversal in these 2D ferroelectric metals were also studied: the energy-versus-polarization profiles exhibit a common double-well shape and clear bistability, in which the two minima correspond to ferroelectric phases of opposite polarization and the maximum corresponds to the paraelectric phase. In addition, van der Waals heterostructures based on these ferroelectric metals were shown to allow tuning of the Schottky barrier height or the Schottky-Ohmic contact type, indicating that they may be used to manipulate the vertical transport properties of devices.
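
The double-well, bistable profile described above is the shape expected from a generic Landau-type expansion of the energy in the polarization; as a sketch (a standard textbook form, not the authors' fitted expression):

E(P) = \alpha P^{2} + \beta P^{4}, \qquad \alpha < 0,\ \beta > 0,

which has two degenerate minima at P = \pm\sqrt{-\alpha/(2\beta)} (the two ferroelectric states of opposite polarization) and a local maximum at P = 0 (the paraelectric phase), separated by an energy barrier \Delta E = \alpha^{2}/(4\beta).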

The present work not only greatly expands the family of 2D ferroelectric metals, which should spur further theoretical and experimental exploration of such materials, but also presents an efficient approach, combining data-driven machine learning and high-throughput first-principles calculations, to accelerate the discovery of new advanced functional materials.

Credit: 
Science China Press

Scientists explored optimal shapes of thermal energy storages

image: The picture illustrates one of the sources of renewable energy -- wind energy

Image: 
Karsten Würth, Unsplash

Scientists from Far Eastern Federal University (FEFU) and the Institute of Automation and Control Processes of the Far Eastern Branch of the Russian Academy of Sciences (IACP FEB RAS) have studied the correlation between the shape of Thermal Energy Storages (TES) used in traditional and renewable energy sectors and their efficiency. Using the obtained data, design engineers might be able to improve TES for specific needs. A related article was published in Renewable Energy.

The scientists studied a correlation between the shape and efficiency of TES based on granular phase change materials. When heated, such materials change their phase from the solid to the liquid state, thus preserving the heat energy. When they solidify again, energy output takes place. Devices based on this principle are used in advanced energy systems.

Using a computational model developed previously, the team determined how narrowing or expanding cylinder-shaped TES affects their charging (energy input), energy storage, and discharging (energy output) under various preference criteria.

"To study the charging and discharging of TES with different shapes, we used six efficiency criteria. In some cases, a heat accumulator that stores more energy is the most preferable. In other cases, a unit with the fastest charge time is the most efficient. It is the same for discharge: some need a device with the biggest energy output, and some would prefer one with maximal time of keeping the outlet temperature not lower than a given value," said Nickolay Lutsenko, a co-author of the work, a Professor at the Engineering Department of the Polytechnic Institute (School), FEFU, and a Laboratory Head at IACP FEB RAS.

According to the scientists' research, TES with straight walls are often the most preferable. However, the optimal shape of a unit can depend on the efficiency criteria and the details of the process, such as boundary conditions, phase transition temperature, and so on. In some scenarios, narrowing or expanding TES can be more beneficial than straight-walled ones.
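
As a toy illustration of why different criteria can select different designs, suppose simulated charging and discharging summaries for a few candidate geometries have been collected (the field names and numbers below are hypothetical, not values from the paper):

```python
# Hypothetical simulation summaries for three TES geometries.
candidates = {
    "straight":  {"stored_energy_MJ": 95, "charge_time_h": 6.0, "time_above_outlet_temp_h": 8.0},
    "narrowing": {"stored_energy_MJ": 90, "charge_time_h": 5.2, "time_above_outlet_temp_h": 7.1},
    "expanding": {"stored_energy_MJ": 99, "charge_time_h": 7.4, "time_above_outlet_temp_h": 8.6},
}

# Each preference criterion picks its own optimum.
best_capacity  = max(candidates, key=lambda n: candidates[n]["stored_energy_MJ"])
best_charging  = min(candidates, key=lambda n: candidates[n]["charge_time_h"])
best_discharge = max(candidates, key=lambda n: candidates[n]["time_above_outlet_temp_h"])

print(best_capacity, best_charging, best_discharge)  # different criteria, different winners
```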

Thermal energy storages can also be parts of other types of energy accumulators, such as adiabatic compressed air energy storages, which are used to store cheap energy coming from traditional power plants at night or from solar batteries and wind turbines in favorable weather conditions. Energy output from these storage units takes place at times of peak energy consumption, such as mornings or evenings.

"These units store the energy of compressed gas that is pumped by compressors into huge containers capable of keeping it for a long time. When there is a shortage of energy, compressed gas is transmitted to turbines that move power plant generators. However, traditional compressed air energy storages have a disadvantage: when gas is compressed and pumped, its temperature increases, but the heat is lost. And when gas is released from containers, its temperature decreases, and it needs to be warmed up again before being transmitted to a turbine. To do so, power plants must consume fuel. Adiabatic compressed air energy storages, which are being developed today, pass the compressed gas through a TES after pumping, so that it reaches the container only after releasing all its heat. And when the gas must be transmitted to a turbine, it passes through the same TES again, where it absorbs energy and warms up. The performance factor of such units is much higher, and they are also more environmentally friendly, as no fuel needs to be burned and no atmospheric emissions take place," added Nickolay Lutsenko.

Credit: 
Far Eastern Federal University

New extreme ultraviolet facility opens for use

image: A schematic to show how the two beam sources are generated.

Image: 
© 2020 Springer Nature

Researchers have established a novel high-frequency laser facility at the University of Tokyo. The coherent extreme ultraviolet light source can reveal details of biological or physical samples with unprecedented clarity. It also allows for investigation of time-dependent phenomena such as ultrafast chemical reactions. Existing facilities for such investigations necessarily require enormous particle accelerators and are prohibitive to many researchers. This new facility should greatly improve access for a broad range of researchers.

You are probably familiar with ultraviolet (UV) light and X-rays. UV light from the sun helps your body produce vitamin D and makes solar panels generate power, and X-rays can be used to image the inside of your body to find broken bones or other ailments. But beyond these aspects, UV light and X-rays are also essential tools for the investigation of the physical world. Researchers use these forms of light to reveal details of biological, chemical and physical samples such as their makeup, structure and behavior.

Two kinds of light which are especially useful for state-of-the-art investigations into fast-acting phenomena, such as certain chemical reactions or biological processes, are coherent extreme ultraviolet (XUV) and soft X-ray pulses. These are both very precise forms of light with finely controlled parameters, akin to laser pulses, crucial for performing rigorous experiments. However, there are some drawbacks to how these beams are made.

"Facilities to produce coherent XUV and soft X-rays are huge machines based on particle accelerators -- like smaller versions of the Large Hadron Collider in Europe," said Professor Katsumi Midorikawa from the UTokyo Institute for Photon Science and Technology and RIKEN Center for Advanced Photonics. "Given the rarity of these facilities and the expense of running experiments there, it presents a barrier to many who might wish to use them. This is what prompted myself and colleagues at UTokyo and RIKEN to create a new kind of facility that we hope will be far more accessible for a greater number of researchers to use."

The new XUV source facility is much, much smaller than any that has come before it. It is housed inside a relatively modest underground lab at the University of Tokyo. The bulk of the machine is a 5-by-2-meter vacuum container housing a 100-meter-long ring, or resonator, in which high-power laser light circulates and is stored. At two locations on this ring are pockets of special rare gases that alter the characteristics of the passing laser. This produces the two separate beams of XUV and soft X-rays, which are cast onto samples undergoing investigation. Light reflected off the samples is then read by high-speed imaging sensors.

"What is really novel about our approach is that the XUV and soft X-ray pulses are extremely short but occur at very high frequencies, in the region of megahertz, or millions of cycles per second," said Midorikawa. "For perspective, established XUV facilities that use synchrotron radiation pulses also in the megahertz region have longer bursts which are less suitable for resolving dynamic phenomena. And those that use so-called X-ray-free electron laser sources have short pulses, but offer low frequencies of around 10 hertz to 100 hertz. So our facility offers the best of both worlds, with the added benefit of being only a fraction of the size and with far lower operating costs."

This new XUV source offers ultrashort pulses, useful for probing fast phenomena, and high frequencies, useful for investigating the structure and chemical properties of matter. This is possible thanks to the process that creates the pulses as the laser interacts with the gas, called high-order harmonic generation; the same process also makes the facility the first of its kind capable of producing multiple XUV and soft X-ray beams.
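
A back-of-envelope check on the megahertz repetition rates quoted above, assuming the pulse rate is set by the round-trip time of the 100-meter storage ring (this is our assumption for illustration, not a stated specification of the facility):

f_{\mathrm{rep}} \approx \frac{c}{L} = \frac{3\times 10^{8}\ \mathrm{m/s}}{100\ \mathrm{m}} = 3\ \mathrm{MHz},

that is, one pulse roughly every 333 nanoseconds, comfortably in the megahertz regime.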

"I have been working in the field of XUV generation and application for 30 years. Although high-order harmonic generation brought a breakthrough in this field, the generation efficiency and pulse repetition rate were still insufficient for many applications," said Midorikawa. "When I proposed the idea of this facility to my colleagues, they were instantly interested and we were able to acquire a suitable budget to complete it. We all hope this will open the door to new research from materials scientists, chemists and biologists who can finally access this amazing and powerful investigative tool."

Credit: 
University of Tokyo

AI technology can predict vanadium flow battery performance and cost

image: Cost, performance prediction and optimization of a vanadium flow battery using machine learning

Image: 
LI Tianyu

Vanadium flow batteries (VFBs) are promising for stationary large-scale energy storage due to their high safety, long cycle life, and high efficiency.

The cost of a VFB system mainly depends on the VFB stack, electrolyte, and control system. Developing a VFB stack from lab to industrial scale can take years of experiments due to complex factors, from key materials to battery architecture.

Novel methods to accurately predict the performance and cost of a VFB stack, and of the overall system, are needed in order to accelerate the commercialization of VFBs.

Recently, a research team led by Prof. LI Xianfeng from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences proposed a machine learning-based strategy to predict and optimize the performance and cost of VFBs.

"We use AI technology to improve efficiency, reduce research time, and provide important guidance for the research and development of VFBs," said Prof. LI. "It may accelerate the commercialization of VFBs."

This work was published in Energy & Environmental Science on Sept. 22.

The proposed strategy takes operating current density as the main feature, and the material and structure of the stack as auxiliary features.

This machine learning model can predict the voltage efficiency, energy efficiency, and electrolyte utilization ratio of the VFB stack, as well as the power and energy cost of the VFB system with high accuracy.

In addition, a future R&D direction for the VFB stack was proposed based on model coefficients of machine learning, i.e., developing high-power density VFB stacks under conditions of higher voltage efficiency and higher electrolyte utilization ratio.
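
A minimal sketch of the kind of model described, assuming tabulated stack-test data with operating current density as the main feature (the column names, file name and the choice of a plain linear model are our assumptions for illustration, not details from the paper; a linear form is suggested only by the mention of model coefficients):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical table of stack experiments: operating conditions plus measured efficiencies.
df = pd.read_csv("vfb_stack_tests.csv")  # placeholder file name

X = df[["current_density", "electrode_area", "flow_rate"]]  # main + auxiliary features (assumed names)
y = df["voltage_efficiency"]                                 # one of several possible targets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out stacks:", model.score(X_test, y_test))
print("Coefficients (hint at R&D priorities):", dict(zip(X.columns, model.coef_)))
```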

This work not only has great significance for the R&D of VFB stacks, but also highlights the prospects for combining machine learning and experiments for optimizing and predicting the dynamic behavior of complex systems.

Credit: 
Chinese Academy of Sciences Headquarters

COVID-19 may deplete testosterone, helping to explain male patients' poorer prognosis

For the first time, data from a study with patients hospitalized due to COVID-19 suggest that the disease might deteriorate men's testosterone levels.

Publishing their results in the peer-reviewed journal The Aging Male, experts from the University of Mersin and the Mersin City Education and Research Hospital in Turkey found that as men's baseline testosterone level decreases, the probability of their being admitted to the intensive care unit (ICU) significantly increases.

Lead author Selahittin Çayan, Professor of Urology, states that while it has already been reported that low testosterone levels could be a cause for poor prognosis following a positive SARS-CoV-2 test, this is the first study to show that COVID-19 itself depletes testosterone.

It is hoped that the finding could help to explain why so many studies have found that men with COVID-19 have a worse prognosis than women, and could point to possible improvements in clinical outcomes using testosterone-based treatments.

"Testosterone is associated with the immune system of respiratory organs, and low levels of testosterone might increase the risk of respiratory infections. Low testosterone is also associated with infection-related hospitalisation and all-cause mortality in male ICU patients, so testosterone treatment may also have benefits beyond improving outcomes for COVID-19," Professor Çayan explains.

"In our study, the mean total testosterone decreased as the severity of COVID-19 increased. The mean total testosterone level was significantly lower in the ICU group than in the asymptomatic group. In addition, the mean total testosterone level was significantly lower in the ICU group than in the Intermediate Care Unit group. The mean serum follicle stimulating hormone level was significantly higher in the ICU group than in the asymptomatic group.

"We found hypogonadism - a condition in which the body doesn't produce enough testosterone - in 113 (51.1%) of the male patients.

"The patients who died had significantly lower mean total testosterone than the patients who were alive.

"However, even among the 46 male patients who were asymptomatic, 65.2% had a loss of libido."

The research team looked at a total of 438 patients. This included 232 males, each with laboratory-confirmed SARS-CoV-2. All data were prospectively collected. A detailed clinical history, complete physical examination, and laboratory and radiological imaging studies were performed for every patient. All patient data were checked and reviewed by two physicians.

The cohort study was divided into three groups: asymptomatic patients (n: 46), symptomatic patients who were hospitalized in the internal medicine unit (IMU) (n: 129), and patients who were hospitalized in the intensive care unit (ICU) (n: 46).

In the patients who had had pre-COVID-19 serum gonadal hormone tests (n = 24), serum total testosterone significantly decreased from a pre-COVID-19 level of 458 ± 198 ng/dl to 315 ± 120 ng/dl at the time of COVID-19 (p = 0.003).

Death was observed in 11 of the male adult patients (4.97%) and 7 of the female patients (3.55%), with no significant difference between the sexes (p > 0.05).

Commenting on the results of the study, Professor Çayan added: "It could be recommended that at the time of COVID-19 diagnosis, testosterone levels are also tested. In men with low levels of sex hormones who test positive for COVID-19, testosterone treatment could improve their prognosis. More research is needed on this."

A limitation of this study is that it did not include a control group of patients with conditions other than COVID-19; this was due to the restrictions placed on the hospital in which the patients were being monitored.

The authors state that future studies should look at concentrations of ACE2 (angiotensin-converting enzyme 2) - an enzyme attached to the membranes of cells located in the intestines - in relation to total testosterone levels.

Credit: 
Taylor & Francis Group

A new study may revise a theory of flowing viscous liquids that was accepted for 60 years

video: Temporal evolution of multiple droplet formation during VF experiment

Image: 
Yuichiro Nagatsu/TUAT, Takahiko Ban/ Osaka University, Manoranjan Mishra/IIT Ropar

An international collaborative team from the Tokyo University of Agriculture and Technology (TUAT) in Japan, the Indian Institute of Technology Ropar (IIT Ropar) in India, and Osaka University in Japan has discovered for the first time a topological change in viscous fingering (a classical problem in interfacial hydrodynamics) that is driven by "partial miscibility," in which the two liquids mix only partly, with finite solubility. The topological change originates from a phase separation and the spontaneous motion driven by it. It is a phenomenon that cannot be seen in fully miscible systems, with infinite solubility, or in immiscible systems, with no solubility.

The researchers published their results in the Journal of Fluid Mechanics on June 30, 2020.

When a less viscous fluid displaces a more viscous fluid in porous media, the interface between the two fluids becomes hydrodynamically unstable and deforms into finger-like shapes. This phenomenon is known as "viscous fingering" (VF). Since the 1950s, VF has been studied as a classical problem in fluid dynamics, and it is now widely known that its properties can be classified according to whether the two fluids are fully miscible or immiscible. Viscous fingering dynamics helps in understanding fluid displacement in porous media in reaction and separation steps of chemical processes, as well as in enhanced oil recovery and CO2 sequestration.
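
For background, the classical linear-stability result for this instability in an immiscible Hele-Shaw cell of gap b (a textbook expression given here for context; the fully and partially miscible cases studied in this work behave differently) says that a small interface perturbation of wavenumber k grows at a rate

\sigma(k) = \frac{U k\,(\mu_2 - \mu_1) - \dfrac{\gamma b^{2}}{12}\,k^{3}}{\mu_1 + \mu_2},

where fluid 1 (viscosity \mu_1) displaces fluid 2 (viscosity \mu_2) at speed U and \gamma is the interfacial tension. The interface is unstable whenever \mu_2 > \mu_1, which is exactly the "less viscous displaces more viscous" condition described above.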

"It has long been pointed out that viscous fingering in partially miscible fluids occurs in underground processes with high-pressure conditions, such as oil recovery and CO2 storage. However, such viscous fingering has been theoretically studied in the last few years," said Dr. Nagatsu, one of the corresponding authors on the paper and Associate Professor in the Department of Chemical Engineering at Tokyo University of Agriculture and Technology (TUAT). "Experimental studies of such VF have not been done at all. One of the reasons is that fluid mechanics researchers did not use experimental conditions that were partially miscible at room temperature and atmospheric pressure."

The research team succeeded in changing the miscibility of the system to fully miscible, immiscible, or partially miscible, with little change in the viscosities, at room temperature and atmospheric pressure. They used an aqueous two-phase system consisting of polyethylene glycol (PEG), sodium sulfate (Na2SO4), and water (see Figure), which was described in the same team's paper published in 2019. In the partially miscible system, a pure PEG solution and a pure Na2SO4 solution dissolve in each other with finite solubility and, as a result, the mixture separates into a PEG-rich phase (phase L) and a Na2SO4-rich phase (phase H) (see Figure).

They carried out experiments using this solution system, in which a less viscous liquid displaces a more viscous one in a Hele-Shaw cell (see Figure), a model that mimics flow in porous media. "Our team found that a topological change is observed in the case where the two liquids are partially miscible (see Figure and Movie). This is the first instance of a topological change in viscous fingering, although various changes in the pattern due to various physicochemical effects have so far been reported when the two fluids are fully miscible or immiscible. We clearly showed that this topological change originates from a phase separation occurring between the two fluids and the spontaneous motion driven by it," Nagatsu explains.

"Our result overturns the common understanding, held for more than 60 years of VF research since the 1950s, that the characteristics of VF are divided into immiscible and fully miscible cases, and it demonstrates the existence and importance of the partially miscible case, which becomes the third classification category. This will open a new cross-disciplinary research area involving hydrodynamics and chemical thermodynamics. Also, displacement with partial miscibility in a porous medium takes place in the process of oil recovery from a formation and in the injection of CO2 into a formation. Thus, our finding is expected to lead to new methods of controlling those processes by utilizing partial miscibility," adds Nagatsu.

Credit: 
Tokyo University of Agriculture and Technology

Shorebirds more likely to divorce after successful breeding

An international team of scientists studying shorebirds, led by the University of Bath, has found that successful plover parents are more likely to divorce after nesting than those that did not successfully breed, in contrast to most other bird species which tend to split up after nest failure.

The researchers studied the mating behaviour of eight different species of Charadrius plovers, covering 14 populations in different locations across the world.

These shorebirds tend to lay 2-4 eggs per nest and can have up to four breeding attempts per season.

Plover chicks mature quickly and fly the nest around a month after hatching; in most plover species both parents care for the hatchlings, but in some species either parent can desert the nest to breed again with a new mate.

Surprisingly, the researchers found that pairs that successfully raised chicks were more likely to divorce, whereas unsuccessful pairs tended to stick together and try breeding again.

Females were more likely to desert the nest than males, and those that did often produced more offspring within a season than parents that retained their mate.

Plovers that divorced also dispersed across greater distances between breeding attempts to look for new mates.

The findings, published in the journal Scientific Reports, suggest that a range of factors including the adult sex ratio, the length of breeding season and adult lifespan affect the fidelity and parenting behaviour of these birds, rather than simply being due to the species.

Naerhulan Halimubieke, PhD student at the Milner Centre for Evolution at the University of Bath and first author of the paper, said: "Our findings go against what you'd intuitively expect to happen - that divorce would be triggered by low reproductive success.

"Interestingly, we found that mate fidelity varied amongst different populations of the same species - for example, Kentish plovers in Europe and China are serial polygamists and are migratory, whereas those found on Cape Verde are exclusively monogamous.

"This shows that mating behaviour is not simply down to which species they belong, but that other factors affecting the population are also important, such as ratio of males to females and temperature variation of the habitat."

Tamás Székely, Professor of Biodiversity at the Milner Centre for Evolution, said: "Our previous work has shown that in populations where there are more females than males, the female tends to leave the nest after breeding to make another nest with a new mate.

"Since plover chicks don't need much work in bringing them up, one of the parents can free themselves from the nest early and go on to breed elsewhere.

"Females are more likely to leave their partners if the population is skewed towards males, because they have a greater choice of potential partners and so are more likely to increase their reproductive success by breeding with another mate.

"More research is needed to fully understand how factors such as the adult sex ratio and the climate of the populations affects the breeding behaviour of these birds."

Credit: 
University of Bath

Antiferromagnet lattice arrangements influence phase transitions

New research published in EPJ B reveals that the nature of the boundary at which an antiferromagnet transitions to a state of disorder slightly depends on the geometry of its lattice arrangement.

Calculations involving 'imaginary' magnetic fields show how the transitioning behaviours of antiferromagnets are subtly shaped by their lattice arrangements.

Antiferromagnets contain orderly lattices of atoms and molecules, whose magnetic moments are always pointed in exactly opposite directions to those of their neighbours.

These materials are driven to transition to other, more disorderly quantum states of matter, or 'phases,' by the quantum fluctuations of their atoms and molecules - but so far, the precise nature of this process hasn't been fully explored. Through new research published in EPJ B, Yoshihiro Nishiyama at Okayama University in Japan has found that the nature of the boundary at which this transition occurs depends on the geometry of an antiferromagnet's lattice arrangement.

Nishiyama's discovery could enable physicists to apply antiferromagnets in a wider variety of contexts within material and quantum physics. His calculations concerned the 'fidelity' of the materials, which refers in this case to the degree of overlap between the ground states of their interacting lattice components. Furthermore, the fidelity 'susceptibility' describes the degree to which this overlap is influenced by an applied magnetic field. Since susceptibility is driven by quantum fluctuations, it can be expressed within the language of statistical mechanics - describing how macroscopic observations can arise from the combined influences of many microscopic vibrations.

This makes it a useful probe of how antiferromagnet phase transitions are driven by quantum fluctuations.
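
In the standard convention (which may differ in detail from the one used in this paper), the fidelity and its susceptibility are defined through the overlap of ground states at nearby values of a control parameter \lambda:

F(\lambda, \lambda + \delta\lambda) = \left| \langle \psi_0(\lambda) \,|\, \psi_0(\lambda + \delta\lambda) \rangle \right|,
\qquad
\chi_F(\lambda) = \lim_{\delta\lambda \to 0} \frac{2\,[1 - F(\lambda, \lambda + \delta\lambda)]}{(\delta\lambda)^{2}},

so that \chi_F measures how sharply the ground state changes as the parameter - here, the applied field - is varied; it typically peaks or diverges near a phase transition.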

Using advanced mathematical techniques, Nishiyama calculated how the susceptibility is affected by 'imaginary' magnetic fields - which do not influence the physical world, but are crucial for describing the statistical mechanics of phase transitions. By applying this technique to an antiferromagnet arranged in a honeycomb lattice, he revealed that the transition between orderly, anti-aligned magnetic moments, and a state of disorder, occurs across a boundary with a different shape to that associated with the same transition in a square lattice. By clarifying how the geometric arrangement of lattice components has a subtle influence on this point of transition, Nishiyama's work could advance physicists' understanding of the statistical mechanics of antiferromagnets.

Credit: 
Springer

Avoiding environmental losses in quantum information systems

New research published in EPJ D has revealed how robust initial states can be prepared in quantum information systems, minimising any unwanted transitions which lead to losses in quantum information.

Through new techniques for generating 'exceptional points' in quantum information systems, researchers have minimised the transitions through which they lose information to their surrounding environments.

Recently, researchers have begun to exploit the effects of quantum mechanics to process information in some fascinating new ways. One of the main challenges faced by these efforts is that systems can easily lose their quantum information as they interact with particles in their surrounding environments. To understand this behaviour, researchers in the past have used advanced models to observe how systems can spontaneously evolve into different states over time - losing their quantum information in the process. Through new research published in EPJ D, M. Reboiro and colleagues at the University of La Plata in Argentina have discovered how robust initial states can be prepared in quantum information systems, avoiding any unwanted transitions for extensive periods of time.

The team's findings could provide valuable insights for the rapidly advancing field of quantum computing; potentially enabling more complex operations to be carried out using the cutting-edge devices. Their study considered a 'hybrid' quantum information system based around a specialised loop of superconducting metal, which interacted with an ensemble of imperfections within the atomic lattice of diamond. Within this system, the researchers aimed to generate sets of 'exceptional points.' When these are present, information states don't decay in the usual way: instead, any gains and losses of quantum information can be perfectly balanced between states.
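
A minimal textbook example of an exceptional point - not the specific superconducting-loop/diamond model used in the study - is a two-level system with balanced gain and loss \gamma and coupling g:

H = \begin{pmatrix} \epsilon + i\gamma & g \\ g & \epsilon - i\gamma \end{pmatrix},
\qquad
E_\pm = \epsilon \pm \sqrt{g^{2} - \gamma^{2}}.

For g > \gamma the eigenvalues are real, so gain and loss remain balanced between the two modes; at g = \gamma the eigenvalues and eigenvectors coalesce, which is the exceptional point; for g < \gamma the modes acquire net growth or decay.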

By accounting for quantum effects, Reboiro and colleagues modelled how the dynamics of ensembled imperfections were affected by their surrounding environments. From these results, they combined information states which displayed large transition probabilities over long time intervals - allowing them to generate exceptional points. Since this considerably increased the survival probability of a state, the team could finally prepare initial states which were robust against the effects of their environments. Their techniques could soon be used to build quantum information systems which retain their information for far longer than was previously possible.

Credit: 
Springer

Study finds spreading ghost forests on NC coast may contribute to climate change

A new study found the spread of ghost forests across a coastal region of North Carolina may have implications for global warming. Ghost forests are areas where rising seas have killed off freshwater-dependent trees, leaving dead or dying white snags standing in marsh.

The study found that the transition from forest to marsh along the coastline of the Albemarle-Pamlico Peninsula led to a significant loss in the amount of carbon stored in the plants and trees above ground. When released into the atmosphere as a gas, carbon can contribute to global warming. However, researchers also uncovered some ways landowners can offset some of those carbon losses.

"Many people think about sea-level rise as being more of a long-term threat," said the study's lead author Lindsey Smart, a research associate at the North Carolina State University Center for Geospatial Analytics. "But we're actually seeing significant changes over shorter time periods because of this interaction between gradual sea-level rise and extreme weather events like hurricanes or droughts, which can bring salt water further inland."

In the study, researchers tracked the spread of ghost forests across 2,485 square miles on the peninsula. They found that on unmanaged, or natural, land such as publicly owned wildlife areas, ghost forests spread across 15 percent of the area between 2001 and 2014.

Using models built from data on vegetation height and type in the area, they calculated that over the 13 years of the study, 130,000 metric tons of carbon were lost through the spread of ghost forests on unmanaged land.

According to the U.S. Environmental Protection Agency's Greenhouse Gas Equivalencies Calculator, that's equivalent to what would be emitted into the air by 102,900 passenger vehicles driven for one year.
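
A rough back-of-envelope reproduction of that equivalence, assuming the EPA's typical figure of about 4.6 metric tons of CO2 emitted per passenger vehicle per year (the calculator's exact inputs may differ):

```python
carbon_lost_t = 130_000                       # metric tons of carbon lost (from the study)
co2_per_carbon = 44.0 / 12.0                  # molecular-weight ratio converting C to CO2
co2_lost_t = carbon_lost_t * co2_per_carbon   # roughly 477,000 t of CO2

co2_per_vehicle_year = 4.6                    # assumed typical passenger-vehicle emission, t CO2/yr
vehicle_equivalents = co2_lost_t / co2_per_vehicle_year
print(round(vehicle_equivalents))             # on the order of 100,000, close to the quoted 102,900
```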

"Coastal forests are really unique in that they store carbon both in their foliage, and in their really rich organic soil," Smart said. "As saltwater intrusion increases, you're going to see impacts both to the aboveground and the belowground carbon. While we measured aboveground carbon losses, the next step will be to look at the response of these carbon stores below ground to saltwater exposure."

Researchers said it's unclear exactly what happens to the carbon released when the forests are killed by salt water, and it could be that it goes into the soil.

While they found ghost forests spread along the peninsula's coastline, they also saw that the losses at the coast were offset by tree plantings in forests grown for timber during the study.

That was one land-management practice that they found could offset carbon losses due to ghost forest spread. They also saw that the use of canals and drainage ditches had an impact on carbon losses. Canals could either aid saltwater encroachment on the land, or help prevent it, depending on their use.

"Drainage networks, if not maintained, can serve as conduits for saltwater intrusion," Smart said.

They also reported that catastrophic wildfires likely worsened the spread of ghost forest along the coast and had a greater impact on the loss of aboveground carbon compared with ghost forest spread.

"We think there may be an interaction between salt water and fire that accelerates forest retreat, and facilitates marsh migration into areas that were once healthy coastal forest," Smart said.

The researchers said drought also played a role in ghost forest spreads.

"Two severe droughts within the study period produced larger-than-typical wildfires and facilitated salinization of normally freshwater ecosystems," said study co-author Paul Taillie, a postdoctoral researcher at the University of Florida and former graduate student at N.C. State. "Thus the combination of rising sea level and future drought would be expected to cause a large net loss in biomass."

Overall, the study helps create a map of areas that are particularly vulnerable to sea-level rise, and offers clues to help prevent loss of coastal forests.

"Our paper helps to identify areas that are most vulnerable to the impact of sea-level rise and extreme weather events," Smart said. "This could help guide land-management decisions and help people think about ways to adapt to these changing environmental conditions."

Credit: 
North Carolina State University

Memory training for the immune system

image: Schematic representation of the function of BATF3. In the upper half you can see the physiological function and the consequences if this factor is missing (knockout). The lower half shows the consequences in case of an unnaturally increased expression with the resulting therapeutic applicability.

Image: 
Graphics: Dr. Marco Ataide

After the human body is infected with a pathogen, a cascade of reactions is usually set in motion. Among other things, specific cells of the immune system known as T cells are activated in the lymph nodes and subsequently divide and proliferate.

At the same time, these cells gain functions that enable them to destroy other cells, for example cells infected by a virus. In addition, they produce certain proteins - so-called cytokines - with which they can stop the pathogen from reproducing.

The immune system and its function are the main focus of the research of Professor Wolfgang Kastenmueller, who holds the Chair of Systems Immunology I at the Institute of Systems Immunology of the Julius-Maximilians-Universität Würzburg (JMU). Together with Professor Georg Gasteiger, holder of the Chair of Systems Immunology II, he leads the Max Planck Research Group for Systems Immunology.

Their research focus is the interaction of the immune system with the organism, especially the interaction of different cells of the immune system within local networks and with other cells of other organ systems.

Publication in Nature Immunology

Recently, Kastenmueller and his team deciphered new details of how the immune system functions that are important for its ability to remember recent infections. Their results have been published in the latest issue of the scientific journal Nature Immunology. The findings could help to improve immune therapies for tumor diseases.

"If a body has fought and eliminated a pathogen successfully, most of the recently proliferated T cells are no longer needed and will die", Wolfgang Kastenmueller explains. But about five to ten percent of these cells survive and develop into a long lasting "memory population", that will protect the body against future infections.

Improvement of the immunological memory

Kastenmueller describes the main result of the study: "In our recent work we identified a transcription factor, BATF3, that very specifically regulates the survival of these cells and therefore the transition into a memory response." The scientists were able to show that this factor is produced only shortly after the initial activation of T cells. The absence of this factor leads to a permanent malfunction of the memory response.

Until now the role of this factor for so-called CD8+ T cells was unclear. It was only after the scientists overexpressed this factor in CD8+ T cells that the importance became clear, as they could see that the survival of these cells and thus the immunological memory improved significantly.

This study was conducted in close collaboration with the Medical Clinic II of the University Hospital of Würzburg. It combines basic research with applied medicine and could help to develop better cancer therapies that use the patient's own immune system - so-called CAR-T cell therapy.

In CAR-T cell therapy, T cells are extracted from the patient's blood and subsequently genetically modified to carry chimeric antigen receptor (CAR) molecules. This modification enables the T cells to attack tumor cells that they previously could not detect biochemically. The modified T cells are then transferred back into the patient.

Currently CAR-T cells are successfully used for therapies for diseases such as B cell lymphoma, a malignant disease of the lymphatic system. Kastenmueller and his team together with Professor Michael Hudecek of the Medical Clinic II are now planning to modify CAR-T cells so as to improve the survival of the patients and thus the therapeutic efficiency.

Credit: 
University of Würzburg

Penn researchers uncover epigenetic drivers for Alzheimer's disease

PHILADELPHIA-- New findings suggest that late-onset Alzheimer's Disease is driven by epigenetic changes -- how and when certain genes are turned on and off -- in the brain. Results were published today in Nature Genetics.

Research led by Raffaella Nativio, PhD, a former research associate of Epigenetics, Shelley Berger, PhD, a professor of Genetics, Biology and Cell and Developmental Biology and Director of the Epigenetics Institute, and Nancy Bonini, PhD, a professor of Biology and Cell and Developmental Biology, all in the Perelman School of Medicine at the University of Pennsylvania, used post-mortem brain tissue to compare healthy younger and older brain cells to those with Alzheimer's Disease. The team found evidence that epigenetic regulators disable protective pathways and enable pro-disease pathways in those with the disease.

"The last five years have seen great efforts to develop therapeutics to treat Alzheimer's disease, but sadly, they have failed in the clinic to treat humans suffering from this horrible disease," Berger said. "We are trying a completely different approach to reveal the critical changes in brain cells, and our findings show epigenetic changes are driving disease."

Epigenetic changes alter gene expression not through DNA mutation but by marking the proteins that package and protect DNA, called histones. Berger added, "the activity of epigenetic regulators can be inhibited by drugs, and hence we are excited that this may be an Achilles' heel of Alzheimer's that can be attacked by new therapeutics."

In this study, the researchers integrated many large-scale, cutting-edge approaches of RNA, protein, and epigenomic analyses of postmortem human brains to interrogate the molecular pathways involved in Alzheimer's. They found upregulation of transcription- and chromatin-related genes, including central histone acetyltransferases responsible for marks that open up the chromatin (marks called acetylation of lysine 27 and lysine 9 on histone H3, or H3K27ac and H3K9ac). Proteomic screening also singled out these marks as enriched in Alzheimer's. The findings were tested functionally in a Drosophila fly model, showing that increasing these marks exacerbated Alzheimer's Disease-associated effects.

"Based on our findings, there is a reconfiguration of the epigenomic landscape -- that's the DNA genome plus associated proteins -- normally with age in the brain," Bonini said. "These changes fail to occur in Alzheimer's and instead other changes occur. What's remarkable is that the simple fruit fly Drosophila, in which we can express Alzheimer's associated proteins and confer an Alzheimer's effect, confirms that the specific types of changes to the epigenome we predict are associated with Alzheimer's do exaggerate the toxicity of Alzheimer's proteins."

These findings suggest that Alzheimer's Disease involves a reconfiguration of the epigenomic landscape, with the marks H3K27ac and H3K9ac affecting disease pathways by disrupting transcription- and chromatin-gene feedback loops. The identification of this process highlights potential strategies to modulate these marks for early-stage disease treatment.

This research built on a previous study published by the team in 2018, which likewise compared the epigenomic landscape of disease to that of younger and elderly cognitively normal control subjects. There, the team described the genome-wide enrichment of another acetylation mark, acetylation of lysine 16 on histone H4 (H4K16ac). H4K16ac is a key modification in human health because it regulates cellular responses to stress and to DNA damage. The team found that, while normal aging leads to H4K16ac appearing at new positions along the genome and increasing where it is already present, Alzheimer's, in great contrast, entails losses of H4K16ac in the proximity of genes linked to aging and disease.

"Overall we found in the previous study that certain acetylation marks protect the brain during normal aging, whereas, strikingly, in our new study, we found that other acetylation marks drive disease. The next step is to identify mechanisms underlying the protective and degradative pathways, which will lead to a more targeted approach for Alzheimer's Disease therapy," Nativio said.

Credit: 
University of Pennsylvania School of Medicine

The Arctic is burning in a whole new way

image: A wildfire burns in an Alaskan boreal forest.

Image: 
Merritt Turetsky

"Zombie fires" and burning of fire-resistant vegetation are new features driving Arctic fires--with strong consequences for the global climate--warn international fire scientists in a commentary published in Nature Geoscience.

The 2020 Arctic wildfire season began two months early and was unprecedented in scope.

"It's not just the amount of burned area that is alarming," said Dr. Merritt Turetsky, a coauthor of the study who is a fire and permafrost ecologist at the University of Colorado Boulder. "There are other trends we noticed in the satellite data that tell us how the Arctic fire regime is changing and what this spells for our climate future."

The scientists contend that the input and expertise of Indigenous and other local communities are essential to understanding and managing this global issue.

The commentary identifies two new features of recent Arctic fires. The first is the prevalence of holdover fires, also called zombie fires. Fire from a previous growing season can smolder in carbon-rich peat underground over the winter, then re-ignite on the surface as soon as the weather warms in spring.

"We know little about the consequences of holdover fires in the Arctic," noted Turetsky, "except that they represent momentum in the climate system and can mean that severe fires in one year set the stage for more burning the next summer."

The second feature is the new occurrence of fire in fire-resistant landscapes. As tundra in the far north becomes hotter and drier under the influence of a warmer climate, vegetation types not typically thought of as fuels are starting to catch fire: dwarf shrubs, sedges, grass, moss, even surface peats. Wet landscapes like bogs, fens, and marshes are also becoming vulnerable to burning.

The team has been tracking fire activity in the Russian Arctic in real time using a variety of satellite and remote sensing tools. While wildfires on permafrost in Siberia south of the Arctic are not uncommon, the team found that 2019 and 2020 stood out as extreme in the satellite record for burning that occurred well above the Arctic Circle, a region not normally known to support large wildfires.

As a result, said lead author Dr. Jessica McCarty, a geographer and fire scientist at Miami University, "Arctic fires are burning earlier and farther north, in landscapes previously thought to be fire resistant."

The consequences of this new fire regime could be significant for the Arctic landscape and peoples and for the global climate. More than half of the fires detected in Siberia this year were north of the Arctic Circle on permafrost with a high percentage of ground ice. This type of permafrost locks in enormous amounts of carbon from ancient biomass. Climate models don't account for the rapid thaw of these environments and resulting release of greenhouse gases, including methane.

On a more local level, abrupt thawing of ice-rich permafrost in wildfires causes subsidence, floods, pits and craters, and can submerge large areas under lakes and wetlands. As well as disrupting the lives and livelihoods of Arctic residents, these features are associated with more greenhouse gases moving from where they are trapped in soils into the atmosphere.

These extensive changes have severe consequences for global climate.

"Nearly all of this year's fires inside the Arctic Circle have occurred on continuous permafrost, with over half of these burning on ancient carbon-rich peat soils," said Dr. Thomas Smith, a fire scientist at the London School of Economics and Political Science and a coauthor of the study. "The record high temperatures and associated fires have the potential to turn this important carbon sink into a carbon source, driving further global heating."

The severity of the 2020 Arctic fires emphasizes an urgent need to better understand the switch in Arctic fire regimes. New tools and approaches are required to understand how fires start and to measure fire extent. Modeling tools and remote sensing data can help, but only if paired with local, specialized knowledge about where legacy carbon stored in peats or permafrost is vulnerable to burning and how environments change after wildfires.

The commentary cautions that this issue is so important to the climate system that it must be taken up as a matter of global importance. It outlines a path forward not only for understanding the role of changing fire in the Arctic but also for ensuring that research stays focused on local community and policy needs.

"We need global cooperation, investment, and action in monitoring fires, including learning from Indigenous and local communities how fire is traditionally used," said McCarty. "We need new permafrost- and peat-sensitive approaches to wildland fire fighting to save the Arctic--there's no time to lose."

Credit: 
University of Colorado at Boulder