Tech

Urgent need for anti-smoking campaigns to continue after pregnancy

Curtin University research has found quit support for smoking mothers should continue even after their first babies are born, given that many of those women will become pregnant again, and that quitting can substantially reduce the risk of future preterm births.

The longitudinal study examined 23 years of records and histories for 63,540 Australian women who had more than one child and who smoked during their first pregnancy.

Lead researcher Professor Gavin Pereira, from Curtin's School of Population Health, said more than one third of women who smoked during pregnancy were able to stop smoking for their next pregnancy.

"Our research found that for more than 30 percent of smoking mothers-to-be, quitting for their next pregnancies was achievable, and importantly could reduce the risk of early birth in subsequent pregnancy by as much as 26 per cent," Professor Pereira said.

"While the benefit of quitting in reducing the harm to unborn babies is well established, less well understood was the prevalence of maintaining the quit message at the next pregnancy and the associated risk of pre-term birth. This is what our research was looking to address.

"What is clear from the study, is that maintaining quit messages and support for women who smoked during pregnancy, even after birth can have a significantly positive outcome for both them and their subsequent babies."

Professor Pereira said he was concerned about the number of women who reportedly smoked during pregnancy.

"According to the latest figures from the Australian Institute of Health and Welfare, 75% of smokers continue to smoke after 20 weeks, after finding out that they are pregnant," Professor Pereira said.

"The second trimester is vital to an unborn babies' growth and formation - organs continue to develop, and the liver, pancreas and kidneys all start to function. Babies also begin to hear sounds, such as the mother's heartbeat.

"Smoking during this crucial time cuts oxygen to the unborn baby and exposes them to a cocktail of chemicals including those that cause cancer. This could delay growth and development, increase the risk of cleft palate and change the baby's brain and lungs."

While the research has shown the need for continuing anti-smoking campaigns for those who smoked during their first pregnancies, Professor Pereira has urged those considering having a family, those already pregnant, and those who have recently given birth not to smoke at all.

"Among mothers who smoked in their first pregnancy, the risk of having a preterm birth at their second pregnancy was 26% lower than those who continued to smoke."

"Despite smoking during a first pregnancy, woman can turn this around for their next pregnancy to reduce complications to their unborn. Quitting is achievable and is always the safest option."

Credit: 
Curtin University

75% of sexual assault survivors have PTSD one month later

Researchers want sexual assault survivors to know that it's normal to feel awful right after the assault, but that many will feel better within three months.

In a meta-analysis published in Trauma, Violence & Abuse, researchers found that 81% of sexual assault survivors had significant symptoms of post-traumatic stress disorder (PTSD) one week after the assault. One month afterward - the first point in time that PTSD can be diagnosed - 75% of sexual assault survivors met criteria for the disorder. That figure dropped to 54% after three months and 41% after one year.

"One of the main takeaways is that the majority of recovery from post-traumatic stress happens in first three months," said lead author Emily Dworkin, assistant professor of psychiatry and behavioral sciences at the University of Washington School of Medicine. "We hope this will give survivors and clinicians a sense of what to expect and convey some hope."

The authors said this was the first meta-analysis of survivors' PTSD symptoms in the first year after a sexual assault. Their research underscored prior findings that PTSD is common and severe in the aftermath of sexual assault, and offered more details on the timeline for recovery.

The authors analyzed 22 studies that had assessed PTSD in sexual assault survivors over time, beginning soon after the traumatic event. The studies cumulatively involved 2,106 sexual assault survivors.

PTSD is characterized by symptoms such as reliving a traumatic event in nightmares, intrusive thoughts, or flashbacks; avoiding being reminded of the event; increases in negative emotions and decreases in positive emotions; self-blame; and feeling "keyed up" or on edge, Dworkin said.

A number of proven interventions, such as Prolonged Exposure Therapy and Cognitive Processing Therapy, help people recover from sexual assault and other traumas. Dworkin said it's important for people to seek help if PTSD symptoms interfere with their functioning, no matter how much time has passed since the traumatic event.

Dworkin and Michele Bedard-Gilligan, a co-author on this study, are currently testing ways to speed the recovery process for recent survivors. One is a smartphone app that teaches evidence-based coping skills. Survivors of recent sexual assault can learn more about this and other studies by visiting thriveappstudy.com.

Credit: 
University of Washington School of Medicine/UW Medicine

Research shows employer-based weight management program with access to anti-obesity medications results in greater weight loss

Tuesday, July 20, 2021, CLEVELAND: A Cleveland Clinic study demonstrates that adults with obesity lost significantly more weight when they had access to medications for chronic weight management in conjunction with their employer-based weight management program, compared to adults who did not have access to the medications. The study was published in JAMA Network Open.

Obesity is a complex disease that is caused by multiple factors, including genetic, environmental, and biological. A lifestyle intervention with a focus on nutrition and exercise is often not enough to treat obesity, which is a chronic disease that requires long-term therapy. The U.S. Food and Drug Administration (FDA) has approved several prescription medications for weight loss and chronic weight management, also called anti-obesity medications. However, they have limited health insurance coverage.

“The research results support the need to treat patients with a multidisciplinary weight management program that incorporates safe and effective medications to lose weight and maintain weight loss,” said Bartolome Burguera, M.D., Ph.D., chair of Cleveland Clinic’s Endocrinology & Metabolism Institute and primary investigator of the study. “Doctors prescribe medications to treat some of the health consequences associated with obesity, such as hypertension and type 2 diabetes. However, medications for weight loss and chronic weight management are underutilized.”

The Centers for Disease Control and Prevention (CDC) reported that more than 42% of U.S. adults have obesity. In addition to the serious health conditions associated with obesity – such as type 2 diabetes, obstructive sleep apnea, high blood pressure, heart disease and stroke – the CDC has also reported on the economic impact of obesity on the U.S. healthcare system, estimating the medical care costs of the disease in the United States at $147 billion (in 2008 dollars).

The objective of this study was to determine the effect of combining anti-obesity medications with a multidisciplinary employer-based weight management program.

The one-year, single-center, pragmatic clinical trial was conducted in the real-world setting of a workplace health plan. The study included 200 adults with obesity (body mass index of 30 or greater) who were enrolled in the Cleveland Clinic Employee Health Plan between January 2019 and May 2020. As part of the health plan, participants had access to a comprehensive weight management program.

In this real-world setting, eligible participants were randomized 1:1 to either a weight management program with FDA-approved anti-obesity medications or a weight management program alone. The weight management program was administered through monthly shared medical appointments (SMAs) that offered a multidisciplinary approach, including nutrition education. The monthly SMA visits focused on adopting a healthier lifestyle and addressed the five components of the weight management program: nutrition, physical activity, appetite control, sleep, and mental health. Due to the COVID-19 pandemic, some of the SMAs were conducted virtually.

The 100 study participants randomized to the weight management program combined with access to the medications received their prescriptions at the time of their monthly SMAs, based on recommended clinical practice.

Patients were prescribed one of five FDA-approved medications for chronic weight management – orlistat, lorcaserin, phentermine/topiramate, naltrexone/bupropion, or liraglutide 3.0 mg. The medication selected for each patient was at the discretion of the treating provider, and was determined after a thorough assessment and discussion with the participant. (Lorcaserin was withdrawn from the market in February 2020. The eight patients taking lorcaserin at the time were notified immediately and either switched medications or discontinued medication due to proximity to the end of the study.)

Research results showed that the participants who had access to the anti-obesity medications averaged significantly greater weight loss at 12 months (-7.7%), compared to the participants who were in the weight management program alone (-4.2%). In the group who had access to the medications, 62.5% of the participants lost at least 5% of their weight, compared to 44.8% of the participants in the group with the weight management program alone. SMA attendance was higher among the participants who had access to the weight loss medications.

“Many patients see improvement in their health when they lose 5% of their weight,” said Kevin M. Pantalone, D.O., first author of the study and an endocrinologist at Cleveland Clinic. “Based on our study results, access to anti-obesity medications combined with a multidisciplinary weight management program provides a more effective treatment compared to a weight management program without access to these medications.”

More long-term research is needed in real-world, employer-based settings to evaluate the costs and benefits of anti-obesity medications and their use in conjunction with workplace wellness plans.

Credit: 
Cleveland Clinic

Dearth of mental health support during pandemic for those with chronic health problems

A new scoping review found that those with chronic health concerns, such as diabetes, heart disease, cancer, and autoimmune conditions, are not only at a higher risk of severe COVID-19 infection, they are also more likely to experience anxiety, depression or substance use during the COVID-19 pandemic.

The aim of the review was to address knowledge gaps related to the prevention and management of mental health responses among those with chronic conditions. The findings, recently published in the International Journal of Environmental Research and Public Health, were based on a comprehensive review of 67 Chinese and English-language studies.

"Levels of anxiety, depression, and substance use tended to be more prevalent among those with physical health concerns, and these mental health impacts also interfered with their treatment plans," says first author Karen Davison, Canada Research Chair at Kwantlen Polytechnic University.

Physical and mental health problems often occur together, possibly due to factors such as shared underlying inflammatory responses and the psychosocial effects of living with a health condition, say the study's authors. Economic instability, social isolation, and reduced access to health and social care services also increased the likelihood of mental health concerns among those with a chronic physical health condition.

"These circumstances, which became more prevalent during the pandemic, likely impact an individual's ability to cope," says co-author Professor Simon Carroll from the University of Victoria's Sociology department.

Rapidly spreading misinformation during the pandemic may have also influenced reactions that can worsen mental health.

"Lower levels of health literacy have been associated with poorer physical and mental health," says Brandon Hey, Policy and Research Analyst, COVID 19 Policy, Programs and Priorities at the Mental Health Commission of Canada. "This needs to be addressed by the public health community who can educate and support social and conventional media to accurately deliver information."

The findings and practice recommendations from this review have the potential to inform the work of policy-makers, practitioners, and researchers looking to provide better mental health supports for those with chronic illness.

"Several promising practices include screening for mental health issues, addressing factors such as income support, using digital resources to provide care, and providing services such as patient navigation, group online visits, peer support, and social prescribing," says co-author University of British Columbia Nursing Professor Maura MacPhee.

University of Toronto Social Work Professor Esme Fuller-Thomson, who is also Director of the Institute for Life Course and Aging, says we now have the opportunity to shape policies, programs, and other efforts to strengthen people's mental health. "Multi-integrated interventions can help provide the supports that are needed to address the complex needs of different populations and foster resilience in times of public health crises," she says.

Credit: 
University of Toronto

New algorithm may help autonomous vehicles navigate narrow, crowded streets

image: Vehicles attempt to pass each other on a crowded street in Pittsburgh, Pa. Researchers at Carnegie Mellon University sought to enable autonomous vehicles to navigate this situation.

Image: 
Carnegie Mellon University

It is a scenario familiar to anyone who has driven down a crowded, narrow street. Parked cars line both sides, and there isn't enough space for vehicles traveling in both directions to pass each other. One has to duck into a gap in the parked cars or slow and pull over as far as possible for the other to squeeze by.

Drivers find a way to negotiate this, but not without close calls and frustration. Programming an autonomous vehicle (AV) to do the same -- without a human behind the wheel or knowledge of what the other driver might do -- presented a unique challenge for researchers at the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research.

"It's the unwritten rules of the road, that's pretty much what we're dealing with here," said Christoph Killing, a former visiting research scholar in the School of Computer Science's Robotics Institute and now part of the Autonomous Aerial Systems Lab at the Technical University of Munich. "It's a difficult bit. You have to learn to negotiate this scenario without knowing if the other vehicle is going to stop or go."

While at CMU, Killing teamed up with research scientist John Dolan and Ph.D. student Adam Villaflor to crack this problem. The team presented its research, "Learning To Robustly Negotiate Bi-Directional Lane Usage in High-Conflict Driving Scenarios," at the International Conference on Robotics and Automation.

The team believes their research is the first into this specific driving scenario. It requires drivers -- human or not -- to collaborate to make it past each other safely without knowing what the other is thinking. Drivers must balance aggression with cooperation. An overly aggressive driver, one that just goes without regard for other vehicles, could put itself and others at risk. An overly cooperative driver, one that always pulls over in the face of oncoming traffic, may never make it down the street.

"I have always found this to be an interesting and sometimes difficult aspect of driving in Pittsburgh," Dolan said.

Autonomous vehicles have been heralded as a potential solution to the last-mile challenges of delivery and transportation. But for an AV to deliver a pizza, package or person to its destination, it has to be able to navigate tight spaces and unknown driver intentions.

The team developed a method to model different levels of driver cooperativeness -- how likely a driver was to pull over to let the other driver pass -- and used those models to train an algorithm that could assist an autonomous vehicle to safely and efficiently navigate this situation. The algorithm has only been used in simulation and not on a vehicle in the real world, but the results are promising. The team found that their algorithm performed better than current models.
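As a rough illustration of the trade-off such models capture, the toy simulation below (an invented sketch, not the authors' algorithm) gives each driver a fixed probability of yielding at every decision step: pairs that rarely yield tend to collide, while pairs that almost always yield tend to stall.

```python
import random

def simulate_encounter(coop_a, coop_b, max_steps=20, seed=None):
    """Toy model of two drivers negotiating a single-lane gap.

    coop_a and coop_b are each driver's probability of yielding at a
    decision step. Exactly one driver going means a clean pass; both
    going at once is a conflict; both yielding wastes the step.
    """
    rng = random.Random(seed)
    for _ in range(max_steps):
        a_goes = rng.random() > coop_a
        b_goes = rng.random() > coop_b
        if a_goes and b_goes:
            return "conflict"   # overly aggressive pairing: both squeeze in
        if a_goes:
            return "a passes"
        if b_goes:
            return "b passes"
    return "deadlock"           # overly cooperative pairing: nobody moves

# Sweep cooperativeness levels to expose the aggression/cooperation trade-off.
for coop in (0.1, 0.5, 0.9):
    outcomes = [simulate_encounter(coop, coop, seed=i) for i in range(1000)]
    print(coop,
          outcomes.count("conflict") / 1000,
          outcomes.count("deadlock") / 1000)
```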

Driving is full of complex scenarios like this one. As autonomous driving researchers tackle them, they look for ways to make the algorithms and models developed for one scenario, say merging onto a highway, work for other scenarios, like changing lanes or making a left turn against traffic at an intersection.

"Extensive testing is bringing to light the last percent of touch cases," Dolan said. "We keep finding these corner cases and keep coming up with ways to handle them."

Credit: 
Carnegie Mellon University

Renewable energies: No wind turbine disturbing the scenery

image: This is where a wind turbine might be located. In beautiful landscapes, such as in the Alpine foothills, rejection of wind power is often very high. (Photo: Markus Breig, KIT)

Image: 
Photo: Markus Breig, KIT

Wind energy is of outstanding importance to the energy transition in Germany. According to the Federal Statistical Office, its share in total gross electricity production of about 24% is far higher than those of all other renewable energy sources. "To reach our climate goals, it is important to further expand these capacities and to replace as much coal-based power as possible," says Professor Wolf Fichtner from KIT's Institute for Industrial Production (IIP). "However, there is considerable resistance, especially in beautiful landscapes." A team of researchers from KIT, the University of Aberdeen, and the Technical University of Denmark has now calculated what this means for the costs of the energy transition and for the CO2 balance of municipalities in Germany.

Quantifying Wind Power Rejection

The calculations are based on evaluations of the beauty of German landscapes according to standardized criteria by thousands of respondents. "It was confirmed for Great Britain that rejection of wind energy expansion is much higher in municipalities located in beautiful sceneries than in less beautiful regions," says Max Kleinebrahm, IIP. "When transferring this finding to Germany and replacing the qualitative factor of rejection by a development scenario without wind power, the additional costs expected when using no wind turbines can be projected precisely." As a reference, the researchers used another techno-economically optimized scenario for the transformation of the energy system with the use of local wind power.

The comparison was made for 11,131 municipalities in Germany and projected until 2050. It was found that stopping the expansion of wind energy use in the most beautiful landscapes might increase power generation costs in some municipalities by up to 7 cents per kilowatt hour and CO2 emissions might rise by up to 200 g per kilowatt hour. "Instead of wind energy, it would then be necessary to expand use of other types of renewable energy sources, such as solar energy or bioenergy," says Jann Michael Weinand (IIP), one of the main authors of the study. "Solar energy, however, is associated with higher system integration costs causing most of the surcharge." Only in very few cases can wind energy for local electricity production be replaced completely. In many cases, power imports would be needed, which would result in comparably high CO2 emissions.
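As a back-of-the-envelope illustration of those worst-case figures, the snippet below applies them to an assumed annual household consumption of 3,500 kilowatt hours (a round number chosen for illustration, not taken from the study):

```python
# Worst-case surcharges reported in the study, applied to an assumed
# (not study-provided) annual household consumption of 3,500 kWh.
extra_cost_eur_per_kwh = 0.07   # up to 7 cents per kilowatt hour
extra_co2_g_per_kwh = 200       # up to 200 g CO2 per kilowatt hour
household_kwh = 3500

print(f"Extra cost: {extra_cost_eur_per_kwh * household_kwh:.0f} EUR per year")
print(f"Extra CO2:  {extra_co2_g_per_kwh * household_kwh / 1e6:.2f} tonnes per year")
```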

Participation for a Solution

The researchers cannot offer a quick solution for the conflict between nature protection and climate-friendly power production with wind turbines. Still, they would like their study to contribute to a reconciliation. "We provide the necessary data so that those responsible on the ground can make knowledge-based decisions," Fichtner says. Further analyses are planned to obtain an in-depth understanding of the interrelations between local rejection of wind power, landscape beauty, and impacts on the energy system.

Credit: 
Karlsruher Institut für Technologie (KIT)

Solar cells: Layer of three crystals produces a thousand times more power

The photovoltaic effect of ferroelectric crystals can be increased by a factor of 1,000 if three different materials are arranged periodically in a lattice. This has been revealed in a study by researchers at Martin Luther University Halle-Wittenberg (MLU). They achieved this by creating crystalline layers of barium titanate, strontium titanate and calcium titanate which they alternately placed on top of one another. Their findings, which could significantly increase the efficiency of solar cells, were published in the journal Science Advances.

Ferroelectric crystals, which have spatially separated positive and negative charges, do not require a so-called pn junction to create the photovoltaic effect - in other words, no positively and negatively doped layers. This makes solar panels much easier to produce.

However, pure barium titanate does not absorb much sunlight and consequently generates a comparatively low photocurrent. The latest research has shown that combining extremely thin layers of different materials significantly increases the solar energy yield. "The important thing here is that a ferroelectric material is alternated with a paraelectric material. Although the latter does not have separated charges, it can become ferroelectric under certain conditions, for example at low temperatures or when its chemical structure is slightly modified," explains Dr Akash Bhatnagar, a physicist at MLU.

Bhatnagar's research group discovered that the photovoltaic effect is greatly enhanced if the ferroelectric layer alternates not only with one, but with two different paraelectric layers. Yeseul Yun, a PhD student at MLU and first author of the study, explains: "We embedded the barium titanate between strontium titanate and calcium titanate. This was achieved by vaporising the crystals with a high-power laser and redepositing them on carrier substrates. This produced a material made of 500 layers that is about 200 nanometres thick."

When conducting the photoelectric measurements, the new material was irradiated with laser light. The result surprised even the research group: compared to pure barium titanate of a similar thickness, the current flow was up to 1,000 times stronger - and this despite the fact that the proportion of barium titanate as the main photoelectric component was reduced by almost two thirds. "The interaction between the lattice layers appears to lead to a much higher permittivity - in other words, the electrons are able to flow much more easily due to the excitation by the light photons," explains Bhatnagar. The measurements also showed that this effect is very robust: it remained nearly constant over a six-month period.

Further research must now be done to find out exactly what causes the outstanding photoelectric effect. Bhatnagar is confident that the potential demonstrated by the new concept can be used for practical applications in solar panels. "The layer structure shows a higher yield in all temperature ranges than pure ferroelectrics. The crystals are also significantly more durable and do not require special packaging."

Credit: 
Martin-Luther-Universität Halle-Wittenberg

Study identifies MET amplification as driver for some non-small cell lung cancers

image: D. Ross Camidge, MD, PhD

Image: 
CU Cancer Center

A study led by D. Ross Camidge, MD, PhD, director of thoracic oncology at the University of Colorado School of Medicine and CU Cancer Center member, has helped to define MET amplification as a rare but potentially actionable driver for non-small cell lung cancer (NSCLC).

Camidge says many of the major developments in the treatment of non-small cell lung cancer have come from defining molecularly specific subsets of the disease for which researchers have been able to develop targeted treatments. Until now, all of these subsets have been based on either genetic mutations or gene rearrangements (where two separate genes fuse to create an oncogene).

“What we’ve started to realize is that non-small cell lung cancer isn’t just one disease,” Camidge says. “Over the last 15 or so years, we’ve started to pull apart separate diseases within that umbrella. Now, there are at least eight different molecularly specific subtypes with an FDA-approved therapy.”

Gene amplification as cancer driver

The new paper, titled “Crizotinib in Patients With MET-Amplified NSCLC,” and published in the June issue of the Journal of Thoracic Oncology, introduces a third means of defining NSCLC subsets that can be targeted with a specific drug. Rather than a mutation or a gene rearrangement, this third category represents oncogene activation through gene amplification. Gene amplification occurs when there is an increase in the usual number of copies of a particular gene, but the process can be difficult to identify.

“Unlike gene mutations or gene rearrangements — which are either there or not — gene amplification is a continuous variable,” Camidge says. “How many extra copies do you need for it to make a difference? Is it an increase in just that one gene because it’s so important to the cancer, or is it being dragged along for the ride by an increase in lots of other genes in the same part of the chromosome? Where do you put the cut point to say this level matters and this level does not? That’s why identifying gene amplification as a definable driver of NSCLC has been challenging.”
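For context, FISH-based MET testing commonly expresses amplification as the ratio of MET signals to chromosome 7 centromere (CEP7) signals, which separates selective amplification of the gene from whole-chromosome gains. The sketch below shows the shape of such a classification; the cut points are hypothetical placeholders for the kind of thresholds the investigators had to choose, not the study's actual values.

```python
def classify_met_amplification(met_copies: float, cep7_copies: float) -> str:
    """Bucket a FISH result by its MET/CEP7 ratio.

    The ratio controls for whole-chromosome gains: a high ratio suggests
    the MET gene itself is selectively amplified rather than 'dragged
    along' with extra copies of chromosome 7. The cut points below are
    hypothetical placeholders, not the study's actual thresholds.
    """
    ratio = met_copies / cep7_copies
    if ratio < 1.8:
        return "not amplified"
    elif ratio < 2.2:
        return "low amplification"
    elif ratio < 5.0:
        return "intermediate amplification"
    else:
        return "high amplification"

print(classify_met_amplification(met_copies=30, cep7_copies=4))  # high amplification
```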

For this study, Camidge and the other investigators in the Pfizer-sponsored study focused specifically on MET amplification. MET is a gene that encodes a protein normally involved in cell growth. Although it is normally well-controlled, it can become dysregulated and drive some cancers’ behavior. This can sometimes occur as a result of genetic mutations or gene rearrangement, but it can also occur through gene amplification.

If MET amplification is a cancer driver in some patients, then it stood to reason that inhibiting MET could slow or stop the progression of NSCLC in those patients.

To test that theory, the study required hospitals and cancer centers to screen tumor samples from NSCLC patients for MET amplification using a genetic test called fluorescence in situ hybridization (FISH). At CU and for several other sites, the MET FISH testing was performed by Marileila Varella-Garcia, PhD, a former professor of medical oncology at the School of Medicine (now retired).

During the study, a total of 88 patients with varying levels of MET amplification received crizotinib. Although crizotinib is currently licensed as an ALK (anaplastic lymphoma kinase) and ROS1 (c-ros oncogene 1) inhibitor for the treatment of some other subtypes of NSCLC, it is also a MET inhibitor.

The results showed that patients with the highest levels of MET amplification responded to therapy with crizotinib at the highest rates, experiencing longer periods of tumor-progression-free survival, while patients with lower levels of MET amplification responded less favorably to the treatment.

The study, which started in 2006, is one of the largest efforts to define the relevant diagnostic test for meaningful levels of MET gene amplification and prove that MET-inhibitor drugs are effective for treating patients with NSCLC driven by MET amplification.

“It has been a long and difficult course for this rare subtype of lung cancer, but I think this is fairly good proof that there are some patients where MET amplification alone is driving their cancer,” Camidge says.

Making the case for MET amplification testing and therapies

Camidge says that MET amplification-driven NSCLC is unique for a number of reasons. First, it’s extremely rare, accounting for less than 1% of all NSCLCs.

Second, it tends to occur in patients who are not normally identified as having lung cancers with oncogenic drivers, including smokers and the elderly.

“It’s not your classic driver oncogene subtype,” he says. “It tends to break most of the rules we normally associate with driver oncogenes, which is that they are normally found in younger people and people who have never smoked. So, even if you’re a smoker, even if you’re older, if your doctor hasn’t found a driver oncogene and they haven’t looked for MET amplification, they should think about it.”

Because of this, Camidge says that NSCLC patients without an identified driver oncogene should consider getting tested for MET amplification. He specifically recommends using the FISH testing method utilized in the study rather than relying solely on next generation sequencing, a different type of genetic testing that can return false negatives when it comes to identifying MET amplification.

“While some sequencing tests can reliably pick up gene amplification in a comparable manner to the FISH testing, others cannot,” Camidge says. “It’s all buried in the software that each commercial company or academic lab uses to analyze their sequencing data. I think pulling that apart will come in the near future as we better define what exactly we are looking for to make MET copy number information clinically relevant.”

As for using MET inhibitors to treat patients with MET amplification-driven NSCLC, Camidge says drug companies are starting to explore MET amplification as an additional target for new and existing MET inhibitors, and that he hopes the team’s findings will help inform that research and development to eventually help patients.

“This is a truly actionable oncogene,” he says. “It’s rare, but it’s real.”

Credit: 
University of Colorado Anschutz Medical Campus

The origin of bifurcated current sheets explained

image: The orbit-class and phase-space distributions of particles were theoretically derived and confirmed by particle simulation, and the bifurcated current sheet data were compared with satellite data from NASA (USA).
a) The result of theoretically deriving the orbit classes and phase-space distributions of the particles constituting the current sheet.
b) Particle simulation results showing the change in the phase-space distribution during the equilibration of the current sheet.
c) Bifurcated current sheets observed in the magnetosphere by NASA satellites (left) and current sheets from particle simulations (right).

Image: 
POSTECH

A Korean research team has identified the origin of bifurcated current sheets, long one of the unsolved mysteries of the Earth's magnetosphere and of magnetized plasma physics.

A POSTECH joint research team led by Professor Gunsu S. Yun of the Department of Physics and Division of Advanced Nuclear Engineering and Dr. Young Dae Yoon from the Pohang Accelerator Laboratory has theoretically established the process of collisionless equilibration of disequilibrated plasma current sheets. In addition, by comparing this with particle simulations and satellite data from NASA, the origin of the bifurcated current sheets - which had remained largely unknown - has been revealed.

In the Earth's magnetosphere, a sheet-shaped plasma is observed that is trapped between two regions of opposing magnetic fields. Because current flows inside it, it is also called a current sheet. According to the conventional theory, the current sheet exists as a single bulk in which the magnetic pressure due to the magnetic field generated by the current and the thermal pressure of the plasma balance one another, thereby forming an equilibrium. However, in 2003, the European Space Agency's Cluster mission observed a bifurcated current sheet in Earth's magnetosphere. Since then, similar phenomena have been observed.
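That balance can be stated compactly. In the classic one-dimensional Harris sheet, a standard textbook equilibrium cited here purely for illustration rather than taken from the study, the thermal pressure and the magnetic pressure sum to a constant across the sheet:

```latex
% Harris-sheet equilibrium: a standard textbook current sheet model.
% The magnetic field B(z) = B_0 tanh(z/L) reverses across the sheet of
% half-thickness L, while the plasma pressure peaks at the center to
% compensate, so that total pressure is uniform.
\[
  p(z) + \frac{B(z)^2}{2\mu_0} = \frac{B_0^2}{2\mu_0},
  \qquad
  B(z) = B_0 \tanh\!\left(\frac{z}{L}\right),
  \qquad
  p(z) = \frac{B_0^2}{2\mu_0}\,\mathrm{sech}^2\!\left(\frac{z}{L}\right).
\]
```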

Meanwhile, extensive research has accumulated on the condition in which the magnetic force and thermal pressure are perfectly balanced within the current sheet, but the process through which a disequilibrated current sheet reaches equilibrium has remained largely unknown. Since plasma systems generally do not start from an equilibrium state, understanding the equilibration process is needed to better understand current sheet plasma dynamics.

The joint research team thoroughly analyzed the process by which a disequilibrated sheet achieves equilibrium by considering the orbit classes and phase-space distributions of the particles that constitute the current sheet, and found that current sheets can naturally bifurcate during the equilibration process. These theoretical predictions were then confirmed to be consistent with particle-in-cell simulations performed on the KAIROS supercomputer at the Korea Institute of Fusion Energy. In addition, the simulation data were compared against, and verified by, measurements from NASA's Magnetospheric Multiscale (MMS) mission.

This achievement has enhanced the comprehension of magnetized plasma dynamics by combining theoretical analyses, supercomputer simulations, and satellite observations. Since the Earth's magnetospheric plasma shares many characteristics with other magnetized plasmas, such as nuclear fusion plasmas, the work is anticipated to contribute to a wide range of fields.

"This study has a significant academic value in that it simultaneously resolved two mysteries: the process through which disequilibrated current sheet equilibrates and the origin of bifurcated current sheets," explained Professor Gunsu S. Yun of POSTECH who participated as a co-corresponding author in the study. "We are trying to extend the analysis framework for plasmas with strong guide fields and hope to understand similar phenomena that occur in fusion plasmas."

Credit: 
Pohang University of Science & Technology (POSTECH)

Synthesis of new red phosphors with a smart material as a host material

image: LTT phosphor (left), LNT phosphor (right)

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview:

Professor Hiromi Nakano of Toyohashi University of Technology used a material with a unique periodic structure (smart material: Li-M-Ti-O [M = Nb or Ta]) as a host material to synthesize new Mn4+-activated phosphors that exhibit red light emission at 685 nm when excited at 493 nm. Because the valence of the Mn ions in the material changes from Mn4+ to Mn3+ according to the sintering temperature, composition, and crystal structure, there is a difference in the photoluminescence intensity of the phosphors. XRD, TEM, and XANES were used to clarify the relationship between the photoluminescence intensity and the sintering temperature, composition, crystal structure, and MgO co-doping.

Details:

The white color in white LEDs is usually achieved by exciting a yellow phosphor with blue light. However, the color rendering index with this method is assessed as low because there is insufficient red light when compared to sunlight. Therefore, phosphors that emit red light have an important role as materials with a high color rendering index.

Previously, Professor Nakano's team used a smart material (Li-M-Ti-O [M = Nb or Ta]) as a host material to synthesize an Eu3+-activated red phosphor. This time, they synthesized new Mn4+-activated red phosphors without using rare earth materials.

The Li-Nb-Ti-O (LNT) and Li-Ta-Ti-O (LTT) systems are both smart materials (see figure for an example) that self-organize into a periodic structure whose intergrowth-layer period changes according to the TiO2 doping amount. The periodic-structure region of the LTT system is narrower than that of the LNT system, and there is a difference in the sintering conditions needed to create it. Therefore, while comparing the LNT and LTT systems, the team closely investigated how the photoluminescence intensity and Mn ion valence change with the sintering temperature, composition, crystal structure, and MgO co-doping.

As a result of this research, it was found that LTT had notably higher photoluminescence intensity than LNT because of changes in the crystal structure due to the sintering temperature and composition. Generally, if the sintering temperature is high, Mn4+ is likely to be reduced to Mn3+, explaining the decrease in photoluminescence intensity. With regard to changes in the crystal structure, when the TiO2 doping amount is increased, the number of [Ti2O3]2+ periodic intergrowth layers also increases. Because the intergrowth layer is formed with Ti3+ ions, the surrounding oxygen deficiencies contribute to the reduction of Mn4+ to Mn3+. Additionally, when MgO co-doping was performed to increase the photoluminescence intensity, the LTT phosphor without a periodic structure exhibited a 100% Mn4+ ratio and the highest photoluminescence intensity.

Development Background:

The student who was initially involved in the experiment stated that "the Mn4+ phosphor did not exhibit photoluminescence with the host material", and the research was put on hold for about six months. The following year, a different student synthesized the phosphor and stated, "it exhibits a weak photoluminescence, but I think we could try some things to improve it." Through repeated trial and error, the team uncovered an important factor: in addition to the sintering temperature, there were significant differences in the changes to the crystal structure when the Mn4+ ratio was controlled. Through numerous trips to the Aichi Synchrotron Radiation Center, the team was able to measure the Mn4+ ratio and consolidate their research results.

Future Outlook:

The Mn4+-activated phosphor had to be synthesized at a comparatively low 850 °C in order to increase the Mn4+ ratio. However, under this condition, there is an issue with moderately low crystallinity. In the future, they will try various co-dopants to further explore the synthesis process to achieve a brighter red phosphor. In recent years, there has been more interest in deep-red Mn phosphors activated without the use of rare-earth materials, such as for use in LED grow lights, and applications can be expected to expand in the future.

Credit: 
Toyohashi University of Technology (TUT)

Farm consolidation has negative effect on wild pollinators

image: Traditional smallholder farms in Jiangxi, China

Image: 
Dr Yi Zou

A new study by a team of researchers has found that the consolidation of traditional smallholder farms in China has a devastating effect on the biodiversity of wild pollinators in the area.

Pollinators play an essential role when it comes to supporting global food production. However, wild pollinators are on the decline for several reasons, including the loss of floral resources and nesting sites. This loss of biodiversity could have far-reaching consequences for global food production in future.

"Biodiversity is essential for all life, with pollinators being one of the most important groups," says Dr Yi Zou from Xi'an Jiaotong-Liverpool University, corresponding author of the paper. "There are diverse insect pollinators in traditional smallholder agroecosystems in China supported by fine-scale field margins which contain semi-natural habitats."

Semi-natural habitats, such as forest and grassland, provide abundant resources and nesting sites and promote pollinator diversity.

However, these habitats are under threat. "China is conducting massive farmland consolidation projects, changing traditional fields to regular consolidated ones. As a result, fine-scale, semi-natural habitats are removed," says Dr Zou.

Changing traditions

Traditionally, smallholder farms in China are worked by hand and have an irregular shape informed by the landscape. There are narrow margins of semi-natural habitat between individual smallholder farms that allow farmers to move between them.

Consolidation reorganises the farmland into more uniform shapes that allow for mechanised agriculture and creates even, flat surfaces between the plots, which are sometimes paved to enable easier movement.

To assess the impact of these consolidation projects, Dr Zou and a team of researchers conducted a study in the Jiangxi province, comparing pollinator diversity of 18 rapeseed (canola) fields over two years - 2015 and 2019.

"We found that consolidated farming landscapes had about 30% lower pollinator biodiversity as opposed to traditional ones," explains Dr Zou.

"While semi-natural habitat in the areas surrounding the farmland has positive effects on both consolidated and traditional farming, it requires a great deal of land to offset the loss of margins. To compensate for the drop in diversity due to consolidation, we need an extra 55% landscape-scale semi-natural habitat, which is very difficult to achieve.

"Farmland consolidation is inevitable to improve agricultural productivity. However, the role of semi-natural habitat in supporting farmland biodiversity - and the associated beneficial services such as pollination and biological pest control - needs to be considered. For example, the establishment of fine-grained networks with flowering plant species and nesting sites may provide a feasible option to reduce negative impacts on wild pollinator diversity," concludes Dr Zou.

Methodology and findings

The researchers conducted the study in the Jiangxi Province of China in 2015 and 2019. A total of 20 study sites - eight consolidated and 12 traditional farmlands - were selected.

Pollinator communities in focal oilseed rape fields were sampled using pan traps placed in the centre of each field.

The pan trap sampling resulted in the collection of 6,910 wild pollinators, representing 85 species.

Rarefied species richness of wild pollinators was higher in traditional sites than in consolidated sites. Species richness was positively associated with the proportion of semi-natural habitat present.
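Rarefied richness makes sites with different catch sizes comparable by asking how many species a fixed-size random subsample would be expected to contain. A minimal sketch of the computation, with invented counts rather than the study's data:

```python
import random

def rarefied_richness(individuals, subsample_size, trials=1000, seed=0):
    """Expected species count in a random subsample of fixed size.

    'individuals' is a list of species labels, one entry per captured
    specimen. Rarefaction makes richness comparable between sites that
    differ in how many specimens were caught.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        sample = rng.sample(individuals, subsample_size)
        total += len(set(sample))
    return total / trials

# Invented example: a species-rich site vs. a species-poor site.
traditional = ["bee_a"] * 40 + ["bee_b"] * 30 + ["fly_a"] * 20 + ["wasp_a"] * 10
consolidated = ["bee_a"] * 70 + ["bee_b"] * 25 + ["fly_a"] * 5

print(rarefied_richness(traditional, 50))   # close to 4 species expected
print(rarefied_richness(consolidated, 50))  # fewer species expected
```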

The study reached the following conclusions:

1. Wild pollinator richness and evenness are lower in oilseed rape fields in consolidated farmland.

2. Species richness of wild pollinators is positively associated with the proportion of semi-natural habitat.

3. The loss of diversity of wild pollinators, owing to land consolidation, is substantial.

Credit: 
Xi'an Jiaotong-Liverpool University

New method for uninterrupted monitoring of solid-state milling reactions

image: Dr. Stipe Lukin, Ruđer Bošković Institute

Image: 
Ruđer Bošković Institute

A team of chemists from the Croatian Ruđer Bošković Institute (RBI) described a new, easy-to-use method for uninterrupted monitoring of mechanochemical reactions. These reactions are conducted in closed milling devices, so in order to monitor the reaction one has to open the reaction vessel, thus interfering with the process. The new method uses Raman spectroscopy to get deeper insight into solid-state milling reactions, without the usual interruption of the chemical reaction process.

Mechanochemical synthesis by milling is used today to prepare all classes of compounds and materials. It is a simple, fast and more environmentally friendly alternative to classical solution synthesis that greatly reduces the use of solvents and waste generation, because the reactions take place in the solid state without solvents and are driven by the input of mechanical energy.

However, in order to bring mechanical energy into the system, the solids are placed in reaction vessels made of various metals such as steel, or of clear plastic. Milling balls are added along with the solids, and the vessels are then placed on specialized mills where they oscillate at high frequencies.

''Although mechanochemical synthesis by milling is becoming more and more popular and widespread, the way in which reactions take place in such closed reaction vessels makes it impossible for us to monitor chemical and physical processes. Namely, in the past, the chemical reaction was often monitored by stopping the milling and opening the reaction vessel, and then taking a small part of the sample from the vessel for analysis. However, stopping milling does not necessarily mean that this chemical reaction is complete, which means that monitoring chemical processes in this way does not always give good results,'' explains Dr. Stipe Lukin from the Croatian research team.

In order for scientists to understand how and why a particular reaction occurs in the solid state, it was necessary to devise a way to monitor these processes during milling without stopping the experiment. This is precisely an area in which the scientists from the Ruđer Bošković Institute are among the best in the world.

Indeed, it was this group of chemists from the Croatian institute who developed methods for instantaneous monitoring of mechanochemical reactions, such as synchrotron X-ray powder diffraction and laboratory Raman spectroscopy.

In this paper, Dr. Stipe Lukin, and his colleagues Dr. Ivan Halasz and Dr. Krunoslav Užarević have described in detail the method based on Raman spectroscopy that they developed at the Institute.

''Our Raman spectroscopy method uses a laser that passes through a clear plastic reaction vessel during the reaction, allowing us to collect spectroscopic data. With this method we can monitor the formation and disappearance of various chemical bonds, and identify the newly formed products during the reaction. In this way, we can gain deeper insights into the reaction mechanisms and find out why and how reactions take place,'' explained Dr. Lukin. He further noted that although Raman spectroscopy is an essential process analytical technology, used in the chemical and biopharmaceutical industries for uninterrupted monitoring of manufacturing processes, it has not yet come close to realizing its full potential.
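In practice, such monitoring amounts to tracking band intensities through a time series of spectra. The sketch below illustrates the general idea with synthetic data; the band positions, kinetics, and integration windows are invented for illustration and are not the RBI team's actual pipeline.

```python
import numpy as np

def band_intensity(wavenumbers, spectrum, lo, hi):
    """Integrate Raman intensity over the band [lo, hi] in cm^-1."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    return np.trapz(spectrum[mask], wavenumbers[mask])

# Synthetic stand-in for a milling run: a reactant band at 1050 cm^-1
# decays while a product band at 1350 cm^-1 grows.
wn = np.linspace(200, 1800, 1601)                   # wavenumber axis, cm^-1
for t in np.linspace(0, 60, 13):                    # minutes of milling
    frac = 1 - np.exp(-t / 15)                      # invented reaction extent
    spectrum = ((1 - frac) * np.exp(-((wn - 1050) / 10) ** 2)
                + frac * np.exp(-((wn - 1350) / 10) ** 2))
    print(f"t={t:4.0f} min"
          f"  reactant={band_intensity(wn, spectrum, 1020, 1080):6.2f}"
          f"  product={band_intensity(wn, spectrum, 1320, 1380):6.2f}")
```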

Dr. Lukin believes that the publication of this paper is important because it could enable the implementation of this method in other laboratories around the world.

This could eventually result in the expansion of results that deal with the mechanistic aspects of mechanochemical reactions, concluded Dr. Lukin.

Credit: 
Ruđer Bošković Institute

New method predicts 'stealth' solar storms before they wreak geomagnetic havoc on Earth

image: The novel imaging techniques applied to remote sensing data of the coronal mass ejection on 08 Oct 2016. A-D: Intensity of extreme UV (EUV; 21.1 nm) captured by the Atmospheric Imaging Assembly instrument on board NASA's Solar Dynamics Observatory.
1st column: 08 Oct 2016 15:00 UTC. 2nd column: 09 Oct 2016 00:00 UTC. 3rd column: 09 Oct 2016 09:00 UTC. 4th column: 09 Oct 2016 18:00 UTC. First row: unprocessed images. Second row: Difference images comparing EUV intensity to 12 h earlier. Third row: Images after Wavelet Packets Equalization (WPE), an image processing method. Fourth row: Images after Multi-scale Gaussian Normalization (MGN), another image processing method. Arrows denote dimmings and brightenings on the Sun's disc, previously overlooked but revealed with the new method.

Image: 
Palmerio, Nitta, Mulligan et al.

On 23 July 2012, humanity escaped technological and economic disaster. A diffuse cloud of magnetized plasma in the shape of a slinky toy tens of thousands of kilometers across was hurled from the Sun at a speed of hundreds of kilometers per second.

This coronal mass ejection (CME) just missed the Earth because its origin on the Sun was facing away from our planet at the time. Had it hit the Earth, satellites might have been disabled, power grids around the globe knocked out, GPS systems, self-driving cars, and electronics jammed, and railway tracks and pipelines damaged. The cost of the potential damage has been estimated at between $600bn and $2.6trn in the US alone.

While CMEs as large as the 2012 event are rare, lesser ones cause damage on Earth about once every three years. CMEs need between one and a few days to reach Earth, leaving us some time to prepare for the potential geomagnetic storm. Current efforts to limit any damage include steering satellites out of harm's way or redirecting the power load of electrical grids. But many CMEs -- called 'stealth CMEs' because they don't produce any clear signs close to the Sun's surface -- aren't detected until they reach Earth.

Now, an International Space Science Institute (ISSI) team of scientists from the US, Belgium, UK, and India shows how to detect potentially damaging stealth CMEs, trace them back to their region of origin on the Sun, extrapolate their trajectory, and predict if they will hit Earth. The results were recently published in the journal Frontiers in Astronomy and Space Sciences.

Visualizing the invisible

"Stealth CMEs have always posed a problem, because they often originate at higher altitudes in the Sun's corona, in regions with weaker magnetic fields. This means that unlike normal CMEs -- which typically show up clearly on the Sun as dimmings or brightenings -- stealth CMEs are usually only visible on devices called coronagraphs designed to reveal the corona," said corresponding author Dr Erika Palmerio, a researcher at the Space Sciences Laboratory of the University of California at Berkeley.

"If you see a CME on a coronagraph, you don't know where on the Sun it came from, so you can't predict its trajectory and won't know whether it will hit Earth until too late."

Palmerio continued: "But here we show that many stealth CMEs can in fact be detected in time if current analysis methods for remote sensing are adapted. Put simply, we compared 'plain' remote sensing images of the Sun with the same image taken between eight and 12 hours earlier, to capture very slow changes in the lower corona, up to 350,000km from the Sun's surface. In many cases, these 'difference images' revealed small, previously overlooked changes in the loops of magnetic fields and plasma that are hurled from the Sun. We then zoom in on these with another set of imaging techniques to further analyze the stealth CME's approximate origin, and predict whether it is headed towards Earth."
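The long-baseline running difference at the heart of that description is simple to express computationally. Below is a minimal sketch with NumPy, using synthetic arrays as stand-ins for the EUV images; the real study worked with SDO/AIA data and applied further wavelet-based processing on top.

```python
import numpy as np

def difference_images(frames, cadence_h, baseline_h=12.0):
    """Subtract the frame taken baseline_h hours earlier from each frame.

    frames: array of shape (n_times, ny, nx), chronologically ordered
    intensity images at a fixed cadence (cadence_h hours between frames).
    Long baselines (8-12 h) bring out the slow dimmings and brightenings
    that stealth CMEs leave in the lower corona.
    """
    lag = int(round(baseline_h / cadence_h))
    return frames[lag:] - frames[:-lag]

# Synthetic stand-in: 48 hourly 'images' with a slow, localized dimming.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 1.0, size=(48, 64, 64))
frames[24:, 30:40, 30:40] -= np.linspace(0, 5, 24)[:, None, None]

diffs = difference_images(frames, cadence_h=1.0)
print(diffs.shape, diffs[-1, 30:40, 30:40].mean())  # clearly negative patch
```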

Stealth CMEs leave overlooked signs

Palmerio and collaborators looked at four stealth CMEs that occurred between 2008 and 2016. Unusually for stealth CMEs, their origin on the Sun was approximately known, but only because NASA's twin STEREO spacecraft, launched in 2006, had happened to capture them 'off-limb' - that is, viewed beyond the Sun's disc, from a different angle than the view from Earth.

With the new imaging techniques, the authors revealed previously undetected, tiny dimmings and brightenings on the Sun at the region of origin of all four stealth CMEs. They conclude that the technique can be used for the early detection of risky stealth CMEs.

"This result is important because it shows us what to look for if we wish to predict the impact on Earth from solar eruptions," said Palmerio.

"Another important aspect of our study -- using geometric techniques to locate a CME's approximate source region and model its 3D structure as it expands and moves towards Earth -- can only be implemented when we have more dedicated observatories with different perspectives, like the STEREO spacecraft."

The authors predict that the European Space Agency's new Solar Orbiter, launched in February 2020, will help with this, as will similar initiatives currently under discussion by researchers worldwide.

"Data from more observatories, analyzed with the techniques developed in our study, could also help with an even more difficult challenge: namely to detect so-called 'super stealth CMEs', which don't even show up on coronagraphs," said coauthor Dr Nariaki V Nitta, a senior researcher at Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto, US.

Credit: 
Frontiers

Capturing electrons in space

image: Physicists Roland Wester (left) and Malcolm Simpson (right) demonstrate how dipole-bound states allow negative ions to form in interstellar clouds.

Image: 
Bryan Goff on Unsplash / AG Wester

Interstellar clouds are the birthplaces of new stars, but they also play an important role in the origins of life in the Universe through regions of dust and gas in which chemical compounds form. The research group Molecular Systems, led by ERC prize winner Roland Wester at the Institute for Ion Physics and Applied Physics at the University of Innsbruck, has set itself the task of better understanding the development of elementary molecules in space. "Put simply, our ion trap allows us to recreate the conditions in space in our laboratory," explains Roland Wester. "This apparatus allows us to study the formation of chemical compounds in detail." The scientists working with Roland Wester have now found an explanation for how negatively charged molecules form in space.

An idea built on theoretical foundations

Before the discovery of the first negatively charged carbon molecules in space in 2006, it was assumed that interstellar clouds only contained positively charged ions. Since then, it has been an open question how negatively charged ions are formed. The Italian theorist Franco A. Gianturco, who has been working as a scientist at the University of Innsbruck for eight years, developed a theoretical framework a few years ago that could provide a possible explanation. The existence of weakly bound states, so-called dipole-bound states, should enhance the attachment of free electrons to linear molecules. Such molecules have a permanent dipole moment which strengthens the interaction at a relatively great distance from the neutral nucleus and boosts the capture rate of free electrons.

Observing dipole-bound states in the laboratory

In their experiment, the Innsbruck physicists created molecules consisting of three carbon atoms and one nitrogen atom, ionized them, and bombarded them with laser light in the ion trap at extremely low temperatures. They continuously changed the frequency of the light until the energy was large enough to eject an electron from the molecule. Albert Einstein described this so-called photoelectric effect more than 100 years ago. An in-depth analysis of the measurement data by early-stage researcher Malcolm Simpson, from the doctoral training programme Atoms, Light and Molecules at the University of Innsbruck, finally shed light on this difficult-to-observe phenomenon. A comparison of the data with a theoretical model provided clear evidence of the existence of dipole-bound states. "Our interpretation is that these dipole-bound states represent a kind of door opener for the binding of free electrons to molecules, thus contributing to the creation of negative ions in space," says Roland Wester. "Without this intermediate step, it would be very unlikely that electrons would actually bind to the molecules."

Credit: 
University of Innsbruck

Machine learning models to help photovoltaic systems find their place in the sun

image: Integrating photovoltaic systems into existing power grids is not straightforward and requires accurate predictions of the power they will generate to allow for proper grid management.

Image: 
https://unsplash.com/@scienceinhd

With the looming threat of climate change, it is high time we embrace renewable energy sources on a larger scale. Photovoltaic systems, which generate electricity from the nearly limitless supply of sunlight energy, are one of the most promising ways of generating clean energy. However, integrating photovoltaic systems into existing power grids is not a straightforward process. Because the power output of photovoltaic systems depends heavily on environmental conditions, power plant and grid managers need estimations of how much power will be injected by photovoltaic systems so as to plan optimal generation and maintenance schedules, among other important operational aspects.

In line with modern trends, if something needs predicting, you can safely bet that artificial intelligence will make an appearance. To date, there are many algorithms that can estimate the power produced by photovoltaic systems several hours ahead by learning from previous data and analyzing current variables. One of them, called adaptive neuro-fuzzy inference system (ANFIS), has been widely applied for forecasting the performance of complex renewable energy systems. Since its inception, many researchers have combined ANFIS with a variety of machine learning algorithms to improve its performance even further.

In a recent study published in Renewable and Sustainable Energy Reviews, a research team led by Jong Wan Hu from Incheon National University, Korea, developed two new ANFIS-based models to better estimate the power generated by photovoltaic systems ahead of time by up to a full day. These two models are 'hybrid algorithms' because they combine the traditional ANFIS approach with two different particle swarm optimization methods, which are powerful and computationally efficient strategies for finding optimal solutions to optimization problems.
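Particle swarm optimization itself is straightforward to sketch: candidate parameter sets ('particles') fly through the search space, each pulled toward the best position it has personally found and toward the swarm-wide best. The generic implementation below is illustrative only; it is not the paper's specific ANFIS coupling, where the objective would be the ANFIS model's forecasting error on training data.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Generic particle swarm optimization of a scalar objective function."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    vel = np.zeros((n_particles, dim))                 # particle velocities
    pbest = pos.copy()                                 # per-particle best
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()           # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Placeholder objective: in the hybrid models, this would instead score a
# candidate set of ANFIS membership-function parameters by forecast error.
best, val = pso(lambda p: float(np.sum(p ** 2)), dim=4)
print(best, val)
```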

To assess the performance of their models, the team compared them with other ANFIS-based hybrid algorithms. They tested the predictive abilities of each model using real data from an actual photovoltaic system deployed in Italy in a previous study. The results, as Dr. Hu remarks, were very promising: "One of the two models we developed outperformed all the hybrid models tested, and hence showed great potential for predicting the photovoltaic power of solar systems at both short- and long-time horizons."

The findings of this study could have immediate implications in the field of photovoltaic systems from software and production perspectives. "In terms of software, our models can be turned into applications that accurately estimate photovoltaic system values, leading to enhanced performance and grid operation. In terms of production, our methods can translate into a direct increase in photovoltaic power by helping select variables that can be used in the photovoltaic system's design," explains Dr. Hu. Let us hope this work helps us in the transition to sustainable energy sources!

Credit: 
Incheon National University