Earth

New 'bi-molecule' with multiple technological applications discovered

image: Rosario González-Férez, researcher at the Department of Atomic, Molecular and Nuclear Physics and the "Carlos I" Institute of Theoretical and Computational Physics of the UGR

Image: 
University of Granada

Dr. Rosario González-Férez, a researcher at the Department of Atomic, Molecular and Nuclear Physics and the "Carlos I" Institute of Theoretical and Computational Physics of the University of Granada, has published the article "Ultralong-Range Rydberg Bi-molecules" in the prestigious scientific journal Physical Review Letters. The results of the study show a new type of bi-molecule formed from two nitric oxide (NO) molecules, one in its ground state and the other in a Rydberg electronic state.

The work was made possible thanks to the scientific collaboration between the researcher and the Institute for Theoretical Atomic, Molecular and Optical Physics (ITAMP) at Harvard University. The study began during her stay at Harvard between March and July 2020, meaning that the entire process, from data-gathering and analysis to the final written conclusions, was conducted during the Covid-19 pandemic. The stay, which was funded by the Fulbright Foundation and the Salvador de Madariaga programme of the Spanish Ministry of Science, Innovation and Universities, benefited from the scientific collaboration of ITAMP's Hossein R. Sadeghpour and Janine Shertzer.

This new type of bi-molecule results from the union of two nitric oxide (NO) molecules, arranged so that the ground-state NO sits at one pole while the NO+ ion sits at the other. The Rydberg electron orbits around both, acting like a "glue" that binds the bi-molecule. Its size is between 200 and 1,000 times that of NO, and its lifetime is long enough to enable its observation and experimental control, as these fragile systems are easily manipulated by means of very weak electric fields.

This type of bi-molecule enables researchers to implement and study chemical reactions at low temperatures from a quantum perspective, and facilitates the investigation of intermolecular interactions at large distances, since these bi-molecules exist at very low temperatures.

Dr. González-Férez observes that the use of these bi-molecules in quantum technologies would be interesting both for the processing of information by entanglement and for the development of quantum sensors, with multiple technological applications in quantum optics and quantum computing.

González-Férez continues her work with two research groups, from the University of British Columbia in Canada and the University of Stuttgart in Germany, which aim to create this bi-molecule experimentally and confirm the theoretical predictions made over the last year.

Credit: 
University of Granada

A divided cell is a doubled cell

One big challenge for the production of synthetic cells is that they must be able to divide to have offspring. In the journal Angewandte Chemie, a team from Heidelberg has now introduced a reproducible division mechanism for synthetic vesicles. It is based on osmosis and can be controlled by an enzymatic reaction or light.

Organisms cannot simply emerge from inanimate material ("abiogenesis"); cells always come from pre-existing cells. The prospect of synthetic cells newly built from the ground up is shifting this paradigm. However, one obstacle on this path is the question of controlled division--a requirement for having "progeny".

A team from the Max Planck Institute for Medical Research in Heidelberg, Heidelberg University, the Max Planck School Matter to Life, and the Exzellenzcluster 3D Matter Made to Order, headed by Kerstin Göpfrich, has now reached a milestone by achieving complete control over the division of vesicles. To achieve this, they produced "giant unilamellar vesicles", micrometer-sized bubbles whose shell is a lipid bilayer resembling a natural membrane. A variety of lipids were combined to produce phase-separated vesicles--vesicles whose two membrane hemispheres have different compositions.

When the concentration of dissolved substances in the surrounding solution is increased, osmosis causes water to exit the vesicle through the membrane. This shrinks the volume of the vesicle while keeping the membrane surface area constant. The resulting tension at the phase boundary deforms the vesicles: they constrict along their "equator"--more and more as the osmotic pressure rises--until the two halves separate completely into two (now single-phase) "daughter cells" with different membrane compositions. The separation depends only on the concentration ratio of osmotically active particles (osmolarity) and is independent of the size of the vesicle.

The method by which the osmolarity is raised plays no role either. The methods used by the team included working in a sucrose solution and adding an enzyme that splits the sucrose into glucose and fructose, slowly increasing the concentration of dissolved particles. Using light to initiate the splitting of molecules in the solution gave the researchers complete spatial and temporal control over the separation. Tightly controlled, local irradiation allowed the concentration to be increased selectively around a single vesicle, triggering that vesicle alone to divide.
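To put rough numbers on the osmotic driving force described above, the short Python sketch below uses the van 't Hoff relation (osmotic pressure = concentration x R x T) to estimate how the pressure of the surrounding solution rises once an enzyme has split every sucrose molecule into glucose and fructose, doubling the number of dissolved particles. The starting concentration is an assumed, illustrative value and does not come from the paper.

```python
R = 8.314      # gas constant, J / (mol K)
T = 298.15     # room temperature, K

def osmotic_pressure(osmolarity_mol_per_m3: float) -> float:
    """Van 't Hoff relation for dilute solutions: pi = c * R * T (result in pascals)."""
    return osmolarity_mol_per_m3 * R * T

sucrose = 50.0                       # mol/m^3 (= 50 mM), assumed starting concentration
before = osmotic_pressure(sucrose)
# Complete enzymatic cleavage turns each sucrose into one glucose plus one fructose,
# doubling the number of osmotically active particles in the outer solution.
after = osmotic_pressure(2 * sucrose)

print(f"before cleavage: {before / 1e3:.0f} kPa, after cleavage: {after / 1e3:.0f} kPa")
# The higher outside pressure draws water out of the vesicle, shrinking its volume
# while the membrane area stays the same, which triggers constriction and division.
```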

The team is also able to grow the single-phase daughter cells back into phase-separated vesicles by fusing them with tiny vesicles that carry the other type of membrane. This was made possible by attaching single strands of DNA to the two different types of membrane. These bind to each other and bring the membranes of the daughter cell and the mini vesicle into very close contact so that they can fuse. The resulting giant vesicles can subsequently undergo further division cycles.

"Although these synthetic division mechanisms differ significantly from those of living cells," says Göpfrich, "the question arises of whether similar mechanisms played a role in the beginnings of life on earth or are involved in the formation of intracellular vesicles."

Credit: 
Wiley

TPU scientists develop sensor with nanopores to detect doping substances in blood

image: sensor samples

Image: 
sensor samples

Scientists of Tomsk Polytechnic University, jointly with colleagues from different countries, have developed a new sensor with two layers of nanopores. In the experiments conducted, the sensor proved effective at detecting one of the doping substances composed of chiral molecules. The research findings are published in the academic journal Biosensors and Bioelectronics (IF: 10.257; Q1).

The material is a thick wafer with pores 20-30 nm in diameter. On these wafers, the scientists grew a layer of a metal-organic framework (MOF) from Zn ions and organic molecules. The MOF has nanopores of only about 3 nm, which act as a trap for the molecules that must be detected.

"This sensor can operate with chiral molecules. Such substances consisting of chiral molecules are a lot among medical drugs and biologically active compounds. Their feature lies in consisting of a couple of enantiomers, which are actually identical molecules with identical structure and physical properties, however, they are mirror images of each other. Due to this difference, enantiomers can have various biological effect: one enantiomer can be effective, while the second one will cause harm. The main challenge is that both enantiomers must be detected in a biological sample. Our research team specializes in creating chiral sensors, operating on the effect of surface plasmon resonance. We have already had a wide range of interesting efficient material sensors in this field, however, we offered absolutely a new structure in this research work," Pavel Postnikov, Associate Professor of the TPU Research School of Chemistry and Applied Biomedical Sciences, says.

If light (for instance, a laser beam) is directed at this material, the effect of surface plasmon resonance occurs on the porous gold film. The surface plasmon resonance gives rise to an analytical signal that can be read with a portable surface-enhanced Raman spectroscopy (SERS) device. From this signal, it is possible to determine which substance has been captured by the MOF and in what quantity. The overall analysis procedure takes less than 5 minutes.

"What did the obtained structure give to us? First, at the same time, we obtained two plasmonic effects: the surface plasmonic effect, as it occurs on the film surface and the localized surface plasmonic effect in pores. Using the other structure, such synergy cannot be achieved. Second, pores serve us as a filter twice and allow us to separate the required substance from other blood components, which can block the sensor," Olga Guselnikova, Research Fellow of the TPU School of Chemistry and Applied Biomedical Sciences, the author of the article, says.

The researchers tested the sensitivity of the sensors not only on model solutions but also on blood plasma and serum to which the doping substance had been added. In these experiments, two sensors were used, each responsible for detecting one particular enantiomer of the substance.

"Standard methods for the definition of chiral compounds, for instance, chromatography, are expensive and require complicated equipment and special skills to use it. Our sensors are suitable for portable SERS sensors, which are considerably cheaper and simpler in use," Olga Guselnikova notes.

Credit: 
Tomsk Polytechnic University

'Keep off the grass': the biofuel that could help us achieve net zero

The Miscanthus genus of grasses, commonly used to add movement and texture to gardens, could quickly become the first choice for biofuel production. A new study shows these grasses can be grown in lower agricultural grade conditions - such as marginal land - due to their remarkable resilience and photosynthetic capacity at low temperatures.

Miscanthus is a promising biofuel thanks to its high biomass yield and low input requirements, which means it can adapt to a wide range of climate zones and land types. It is seen as a viable commercial option for farmers but yields can come under threat from insufficient or excessive water supply, such as increasing winter floods or summer heat waves.

With very little known about its productivity in flooded and moisture-saturated soil conditions, researchers at the Earlham Institute in Norwich wanted to understand the differences in water-stress tolerance among Miscanthus species to guide genomics-assisted crop breeding.

The research team - along with collaborators at TEAGASC, The Agriculture and Food Development Authority in the Republic of Ireland, and the Institute of Biological, Environmental and Rural Sciences in Wales - analysed various Miscanthus genotypes to identify traits that provided insight into gene adaptation and regulation during water stress.

They found specific genes that play key roles in the response to water stress across different Miscanthus species, and saw consistency with biological processes known to be critical for surviving drought stress in other organisms.

Dr Jose De Vega, author of the study and Group Leader at the Earlham Institute, said: "Miscanthus is a commercial crop due to its high biomass productivity, resilience, and ability to continue photosynthesis during the winter months. These qualities make it a particularly good candidate for growth on marginal land in the UK, where yields might otherwise be limited by scorching summers and wet winters."

Previously, a decade-long trial in Europe showed that Miscanthus produced up to 40 tonnes of dry matter per hectare each year. This was reached after just two years of establishment, demonstrating that, as a biofuel crop, it is more efficient in ethanol production per hectare than switchgrass or corn.

Miscanthus species have been used as forage in Japan, Korea and China for thousands of years and, due to their high biomass yield and high ligno-cellulose (plant dry matter) content, they are commercially used as feedstock for bioenergy production.

Ligno-cellulose biomass is the most abundantly available raw material on Earth for the production of biofuels, mainly bio-ethanol. Miscanthus's high biomass yield makes the grass a valuable commodity for farmers on marginal land, but the crop's responses to water stress vary depending on the species' origin.

The scientists compared the physiological and molecular responses among Miscanthus species in both water-flooded and drought conditions. The induced physiological conditions were used for an in-depth analysis of the molecular basis of water stress in Miscanthus species.

A significant biomass loss was observed under drought conditions in all four Miscanthus species. In flooded conditions, biomass yield was as good as or better than under control conditions in all species. The low number of differentially expressed genes, and the higher biomass yield in flooded conditions, supported the use of Miscanthus on flood-prone marginal land.

"The global challenge of feeding the ever-increasing world population is exacerbated when food crops are being used as feedstock for green energy production," said Dr De Vega.

"Successful plant breeding for ethanol and chemical production requires the ability to grow on marginal lands alongside prioritising the attributes; non-food related, perennial, high biomass yield, low chemical and mechanical input, enhanced water-use efficiency and high carbon storage capacity. Miscanthus fulfils these for enhanced breeding - saving money and space for farmers, and lending a hand to our over polluted environment by emitting CO2.

"The research team is in the early selection process of high biomass genotypes from large Miscanthus populations that are better adapted to the UK conditions and require low inputs. The use of genomic approaches is allowing us to better understand the traits that make some Miscanthus species a commercially sustainable alternative for marginal lands and applying this to agri-practices."

Credit: 
Earlham Institute

Game on: Science edition

image: NSLS-II scientists, Daniel Olds (left) and Phillip Maffettone (right), are ready to let their AI agent level up the rate of discovery at NSLS-II's PDF beamline.

Image: 
Brookhaven National Laboratory

UPTON, NY — Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) trained an AI agent – an autonomous computational program that observes and acts – how to conduct research experiments at superhuman levels by using the same approach. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.

As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations – called beamlines – is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day—despite the facility’s 24/7 operations.

“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”

“This is why we taught an AI agent to conduct scientific experiments as if they were video games. This allows a robot to run the experiment, while we – humans – are not there. It enables round-the-clock, fully remote, hands-off experimentation with roughly twice the efficiency that humans can achieve,” added Phillip Maffettone, research associate at NSLS-II and first author on the study.

According to the researchers, they didn’t even have to give the AI agent the rules of the ‘game’ to run the experiment. Instead, the team used a method called “reinforcement learning” to train an AI agent on how to run a successful scientific experiment, and then tested their agent on simulated research data from the Pair Distribution Function (PDF) beamline at NSLS-II.

Beamline Experiments: A Boss Level Challenge

Reinforcement learning is one strategy of training an AI agent to master an ability. The idea of reinforcement learning is that the AI agent perceives an environment – a world – and can influence it by performing actions. Depending on how the AI agent interacts with the world, it may receive a reward or a penalty, reflecting if this specific interaction is a good choice or a poor one. The trick is that the AI agent retains the memory of its interactions with the world, so that it can learn from the experience for when it tries again. In this way, the AI agent figures out how to master a task by collecting the most rewards.
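The reward-driven loop described above is compact enough to sketch in code. The Python example below is a minimal, illustrative reinforcement-learning agent (an epsilon-greedy bandit) choosing which of a few simulated samples to measure next; the "virtual instrument", reward values and parameters are invented for illustration and are not the team's actual agent or training setup.

```python
import random

# Toy stand-in for a simulated instrument: measuring a sample returns a noisy reward
# proportional to how informative that sample really is. (Purely illustrative.)
TRUE_VALUE = [0.2, 0.5, 0.9, 0.1]

def measure(sample: int) -> float:
    return TRUE_VALUE[sample] + random.gauss(0.0, 0.05)

n_actions = len(TRUE_VALUE)
estimates = [0.0] * n_actions     # the agent's learned value of each action
counts = [0] * n_actions
epsilon = 0.1                     # fraction of the time the agent explores at random

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(n_actions)                          # explore
    else:
        action = max(range(n_actions), key=lambda a: estimates[a])    # exploit
    reward = measure(action)      # act on the environment and observe the outcome
    counts[action] += 1
    # incremental mean update: the "memory" that lets the agent learn from experience
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned value estimates:", [round(v, 2) for v in estimates])
```

After enough plays, the estimates converge on the hidden values and the agent spends most of its time on the most informative sample; the same collect-the-most-reward logic, scaled up, underlies agents trained on full game or beamline simulators.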

“Reinforcement learning really lends itself to teaching AI agents how to play video games. It is most successful with games that have a simple concept—like collecting as many coins as possible—but also have hidden layers, like secret tunnels containing more coins. Beamline experiments follow a similar idea: the basic concept is simple, but there are hidden secrets we want to uncover. Basically, for an AI agent to run our beamline, we needed to turn our beamline into a video game,” said Olds.

Maffettone added, “The comparison to a video game works well for the beamline. In both cases, the AI agent acts in a world with clear rules. In the world of Super Mario, the AI agent can choose to move Mario up, down, left, right; while at the beamline, the actions would be the motions of the sample or the detector and deciding when to take data. The real challenge is to simulate the environment correctly – a video game like Super Mario is already a simulated world and you can just let the AI agent play it a million times to learn it. So, for us, the question was how we could simulate a beamline in such a way that the AI agent can play a million experiments without actually running them.”

The team “gamified” the beamline by building a virtual version of it that simulated the measurements the real beamline can do. They used millions of data sets that the AI agent could gather while “playing” to run experiments on the virtual beamline.

“Training these AIs is very different than most of the programming we do at beamlines.  You aren’t telling the agents explicitly what to do, but you are trying to figure out a reward structure that gets them to behave the way you want. It’s a bit like teaching a kid how to play video games for the first time.  You don’t want to tell them every move they should make, you want them to begin inferring the strategies themselves.” Olds said.

Once the beamline was simulated and the AI agent learned how to conduct research experiments using the virtual beamline, it was time to test the AI’s capability of dealing with many unknown samples.

“The most common experiments at our beamline involve everything from one to hundreds of samples that are often variations of the same material or similar materials – but we don’t know enough about the samples to understand how we can measure them the best way. So, as humans, we would need to go through them all, one by one, take a snapshot measurement and then, based on that work, come up with a good strategy. Now, we just let the pre-trained AI agent work it out,” said Olds.

In their simulated research scenarios, the AI agent was able to measure unknown samples with up to twice the efficiency of humans under strongly constrained circumstances, such as limited measurement time.

“We didn’t have to program in a scientist’s logic of how to run an experiment, it figured these strategies out by itself through repetitive playing.” Olds said.

Materials Discovery: Loading New Game

With the AI agent ready for action, it was time for the team to figure out how it could run a real experiment by moving the actual components of the beamline. For this challenge, the scientists teamed up with NSLS-II’s Data Acquisition, Management, and Analysis Group to create the backend infrastructure. They developed a program called Bluesky-adaptive, which acts as a generic interface between AI tools and Bluesky – the software suite that runs all of NSLS-II's beamlines. This interface laid the necessary groundwork to use similar AI tools at any of the other 28 beamlines at NSLS-II.
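As a rough illustration of what a generic agent-instrument interface does, the sketch below shows a hypothetical feedback loop in Python: the instrument reports a measurement, the agent digests it and suggests the next point, and the loop repeats. All class and function names here are invented for the example; this is not the actual Bluesky or Bluesky-adaptive API.

```python
from typing import Protocol

class Agent(Protocol):
    """Anything that can ingest a result and suggest the next measurement point."""
    def tell(self, position: float, result: float) -> None: ...
    def ask(self) -> float: ...

class GradientFollower:
    """Toy agent: keep stepping in the direction that improves the signal."""
    def __init__(self, start: float = 0.0, step: float = 0.5):
        self.position, self.step = start, step
        self.last_result = None

    def tell(self, position: float, result: float) -> None:
        if self.last_result is not None and result < self.last_result:
            self.step = -self.step / 2          # overshot the peak: reverse and shrink
        self.last_result = result
        self.position = position

    def ask(self) -> float:
        return self.position + self.step

def simulated_measurement(x: float) -> float:
    return -(x - 3.0) ** 2                      # stand-in for a real detector reading

def run_adaptive_loop(agent: Agent, n_points: int = 20) -> None:
    x = agent.ask()
    for _ in range(n_points):
        y = simulated_measurement(x)            # "move the motors, take the data"
        agent.tell(x, y)                        # feed the result back to the agent
        x = agent.ask()                         # agent decides the next measurement
    print(f"settled near x = {x:.2f}")

run_adaptive_loop(GradientFollower())
```

The point of such an interface is that the agent can be swapped for anything, from a simple rule to a trained reinforcement-learning model, without touching the instrument-control code.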

“Our agent can now not only be used for one type of sample, or one type of measurement—it’s very adaptable. We are able to adjust it or extend it as needed. Now that the pipeline exists, it would take me 45 minutes talking to the person and 15 minutes at my keyboard to adjust the agent to their needs,” Maffettone said.

The team expects to run the first real experiments using the AI agent this spring and is actively collaborating with other beamlines at NSLS-II to make the tool accessible for other measurements.

“Using our instruments’ time more efficiently is like running an engine more efficiently – we make more discoveries happen per year. We hope that our new tool will enable a new, transformative approach to increasing our output as a user facility with the same resources.”

The team who made this advancement possible also consists of Joshua K. Lynch, Thomas A. Caswell and Stuart I. Campbell from the NSLS-II DAMA Group and Clara E. Cook from the University at Buffalo.

This study was supported by a BNL Laboratory Directed Research and Development (LDRD) fund and the U.S. Department of Energy’s (DOE) Office of Science (BES). The National Synchrotron Light Source II (NSLS-II) is a DOE Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory.

Brookhaven National Laboratory is supported by the U.S. Department of Energy’s Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

DOI

10.1088/2632-2153/abc9fc

Credit: 
DOE/Brookhaven National Laboratory

New UCF nanotech gives boost to detection of cancer and disease

ORLANDO, March 25, 2021 - Early screening can mean the difference between life and death in a cancer and disease diagnosis. That's why University of Central Florida researchers are working to develop a new screening technique that's more than 300 times as effective as current methods at detecting a biomarker for diseases like cancer.

The technique, which was detailed recently in the Journal of the American Chemical Society, uses nanoparticles with nickel-rich cores and platinum-rich shells to increase the sensitivity of an enzyme-linked immunosorbent assay (ELISA).

ELISA is a test that measures samples for biochemicals, such as antibodies and proteins, which can indicate the presence of cancer, HIV, pregnancy and more. When a biochemical is detected, the test generates a color output that can be used to quantify its concentration. The stronger the color, the higher the concentration. The tests must be sensitive to prevent false negatives that could delay treatment or interventions.
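As a rough illustration of that quantification step, the Python sketch below turns a color readout (absorbance) into a concentration by interpolating against a calibration curve built from standards of known concentration. The numbers are invented for illustration and are not from the study.

```python
# Hypothetical standard curve: (concentration in ng/mL, measured absorbance).
standards = [(0.5, 0.08), (1.0, 0.15), (2.0, 0.29), (4.0, 0.55), (8.0, 1.02)]

def concentration_from_absorbance(absorbance: float) -> float:
    """Linearly interpolate between the two standards that bracket the reading."""
    pts = sorted(standards, key=lambda p: p[1])
    for (c_lo, a_lo), (c_hi, a_hi) in zip(pts, pts[1:]):
        if a_lo <= absorbance <= a_hi:
            frac = (absorbance - a_lo) / (a_hi - a_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("absorbance outside the calibrated range")

# A stronger color (higher absorbance) reads out as a higher biomarker concentration;
# a more sensitive test shifts the whole curve so that tiny concentrations still
# produce a measurable color change.
print(round(concentration_from_absorbance(0.40), 2), "ng/mL")
```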

In the study, the researchers found that when the nanoparticles were used in place of the conventional enzyme used in an ELISA -- peroxidase -- the test was 300 times more sensitive at detecting carcinoembryonic antigen, a biomarker sometimes used to detect colorectal cancers.

And while a biomarker for colorectal cancer was used in the study, the technique could be used to detect biomarkers for other types of cancers and diseases, says Xiaohu Xia, an assistant professor in UCF's Department of Chemistry and study co-author.

Colorectal cancer is the third leading cause of cancer-related deaths in the U.S., not counting some kinds of skin cancer, and early detection helps improve treatment outcomes, according to the U.S. Centers for Disease Control and Prevention.

The increase in sensitivity comes from nickel-platinum nanoparticle "mimics" that greatly increase the reaction efficiency of the test, which increases its color output, and thus its detection ability, Xia says.

Peroxidases found in the horseradish root have been widely used to generate color in diagnostic tests for decades. However, they have limited reaction efficiency and thus color output, which has inhibited the development of sensitive diagnostic tests, Xia says.

Nanoparticle "mimics" of peroxidase have been extensively developed over the past 10 years, but none have achieved the reaction efficiency of the nanoparticles developed by Xia and his team.

"This work sets the record for the catalytic efficiency of peroxidase mimic," Xia says. "It breaks through the limitation of catalytic efficiency of peroxidase mimics, which is a long-standing challenge in the field."

"Such a breakthrough enables highly sensitive detection of cancer biomarkers with the ultimate goal of saving lives through earlier detection of cancers," he says.

Xia says next steps for the research are to continue to refine the technology and apply it to clinical samples of human patients to study its performance.

"We hope the technology can be eventually used in clinical diagnostic laboratories in the near future," Xia says.

Credit: 
University of Central Florida

Global evidence for how EdTech can support pupils with disabilities is 'thinly spread'

An 'astonishing' deficit of data about how the global boom in educational technology could help pupils with disabilities in low and middle-income countries has been highlighted in a new report.

Despite widespread optimism that educational technology, or 'EdTech', can help to level the playing field for young people with disabilities, the study found a significant shortage of evidence about which innovations are best positioned to help which children, and why, specifically in low-income contexts.

The review also found that many teachers lack training on how to use new technology, or are reluctant to do so.

The study was carried out for the EdTech Hub partnership, by researchers from the Universities of Cambridge, Glasgow and York. They conducted a detailed search for publications reporting trials or evaluations about how EdTech is being used to help primary school-age children with disabilities in low- and middle-income countries. Despite screening 20,000 documents, they found just 51 relevant papers from the past 14 years - few of which assessed any impact on children's learning outcomes.

Their report describes the paucity of evidence as 'astonishing', given the importance of educational technologies to support the learning of children with disabilities. According to the Inclusive Education Initiative, as many as half the estimated 65 million school-age children with disabilities worldwide were out of school even before the COVID-19 pandemic, and most face ongoing, significant barriers to attending or participating in education.

EdTech is widely seen as having the potential to reverse this trend, and numerous devices have been developed to support the education of young people with disabilities. The study itself identifies a kaleidoscopic range of tools: devices to support low vision, sign language programmes, mobile apps that teach braille, and computer screen readers.

It also suggests, however, that there have been very few systematic attempts to test the effectiveness of these devices. Dr Paul Lynch, from the School of Education, University of Glasgow, said: "The evidence for EdTech's potential to support learners with disabilities is worryingly thin. Even though we commonly hear of interesting innovations taking place across the globe, these are not being rigorously evaluated or documented."

Professor Nidhi Singal, from the Faculty of Education, University of Cambridge, said: "There is an urgent need to know which technology works best for children with disabilities, where, and in response to which specific needs. The lack of evidence is a serious problem if we want EdTech to fulfil its potential to improve children's access to learning, and to increase their independence and agency as they progress through school."

The report identifies numerous 'glaring omissions' in the evaluations that researchers did manage to uncover. Around half were for devices designed to support children with hearing or vision difficulties; hardly any addressed the learning needs of children with autism, dyslexia, or physical disabilities. Most were from trials in Asia or Africa, while South America was underrepresented.

Much of the evidence also concerned EdTech projects which Dr Gill Francis, from the University of York and a co-author, described as 'in their infancy'. Most focused on whether children liked the tools, or found them easy to use, rather than whether they actually improved curriculum delivery, learner participation and outcomes. Attention was also rarely given to whether the devices could be scaled up - for example, in remote and rural areas where resources such as electricity are often lacking. Few studies appeared to have taken into account the views or experiences of parents or carers, or of learners themselves.

The studies reviewed also suggest that many teachers lack experience with educational technology. For example, one study in Nigeria found that teachers lacked experience of assistive technologies for students with a range of disabilities. Another, undertaken at 10 schools for the blind in Delhi, found that the uptake of modern low-vision devices was extremely limited, because teachers were unaware of their benefits.

Despite the shortage of information overall, the study did uncover some clear evidence about how technology - particularly portable devices - is transforming opportunities for children with disabilities. Deaf and hard-of-hearing pupils, for instance, are increasingly using SMS and social media to access information about lessons and communicate with peers; while visually-impaired pupils have been able to use tablet computers, in particular, to magnify and read learning materials.

Based on this, the report recommends that efforts to support children with disabilities in low- and middle-income countries should focus on the provision of mobile and portable devices, and that strategies should be put in place to ensure that these are sustainable and affordable for parents and schools - as cost was another concern that emerged from the studies cited.

Critically, however, the report states that more structured evidence-gathering is urgently needed to ensure EdTech meets the UN's stated goal to 'ensure inclusive and equitable quality education and promote lifelong learning for all'. The authors suggest that there is a need to adopt more robust research designs, which should address a full range of disabilities, and involve pupils, carers and teachers in the process.

"There is no one-size-fits-all solution when working with children with disabilities," Singal added. "That is why the current lack of substantive evidence is such a concern. It needs to be addressed so that teachers, parents and learners are enabled to make informed judgements about which technological interventions work, and what might work best for them."

Credit: 
University of Cambridge

Common Alzheimer's treatment linked to slower cognitive decline

image: Hong Xu and Maria Eriksdotter, researchers at the Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Sweden. Photo: Ulf Sirborn

Image: 
Ulf Sirborn

Cholinesterase inhibitors are a group of drugs recommended for the treatment of Alzheimer's disease, but their effects on cognition have been debated and few studies have investigated them over the long term. A new study involving researchers from Karolinska Institutet in Sweden and published in the journal Neurology shows persisting cognitive benefits and reduced mortality for up to five years after diagnosis.

Alzheimer's disease is a cognitive brain disease that affects millions of patients around the world. Some 100,000 people in Sweden live with the diagnosis, which has a profound impact on the lives of both them and their families. Most of those who receive a diagnosis are over 65, but there are some patients who are diagnosed in their 50s.

The current cost of care and treatment for people with dementia is approximately SEK 60 billion a year in Sweden. This is on a par with the cost of care and treatment of cardiovascular diseases and is twice as high as cancer care.

In Alzheimer's disease, changes are found in several chemical neurotransmitters in the brain, and thus in the ability of neurons to communicate with each other. Acetylcholine is one such substance and plays a key role in cognitive functions such as memory, attention and concentration.

There are three drugs that work as cholinesterase inhibitors and that are used in the treatment of Alzheimer's disease: galantamine, donepezil and rivastigmine.

The effects of cholinesterase inhibitors have, however, been debated, partly because there are relatively few longitudinal clinical studies. Researchers at Karolinska Institutet and Umeå University have now conducted a registry study of patients with Alzheimer's disease over a period of five years from point of diagnosis.

The study is based on data from SveDem (the Swedish Dementia Registry) on 11,652 patients treated with cholinesterase inhibitors and a matched control group of 5,826 untreated patients.

The results showed that treatment with cholinesterase inhibitors was associated with slower cognitive decline over five years, and 27 per cent lower mortality in patients with Alzheimer's disease compared with the controls.

"Of all three drugs, galantamine had the strongest effect on cognition, which may bedue to its effect on nicotine receptors and its inhibiting effect on the enzyme acetylcholinesterase, which breaks down the neurotransmitter acetylcholine," says the study's first author Hong Xu, postdoctoral researcher at the Department of Neurobiology, Care Sciences and Society, Karolinska Institutet.

"Our results provide strong support for current recommendations to treat people with Alzheimer's disease with cholinesterase inhibitors, but also shows that the therapeutic effect lasts for a long time," says the study's last author and initiator Maria Eriksdotter, professor at the Department of Neurobiology, Care Sciences and Society, Karolinska Institutet.

Credit: 
Karolinska Institutet

Study finds foster youth lack critical financial skills

VANCOUVER, Wash. - Most people rely on family members to help them learn how to open a bank account, find a job or create a budget, but that's often not an option for youth in foster care, according to a recent study in Child & Family Social Work.

"Foster kids have distinctive challenges," said Amy Salazar, lead author on the study and an assistant professor at Washington State University Vancouver. "They need more support in several areas, and financial capability is one of them, especially when they're transitioning out of the foster care system and into adulthood."

For the study, Salazar and her co-authors surveyed 97 foster care youths aged 14 to 20. They found that those who were age 18 and over had more advanced financial capability than younger kids, but still had not achieved key skills such as opening a checking account, building up savings or establishing a credit history. The paper recommends enhancements to the foster care system, such as financial literacy courses and more financial training for case workers, to fully prepare youth for independence.

Adult support and mentorship are key factors for financial capability in foster children, but for most of the study participants, that support came from their case workers or an independent living worker.

"That's concerning because those are people who will disappear when these youths age out of foster care," said Salazar, who is part of WSU's Department of Human Development. "Some respondents said foster parents were helpful, but not all youth have good relationships with their foster parents. They need more support."

Another reason these young people need more support is that they are at a much higher risk of identity fraud given the number of people who have access to their confidential records during their time in the child welfare system.

Offering foster care youth training in how to read credit reports and how to access their own reports is one potential solution to this issue.

The foster care system has been evolving over the last two decades or so to include additional services for those who traditionally would have aged out at 18. Teens can now enroll in independent living programs and make use of other supports that extend as far as age 21, for example.

In the past, and as is still the case in some states, as soon as someone in the foster system turns 18, even if they're in the middle of high school, they're cut from the foster system and completely on their own.

"We know the brain is still developing at 18," Salazar said. "For foster kids who may have experienced trauma, that may take even longer. Continuing services in areas like financial capability just makes sense beyond that arbitrary cutoff."

This new study is vital because very little data has been collected about supporting foster youth with financial literacy skills.

"We had no idea how these youth are doing," Salazar said. "We need to know if they're doing well or if there's a lot of work needed. Using our findings, hopefully more targeted support can come about as well as showing stakeholders the benefits of filling the knowledge gaps."

Credit: 
Washington State University

Once-in-a-century UK wildfire threats could happen most years by end of century

Extremely hot and dry conditions that currently put parts of the UK in the most severe danger of wildfires once a century could happen every other year in a few decades' time due to climate change, new research has revealed.
A study, led by the University of Reading, predicting how the danger of wildfires will increase in future showed that parts of eastern and southern England may be at the very highest danger level on nearly four days per year on average by 2080 with high emissions, compared to once every 50-100 years currently.
Wildfires need a source of ignition which is difficult to predict, so wildfire risk is typically measured by the likelihood that a fire would develop after a spark of ignition. This fire danger is affected by weather conditions. As temperatures rise and summer rainfall decreases, conditions highly conducive to wildfire could be nearly five times more common in some regions of the UK by the latter part of the century.
In the driest regions, this could put habitats at risk for up to four months per year on average, the scientists found.
Professor Nigel Arnell, a climate scientist at the University of Reading who led the research, said: "Extremely hot and dry conditions that are perfect for large wildfires are currently rare in the UK, but climate change will make them more and more common. In future decades, wildfires could pose as much of a threat to the UK as they currently do in the south of France or parts of Australia.
"This increased fire danger will threaten wildlife and the environment, as well as lives and property, yet it is currently underestimated as a threat in many parts of the UK. This research highlights the growing importance of taking the threat of wildfires seriously in the UK, as they are likely to be an increasing problem in future."
In the new study, published in the journal Environmental Research Letters, scientists looked at how often different regions of the UK would experience conditions that made it highly likely that any wildfire that occurred would become established. They calculated future fire danger based on the latest UKCP18 climate scenarios with both low and high emissions of greenhouse gases, using a version of the Met Office's Fire Severity Index which is used to define levels of wildfire danger.
They found the average number of 'very high' danger days each year will increase significantly in all parts of the UK by 2080. Excluding London, southern and eastern England were predicted to be worst affected, with the average number of danger days more than quadrupling, up to 111 days in the South East and 121 days in the East of England on average.
Significant increases by 2080 were also seen in the West Midlands (from 13 up to 96 days). Even traditionally wet parts of the UK would dry out for longer, leaving them vulnerable to severe fires for several weeks on average each year, including Wales (5 to 53), Northern Ireland (2 to 20), and West Scotland (3 to 16).
'Exceptional danger' days - currently extremely rare across the UK - were found to become more commonplace across the UK by 2080, with the East of England (0.02 to 3.55), East Midlands (0.03 to 3.23), South East (0.01 to 1.88), and Yorkshire and Humberside (0.01 to 1.55) all seeing large increases in numbers of days each year when these conditions were present.
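The regional figures above are, in essence, counts of days per year on which a fire danger index crosses a threshold. The short Python sketch below illustrates that bookkeeping on made-up daily index values; it is not the Met Office Fire Severity Index itself, and the thresholds and data are assumptions chosen only for illustration.

```python
import random

random.seed(0)

# Made-up daily fire-danger index values for one year (0 = negligible, 100 = extreme).
daily_index = [random.uniform(0, 100) for _ in range(365)]

VERY_HIGH = 75.0       # assumed threshold for a 'very high' danger day
EXCEPTIONAL = 95.0     # assumed threshold for an 'exceptional' danger day

very_high_days = sum(1 for v in daily_index if v >= VERY_HIGH)
exceptional_days = sum(1 for v in daily_index if v >= EXCEPTIONAL)

print(f"'very high' danger days this year: {very_high_days}")
print(f"'exceptional' danger days this year: {exceptional_days}")
# Averaging such counts over many modelled years and regions gives the
# days-per-year figures reported in the study.
```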
The research showed that the projected increase in fire danger is predominantly due to hotter temperatures, less rainfall, lower humidity and stronger winds expected across the UK in future decades due to climate change.
Wildfires pose environmental, health and economic risks. Although the UK records tens of thousands of fires each year, these are almost all very small, especially in comparison to those in countries and regions like Australia and California, which have the kinds of hotter, drier climates forecast for the UK in future decades.
Although the UK has experienced very low losses from wildfires so far, up to £15m is estimated to be spent each year tackling them. There is no coordinated strategy for wildfire in England, only a voluntary forum which does not have powers to set standards or guidance.
Notable examples of wildfires in the UK are the Swinley Forest fire on the Surrey/Berkshire border in May 2011 that threatened critical infrastructure; the Saddleworth Moor fire in the Peak District in May 2018 and Wanstead Flats fire in London in July 2018 that both led to residents being evacuated; residential and commercial property loss in Marlow, Buckinghamshire, in July 2018; and an extensive fire in Moray, Scotland, in April 2019 that endangered an onshore wind farm.
While natural weather and climate conditions directly affect the 'danger' of a wildfire becoming established, the 'risk' of a wildfire occurring often depends on deliberate or accidental human acts. This study, therefore, does not indicate how likely wildfires are to occur, only their likely severity if one did occur.

NOTES TO EDITORS

A table showing how fire danger is projected to increase in 14 regions of the UK can be found in the full research paper (available on request).

Full reference:
Arnell, N., Freeman, A., Gazzard, R. (2021); 'The effect of climate change on indicators of fire danger in the UK'; Environmental Research Letters; https://doi.org/10.1088/1748-9326/abd9f2

The University of Reading hosts the SPECIAL research group, part of the Leverhulme Centre for Wildfires, Environment and Society. Professor Sandy Harrison, a palaeoclimatologist in the University's Department of Meteorology, is associate director of the Centre. The £10m Centre is the first to seek to address wildfires from a global perspective, to improve the ability to understand, predict and manage them.

Credit: 
University of Reading

Building a picture of fathers in the family justice system in England

The invisibility of dads who lose access to their children because of concerns about child neglect or their ability to provide safe care comes under the spotlight in new research.

A research partnership between the University of East Anglia and Lancaster University provides new evidence ('Up Against It': Understanding Fathers' Repeat Appearance in Local Authority Care Proceedings) about fathers' involvement in care and recurrent care proceedings in England.

A national conference today (Wednesday 24th March), co-hosted online by the two universities, will share key insights from this study, funded by the Nuffield Foundation, with policy and practice audiences.

The researchers analysed anonymised family court records for more than 73,000 fathers appearing in care proceedings between 2010/11 and 2017/18.

In addition, the researchers conducted a survey of fathers in 18 local authorities and captured rich life histories through in-depth, longitudinal interviewing.

As the family courts continue to struggle with very high volumes of care cases, this research complements existing research on birth mothers, by uncovering fathers' histories, their struggles with parenthood, but also what factors help fathers recover their parenting capacity.

Dads featured in 80% of care cases. While fewer in number than mothers, a proportion of dads had also appeared in repeat care proceedings.

Significant childhood adversity, early entry to parenthood and persistent economic hardship were key issues for dads who experienced repeat involvement in care proceedings.

Mothers and fathers involved in care proceedings invoke very different public and professional responses, with fathers often viewed solely in terms of the risks they present to women and children.

However, the research team argue for a more nuanced analysis of fathers' risks and resources and an understanding that all dads are individuals. Whilst fathers should be held accountable for the safe care of children to the same degree as mothers, fathers also need validation and support for their parenting.

Fathers who took part in the research had all experienced considerable adversity in their own childhoods, and in both childhood and adulthood, they lacked appropriate support at key points in their lives (including during and after care proceedings) to enable or sustain change.

Although fathers are able to opt out of parenting in ways not so readily available to mothers, the report suggests services should avoid assuming that fathers are always optional or secondary parents. In fact, the majority of fathers (79%) appeared as couples in repeat care proceedings.

Fathers described deep and long-lasting emotional pain following the loss of their children and a desire to play an ongoing parenting role.

The majority of fathers who participated in the interviews were actively trying to make changes in their lives and in their roles as fathers.

But the resources and opportunities they had were scarce and fragile.

It was hard for dads to establish relationships of trust with social workers and other professionals. Without resources and support to manage emotions and relationships differently, couple conflict and its impact on parenting were key reasons why dads became stuck in a cycle of family court involvement.

Although there is much to be learnt from existing services for mothers, the team argue that service adaptations are sorely needed to engage fathers - adaptations that focus on emotional regulation, resolution of loss, and support for fatherhood as a mechanism for change and accountability.

Members of the research team from both Universities will be speaking at the event, including: Professor Marian Brandon, Dr Georgia Philip (University of East Anglia), Dr Yang Hu, Professor Karen Broadhurst and Dr Lindsay Youansamouth (Lancaster University).

Dr Georgia Philip, from the University of East Anglia, said: "We need a 'both-and' approach. Fathers involved in care proceedings are vulnerable; they may pose risks arising from their vulnerabilities, but they should also be seen as at-risk themselves."

Professor Karen Broadhurst, from Lancaster University, said: "Building on our research with birth mothers, this project delivers a wealth of completely new insights by throwing the spotlight on Dads. We now have a far more complete picture of mothers, fathers and couples in the family justice system, and what needs to change to prevent repeat involvement."

Rob Street, Director of Justice at the Nuffield Foundation, said: "Understanding more about the people who feature in care proceedings is an important goal and especially so in cases where the same children or parents are repeatedly involved. This significant new study sheds much-needed light on a previously largely neglected group: fathers recurrently involved in care proceedings. The insights that the research provides on the characteristics and needs of these men will provide vital information for policy and practice in this area."

Credit: 
Lancaster University

How grasslands respond to climate change

image: Grasses on the Rothamsted Research experimental field.

Image: 
Rothamsted Research

"Based on field experiments with increased carbon dioxide concentration, artificial warming, and modified water supply, scientists understand quite well how future climate change will affect grassland vegetation. Such knowledge is largely missing for effects that already occurred in the last century," says Hans Schnyder, Professor of Grassland at the TUM.

Based on the Park Grass Experiment at Rothamsted, researchers have now shown that future predicted effects of climate change on the nutrient status of grassland vegetation have already taken hold in the last century.

Plant intrinsic mechanisms respond to CO2 increase

Since 1856, research at Rothamsted has been testing the effects of different fertilizer applications on yield performance and botanical composition of hay meadows. Harvested material has been archived since the experiment began. This material is now available to researchers for studies of vegetation nutrient status, and the carbon and oxygen isotope composition of biomass.

"The increase in atmospheric CO2 concentration also affect the carbon, water, and nitrogen cycles in grasslands as well as other biomes," says Professor Schnyder. The mechanism that controls gas exchange with ambient air (the stomatal conductance of the plant canopy) is a key player in these cycles.

Plants control how far their stomata, small pores in the leaf epidermis, open to optimize the balance between carbon dioxide uptake (photosynthesis) and water loss (transpiration). With increased CO2 exposure, they reduce stomatal aperture to save water. This effect is particularly efficient in grasses. However, a reduction in transpiration leads to a reduced mass flow from the soil to the roots and leaves, which can result in reduced nitrogen uptake and feed back to weaken photosynthetic capacity.

Yield reduction and deterioration of the nitrogen nutrition status

Combining the new analyses of oxygen and carbon isotope composition, nitrogen and phosphorus in biomass, and yield and climate data, the research team, led by Professor Schnyder, analyzed the physiological effects of the emission-related increase in CO2 concentration (about 30%) and associated past climate change.

They found that in particular the grass-rich communities that were heavily fertilized with nitrogen experienced a deterioration in their nitrogen nutrition status. Climate change also resulted in greatly reduced stomatal conductance (now detectable with the new research methods) and significantly reduced yields.

The core element of the researchers' observations is the hypersensitive CO2 response of stomata in grasses which they believe limits transpiration-driven nitrogen uptake.

Nitrogen fertilization has no positive effects on grassland yield performance

"We also observed that fields that were heavily fertilized with nitrogen, and therefore rich in grass, largely lost their yield superiority over forbs- and legume-rich fields that were either less or completely unfertilized with nitrogen despite being otherwise equally supplied with nutrients over the course of the last century," says the first author of the study Juan Baca Cabrera, who is pursuing a doctorate at the TUM's chair of Grassland.

In the researchers' view, the results indicate that restraining nitrogen supply to grasslands in the future would enhance the yield contribution from forbs and legumes while at the same time helping to limit nitrogen emissions to the environment. Professor Schnyder states, "Our findings are important for understanding the importance of grasses in earth systems and provide guidance for sustainable future grassland use."

Credit: 
Technical University of Munich (TUM)

For ancient farmers facing climate change, more grazing meant more resilience

Humans are remarkably adaptable, and our ancestors have survived challenges like the changing climate in the past. Now, research is providing insight into how people who lived over 5,000 years ago managed to adapt.

Madelynn von Baeyer Ph.D. '18, now at the Max Planck Institute for the Science of Human History, UConn Associate Professor of Anthropology Alexia Smith, and Professor Sharon Steadman from The State University of New York College at Cortland recently published a paper in the Journal of Archaeological Science: Reports looking at how people living in what is now Turkey adapted agricultural practices to survive as conditions became more arid.

The work was conducted as von Baeyer's doctoral research at Çadır Höyük, a site located in Turkey that is unique because it has been continuously occupied for thousands of years.

"I was interested in studying how plant use was impacted by changing cultural patterns. This fit Steadman's research goals for Çad?r Höyük really well," says von Baeyer.

Smith explains the site is situated in an area with rich agricultural and pasture land that sustained generations through time.

"People would build a mud brick structure, and over the years the structure is either abandoned or collapses and the people just build on top of it," Smith says. "Eventually these villages look like they have been built on hills, but they're really just occupations going up and up."

Just as the occupants built new layers up, the archaeologists excavate down to get a glimpse of history and how lives changed over the millennia. Within the layers, archaeobotanists like von Baeyer and Smith look for ancient plant remains; for instance, intentionally or unintentionally charred plant matter. Though wood was often used, much can be learned by looking at the remains of fires fueled by livestock dung, says Smith: "The dung contains seeds that give clues about what the animals were eating."

Von Baeyer explains the research process: "Archaeobotanical research has three, vastly different, main stages: data collection, identification, and data analysis. Data collection is in the field, on an archaeological dig, getting soil samples and extracting the seeds from the dirt; identification is in the lab, identifying all the plant remains you collected from the field; and data analysis to tell a full story. I love every step."

The focus was on a time period called the Late Chalcolithic, roughly 3700-3200 years before the common era (BCE). By referencing paleo-climatic data and Steadman's very detailed phasing at Çadır Höyük, the researchers were able to discern how lifestyles changed as the climate rapidly shifted in what is called the 5.2 kya event, an extended period of aridity and drought at the end of the fourth millennium BCE.

With climate change, there are lots of strategies that can be used to adapt, says Smith: "They could have intensified, diversified, extensified, or abandoned the region entirely. In this case they extensified the area of land used and diversified the herds of animals they relied upon."

Zooarchaeologists on the site examined the bones to further demonstrate the shift in the types of animals herded, while the seeds from the dung-fueled fires at the dig site gave clues to what the animals were eating.

Smith says, "We know they were herding cattle, sheep, goats, and pigs, and we saw a shift to animals that are grazers. They all have a different diet, and by diversifying you are maximizing the range of potential calories that can eventually be consumed by humans."

By employing this mixed strategy, the people of Çadır Höyük were ensuring their survival as the climate became increasingly dry. Smith says that at the same time they continued to grow wheat, barley, chickpeas, and lentils, among other crops for humans, while the animals grazed on crops not suitable for human consumption -- a strategy to maximize resources and resilience.

Von Baeyer says she was not expecting to make an argument about climate and the environment at the outset of the study.

"What this study does is pretty rare in archaeobotany by tracing a shift due to climate change over a relatively short time period," she says. "Often when archaeobotanical studies talk about shifts in plant use over time, it's over large cultural changes. This study looks at a shift that only spans 500 years."

Though the circumstances are not exactly the same as they were nearly 6,000 years ago, there are lessons we can apply today, says von Baeyer.

"We can take this idea of shifting animal management and feeding practices and make it work in a current context," she says. "I think that this case study, and other studies that use archaeological data to examine climate change, expands the range of possibilities for solutions to shifting environmental conditions. I think archaeological case studies invite more out of the box thinking than just current case studies. Right now, we need to think as creatively as possible to come up with sustainable and efficient responses to climate change."

Credit: 
University of Connecticut

Revealing complex behavior of a turbulent plume at the calving front of a Greenlandic glacier

video: Helicopter flight over the studied subglacial discharge plume at the calving front of Bowdoin Glacier in Greenland in July 2017 (Evgeny A. Podolskiy)

Image: 
Evgeny A. Podolskiy

For the first time, scientists have succeeded in continuous monitoring of a subglacial discharge plume, providing a deeper understanding of the glacier-fjord environment.

As marine-terminating glaciers melt, the fresh water from the glacier interacts with the seawater to form subglacial discharge plumes, or convective water flows. These turbulent plumes are known to accelerate the melting and breakup (calving) of glaciers, drive fjord-scale circulation and mixing, and create foraging hotspots for birds. Currently, the scientific understanding of the dynamics of subglacial plumes based on direct measurements is limited to isolated instances.

A team of scientists consisting of Hokkaido University's Assistant Professor Evgeny A. Podolskiy and Professor Shin Sugiyama, and the University of Tokyo's JSPS postdoctoral scholar Dr. Naoya Kanna has pioneered a method for direct and continuous monitoring of plume dynamics. Their findings were published in the Springer Nature journal Communications Earth & Environment.

Freshwater and marine water have very different densities, due to the salts dissolved in marine water. As a result of this density contrast, when meltwater originating from the glacier surface flows down cracks and emerges at the base of the glacier, it starts upwelling, causing the formation of subglacial plumes. The rising plume entrains nutrient-rich, warmer water from the deep that further melts the glacier ice. In light of global warming and climate change, which have caused a massive loss in glacier volume, understanding how plumes behave and evolve is crucial for predicting both glacier retreat and fjord response.
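
As a back-of-the-envelope illustration of why this density contrast matters (using typical textbook densities for seawater and fresh water, not measurements from this study), the "reduced gravity" felt by fresh meltwater surrounded by seawater is roughly

$$ g' = g\,\frac{\rho_{\mathrm{sea}} - \rho_{\mathrm{fresh}}}{\rho_{\mathrm{sea}}} \approx 9.8\ \mathrm{m\,s^{-2}} \times \frac{1027 - 1000}{1027} \approx 0.26\ \mathrm{m\,s^{-2}}, $$

an effective upward acceleration that, acting over hundreds of metres of water column at a calving front, is ample to drive the vigorous, turbulent rise of a plume.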

The scientists conducted the most comprehensive plume monitoring campaign to date at Bowdoin Glacier (Kangerluarsuup Sermia), Greenland. It involved a chain of subsurface sensors recording oceanographic data directly at the calving front at different depths. Additional observations were made with time-lapse cameras, a seismometer, and unmanned aerial vehicles, among other instruments. This high-temporal-resolution dataset was then subjected to a thorough analysis to identify connections, patterns, and trends.

The study reveals that the dynamics of the plume and of the glacier-fjord system are far more complex than previously thought. The plume is intermittent in nature and is influenced by a variety of factors, such as sudden changes in stratification and the drainage of marginal lakes. For example, the scientists observed the abrupt subglacial drainage of an ice-dammed lake via the plume, which had a pronounced impact on its dynamics and was accompanied by a seismic tremor lasting several hours. They also show that tides may influence the plumes, an effect that has not been accounted for in previous studies of Greenlandic glaciers. Additionally, they suggest that wind needs more attention, as it may also affect the structure of subglacial plumes.

From their results, the scientists conclude that their work is a first step in enabling researchers to move from a snapshot view of a plume to a continuously updated picture. The identified processes and their role in glacier environments will have to be refined in future studies via modelling and new observations.

Credit: 
Hokkaido University

Greenland caves: Time travel to a warm Arctic

image: The expedition to Greenland was a challenge: After arriving by plane and boat, the team had to hike for three more days before they could set up their tents beneath the caves they were looking for.

Image: 
Robbie Shone

A 12-centimetre-thick sample of a deposit from a cave in northeast Greenland offers unique insights into the High Arctic's climate more than 500,000 years ago. The geologist and cave scientist Prof. Gina Moseley collected it during an exploratory expedition in 2015 for her palaeoclimatic research in one of the regions of the world most sensitive to climate change. The cave is located at 80° North, 35 km from the coast and 60 km from the margin of the Greenland Ice Sheet. The expedition was part of the Greenland Caves Project, funded by 59 different sponsors including the National Geographic Society.

Moseley and her team are interested in the climate and environmental history captured by the unique cave deposit. "Mineral deposits formed in caves, collectively called speleothems, include stalagmites and stalactites. In this case we analysed a flowstone, which forms sheet-like deposits from a thin water film," explains Moseley. It is very special to find a deposit of this kind in the High Arctic at all, the geologist continues: "Today this region is a polar desert and the ground is frozen due to permafrost. In order for this flowstone to form, the climate during this period must have been warmer and wetter than today. The period between about 588,000 and 549,000 years before present is generally considered to be globally cool in comparison to the present. The growth of the speleothem at this time, however, shows that the Arctic was surprisingly warm."

Regional contrasts

Gina Moseley therefore highlights the regional heterogeneities that need to be considered when researching climate change, especially with regard to future developments in a warmer world. "Our results of a warmer and wetter Arctic support modelling results showing that regional heterogeneities existed and that the Arctic was anomalously warm as a consequence of the Earth's orbital relationship to the sun at the time. Associated with these warmer temperatures was a reduction of the extent of sea ice in the Arctic, thus providing open ice-free waters from which moisture could be evaporated and transported to northeast Greenland," adds the geologist from the University of Innsbruck.

The speleothem palaeoclimate record offers the possibility to extend the knowledge of Greenland's past climate and hydrological conditions beyond the 128,000-year limit of the deep Greenland ice cores. The team used state-of-the-art methods such as uranium-thorium dating, which can push the timeline much further back. "Since the Greenland ice cores are biased towards the last glacial period and therefore cold climates, the speleothem record provides a nice counter-balance with respect to past warm periods," Moseley says. "The Arctic is warming at more than twice the rate of the global average. Understanding more about how this sensitive part of the world responds in a warmer world is very important."
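
For readers curious why uranium-thorium dating reaches so much further back than the ice cores, the idea in its simplest textbook form (ignoring the corrections for excess $^{234}$U and detrital thorium that real speleothem dating includes) is that freshly deposited calcite contains uranium but essentially no $^{230}$Th, which then grows in towards radioactive equilibrium:

$$ \left(\frac{^{230}\mathrm{Th}}{^{238}\mathrm{U}}\right)_{\mathrm{activity}} \approx 1 - e^{-\lambda_{230} t}, \qquad \lambda_{230} = \frac{\ln 2}{T_{1/2}}, \quad T_{1/2} \approx 75{,}600\ \mathrm{yr}. $$

Because the ratio only approaches 1 after several half-lives, ages of roughly half a million years -- such as the 588,000-549,000-year-old flowstone described here -- remain within reach of the method.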

Valuable climate archive

Gina Moseley identified the importance of the caves in northeast Greenland back in 2008 while doing her PhD in Bristol, UK. In 2015, she led a five-person expedition funded by many different sponsors. The expedition was a challenge: the team first flew as far as possible, then crossed a 20-kilometre-wide lake in a rubber boat, and then had to hike for three days to reach the caves. This was the first time such climate records had been obtained from caves in the High Arctic, and Gina Moseley was awarded the highly prestigious START Prize from the Austrian Science Fund (FWF) for her research, which enabled her to start a new six-year research project. In July 2019, Moseley and her Greenland Caves Project team returned to northeast Greenland for a three-week expedition.

Credit: 
University of Innsbruck