
Study points to the challenges of harvest-time weed seed controls in Pacific Northwest

image: Collection of feral rye (Secale cereale), rattail fescue (Vulpia myuros), and downy brome (Bromus tectorum) in a winter wheat field near Pilot Rock, Oregon before the feral rye and rattail fescue started to shatter seeds.

Image: 
Photo taken by Judit Barroso on May 24, 2016.

WESTMINSTER, Colorado - March 02, 2021 - Herbicide-resistant weeds have fueled a growing demand for effective, nonchemical weed controls. Among the techniques used are chaff carts, impact mills and other harvest-time practices that remove or destroy weed seeds instead of leaving them on the field to sprout.

A recent article in the journal Weed Science explores whether such harvest-time controls would be effective against downy brome, Italian ryegrass, feral rye and rattail fescue - weeds that compete with winter wheat in the Pacific Northwest. Researchers set out to determine whether the four retained a significant number of seeds at harvest time and whether those seeds were high enough on the plant to be captured.

Data collected from five commercial farms in Oregon and Washington over three growing seasons revealed the following insights:

Weed seed production, seed retention at harvest and plant height differed among the four weed species and varied across field locations and growing years.

Environmental conditions were found to influence when weed seeds started to shatter and drop and at what rate that shattering occurred.

Though further research is needed, seed production and shattering patterns also appeared to be influenced by agronomic factors, including herbicide use, row spacing, and the height and vigor of the crop.

Feral rye exhibited the greatest potential for harvest-time control due to slower shattering rates and higher seed retention (54 percent on average). The remaining weeds in the study had an average seed retention at harvest of less than 50 percent.

The low height of rattail fescue at harvest makes that species a poor candidate for harvest-time weed seed controls.

"We found that the efficacy of harvest-time weed controls in winter wheat varies by weed species and is dependent on the environment and growing conditions," says Carolina San Martín, Ph.D., of Oregon State University. "It is also clear that when harvest-time controls are used, growers should harvest as soon as possible after crop maturity to capture as many of the remaining weed seeds as possible."

Credit: 
Cambridge University Press

New study finds atmospheric rivers increase snow mass in West Antarctica

image: Thwaites Glacier in 2019.

Image: 
Kiya Riverman

A new study published today in the journal Geophysical Research Letters used NASA's ice-measuring laser satellite to identify atmospheric river storms as a key driver of increased snowfall in West Antarctica during the 2019 austral winter.

These findings from scientists at Scripps Institution of Oceanography at the University of California San Diego and colleagues will help improve overall understanding of the processes driving change in Antarctica, and lead to better predictions of sea-level rise. The study was funded by NASA, with additional support from the Rhodium Group's Climate Impact Lab, a consortium of leading research institutions examining the risks of climate change.

Atmospheric rivers are phenomena that transport large amounts of water vapor in long, narrow "rivers" in the sky. They are known to be the main driver of precipitation along the West Coast of the United States, accounting for 25-50 percent of annual precipitation in key parts of the West. A growing body of research finds that atmospheric rivers predominantly affect the western coasts of most continents, as storms pick up moisture evaporating from the oceans and build it into high concentrations in the atmosphere.
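
Atmospheric-river studies commonly quantify these systems with integrated vapor transport (IVT), the column integral of moisture flux. The sketch below is a generic illustration of that metric with invented profile values, not the detection method used in this study:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def integrated_vapor_transport(q, u, v, p):
    """Column-integrated vapor transport (kg m^-1 s^-1).

    q: specific humidity (kg/kg); u, v: wind components (m/s);
    p: pressure levels (Pa), ordered surface -> top.
    """
    # Integrate moisture flux over pressure; p decreases upward, hence the minus
    ivt_u = -np.trapz(q * u, p) / G
    ivt_v = -np.trapz(q * v, p) / G
    return np.hypot(ivt_u, ivt_v)

# Invented profile: a moist, windy lower atmosphere
p = np.array([1000e2, 850e2, 700e2, 500e2, 300e2])   # Pa
q = np.array([0.012, 0.009, 0.005, 0.002, 0.0005])   # kg/kg
u = np.array([15.0, 20.0, 22.0, 25.0, 30.0])         # m/s
v = np.array([5.0, 8.0, 10.0, 12.0, 15.0])           # m/s

print(f"IVT = {integrated_vapor_transport(q, u, v, p):.0f} kg m^-1 s^-1")
# Values above roughly 250 kg m^-1 s^-1 are commonly used to flag an atmospheric river.
```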

NASA's Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), launched into orbit in September 2018, is providing a detailed look at the height of ice and snow on the vast, frozen continent. The satellite sends 10,000 laser pulses per second toward Earth's surface and measures the height of ice sheets, glaciers, and more by timing how long the handful of photons that bounce back from each pulse take to return. Each returning photon carries a time tag, which is combined with the satellite's GPS location to pinpoint the exact place and height of the spot it reflected from. The satellite measures a detailed set of tracks over the Antarctic ice sheet every three months.
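
The time-of-flight arithmetic behind this is straightforward. Below is a minimal sketch, assuming a simple nadir geometry and ignoring the mission's real atmospheric, tide and pointing corrections; the altitude and travel time are illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def surface_elevation(sat_altitude_m, round_trip_s):
    """Elevation of the reflecting surface, assuming a straight nadir shot.

    sat_altitude_m: satellite altitude from its GPS solution
    round_trip_s: photon travel time from its time tag
    """
    one_way_range = C * round_trip_s / 2.0  # photon travels down and back
    return sat_altitude_m - one_way_range

# A photon tagged at ~3.33431 ms round trip, from ~500 km altitude:
print(f"{surface_elevation(500_000.0, 3.33431e-3):.0f} m")  # ~199 m
```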

"ICESat-2 is the first satellite to be able to measure snowfall over the Antarctic continent in such a precise way," said Helen Amanda Fricker, a glaciologist at Scripps Oceanography and co-author of the study. "In winter, weather conditions prohibit having a field team there making observations on the ground. ICESat-2 is filling in this lack of data over the vast ice sheets, and giving us a greater understanding of snow mass gain and loss on a seasonal scale."

Looking at ICESat-2 data, scientists found increases in height over the Antarctic Ice Sheet between April 2019 and June 2020 due to increased snowfall. Using a computational model of the atmosphere and snow, they found that 41 percent of height increases over West Antarctica during the 2019 winter occurred because intermittent extreme precipitation events delivered large quantities of snow during short periods of time. Of those events, 63 percent were identified as landfalling atmospheric rivers. These systems were distinguished from other storms by the much higher moisture levels measured in the lower portions of the atmosphere.

The atmospheric rivers making landfall in Antarctica originate in the subtropical and mid-latitude regions of the Southern Hemisphere. They travel long distances with no continents to stop them, eventually making landfall in West Antarctica.

"We know the frequency of atmospheric rivers is expected to increase, so it's important that scientists are able to measure how much they are contributing to snow mass increase or surface melting," said Susheel Adusumilli, lead author and PhD candidate at Scripps Oceanography. "Knowing how much snow is being accumulated across the continent helps us better understand how mass is changing as a whole, and informs our understanding of sea-level rise potential from the Antarctic Ice Sheet."

More than one hundred gigatons of ice are being lost to the ocean from Antarctica each year, contributing to ongoing sea-level rise. Most of this ice loss is driven by increased ice flow into the ocean from the melting of the floating ice shelves that surround Antarctica. Understanding the balance of mass gains from snowfall in the interior of Antarctica and mass loss from ocean warming is key to improving projections of sea-level rise.

While this study tracked ice mass in the short term, atmospheric rivers in Antarctica can also drive large amounts of snowmelt. In fact, this study found that around 90 percent of summer atmospheric rivers and 10 percent of winter atmospheric rivers coincided with potential surface melt over the West Antarctic Ice Sheet. Atmospheric river-driven melting is due to the low clouds from these systems, which can absorb and re-emit heat back to the surface. Further study is needed to understand whether these events will be snow makers or snow melters, looking at factors such as seasonality, moisture level and cloud coverage, or whether the outcome depends on the individual storm.

"In the U.S., scientists study atmospheric rivers and look at if they might be beneficial for water supply in California or hazardous, causing flooding," said study co-author Meredith Fish, postdoctoral associate at Rutgers University and alumna of Scripps Oceanography, where she studied at the Center for Western Weather and Water Extremes. "What's interesting in Antarctica is the question, are atmospheric rivers going to contribute to snowmelt or snow accumulation?"

Credit: 
University of California - San Diego

Coffee for the birds: connecting bird-watchers with shade-grown coffee

image: A blackburnian warbler perches on a coffee bush in a shade-coffee farm in Colombia. Photo by Virginia Tech's Guillermo Santos.

Image: 
Virginia Tech

Since 1970, bird populations in North America have declined by approximately 2.9 billion birds, a loss of more than one in four birds. Factors in this decline include habitat loss and ecosystem degradation from human actions on the landscape.

At the same time, enthusiasm for bird-watching has grown, with more than 45 million recreational participants in the United States alone. Now, researchers are looking into how to mobilize these bird enthusiasts to help limit bird population declines.

Enter bird-friendly coffee.

Bird-friendly coffee is certified organic, but its impact on the environment goes further than that: it is cultivated specifically to maintain bird habitats instead of clearing vegetation that birds and other animals rely on.

Researchers from Virginia Tech's College of Natural Resources and Environment, Cornell University, and Columbia University explored whether bird-friendly coffee is on the radar of bird-watchers: are they drinking it and, if not, why not? The study results were published in the journal People and Nature.

"We know bird-watchers benefit from having healthy, diverse populations of birds, and they tend to be conservation-minded folks," explained Assistant Professor Ashley Dayer of Virginia Tech's Department of Fish and Wildlife Conservation. "My colleagues and I wanted to dig into this key audience to determine their interest in bird-friendly coffee."

Bird-friendly coffee is shade-grown, meaning that it is grown and harvested under the canopy of mature trees, a process that parallels how coffee was historically grown. But with most farms in Central and South America and the Caribbean converting to full-sun operations, crucial bird habitats for migrating and resident bird species are being lost.

"Over recent decades, most of the shade coffee in Latin America has been converted to intensively managed row monocultures devoid of trees or other vegetation," explained Amanda Rodewald, the Garvin Professor and senior director of the Center for Avian Population Studies at the Cornell Lab of Ornithology. "As a result, many birds cannot find suitable habitats and are left with poor prospects of surviving migration and successfully breeding."

Purchasing shade-grown coffee is one of seven simple actions that people can take as a step toward returning bird populations to their previous numbers. "But even simple actions are sometimes not taken by people who you would expect to be on board. Human behavior is complex -- driven by knowledge, attitudes, skills, and many other factors," explained Dayer, an affiliate of the Global Change Center housed in Virginia Tech's Fralin Life Sciences Institute.

The research team surveyed more than 900 coffee-drinking bird-watchers to understand their awareness and purchasing of bird-friendly coffee.

"One of the most significant constraints to purchasing bird-friendly coffee among those surveyed was a lack of awareness," said Alicia Williams, lead author and former research assistant at the Cornell Lab of Ornithology and Virginia Tech. "This includes limits on understanding what certifications exist, where to buy bird-friendly coffee, and how coffee production impacts bird habitat."

"I was surprised to see that only 9 percent of those surveyed purchased bird-friendly coffee and less than 40 percent were familiar with it," Williams added. "It was also interesting, though not surprising, that a large number of our respondents reported that the flavor or aroma of coffee was an important consideration in their coffee purchases, which could be a useful attribute of bird-friendly coffee to stress going forward."

The next step to increasing awareness about shade-grown coffee and its potential impact on bird populations may include increased advertising for bird-friendly coffee, more availability of bird-friendly coffee, and collaborations between public-facing conservation organizations and coffee distributors.

Credit: 
Virginia Tech

New study proposes a low cost, high efficiency mask design

A new paper in Oxford Open Materials Science, published by Oxford University Press, presents low-cost modifications to existing N95 masks that prolong their effectiveness and improve their reusability after disinfection.

The COVID-19 crisis has increased demand for respiratory masks, with various models of DIY masks becoming popular alongside the commercially available N95. The utility of such masks is primarily based on the size of aerosols that they are capable of filtering out and how long they can do so effectively.

Conventional masks like the N95 use a layered system and have an efficiency rate of 95%. Yet this rate begins to drop after someone wears them for more than eight hours, because N95 masks were designed for single use. The high demand caused by COVID-19 has nonetheless led people to disinfect them for reuse. In response, a team of scientists has compiled various techniques for the decontamination and reuse of respiratory masks, based on experimental data and guidelines issued by the Centers for Disease Control and Prevention.

Researchers here propose a low-cost ($1), tri-layer mask design containing nylon, modified polypropylene, and non-woven cotton fabrics. While the polypropylene layer is already found in N95 masks, this design adds a graphene oxide and polyvinylidene fluoride mixture that acts as an active filtration layer. Recent studies show that the graphene oxide mixture has high anti-bacterial activity, making it ideal for respiratory masks. This coating has also proven to be effective even after being disinfected with H2O2, a popular practice when reusing masks. The addition of these membranes results in an efficiency level of 95%, like that of an N95, while also reducing the number of layers in the design for increased comfort.

"The possibility to produce cost effective reusable N95 masks that can help the public health system and common citizens motivated the work. We tried to leverage the connection between electrostatic charge and the filtration efficiency of masks for submicron size particles and viruses to come up with a design to make N95 masks reusable" said By Dr. Rajalakshmi.

These cheap and simple modifications can provide people in all socioeconomic classes with a long-lasting, high-filtration respiratory mask.

Credit: 
Oxford University Press USA

Single cell sequencing opens new avenues for eradicating leukemia at its source

image: Image of stem cells (blue) in bone marrow together with a sinusoidal blood vessel (red) and an arteriolar blood vessel. Single cell sequencing is a promising method because it can help discriminate between healthy stem cells and potentially cancerous ones that cannot be identified via imaging.

Image: 
Jude Al-Sabah/DKFZ

A new method, described in a study published today in the journal Nature Communications, has the potential to boost international research efforts to find drugs that eradicate cancer at its source.

Most cancerous tissue consists of rapidly dividing cells with a limited capacity for self-renewal, meaning that the bulk of cells stop reproducing after a certain number of divisions. However, cancer stem cells can replicate indefinitely, fuelling long-term cancer growth and driving relapse.

Cancer stem cells that elude conventional treatments like chemotherapy are one of the reasons patients initially enter remission but relapse soon after. In acute myeloid leukaemia, a form of blood cancer, the high probability of relapse means fewer than 15% of elderly patients live longer than five years.

However, cancer stem cells are difficult to isolate and study because of their low abundance and similarity to other stem cells, hampering international research efforts in developing precision treatments that target malignant cells while sparing healthy ones.

Researchers from the Centre for Genomic Regulation (CRG) and the European Molecular Biology Laboratory (EMBL) have overcome this problem by creating MutaSeq, a method that can be used to distinguish cancer stem cells, mature cancer cells and otherwise healthy stem cells based on their genetics and gene expression.

"RNA provides vital information for human health. For example, PCR tests for coronavirus detect its RNA to diagnose COVID-19. Subsequent sequencing can determine the virus variant," explains Lars Velten, Group Leader at the CRG and author of the paper. "MutaSeq works like a PCR test for coronavirus, but at a much more complex level and with a single cell as starting material."

To determine if a single cell is a stem cell, the researchers used MutaSeq to measure thousands of RNAs at the same time. To then find out if the cell is cancerous or healthy, the researchers carried out additional sequencing and looked for mutations. The resulting data helped researchers track whether stem cells were cancerous or healthy and determine what makes the cancer stem cells different.

"There are a huge number of small molecule drugs out there with demonstrated clinical safety, but deciding which cancers and more specifically which patients these drugs are well suited for is a daunting task," says Lars Steinmetz, Professor at Stanford University, Group Leader at EMBL Heidelberg and author of the paper. "Our method can identify drug targets that might not have been tested in the right context. These tests will need to be carried out in controlled clinical studies, but knowing what to try is an important first step."

The method is based on single cell sequencing, an increasingly common technique that helps researchers gather and interpret genome-wide information from thousands of individual cells. Single cell sequencing provides a highly detailed molecular profile of complex tissues and cancers, opening new avenues for research.

Explaining their next steps, Lars Velten says: "We have now brought together clinical researchers from Germany and Spain to apply this method in much larger clinical studies. We are also making the method much more streamlined. Our vision is to identify cancer stem cell specific drug targets in a personalized manner, making it ultimately as easy for patients and doctors to look for these treatments as it is testing for coronavirus".

Credit: 
Center for Genomic Regulation

How 'great' was the great oxygenation event?

Around 2.5 billion years ago, our planet experienced what was possibly the greatest change in its history: According to the geological record, molecular oxygen suddenly went from nonexistent to becoming freely available everywhere. Evidence for the "great oxygenation event" (GOE) is clearly visible, for example, in banded iron formations containing oxidized iron. The GOE, of course, is what allowed oxygen-using organisms - respirators - and ultimately ourselves, to evolve. But was it indeed a "great event" in the sense that the change was radical and sudden, or were the organisms alive at the time already using free oxygen, just at lower levels?

Prof. Dan Tawfik of the Weizmann Institute of Science's Biomolecular Sciences Department explains that the dating of the GOE is indisputable, as is the fact that the molecular oxygen was produced by photosynthetic microorganisms. Chemically speaking, energy taken from light split water into protons (hydrogen ions) and oxygen. The electrons produced in this process were used to form energy-storing compounds (sugars), and the oxygen, a by-product, was initially released into the surroundings.

The question that has not been resolved, however, is: Did the production of oxygen coincide with the GOE, or did living organisms have access to oxygen even before that event? One side of this debate states that molecular oxygen would not have been available before the GOE, as the chemistry of the atmosphere and oceans prior to that time would have ensured that any oxygen released by photosynthesis would have immediately reacted chemically. A second side of the debate, however, suggests that some of the oxygen produced by the photosynthetic microorganisms may have remained free long enough for non-photosynthetic organisms to snap it up for their own use, even before the GOE. Several conjectures in between these two have proposed "oases," or short-lived "waves," of atmospheric oxygenation.

Research student Jagoda Jabłońska in Tawfik's group thought that the group's focus - protein evolution - could help resolve the issue. That is, by tracing how and when various proteins have evolved, she and Tawfik might find out when living organisms began to process oxygen. The phylogenetic trees used for such tracing are widely applied to unravel the history of species, or of human families, but also of protein families, and Jabłońska decided to use a similar approach to unearth the evolution of oxygen-based enzymes.

To begin the study, Jabłońska sorted through around 130 known families of enzymes that either make or use oxygen in bacteria and archaea - the sorts of life forms that would have been around in the Archean Eon (the period between the emergence of life, ca. 4 billion years ago, and the GOE). From these she selected around half, in which oxygen-using or -emitting activity was found in most or all of the family members and seemed to be the founding function. That is, the very first family member would have emerged as an oxygen enzyme. From these, she selected 36 whose evolutionary history could be traced conclusively. "Of course, it was far from simple," says Tawfik. "Genes can be lost in some organisms, giving the impression they evolved later in members in which they held on. And microorganisms share genes horizontally, messing up the phylogenetic trees and leading to an overestimation of the enzyme's age. We had to correct for the latter, especially."

The phylogenetic trees the researchers ultimately obtained showed a burst of oxygen-based enzyme evolution about 3 billion years ago - something like half a billion years before the GOE. Examining this time frame further, the scientists found that rather than coinciding with the takeover of atmospheric oxygen, this burst dated to the time that bacteria left the oceans and began to colonize the land. A few oxygen-using enzymes could be traced back even farther. If oxygen use had coincided with the GOE, the enzymes that use it would have evolved later, so the findings supported the scenario in which oxygen was already known to many life forms by the time the GOE took place.

The scenario that Jabłońska and Tawfik propose looks something like this: Oxygen is one of the most chemically reactive elements around. Like one end of a battery, it readily accepts electrons, thus providing extra metabolic power. That makes it extremely useful to many life forms, but also potentially damaging. So photosynthetic organisms as well as other organisms living in their vicinity had to quickly develop ways to efficiently dispose of oxygen. This would account for the emergence of oxygen-utilizing enzymes that would remove molecular oxygen from cells. One microorganism's waste, however, is another's potential source of life. Oxygen's unique reactivity enabled organisms to break down and use "resilient" molecules such as aromatics and lipids, so enzymes that take up and use oxygen likely began evolving soon after.

Tawfik: "This confirms the hypothesis that oxygen appeared and persisted in the biosphere well before the GOE. It took time to achieve the higher GOE level, but by then oxygen was widely known in the biosphere."

Jabłońska: "Our research presents a completely new means of dating oxygen emergence, and one that helps us understand how life as we know it now evolved."

Credit: 
Weizmann Institute of Science

New skills of Graphene: Tunable lattice vibrations

image: Electron microscopy shows the graphene sample (gray) in which the helium beam has created a hole pattern so that the density varies periodically. This results in the superposition of vibrational modes and the emergence of a mechanical band gap. The frequency of this phononic system can be adjusted between 50 MHz and 217 MHz by mechanical tension.

Image: 
K. Höflich/HZB

Without electronics and photonics, there would be no computers, smartphones, sensors, or information and communication technologies. In the coming years, the new field of phononics may further expand these options. That field is concerned with understanding and controlling lattice vibrations (phonons) in solids. To realize phononic devices, however, lattice vibrations have to be controlled as precisely as is already routine for electrons and photons.

Phononic crystals

The key building block for such a device is a phononic crystal, an artificially fabricated structure in which properties such as stiffness, mass or mechanical stress vary periodically. Phononic devices are used as acoustic waveguides, phonon lenses and vibration shields, and may realize mechanical qubits in the future. Until now, however, these systems operated at fixed vibrational frequencies; it was not possible to change their vibrational modes in a controlled manner.

Periodic hole pattern in graphene

Now, for the first time, a team at Freie Universität Berlin and HZB has demonstrated this control. They used graphene, a form of carbon in which the carbon atoms interconnect two-dimensionally to form a flat honeycomb structure. Using a focused beam of helium ions, the team was able to cut a periodic pattern of holes in the graphene. This method is available at CoreLab CCMS (Correlative Microscopy and Spectroscopy). "We had to optimize the process a lot to cut a regular pattern of holes in the graphene surface without touching neighbouring holes," Dr. Katja Höflich, group leader at Ferdinand-Braun-Institut Berlin and guest scientist at HZB, explains.

Bandgap and tunability

Jan N. Kirchhof, first author of the study now published in Nano Letters, calculated the vibrational properties of this phononic crystal. His simulations show that in a certain frequency range, no vibrational modes are allowed. Analogous to the electronic band structure in solids, this region is a mechanical band gap. The band gap can be used to localize individual modes and shield them from the environment. What's special here: "The simulation shows that we can tune the phononic system quickly and selectively, from 50 megahertz to 217 megahertz, via applied mechanical pressure, induced by a gate voltage," says Jan Kirchhof.
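
For intuition, a tensioned membrane's resonance frequency grows roughly with the square root of its tensile stress. Under that textbook scaling (a rough sketch, not the authors' simulation), spanning the reported range would require increasing the stress by roughly a factor of nineteen:

```python
import math

def tuned_frequency(f0_mhz, stress_ratio):
    """Membrane resonance under tension: f scales with sqrt(tensile stress)."""
    return f0_mhz * math.sqrt(stress_ratio)

f0, f_target = 50.0, 217.0  # MHz, the range reported for the graphene crystal
required_ratio = (f_target / f0) ** 2
print(f"~{required_ratio:.0f}x the tensile stress")      # ~19x
print(f"{tuned_frequency(f0, required_ratio):.0f} MHz")  # 217 MHz
```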

Future applications

"We hope that our results will push the field of phononics further. We expect to discover some fundamental physics and develop technologies that could lead to application in e.g. ultrasensitive photosensors or even quantum technologies" explains Prof. Kirill Bolotin, head of the FU working group. The first experiments on the new phononic crystals from HZB are already underway in his group.

Credit: 
Helmholtz-Zentrum Berlin für Materialien und Energie

New cancer scan could guide brain surgery

A type of ultrasound scan can detect cancer tissue left behind after a brain tumour is removed more sensitively than surgeons, and could improve the outcome from operations, a new study suggests.

The new ultrasound technique, called shear wave elastography, could be used during brain surgery to detect residual cancerous tissue, allowing surgeons to remove as much as possible.

Researchers believe that the new type of scan, which is much faster to carry out and more affordable than 'gold standard' MRI scans, has the potential to reduce a patient's risk of relapse by cutting the chances that a tumour will grow back.

A multi-institutional team led by The Institute of Cancer Research, London, and the National Hospital for Neurology and Neurosurgery, London, compared three different techniques to detect tumour tissue during surgery - shear wave scans, a standard 2D ultrasound, and a surgeon's opinion - in 26 patients.

The research was conducted in collaboration with clinicians from The Royal London Hospital and University Hospital Southampton.

Researchers performed shear wave scans and 2D ultrasounds during the operation - before, during and after tumour removal. The researchers also asked surgeons to identify potentially cancerous tissue before providing them with scan findings. The team then compared all techniques with gold-standard MRI scans after surgery.

The study is published in the journal Frontiers in Oncology and was funded by the Royal Free Charity and the Engineering and Physical Sciences Research Council, part of UKRI. It found that shear wave elastography was more sensitive in detecting residual tumour tissue than a standard ultrasound or the surgeon alone.

Shear wave scans detected tumour tissue with 94 per cent sensitivity - compared with 73 per cent for standard ultrasound and 36 per cent for the surgeon. This means that when there was residual tumour, shear wave scans were 2.5 times better than the surgeon at detecting it.

However, shear wave scans detected tumour tissue with only 77 per cent specificity - better than the 63 per cent for standard ultrasound but less good than the 100 per cent for surgeons.

That means that the new technique could yield more 'false positives' than surgeons - and for that reason the researchers believe it would be best used in combination with a surgeon's opinion.
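
For reference, both figures follow from simple counts judged against the gold-standard MRI. The sketch below uses hypothetical counts, chosen only to reproduce percentages like those reported, not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """tp/fn: residual tumour detected/missed; tn/fp: clear tissue correctly
    cleared/wrongly flagged, all judged against the gold-standard MRI."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for illustration only
sens, spec = sensitivity_specificity(tp=17, fn=1, tn=10, fp=3)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # 94%, 77%
```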

Shear wave elastography measures tissues' stiffness and stretchiness. Vibrations or 'shear waves' are created and detected as they move through tissue - moving faster through stiffer tissue.

Brain tumours tend, on average, to be stiffer than normal brain tissue and the technique works by mapping suspicious areas of particular stiffness, which can then be examined and removed during surgery.
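
The relation behind that mapping is the standard elastography formula: the shear modulus (stiffness) equals tissue density times the square of the shear wave speed. A minimal sketch, with illustrative speeds rather than measured values:

```python
RHO = 1000.0  # approximate density of soft tissue, kg/m^3

def shear_modulus_kpa(wave_speed_m_s):
    """Stiffness from shear wave speed: mu = rho * v^2 (reported in kPa)."""
    return RHO * wave_speed_m_s ** 2 / 1000.0

# Illustrative speeds only: a wave crossing stiff tumour vs softer brain tissue
print(shear_modulus_kpa(3.0))  # 9.0 kPa
print(shear_modulus_kpa(1.5))  # 2.25 kPa
```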

Patient outcomes from brain tumour surgery are known to be better when as much of the tumour as possible is removed. In order to make sure that none of the resectable tumour is left behind, neurosurgeons use tools to guide them during surgery.

But although MRI scans are the most accurate, their use during surgery is not normally an option - as they are costly, not normally available in operating theatres and would increase the duration of surgery by almost two hours.

Shear wave scans were shown to be as good as post-surgery MRIs at detecting tumour tissue that had been left behind - making them a cheaper, faster and more feasible alternative. The study is the first to demonstrate the potential of shear wave elastography as a neurosurgical tool to confirm during surgery the completeness of tumour removal - although the benefits of the technique will now need to be confirmed in larger studies before it can be recommended as standard practice.

Study leader Professor Jeffrey Bamber, Professor in Physics Applied to Medicine at The Institute of Cancer Research, London, said:

"Ensuring all of a brain tumour is removed without damaging healthy tissue is a major challenge in brain surgery. Using this new type of scan, surgeons could greatly increase confidence that no cancerous tissue is going to be left behind after surgery.

"Shear wave scanning can quickly and affordably map the stiffness of brain and tumour tissue in patients during surgery. Tumour tissue tends to have a different stiffness from that of surrounding healthy brain tissue and can be located and removed.

"We have shown for the first time that this new tool is better than either a standard 2D ultrasound or a surgeon's judgment on its own - and has the potential to supplement a surgeon's opinion as a means of improving outcomes from operations."

Professor Kevin Harrington, Head of the Division of Radiotherapy and Imaging at The Institute of Cancer Research, London, said:

"Imaging plays a crucial role in many aspects of cancer treatment, in providing valuable information about tumours and ensuring doctors don't have to make decisions blind. This new study has shown for the first time that a particular type of ultrasound scan could provide real-time guidance to brain surgeons during operations as they choose which tissue to remove. It's an exciting area of research which has the potential to improve outcomes for patients by ensuring surgeons take out the entire tumour while minimising damage to the healthy brain."

Mr Neil Dorward, Consultant Neurosurgeon and co-researcher at the National Hospital for Neurology and Neurosurgery said:

"This technique provides a very practical means of detecting areas of potentially removable tumour that are not readily visible to the operating surgeon. The surgeon must use his or her experience to decide whether the area of abnormality should be resected. This has the potential to substantially improve the outcome of such operations."

Dagmar Krafft, 54, who was not involved in the study, was diagnosed with brain cancer in 2013 after suffering a seizure. She said:

"I was diagnosed with brain cancer completely out of the blue. I was at an orchestral rehearsal - where I play as an amateur violinist - when I had a seizure and was rushed to hospital. The team there took really good care of me, and I had radiotherapy to reduce the tumour size which, thankfully, was successful. After the radiotherapy finished, I was kept under regular surveillance, but otherwise life was pretty much back to normal.

"But then, at a routine scan in 2019, they discovered the cancer had relapsed. It was a total shock. This was six years after my radiotherapy had finished, and so I'd thought I was safe. I had surgery last year where they removed as much of the tumour as possible, and I now have regular scans to monitor it. I think the findings about these scans are fantastic. Any new technology that can help the surgeons do their jobs can only be a good thing - and as a patient it's really encouraging to know you'll be in the safest hands possible."

Credit: 
Institute of Cancer Research

Watch: Recycled cotton becomes new fabric

A lot of us recycle our old textiles, but few of us know that they are very difficult to re-use, and often end up in landfills anyway. Now, researchers at Lund University in Sweden have developed a method that converts cotton into sugar, that in turn can be turned into spandex, nylon or ethanol.

WATCH: New method transforms old cotton into glucose
https://www.youtube.com/watch?v=B1V--prLs08

Every year, an estimated 25 million tonnes of cotton textiles are discarded around the world. In total, 100 million tonnes of textiles are thrown out. In Sweden, most of the material goes straight into an incinerator and becomes district heating. In other places, it is even worse, as clothes usually end up in landfills.

"Considering that cotton is a renewable resource, this is not particularly energy-efficient", says Edvin Ruuth, researcher in chemical engineering at Lund University.

"Some fabrics still have such strong fibres that they can be re-used. This is done today and could be done even more in future. But a lot of the fabric that is discarded has fibres that are too short for re-use, and sooner or later all cotton fibres become too short for the process known as fibre regeneration."

At the Department of Chemical Engineering in Lund where Edvin Ruuth works, there is a great deal of accumulated knowledge about using micro-organisms and enzymes, among other things, to transform the "tougher" carbohydrates in biomass into simpler molecules. This means that everything from biological waste and black liquor to straw and wood chips can become bioethanol, biogas and chemicals.

Now the researchers have also succeeded in breaking down the plant fibre in cotton - the cellulose - into smaller components. However, no micro-organisms or enzymes are involved this time; instead, the process involves soaking the fabrics in sulphuric acid. The result is a clear, dark, amber-coloured sugar solution.

"The secret is to find the right combination of temperature and sulphuric acid concentration", explains Ruuth, who fine-tuned the 'recipe' together with doctoral student Miguel Sanchis-Sebastiá and professor Ola Wallberg.

Glucose is a very flexible molecule and has many potential uses, according to Ruuth.

"Our plan is to produce chemicals which in turn can become various types of textiles, including spandex and nylon. An alternative use could be to produce ethanol."

From a normal sheet, they extract five litres of sugar solution, with each litre containing the equivalent of 33 sugar cubes. However, you couldn't turn the liquid into a soft drink as it also contains corrosive sulphuric acid.

One of the challenges is to overcome the complex structure of cotton cellulose.

"What makes cotton unique is that its cellulose has a high crystallinity. This makes it difficult to break down the chemicals and reuse their components. In addition, there are a lot of surface treatment substances, dyes and other pollutants which must be removed. And structurally, a terrycloth towel and an old pair of jeans are very different", says Ruuth.

"Thus it is a very delicate process to find the right concentration of acid, the right number of treatment stages and temperature."

The concept of hydrolysing pure cotton is nothing new per se, explains Ruuth; it was discovered in the 1800s. The difficulty has been to make the process effective, economically viable and attractive.

"Many people who tried ended up not utilising much of the cotton, while others did better but at an unsustainable cost and environmental impact", says Ruuth.

When he started making glucose out of fabrics a year ago, the return was a paltry three to four per cent. Now he and his colleagues have reached as much as 90 per cent.

Once the recipe formulation is complete, it will be both relatively simple and cheap to use.

However, for the process to become a reality, the logistics must work. There is currently no established way of managing and sorting various textiles that are not sent to ordinary clothing donation points.

Fortunately, a recycling centre unlike any other in the world is currently under construction in Malmö, where clothing is sorted automatically using a sensor. Some clothing will be donated, rags can be used in industry and textiles with sufficiently coarse fibres can become new fabrics. The rest will go to district heating.

Hopefully, the proportion of fabrics going to district heating will be significantly smaller once the technology from Lund is in place.

Credit: 
Lund University

Lake turbidity mitigates impact of warming on walleyes in upper Midwest lakes

image: Slightly higher water temperatures in some upper Midwest lakes have resulted in increased growth rates for young walleyes like these, but if water temperatures continue to rise, influenced by a warming climate, walleye populations in the region ultimately will suffer.

Image: 
Gretchen Hansen, University of Minnesota

Because walleyes are a cool-water fish species with a limited temperature tolerance, biologists expected them to act like the proverbial "canary in a coal mine" that would begin to suffer and signal when lakes influenced by climate change start to warm. But in a new study, a team of researchers discovered that it is not that simple.

"After analyzing walleye early-life growth rates in many lakes in the upper Midwest over the last three decades, we determined that water clarity affects how growth rates of walleyes change as lakes start to warm," said Tyler Wagner, Penn State adjunct professor of fisheries ecology. "In some lakes, warming actually led to increased walleye growth rates, in others there essentially was no change, and in others, growth rates declined. The different responses of growth rates to increasing water temperatures across lakes appear to be influenced by water turbidity."

The research is significant, Wagner explained, because walleye fisheries in the upper Midwest are important not just ecologically, but also from an economic and cultural perspective. Because walleye fishing is a valued social activity in Minnesota and Wisconsin and hundreds of thousands of walleye fingerlings are stocked there to bolster wild populations, the region is the ideal place to study the effect of warming conditions on the fish.

According to the U.S. Environmental Protection Agency, the Midwest has gotten warmer, with average annual temperatures increasing over the last several decades. Between 1900 and 2010, the average air temperature increased by more than 1.5 degrees Fahrenheit in the region.

"The rate of increase in air temperature has accelerated in recent decades, and this increase in air temperature will affect the thermal habitat for fishes across the region," Wagner said. "Temperatures are projected to continue increasing across the Midwest -- with the greatest increases in average temperature expected in northern areas -- so we wanted to know what was happening with walleye populations in the upper Midwest."

Using data provided by the Minnesota and Wisconsin departments of Natural Resources, researchers quantified annual walleye early-life growth rates from 1983 to 2015 in 61 lakes in the upper Midwest. Then they estimated the relationship between early-life growth rates and water growing degree days -- an indicator of the temperature the fish are exposed to -- over those 32 years. Importantly, they also examined how water turbidity influenced the relationship between growth rates and growing degree days across the 61 lakes.
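
Growing degree days are a standard accumulation metric: each day contributes the amount by which the mean temperature exceeds a base threshold. A generic sketch follows; the base temperature here is an assumption for illustration, not necessarily the study's value:

```python
def growing_degree_days(daily_mean_temps_c, base_c=5.0):
    """Each day adds max(0, T_mean - T_base) to the running total."""
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

# One invented week of water temperatures (deg C)
week = [4.0, 6.5, 8.0, 9.5, 11.0, 10.0, 7.5]
print(growing_degree_days(week))  # 22.5 degree days
```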

Their findings, published Feb. 23 in the Canadian Journal of Fisheries and Aquatic Sciences, showed that, on average, early-life growth rates increased with increasing growing degree days in turbid lakes, remained more or less unchanged in moderately clear lakes, and decreased in very clear lakes. This suggests that a "one-size-fits-all" approach to managing walleye populations across a broad landscape may not be effective, according to Wagner.

"Rather, lake-specific characteristics likely will be important in determining how walleye populations respond to climate change," he said.

The analysis also indicated that walleye growth rates varied among lakes of different sizes, explained lead researcher Danielle Massie, who graduated from Penn State in 2020 with a master's degree in wildlife and fisheries science.

"Walleye early-life growth rates, on average, were significantly greater in larger lakes," she said. "Our results provide insights into the conservation of cool-water species in a changing environment and identify lake characteristics in which walleye growth may be at least somewhat resilient to climate change."

The results of the research were surprising, Wagner conceded, because researchers expected to see walleye growth rates in most lakes decrease with more growing degree days -- since walleyes prefer cool water. But that did not happen in most of the lakes they studied.

"It sounds counterintuitive at first, but if we think about fish growth, we can think about it as a performance curve, where growth increases with increasing temperature to a certain point," he said. "But as the lake warms past that optimum temperature, the curve descends, and we'll see declining growth as the temperature increases beyond that point."

Slightly higher water temperatures in some upper Midwest lakes have resulted in increased growth rates for walleyes, but if water temperatures continue to rise, influenced by a warming climate, walleye populations in the region will suffer, predicted Wagner, assistant leader of Penn State's Pennsylvania Cooperative Fish and Wildlife Research Unit, housed in the College of Agricultural Sciences.

"We're going to reach a water temperature tipping point where growth will decline, and then we'll see deleterious effects," he said. "This is why understanding what factors, such as turbidity and lake size, influence how fish populations respond to warming is critical for informing management and conservation efforts."

Credit: 
Penn State

Novel soft tactile sensor with skin-comparable characteristics for robots

video: The robotic gripper can stably grasp an egg even when the experimenter tries to drag it down. When the experimenter stops dragging, the gripper adjusts the magnitude of its force to avoid breaking the egg.

Image: 
Provided by Dr Shen's team

A joint research team co-led by City University of Hong Kong (CityU) has developed a new soft tactile sensor with skin-comparable characteristics. A robotic gripper with the sensor mounted at the fingertip could accomplish challenging tasks such as stably grasping fragile objects and threading a needle. Their research provided new insight into tactile sensor design and could contribute to various applications in the robotics field, such as smart prosthetics and human-robot interaction.

Dr Shen Yajing, Associate Professor at CityU's Department of Biomedical Engineering (BME) was one of the co-leaders of the study. The findings have been recently published in the scientific journal Science Robotics, titled "Soft magnetic skin for super-resolution tactile sensing with force self-decoupling".

Mimicking human skin characteristics

A main characteristic of human skin is its ability to sense shear force, meaning the force that makes two objects slip or slide over each other when they come into contact. By sensing the magnitude, direction and subtle changes of shear force, our skin provides feedback that allows us to adjust how we hold an object stably with our hands and fingers, and how tightly we should grasp it.

To mimic this important feature of human skin, Dr Shen and Dr Pan Jia, a collaborator from the University of Hong Kong (HKU), have developed a novel, soft tactile sensor. The sensor has a multi-layered structure like human skin, with a flexible, specially magnetised film about 0.5 mm thick as the top layer. When an external force is exerted on it, the sensor detects the change of the magnetic field caused by the film's deformation. More importantly, it can "decouple", or decompose, the external force automatically into two components - normal force (the force applied perpendicularly to the object) and shear force - providing an accurate measurement of each.
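
The decomposition itself is ordinary vector algebra: projecting the contact force onto the surface normal and taking the remainder as shear. The sketch below illustrates only that geometry; the actual sensor infers the force from magnetic-field changes rather than starting from a known force vector:

```python
import numpy as np

def decouple_force(force, surface_normal):
    """Split a 3-D contact force into its normal and shear components."""
    n = surface_normal / np.linalg.norm(surface_normal)
    normal = np.dot(force, n) * n  # component along the surface normal
    shear = force - normal         # component in the contact plane
    return normal, shear

f = np.array([0.3, 0.1, -1.2])  # an invented measured force, N
n = np.array([0.0, 0.0, 1.0])   # fingertip surface normal
print(decouple_force(f, n))     # ([0, 0, -1.2], [0.3, 0.1, 0])
```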

"It is important to decouple the external force because each force component has its own influence on the object. And it is necessary to know the accurate value of each force component to analyse or control the stationary or moving state of the object," explained Yan Youcan, PhD student at BME and the first author of the paper.

Deep learning enhanced accuracy

Moreover, the sensor possesses another human skin-like characteristic - tactile "super-resolution", which allows it to locate a stimulus's position as accurately as possible. "We have developed an efficient tactile super-resolution algorithm using deep learning and achieved a 60-fold improvement of the localisation accuracy for contact position, which is the best among super-resolution methods reported so far," said Dr Shen. Such an efficient tactile super-resolution algorithm can help improve the physical resolution of a tactile sensor array with the least number of sensing units, thus reducing the number of wirings and the time required for signal transmitting.

"To the best of our knowledge, this is the first tactile sensor that achieved self-decoupling and super-resolution abilities simultaneously," he added.

Robotic hand with the new sensor completes challenging tasks

By mounting the sensor at the fingertip of a robotic gripper, the team showed that robots can accomplish challenging tasks. For example, the robotic gripper stably grasped fragile objects like an egg while an external force tried to drag it away, and threaded a needle via teleoperation. "The super-resolution of our sensor helps the robotic hand to adjust the contact position when it grasps an object. And the robotic arm can adjust force magnitude based on the force decoupling ability of the tactile sensor," explained Dr Shen.

He added that the sensor can be easily extended to the form of sensor arrays, or even a continuous electronic skin covering the whole body of a robot, in the future. The sensitivity and measurement range of the sensor can be adjusted by changing the magnetisation direction of the top layer (the magnetic film) without changing the sensor's thickness. This enables the e-skin to have different sensitivity and measurement ranges in different parts, just like human skin.

Also, the sensor has much shorter fabrication and calibration processes than other tactile sensors, facilitating practical applications.

"This proposed sensor could be beneficial to various applications in the robotics field, such as adaptive grasping, dextrous manipulation, texture recognition, smart prosthetics and human-robot interaction. The advancement of soft artificial tactile sensors with skin-comparable characteristics can make domestic robots become part of our daily life," concluded Dr Shen.

Credit: 
City University of Hong Kong

Scientists describe 'hidden biodiversity crisis' as variation within species is lost

image: Nature has always been a source of artistic inspiration and materials, and variation, both across species and within species, an important contributor to art and culture. This painting illustrates intraspecific variation in sockeye salmon runs and was created from natural botanical pigments foraged from North American native species, including Western red cedar (bark), red alder (cones), staghorn sumac (berries), and salal (berries).

Image: 
Simone Des Roches

The rapid loss of variation within species is a hidden biodiversity crisis, according to the authors of a new study looking at how this variation supports essential ecological functions and the benefits nature provides for people.

Published March 1 in Nature Ecology and Evolution, the study highlights the need to better understand and conserve variation within species in order to safeguard nature's contributions to people.

"Biodiversity means more than the number of species, and when we focus on species-level extinctions we are missing part of the story," said corresponding author Eric Palkovacs, professor of ecology and evolutionary biology at UC Santa Cruz. "Intraspecific variation is a neglected aspect of biodiversity, but it has value for people, and we need to start recognizing that and protecting this form of biodiversity."

An earlier study led by first author Simone Des Roches, a postdoctoral researcher at UC Santa Cruz now at the University of Washington, showed that the loss of variation within species can have serious ecological consequences. This got Des Roches and Palkovacs thinking about the broader implications of their findings for the values and services nature provides to people, from forest materials and clean water to commercial fisheries and medicines derived from natural products.

For the new study, they surveyed the scientific literature for studies showing how intraspecific variation supports ecosystem services and other aspects of nature's contributions to people. They found well documented connections across a wide variety of species, including fish and commercial fisheries, insects and crop pollination, woody plants and forestry products, many different crops and their wild ancestors, and more.

"There is a whole suite of documented cases, including several examples of what happens when we lose intraspecific variation," Palkovacs said. "One of the best examples is commercial fisheries, where diverse fish stocks help to stabilize the overall population."

Subpopulations of salmon, for example, are locally adapted to the conditions of different watersheds, allowing the overall population to remain stable even as environmental fluctuations cause declines in some subpopulations and increases in others. These "portfolio effects" in salmon are undermined by dams, which block subpopulations from critical spawning habitat, and by hatchery production, which can reduce genetic variation. The loss of intraspecific variation in salmon can lead to boom-bust population cycles that are detrimental to the long-term value of the fishery.

Des Roches noted that people have long depended on variation within domesticated and agriculturally important species. "Our coevolutionary history with hundreds of domesticated species is characterized by our continued selection for unusual and beneficial variants within species," she said. "We've often taken this too far and have thus lost critical genetic diversity in domesticated species. We depend on outbreeding with more genetically variable wild type or ancestral populations (when they exist) to restore this diversity."

Plants with medicinal value provide other well documented examples of the value of intraspecific variation, Palkovacs said. "Different varieties of the same plant species may have different compounds with different medicinal properties, such as different antimalarial drugs that depend on the genetic diversity of the plants they are derived from."

The authors emphasized the importance of collaborating with local and indigenous groups who have deep knowledge of the relationships between intraspecific variation and the natural products and services they use. "We need to take advantage of the local knowledge systems to inform our understanding of these connections," Palkovacs said.

He noted that Western science has focused overwhelmingly on species-level extinctions, and only the most well-studied groups of organisms have been characterized from the standpoint of intraspecific variation. Of all the species evaluated by the International Union for Conservation of Nature (IUCN), for example, only about 1 percent have been evaluated below the species level, and many of those show precipitous declines in diversity.

"There is strong evidence that the loss of intraspecific variation may be a very widespread problem, but we don't even know what is being lost," Palkovacs said.

There are practical steps that can be taken now, he said, to better document this variation, preserve biodiversity, and protect its contributions to the wellbeing of people. New genomic tools, for example, are available to quickly and systematically characterize the variation within species. This intraspecific variation can be directly incorporated into biodiversity assessments, such as those done by the IUCN and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES).

Addressing this aspect of biodiversity should be a major goal of global conservation efforts, the authors said. "The available evidence strongly suggests that the benefits of studying and conserving intraspecific variation will far outweigh the costs," Palkovacs said.

He noted that variation within species is the raw material of adaptive evolution. In a rapidly changing world, this variation is critically important to enable species to adapt to the conditions of an unpredictable future.

Credit: 
University of California - Santa Cruz

Potential target for treating many cancers found within GLI1 gene

Scientists from the Stanley Manne Children's Research Institute at Ann & Robert H. Lurie Children's Hospital of Chicago found that a region within the DNA of the cancer-promoting GLI1 gene is directly responsible for regulating this gene's expression. These findings, published in the journal Stem Cells, imply that this region within GLI1 could potentially be targeted as cancer treatment, since turning off GLI1 would interrupt excessive cell division characteristic of cancer.

"From previous research, we know that GLI1 drives the unrelenting cell proliferation that is responsible for many cancers, and that this gene also stimulates its own expression," says co-senior author Philip Iannaccone, MD, PhD, Professor Emeritus at the Manne Research Institute at Lurie Children's and Northwestern University Feinberg School of Medicine. "We established in living human embryonic stem cells that removing the GLI1 regulatory region eliminated GLI1 expression and halted its activity. These findings are promising and could point to a therapeutic target for cancer."

Dr. Iannaccone and colleagues used CRISPR gene editing technology to delete the binding region of the GLI1 DNA in human embryonic stem cells. They found that without this region, GLI1 remained turned off, which interfered with the gene's normal activity of driving embryonic development of blood, bone, and nerve cells.

"A surprising aspect of this work was that turning GLI1 off affected stem cell differentiation to all three embryonic lineages," says first author Yekaterina Galat, BS, Research Associate at the Manne Research Institute at Lurie Children's.

"The developmental function of GLI1 ends after birth, so if we manage to stop its expression in the context of cancer, it should not have negative consequences to normal biology," explains Dr. Iannaccone.

GLI1 expression is associated with about a third of all human cancers. In addition to promoting cell proliferation, GLI1 expression increases tumor cell migration and is associated with resistance to chemotherapy drugs.

"Our team plans to study GLI1 associated proteins that assist in regulation of GLI1 expression through its binding region," says Dr. Iannaccone. "Targeting these proteins as a means to stop GLI1 activity could prove to be a fruitful treatment strategy for cancer."

Credit: 
Ann & Robert H. Lurie Children's Hospital of Chicago

On calm days, sunlight warms the ocean surface and drives turbulence

image: Clouds form over the Indian Ocean as the sun sets. A new study has found that in tropical oceans, a combination of sunlight and weak winds drives up surface temperatures in the afternoon, increasing atmospheric turbulence.

Image: 
Derek Coffman, NOAA.

CORVALLIS, Ore. - In tropical oceans, a combination of sunlight and weak winds drives up surface temperatures in the afternoon, increasing atmospheric turbulence, according to unprecedented observational data collected by an Oregon State University researcher.

The new findings could have important implications for weather forecasting and climate modeling, said Simon de Szoeke, a professor in OSU's College of Earth, Ocean, and Atmospheric Sciences and the lead author of the study.

"The ocean warms in the afternoon by just a degree or two, but it is an effect that has largely been ignored," said de Szoeke. "We would like to know more accurately how often this is occurring and what role it may play in global weather patterns."

The findings were just published in the journal Geophysical Research Letters. Co-authors are Tobias Marke and W. Alan Brewer of the NOAA Chemical Sciences Laboratory in Boulder, Colorado.

Over land, afternoon warming can lead to atmospheric convection and turbulence and often produces thunderstorms. Over the ocean, the afternoon convection also draws water vapor from the ocean surface to moisten the atmosphere and form clouds. The warming over the ocean is more subtle and gets stronger when the wind is weak, said de Szoeke.

De Szoeke's study of ocean warming began during a research trip in the Indian Ocean several years ago. The research vessel was equipped with Doppler lidar, a remote sensing technology similar to radar that uses a laser pulse to measure air velocity. That allowed researchers to collect measurements of the height and strength of the turbulence generated by the afternoon warming for the first time.

Previous observations of the turbulence over the ocean had been made only by aircraft, de Szoeke said.

"With lidar, we have the ability to profile the turbulence 24 hours a day, which allowed us to capture how these small shifts in temperature lead to air turbulence," he said. "No one has done this kind of measurement over the ocean before."

Researchers gathered data from the lidar around the clock for about two months. At one point, surface temperatures warmed each afternoon for four straight days with calm wind speeds, giving researchers the right conditions to observe a profile of the turbulence created in this type of sea surface warming event.

It took a "perfect storm" of conditions, including round-the-clock sampling by the lidar and a long ocean deployment, to capture these unprecedented observations, de Szoeke said.

When sunlight warms the ocean surface in the afternoon, surface temperatures go up by a degree Celsius or more. This warming occurs during roughly 5% of days in the world's tropical oceans. Those oceans represent about 2% of the Earth's surface, an area roughly the size of the United States.

The calm wind and warming air conditions occur in different parts of the ocean in response to weather conditions, including monsoons and Madden-Julian Oscillation, or MJO, events, which are ocean-scale atmospheric disturbances that occur regularly in the tropics.

To determine the role these changing temperatures play in weather conditions in the tropics, weather models need to include the effects of surface warming, de Szoeke said.

"There are a lot of subtle effects that people are trying to get right in climate modeling," de Szoeke said. "This research gives us a more precise understanding of what happens when winds are low."

Credit: 
Oregon State University

Socioeconomic status plays a major role in cognitive outcomes

image: Heather Conklin, PhD, of St. Jude Psychology, contributed to research that studied risk factors of certain cancer treatments in children.

Image: 
St. Jude Children's Research Hospital

Childhood cancer and its treatment can result in cognitive struggles. Scientists at St. Jude Children's Research Hospital are studying the risk factors. They looked at social and economic issues in children with brain tumors treated with radiation.

These patients have the greatest risk of cognitive problems. Scientists followed a group of St. Jude patients for 10 years. The children all had conformal radiation therapy.

For each patient, researchers looked at certain factors. These included the parents' jobs, education levels, and whether the child lived in a single-parent home. The children were from different backgrounds.

The findings show social and economic status is linked to IQ, academics, attention and self-care skills before treatment. The study also shows that this gap widens over time.

"What was most surprising was that for some measures, the contribution of socioeconomic status was even greater than age at treatment, which has typically been the biggest risk factor," said Heather Conklin, PhD, of St. Jude Psychology.

Credit: 
St. Jude Children's Research Hospital