Tech

A COSMIC approach to nanoscale science

image: At the COSMIC Microscopy beamline, researchers probed the oxidation state of the chemical element cerium using scanning transmission X-ray microscopy (STXM) under operando conditions, a first demonstration of this capability at COSMIC. The results confirmed how cerium particles dictated the size and locations of the reaction sites of platinum particles. In this artistic depiction, hybrid CeOx-TiO2 nanoparticles (silver spheres) are shown evenly covered with platinum and cerium pairs (yellow and blue), while conventional titanium dioxide particles are shown less densely covered with larger platinum clusters (gold).

Image: 
Chungnam National University

COSMIC, a multipurpose X-ray instrument at Lawrence Berkeley National Laboratory's (Berkeley Lab's) Advanced Light Source (ALS), has made headway in the scientific community since its launch less than 2 years ago, with groundbreaking contributions in fields ranging from batteries to biominerals.

COSMIC is the brightest X-ray beamline at the ALS, a synchrotron that generates intense light - from infrared to X-rays - and delivers it to dozens of beamlines to carry out a range of simultaneous science experiments. COSMIC's name is derived from coherent scattering and microscopy, which are two overarching X-ray techniques it is designed to carry out.

Its capabilities include world-leading soft X-ray microscopy resolution below 10 nanometers (billionths of a meter), extreme chemical sensitivity, ultrafast scanning speed, the ability to measure nanoscale chemical changes in samples in real time, and support for exploring samples with a combination of X-ray and electron microscopy. Soft X-rays represent a low range in X-ray energies, while hard X-rays are higher in energy. Each type can address a different range of experiments.

COSMIC is setting the stage for a long-term project to upgrade the decades-old ALS. The effort, known as the ALS Upgrade (ALS-U), will replace most of the existing accelerator components with state-of-the-art technology, ensuring capabilities that will enable world-leading soft X-ray science for years to come. The upgrade will also further enhance COSMIC's ability to capture nanoscale details in the structure and chemistry of a broad range of samples.

The expected 100-fold increase in X-ray brightness that ALS-U will deliver will provide a similar increase in imaging speed at COSMIC, and a more than threefold improvement in imaging resolution, enabling microscopy with single-nanometer resolution. Further, the technologies being developed now at COSMIC will be deployed at other beamlines at the upgraded ALS, making possible microscopy with higher X-ray energies for many more experiments. The instrument is one of many highly specialized resources available to scientists from around the world for free through a peer-reviewed proposal process.

A journal article, published Dec. 16, 2020, in Science Advances, highlights some of COSMIC's existing capabilities and those that are on the way. The paper offers examples of 8-nanometer resolution achieved in imaging magnetic nanoparticles, the high-resolution chemical mapping of a battery cathode material during heating, and the high-resolution imaging of a frozen-hydrated yeast cell at COSMIC. (A cathode is one type of battery electrode, a component through which current flows.) These results serve as demonstration cases, revealing critical information about the structure and inner workings of these materials and opening the door for further insights across many fields of science.

Another journal article, published Jan. 19, 2021, in Proceedings of the National Academy of Sciences, demonstrated the first-ever use of X-ray linear dichroic ptychography, a specialized high-resolution imaging technique available at COSMIC, to map at 35-nanometer resolution the orientations of aragonite, a crystal present in coral skeletons. The technique shows promise for mapping other biomineral samples at high resolution and in 3D, which will provide new insights into their unique attributes and how to mimic and control them. Some biominerals have inspired humanmade materials and nanomaterials due to their strength, resilience, and other desirable properties.

"We use this user-friendly, unique platform for materials characterization to demonstrate world-leading spatial resolution, in conjunction with operando and cryogenic microscopy," said David Shapiro, the paper's lead author and the lead scientist for COSMIC's microscopy experiments. He also leads the ALS Microscopy Program. "Operando" describes the ability to measure changes in samples as they are occurring.

"There's no other instrument that has these capabilities co-located for X-ray microscopy at this resolution," Shapiro said. COSMIC can provide new clues to the nanoscale inner workings of materials, even as they actively function, that will lead to a deeper understanding and better designs - for batteries, catalysts, or biological materials. Equipping COSMIC with such a diversity of capabilities required an equally broad collaboration across scientific disciplines, he noted.

COSMIC contributors included members of Berkeley Lab's CAMERA (Center for Advanced Mathematics for Energy Research Applications) team, which includes computer scientists, software engineers, applied mathematicians, and others; information technology experts; detector specialists; engineers; scientists at the Molecular Foundry's National Center for Electron Microscopy; ALS scientists; and outside collaborators from the National Science Foundation's STROBE Science and Technology Center and Stanford University.

Several advanced technologies developed by different groups were integrated into this one instrument. Key to the demonstrations at COSMIC reported in the paper is the implementation of X-ray ptychography, which is a computer-aided image reconstruction technique that can exceed the resolution of conventional techniques by up to about 10 times.

With traditional methods, spatial resolution - the ability to distinguish tiny features in samples - is limited by the quality of the X-ray optics and their ability to focus the X-ray beam into a tiny spot. But conventional X-ray optics, the instruments used to manipulate X-ray light to see samples more clearly, are difficult to make, are inefficient, and have short focal lengths.

Instead of relying on imperfect optics, ptychography records a large number of physically overlapping diffraction patterns - which are images produced as X-ray light scatters from the sample - each offering a small piece of the full picture. Rather than being limited by optics quality, the ptychography technique is limited by the brightness of the X-ray source - precisely the parameter that ALS-U is expected to improve a hundredfold. To capture and process the enormous amount of data and reconstruct the final image requires data processing facilities, computer algorithms, and specialized fast pixel detectors like those developed at Berkeley Lab.

"X-ray ptychography is a detector-enabled technique - first deployed with hard (high-energy) X-rays using hybrid pixel detectors, and then at the ALS with the FastCCD we developed," said Peter Denes, the ALS detector program lead who worked with lead engineer John Joseph on the implementation at COSMIC. "Much of the COSMIC technology benefited from the Laboratory Directed Research and Development (LDRD) Program, as did the FastCCD, which translated tools for cosmology into COSMIC observations." Berkeley Lab's LDRD Program supports innovative research activities that keep the Lab at the forefront of science and technology.

Ptychography utilizes a sequence of scattering patterns, produced as X-ray light scatters from a sample. These scattering patterns are analyzed by a computer running high-performance algorithms, which convert them into a high-resolution image.
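
To make the idea concrete, here is a minimal sketch of one textbook algorithm from this family, the ePIE (extended ptychographic iterative engine) object update, written in Python with NumPy. It is an illustration of the general principle only, not COSMIC's production reconstruction code, and for brevity it updates just the object estimate (full ePIE also refines the probe):

```python
import numpy as np

def epie_object_update(obj, probe, scan_positions, measured_amps, alpha=0.1):
    """One sweep of a simplified ePIE-style update.

    obj            -- complex 2D array, current estimate of the sample
    probe          -- complex 2D array, the focused X-ray illumination
    scan_positions -- list of (row, col) offsets of each overlapping scan point
    measured_amps  -- measured diffraction amplitudes (sqrt of intensities),
                      one array per scan point, same shape as the probe
    """
    py, px = probe.shape
    for (r, c), amps in zip(scan_positions, measured_amps):
        view = obj[r:r + py, c:c + px]        # region lit at this scan point
        exit_wave = probe * view              # wave leaving the sample
        far_field = np.fft.fft2(exit_wave)    # propagate to the detector
        # Keep the computed phases but enforce the measured amplitudes
        corrected = amps * np.exp(1j * np.angle(far_field))
        revised = np.fft.ifft2(corrected)     # propagate back to the sample
        # Nudge the object where the probe illuminated it (in-place slice update)
        view += (alpha * np.conj(probe) / (np.abs(probe).max() ** 2)
                 * (revised - exit_wave))
    return obj
```

The overlap between neighboring scan positions is what makes the lost phase information recoverable: each region of the sample is constrained by several diffraction patterns at once, and iterating this sweep drives the estimate toward consistency with all of them.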

In the Dec. 16, 2020, paper, researchers highlighted how ptychographic images made it possible to see the high-resolution chemical distribution in microscopic particles of a lithium iron phosphate battery cathode material (Li0.5FePO4). The ptychographic images showed nanoscale chemical features in the interior of the particles that were not visible using the conventional form of the imaging technique, called spectromicroscopy.

In a separate demonstration of ptychography at COSMIC, researchers observed chemical changes in a collection of LixFePO4 nanoparticles as the particles were heated.

Ptychography is also a source of COSMIC's heavy data demands. The beamline can produce several terabytes of data per day, or enough to fill a few laptop computers. The intensive computations required for COSMIC's imaging necessitate a dedicated cluster of GPUs (graphics processing units), which are specialized computer processors.

The ALS Upgrade will drive COSMIC's data demands up further, to an expected 100 terabytes per day, Shapiro noted. Plans are already being discussed for using more resources at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) to accommodate this pending ramp-up in data.
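
For a rough sense of scale, that projected volume corresponds to a sustained data rate of about a gigabyte per second. A back-of-the-envelope check, assuming round-the-clock acquisition:

```python
terabytes_per_day = 100  # projected post-upgrade volume cited in the article
bytes_per_second = terabytes_per_day * 1e12 / (24 * 3600)
print(f"{bytes_per_second / 1e9:.2f} GB/s sustained")  # -> 1.16 GB/s
```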

COSMIC is a stellar example of Berkeley Lab's Superfacility Project, which is designed to link light sources like the ALS and cutting-edge instrumentation including microscopes and telescopes with data and high-performance computing resources in real time, said Bjoern Enders, a data science workflows architect in NERSC's Data Science Engagement Group.

"We love data and computing challenges from instruments like COSMIC that venture beyond facility borders," Enders said. "We are working toward a future where it will be as easy as a button click to use NERSC's resources from a beamline." The addition of the new Perlmutter supercomputer at NERSC, he added, "will be an ideal partner for COSMIC in team science."

COSMIC started up in commissioning mode in March 2017 and opened to general scientific experiments about 2 years ago. Since then, instrument staff have launched the operando capabilities that measure active chemical processes, for example, and rolled out linear and circular dichroic microscopy and tomography capabilities that further extend COSMIC's range of imaging experiments.

Its coherent scattering branch is now undergoing testing and is not yet available to external users. Work is also in progress to correlate its X-ray microscopy results with electron microscopy results for active processes, and to further develop its cryogenic capabilities, which will allow biological samples and other soft materials to be protected from damage by the ultrabright X-ray beam while they are being imaged. The combination of X-ray and electron microscopy can provide a powerful tool for gathering detailed chemical and structural information on samples, as demonstrated in an experiment involving COSMIC that was highlighted in the journal Science Advances.

Shapiro noted that there are plans to introduce a new experimental station to the beamline, timed with ALS-U, to accommodate more experiments.

One secret to COSMIC's success is that the instrument is designed for compatibility with standard sample-handling components. Shapiro said this user-friendly approach "has been really important for us," and makes it easier for researchers from academia and industry to design COSMIC-compatible experiments. "Users can just show up and plug (the samples) in. It increases our reach, scientifically," he said.

While COSMIC is loaded with features, it isn't bulky, and Shapiro described it as "streamlined in size and cost." He said he hopes it will be a model for future beamlines, both at ALS-U and at other synchrotron facilities.

"I think what is really attractive about it is that it is a very compact instrument. It is high-performance and very stable," he said. "It is very manageable and not very expensive. In that sense it should be very attractive for synchrotrons."

Credit: 
DOE/Lawrence Berkeley National Laboratory

Citizen science study captures 2.2M wildlife images in NC

You didn't need a Ph.D. to contribute to research into wildlife abundance and behavior in North Carolina, thanks to a large-scale citizen science project led by North Carolina State University researchers.

Through the project, called North Carolina Candid Critters, researchers trained 580 volunteers to take candid animal photos with heat-sensitive cameras and then share their photos through a website called eMammal. In an article on the project in the journal Citizen Science: Theory and Practice, researchers reported on the successes and challenges of the effort, which gathered more than 2.2 million wildlife photos across three years and increased the number of verified mammal records available in the state by a factor of five.

"The power of this is that you can get large-scale, ecological data in a timely manner," said the study's corresponding author Roland Kays, research associate professor at NC State. "There are many people interested in using citizen science, but there are a lot of questions such as: How do you train the volunteers? How do you get the data from them? This paper was really about how we addressed those questions as the project went on, and what were some of the solutions that we found for dealing with them."

Through the project, researchers recruited volunteers including library patrons, middle school students, teachers, hikers and nature enthusiasts from all 100 counties. They created a customized online program to train volunteers to place and use the cameras, which they loaned out through 63 public libraries. Some volunteers used their own cameras. The project was a collaboration with the N.C. Wildlife Resources Commission, N.C. Museum of Natural Sciences, eMammal and N.C. Cardinal Libraries.

"We're the first citizen science project to loan out equipment on that kind of scale," Kays said.

The volunteers placed cameras at 3,093 locations. Along with additional work by research staff, they were able to get photos from a total of 4,295 locations. While they worked with federal and state agencies, nonprofits and private landowners to get permission for people to place cameras on public and private land, many people placed cameras by their homes. Fifty-four percent of volunteers placed cameras on private land.

"It's really hard to sample on private land because it's hard to get permission," Kays said. "In this case, people were putting cameras on their own land because they wanted to see what animals were there. That's a real bonus of the citizen science approach."

Of the 2.2 million photos taken, 1.4 million were taken by volunteers, and the rest were captured by staff. From those photos, the researchers were able to get 120,671 wildlife observations, with 45 percent of those contributed by volunteers. The observations included 30 different mammal species and three bird species.

Researchers double-checked volunteers' photos to make sure the cameras were placed correctly, and the animals were correctly identified. Researchers rejected less than 1 percent of camera placements for being set too low, 3.2 percent for being set too high, and 4.9 percent for equipment malfunctions, including cameras being destroyed by bears.

"Volunteers might not do everything perfectly the first time," Kays said. "The nice thing was that via the eMammal system, we could check to see if the camera was set up correctly. We could tell the volunteer, and the next time it would get better. We were able to verify the information and give feedback to volunteers."

They found volunteers identified animals with 69.7 percent accuracy. While volunteers tended to identify certain species, such as the white-tailed deer and wild turkey, correctly every time, others were trickier. They identified the North American river otter with just 56 percent accuracy.

Researchers faced challenges in recruiting volunteers, training them, managing the camera equipment, and making sure they got photos in the locations where they were needed, including forests, open land and developed areas. To help other researchers, they suggested solutions for how to recruit volunteers, gather data and overcome other obstacles.

"Data management was a huge challenge, which we addressed using the eMammal system," Kays said. "Training was a problem we still have to work on. Some people dropped out because the training was too complicated."

The photos will be used in multiple research projects to answer questions about wildlife abundance, reproduction and other topics. The data will be made publicly available for other researchers to use.

"The great potential of citizen science is it can help you collect more data than you could before, across a larger area more rapidly, and on different areas like on private land," Kays said. "It also engages the public, and it get them interested in science and science around nature and conservation."

Credit: 
North Carolina State University

Antisense oligonucleotides as a feasible therapy to treat MECP2 duplication disorder

Many cognitive neurodevelopmental disorders are a result of too many or too few copies of certain genes or chromosomes. To date, no treatment options exist for this class of disorders. MECP2 duplication syndrome (MDS) is one such disorder that primarily affects boys and results from a duplication spanning the methyl-CpG binding protein 2 (MECP2) locus located on the X chromosome.

A preclinical study from the laboratory of Dr. Huda Zoghbi, professor at Baylor College of Medicine and director of the Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, provides experimental evidence that supports the use of antisense oligonucleotides as a feasible strategy to treat MDS. The study also offers crucial insights into the pharmacodynamics of this approach, which will serve as an important guide for the design and implementation of future clinical trials for this disorder. The study appears in the journal Science Translational Medicine.

The MECP2 gene encodes the MeCP2 protein, a cellular maestro that orchestrates the expression of thousands of other genes in the brain. Studies show that the dosage and expression of this gene in neurons must be strictly regulated and maintained at an optimal level. Reduced levels of this protein result in Rett syndrome, a childhood neurological disorder primarily affecting girls, characterized by decreased cognition, inability to perform motor functions, particularly with the hands, autism-like behavior, and seizures. On the other hand, excess levels of this protein due to MECP2 duplication cause MDS, a syndrome primarily affecting boys that is characterized by poor muscle tone and impaired motor abilities, cognitive disability, epilepsy, autistic behaviors, respiratory infections, and premature death.

"The 'Goldilocks' nature of this protein poses an enormous challenge in devising potential therapies for MECP2 disorders. An effective treatment strategy for MDS needs to ensure that the reduction in MeCP2 protein is significant enough to cause a dose-dependent reversal of the associated symptoms but not lower MeCP2 levels to such an extent that it leads to the emergence of Rett symptoms," said Zoghbi, who is also a Howard Hughes Medical Institute investigator.

Antisense oligonucleotides as a potential therapeutic strategy for MDS

Antisense oligonucleotides (ASOs) are short strands of nucleotides that selectively bind a specific gene's mRNA and silence its expression, and they have been recognized as an attractive potential strategy to treat MDS. As a therapeutic strategy, ASOs offer several advantages such as limited toxicity and high specificity to the target gene, which in turn means fewer side effects. Moreover, they have been successfully used in animal models of several neurological disorders to reverse symptoms, and they have proved safe in humans, including children, in recent clinical trials for certain neurodegenerative diseases.

Previous experiments in the Zoghbi lab showed that ASO treatment of mice engineered with an extra copy of human MECP2 successfully lowered MeCP2 levels and reversed the associated symptoms. It is important to note, however, that in those experiments the mice also carried a normal copy of the mouse Mecp2 gene, which is not recognized or silenced by the MECP2-ASO, designed to target only the human MECP2 gene.

In MDS patients, since the MECP2-ASO can target and silence both copies of the MECP2 gene, there is a possibility that ASO treatment could significantly reduce the gene dosage to below its physiological levels and thereby put the individual at an increased risk for developing Rett symptoms.

"This scenario is similar to the dilemma physicians encounter with diabetics or high blood pressure patients, wherein the insulin dose or blood pressure medication must be precisely adjusted so the patient's blood glucose levels or blood pressure does not dip too much at any given time," Zoghbi said.

Therefore, before embarking on clinical studies to evaluate the potential of the MECP2-ASO, the researchers conducted this preclinical study to test whether ASO dosage could be reliably titrated to regulate MeCP2. The goal was to test whether MeCP2 levels can be controlled precisely enough to be lowered to reverse MDS symptoms while leaving behind sufficient MeCP2 protein in the brain for its crucial functions to continue unhindered.

ASOs reverse most MDS symptoms in 'humanized' mouse models

To test this, the team generated a novel 'humanized' mouse model of MDS that closely mimicked the human condition, with two copies of human MECP2 and no mouse Mecp2. These mice recapitulated many of the features of already established MDS mouse models and of MDS in human patients.

The mice were then injected with a single large dose of ASOs into the lateral ventricle of the brain, a route comparable to the preferred method of drug delivery in clinical settings, injection into the spinal fluid through a procedure that mimics a spinal tap. The ASOs were widely distributed in various brain regions and were effective in reducing MeCP2 protein levels.

Next, to understand the time course and sequence of the molecular changes that occur in the brain upon ASO treatment, they conducted a dose-response study. They found that within a week of ASO administration, MECP2 mRNA levels were reduced by 50%, while it took two weeks for MeCP2 protein levels to go down and about five weeks to observe a reversal in the expression of several genes that are regulated by the MeCP2 protein. Interestingly, ten weeks after ASO treatment, the mice exhibited significant improvement in several behavioral symptoms, such as the locomotion and learning deficits that are common among MDS patients.

"The results of this study are really exciting because not only do they prove that some of the MDS symptoms are reversible in adult mice, but they also demonstrate that it is possible to titrate and monitor the dose of ASOs to safely normalize MeCP2 levels without lowering them below the normal range to avoid causing Rett-like symptoms. In addition, we have now identified the temporal sequence of molecular events and identified several molecular markers that undergo change weeks before clinical symptoms, which gives us a tool as well as a window of several weeks, to ensure precise ASO dosing," said Dr. Yehezkel Sztainberg, one of the lead authors of the study and currently a scientist at Regeneron Pharmaceuticals.

"I see a very clear path from these proof-of-concept results to future clinical trials that test the safety and efficacy of MECP2-ASOs in children with MDS. Moreover, our study suggests the antisense strategy could potentially also be used to treat other childhood developmental disorders caused by genetic duplications," Zoghbi concluded.

Credit: 
Texas Children's Hospital

Newly discovered millipede, Nannaria hokie, lives at Virginia Tech

image: The Hokie twisted-claw millipede is one of scores of millipedes that Entomologist Paul Marek has discovered and named over the years.

Image: 
Virginia Tech

Hearing the words "new species discovered" may conjure images of deep caves, uncharted rainforests, or hidden oases in the desert.

But the reality is that thousands of new species are discovered each year by enterprising scientists all over the world. Many of these new species do come from exotic locations, but more surprisingly, many come from just down the road, including the newest member of the Hokie Nation, the millipede Nannaria hokie.

The newest Hokie -- which has about 60 more legs than the HokieBird -- was discovered living under rocks by the Duck Pond behind the Grove on Virginia Tech's Blacksburg campus. Since then, the critter has been found in the area commonly referred to as Stadium Woods and in the town of Blacksburg as well.

"It's not every day that we find new species, let alone on our campus, so we wanted to name the new species for the Virginia Tech community and to highlight the importance of conserving native habitat in the region," said Paul Marek, a systematics and taxonomy associate professor in Virginia Tech's Department of Entomology in the College of Agriculture and Life Sciences.

Nannaria hokie (pronounced nan-aria ho-key) is about 2 centimeters long and dark reddish with yellow-white highlights (apologies to those who thought it would be maroon and orange). These creatures are roughly the size of a penny and usually make their home under rocks, leaves, and other forest floor debris. The common name "Hokie twisted-claw millipede" comes from the twisted claws on the legs just before their reproductive organs.

Millipede biodiversity is the primary focus of Marek's lab, which investigates habitats all over the world, including in Vietnam, Japan, and the United States. Marek, recent entomology graduate Jackson Means, and other co-authors recently published a paper in the journal Insect Systematics and Diversity that describes 10 new species of millipedes, including the Hokie twisted-claw millipede, which was found only a stone's throw from Marek's lab window.

The announcement of these new species speaks to the biodiversity that has yet to be discovered, not just in far-off exotic locations, but in your backyard.

"Millipedes are surprisingly abundant and diverse yet have thus far avoided major attention from both the scientific community and the public," Jackson said. "I guarantee that if you just go out into a forest near your home and start looking under leaves you will find several species of millipede, some of which will likely be large and colorful."

Millipedes are a unique group of arthropods that are characterized by having two pairs of jointed legs on most segments of their bodies. For anyone who may have turned over a rock in the dirt, the shiny exoskeleton of these types of arthropods should be familiar. These critters boast an incredible amount of biodiversity and have many fascinating and unique traits; some have bright colors, some glow in the dark, and some can even exude cyanide in self-defense. Most millipedes are known as detritivores, or decomposers, and feed on decaying plant matter on forest floors.

In addition to the Hokie millipede, the publication details nine other millipedes, all native to Appalachian forests. As the scientists who discovered these arthropods, the Marek lab had the honor of naming the new species, with references to Virginia Tech alumnus and arachnologist Jason Bond (Appalachioria bondi), alumna Ellen Brown (Appalachioria brownae), and even Marek's wife Charity (Rudiloria charityae). Marek named that last millipede for his wife after finding it during a quick stroll with family before their wedding, by the Chagrin River in northeastern Ohio, where he grew up.

Millipedes have existed far longer than humans have and are among the first land animals to appear in the fossil record. Their role as detritivores is crucial to forest ecosystems: millipedes break down plant matter into smaller material so that bacteria and other smaller organisms can continue to recycle it into the soil and make its nutrients available for future generations of life.

Despite an ancient lineage and a plentiful food source, the threat of extinction is very real for many millipede species. Millipedes typically remain confined to select, relatively small geographic regions due to their limited mobility and their dependency on specific habitats. As such, a changing climate and habitat destruction are serious threats to the survival of these organisms.

"The forests of Appalachia are important carbon sinks, providing habitat to diverse species occupying many trophic levels. Deforestation and habitat loss threaten this biodiversity," Marek said. "Many Appalachian invertebrates, which make up the most diverse component of this fauna, are unknown to science, and without immediate taxonomic attention, species may be irrecoverably lost. My lab's motivation is to preserve biodiversity. Intertwined is our goal to educate and promote an understanding of organismal biology, appreciation of nature, and its immense ecological value."

Discovering and preserving these new species and their habitat is the noble goal of Virginia Tech researchers and scientists who seek to understand the crucial roles these often-overlooked creatures play in their environments. Investigation into the different types of millipedes out in the world could have any number of implications for understanding evolution, adaptation, and interdependence within an ecosystem.

Credit: 
Virginia Tech

Breaking the patrisharky: Scientists reexamine gender biases in shark, ray mating research

image: Sand tiger sharks mating at Georgia Aquarium

Image: 
Georgia Aquarium

Shark scientists at Georgia Aquarium, Scripps Institution of Oceanography at UC San Diego, and Dalhousie University are challenging the status quo in shark and ray mating research in a new study that looks at biological drivers of multiple paternity in these animals. The results were published March 4 in the journal Molecular Ecology.

Many species of sharks and rays exhibit multiple paternity, in which females give birth to a litter of pups that have different fathers. While widely documented in the scientific literature, the drivers of this phenomenon are not well understood. Previous research has often cited male aggression as the reason, claiming that females, unable to avoid male advances during mating, simply submit to them. This has made "convenience polyandry" - the assumption that refusing male mating attempts costs females more than submitting does - the most widely credited explanation.

"If convenience polyandry is the only reason we talk about, it takes the agency away from females," said Kady Lyons, research scientist at Georgia Aquarium and lead author of the study. "If she's investing all this energy into making big babies, why doesn't she get a say in who will be the sire of her offspring?"

While multiple paternity has been documented in many elasmobranchs (sharks and rays), this is the first study to evaluate it starting from the female point of view. The researchers note that most studies on shark and ray reproduction were conducted by male scientists, and their biases could manifest in their research.

"No matter how objective we try to be as scientists, we're still human and our experiences are brought to the table," said Dovi Kacev, assistant teaching professor at Scripps Oceanography and a co-author of the study.

The researchers developed models based on shark and ray biology and physiology to test whether multiple paternity could be in the best interest of females or males, or a combination of both. They looked for patterns that they would expect to see if this phenomenon was pushed or pulled by one sex or the other, such as sperm competition or female selection for sperm genetic diversity, and compared them to data from past studies and their own research on these animals. By asking, "Would this benefit a male or a female?" they found no conclusive evidence that multiple paternity is primarily a male-driven advantage. In most instances, the benefits for females and males were the same, challenging previous ideas that male behavior and biology drive multiple paternity.

"Male or female drivers may sometimes produce the same multiple-paternity end result, but more often than not it is the male factors that get the lion's share of the credit," Lyons said. "This seemed odd to me considering how complex and energetically taxing female reproduction is."

The researchers stress that uncovering which sex (or both) may be responsible for the observed patterns is difficult as elasmobranchs have been evolving over many millions of years, and observing shark and ray mating is a rare occurrence. The scientists, however, found clues in what science does know about elasmobranch mating and reproduction.

Elasmobranch mating can be quite violent. In many cases, males bite the females to secure a hold, inflicting injury and leaving scars. This can even be seen in some of the more docile species like manta rays. Females have, however, developed advantages such as thicker skin in bite-prone areas in order to better recover from the mating injuries. In many species, the females are also often much larger than males.

While it does take energy for females to recover from wounds or avoid frisky males by swimming away, this only represents the beginning of their energetic investment into reproduction. The researchers point out that female elasmobranchs devote a lot of resources to the production of developed young, such as through ovulation of fat-rich eggs and providing nutrition to embryos in species that have internal gestation, which is the norm for most sharks and rays.

"One thing that is underappreciated, even by experts, is how diverse sharks and rays are," said Christopher Mull, postdoctoral fellow at Dalhousie University and a co-author of the study. "When it comes to reproduction, sharks and rays use every mode that has been described in vertebrates, from laying eggs to giving live birth with a placenta, similar to humans."

This diversity of sharks and rays makes them an interesting animal group to study multiple paternity, and for examining sex roles in the animal kingdom, said the researchers. The typical number of offspring can vary widely between species, from a single pup up to several hundred. The smallest of sharks can fit in the palm of a human hand, while the largest can grow to the size of a school bus. Despite these differences in biology and physiology, the phenomenon of multiple paternity appears to be a common trait across sharks and rays.

This evolutionary advantage, and the idea that it could be driven by the biology of the females, has been of keen interest to Lyons throughout her research. In a previous study, she found that round stingrays, native to the California coast, can exhibit different patterns of paternity. These rays have two uteri, and Lyons showed that in some females, one uterus held offspring with the same father while the other held offspring of a different father. In other individuals, offspring paternity was mixed between the two uteri. This was hypothesized as females ovulating in different patterns, which may give them some control over which males were able to fertilize their eggs.

Female elasmobranchs have other physiological abilities that suggest they might be behind multiple paternity. Some species have serial ovulation, in which one egg at a time is produced and fertilized. Some have been shown to store sperm, which may keep one male's sperm from fertilizing an egg or save sperm from another male for future use. Essentially, females have the anatomical tools that may allow them to control which sperm fertilize their eggs.

From past data, the researchers found many examples of pregnant female sharks with "failed ova" (those that were in the process of being resorbed) alongside normally developing embryos. If reproduction was driven by males, the scientists asked, why would the female miss the opportunity to pass on her genes as well? These examples bolster their idea that the reasons for multiple paternity cannot be attributed to only one sex.

"Female elasmobranchs have these incredible physiology mechanisms that give them a reproductive advantage, but these are largely ignored in the literature," Lyons said.

"I think a key takeaway from our work is challenging the dog(fish)ma that female sharks and rays are passive players in the mating process," said Mull. "But demonstrating these mechanisms at work can be really challenging, so we focused on developing a series of testable hypotheses that other researchers can apply to their own work."

The researchers note that while this study is on sharks and rays, it has implications for the larger animal kingdom, and for diversity in science.

"Diversifying perspectives at the table will enrich scientific studies," said Kacev. "We're not saying the male perspective is wrong, or that male sharks aren't at all responsible for multiple paternity, but it takes two to tango."

"Perspective is completely shaped by background," said Lyons. "If you don't have a diverse background, your perspective will be limited."

Credit: 
University of California - San Diego

Extreme-scale computing and AI forecast a promising future for fusion power

image: Physicist C.S. Chang with a figure showing turbulence eddies in an ITER plasma edge (green), with the heat-load footprint on the material wall carried by escaping hot plasma particles. The simulation was produced with the XGC code; the AI-produced heat-load width formula is shown at top left.

Image: 
Photo by Elle Starkman/PPPL Office of Communications. Simulation and image from Robert Hager and Seung-Hoe Ku.

Efforts to duplicate on Earth the fusion reactions that power the sun and stars for unlimited energy must contend with extreme heat-load density that can damage the doughnut-shaped fusion facilities called tokamaks, the most widely used laboratory facilities that house fusion reactions, and shut them down. These heat loads flow into the divertor plates, components that extract waste heat from the tokamaks.

Far larger forecast

But using high-performance computers and artificial intelligence (AI), researchers at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) have predicted a far larger and less damaging heat-load width for the full-power operation of ITER, the international tokamak under construction in France, than previous estimates have found. The new formula produced a forecast that was more than six times wider than those developed by a simple extrapolation from present tokamaks to the much larger ITER facility, whose goal is to demonstrate the feasibility of fusion power.
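
For context, the "simple extrapolation" referred to here is typically a power-law fit of the heat-load width to present-day tokamak data. One widely cited empirical form, the Eich scaling, depends almost entirely on the poloidal magnetic field; it is shown below as an illustration of the approach, not necessarily the exact formula used in this comparison:

```latex
\lambda_q \,[\mathrm{mm}] \;\approx\; 0.63 \times B_{\mathrm{pol}}^{-1.19},
\qquad B_{\mathrm{pol}} = \text{poloidal magnetic field in tesla}
```

Because the exponent is negative, the strong poloidal field of full-power ITER pushes fits of this kind toward a very narrow width, on the order of a millimeter, which is what makes a kinetic prediction more than six times wider so consequential.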

"If the simple extrapolation to full-power ITER from today's tokamaks were correct, no known material could withstand the extreme heat load without some difficult preventive measures," said PPPL physicist C.S. Chang, leader of the team that developed the new, wider forecast and first author of a paper that Physics of Plasmas has published as an Editor's Pick. "An accurate formula can enable scientists to operate ITER in a more comfortable and cost-effective way toward its goal of producing 10 times more fusion energy than the input energy," Chang said.

Fusion reactions combine light elements in the form of plasma -- the hot, charged state of matter composed of free electrons and atomic nuclei that makes up 99 percent of the visible universe -- to generate massive amounts of energy. Tokamaks, the most widely used fusion facilities, confine the plasma in magnetic fields and heat it to million-degree temperatures to produce fusion reactions. Scientists around the world are seeking to produce and control such reactions to create a safe, clean, and virtually inexhaustible supply of power to generate electricity.

The Chang team's surprisingly optimistic forecast harkens back to results the researchers produced on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory in 2017. The team used the PPPL-developed XGC high-fidelity plasma turbulence code to forecast a heat load that was more than six times wider in full-power ITER operation than simple extrapolations from current tokamaks predicted.

Surprise finding

The surprising finding raised eyebrows by sharply contradicting the dangerously narrow heat-load forecasts. What accounted for the difference -- might there be some hidden plasma parameter, or condition of plasma behavior, that the previous forecasts had failed to detect?

Those forecasts arose from parameters in the simple extrapolations that regarded plasma as a fluid without considering the important kinetic, or particle motion, effects. By contrast, the XGC code produces kinetic simulations using trillions of particles on extreme-scale computers, and its six-times wider forecast suggested that there might indeed be hidden parameters that the fluid approach did not factor in.

The team performed more refined simulations of the full-power ITER plasma on the Summit supercomputer at the OLCF to ensure that their 2017 findings on Titan were not in error.

The team also performed new XGC simulations on current tokamaks to compare the results to the much wider Summit and Titan findings. One simulation was on one of the highest magnetic-field plasmas on the Joint European Torus (JET) in the United Kingdom, which reaches 73 percent of the full-power ITER magnetic field strength. Another simulation was on one of the highest magnetic-field plasmas on the now decommissioned C-Mod tokamak at the Massachusetts Institute of Technology (MIT), which reaches 100 percent of the full-power ITER magnetic field.

The results in both cases agreed with the narrow heat-load width forecasts from simple extrapolations. These findings strengthened the suspicion that there are indeed hidden parameters.

Supervised machine learning

The team then turned to a type of AI method called supervised machine learning to discover what the unnoticed parameters might be. Using kinetic XGC simulation data from future ITER plasma, the AI code identified the hidden parameter as related to the orbiting of plasma particles around the tokamak's magnetic field lines, an orbiting called gyromotion.

The AI program suggested a new formula that forecasts a far wider and less dangerous heat-load width for full-power ITER than the previous formula, derived from experimental results in present tokamaks, predicted. Furthermore, when applied to present tokamaks, the AI-produced formula recovers the narrow widths found by the formula built for the tokamak experiments.
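
The team's actual machine-learning workflow is not spelled out in this release. As a rough sketch of the general idea, one can train a regression model on a table of simulation runs and inspect which candidate parameters carry predictive weight; a parameter the fluid-based extrapolations ignored would show up as an unexpectedly large importance score. Everything below (the parameter names, the synthetic data, the choice of a random forest) is illustrative, not the team's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a table of kinetic-simulation runs: columns are
# candidate physics parameters (placeholder names, not the study's variables).
param_names = ["B_pol", "density", "temperature", "a_over_rho_i"]
X = rng.uniform(0.5, 2.0, size=(200, len(param_names)))

# Pretend the true width secretly depends on the last parameter as well as B_pol.
y = X[:, 0] ** -1.19 * (1 + 0.5 * X[:, 3]) + rng.normal(0, 0.02, 200)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# A "hidden parameter" reveals itself as unexpectedly high feature importance.
for name, score in zip(param_names, model.feature_importances_):
    print(f"{name:>13s}: {score:.3f}")
```

In this spirit, a gyromotion-related quantity that strongly improves the fit on kinetic simulation data would flag itself as the hidden parameter the fluid extrapolations missed.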

"This exercise exemplifies the necessity for high-performance computing, by not only producing high-fidelity understanding and prediction but also improving the analytic formula to be more accurate and predictive." Chang said. "It is found that the full-power ITER edge plasma is subject to a different type of turbulence than the edge in present tokamaks due to the large size of the ITER edge plasma compared to the gyromotion radius of particles."

Researchers then verified the AI-produced formula by performing three more simulations of future ITER plasmas on the supercomputers Summit at OLCF and Theta at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. "If this formula is validated experimentally," Chang said, "this will be huge for the fusion community and for ensuring that ITER's divertor can accommodate the heat exhaust from the plasma without too much complication."

The team would next like to see experiments on current tokamaks that could be designed to test the AI-produced extrapolation formula. If it is validated, Chang said, "the formula can be used for easier operation of ITER and for the design of more economical fusion reactors."

Credit: 
DOE/Princeton Plasma Physics Laboratory

Cancer 'guardian' breaks bad with one switch

image: A model produced by scientists at Rice University shows the conformational changes caused by a mutation in the cancer-fighting p53 protein. At top left, the red box highlights the aggregation-prone sequence protected by the N-terminus tail in wild-type p53 but exposed by the mutation of a single amino acid. The strongest deviation happens in the domain at the green asterisk. The other three models show "open" conformations at the C-terminus caused by the mutation.

Image: 
Kolomeisky Research Group/Rice University

HOUSTON - (March 4, 2021) - A mutation that replaces a single amino acid in a potent tumor-suppressing protein turns it from saint to sinister. A new study by a coalition of Texas institutions shows why that is more damaging than previously known.

The ubiquitous p53 protein in its natural state, sometimes called "the guardian of the genome," is a front-line protector against cancer. But the mutant form appears in 50% or more of human cancers and actively blocks cancer suppressors.

Researchers led by Peter Vekilov at the University of Houston (UH) and Anatoly Kolomeisky at Rice University have discovered the same mutant protein can aggregate into clusters. These in turn nucleate the formation of amyloid fibrils, a prime suspect in cancers as well as neurological diseases like Alzheimer's.

The condensation of p53 into clusters is driven by the destabilization of the protein's DNA-binding pocket when a single arginine amino acid is replaced with glutamine, they reported.

"It's known that a mutation in this protein is a main source of cancer, but the mechanism is still unknown," said Kolomeisky, a professor and chair of Rice's Department of Chemistry and a professor of chemical and biomolecular engineering.

"This knowledge gap has significantly constrained attempts to control aggregation and suggest novel cancer treatments," said Vekilov, the John and Rebecca Moores Professor of Chemical and Biomolecular Engineering and Chemistry at UH.

The mutant p53 clusters, which resemble those discovered by Vekilov in solutions of other proteins 15 years ago, and the amyloid fibrils they nucleate prompt the aggregation of proteins the body uses to suppress cancer. "This is similar to what happens in the brain in neurological disorders, though those are very different diseases," Kolomeisky said.

The p53 mechanism described in the Proceedings of the National Academy of Sciences may be similar to those that form functional and pathological solids like tubules, filaments, sickle cell polymers, amyloids and crystals, Vekilov said.

Researchers at UH combined 3D confocal images of breast cancer cells taken in the lab of chemical and biomolecular engineer Navin Varadarajan with light scattering and optical microscopy of solutions of the purified protein carried out in the Vekilov lab.

Transmission electron microscopy micrographs of cluster and fibril formation contributed by Michael Sherman at the University of Texas Medical Branch at Galveston (UTMB) supported the main result of the study, as did molecular simulations by Kolomeisky's group.

All confirmed that the p53 mutant known as R248Q goes through a two-step process to form mesoscopic condensates. Understanding the mechanism could provide insight into treating various cancers that manipulate either p53 or its associated signaling pathways, Vekilov said.

In normal cell conditions, the concentration of p53 is relatively low, so the probability of aggregation is low, he said. But when a mutated p53 is present, the probability increases.

"Experiments show the size of these clusters is independent of the concentration of p53," Kolomeisky said. "Mutated p53 will even take normal p53 into the aggregates. That's one of the reasons for the phenomenon known as loss of function."

If even a small relative fraction of the mutant is present, it's enough to kill or lower the ability of normal, wild-type p53 to fight cancer, according to the researchers.

The Rice simulations showed normal p53 proteins are compact and easily bind to DNA. "But the mutants have a more open conformation that allows them to interact with other proteins and gives them a higher tendency to produce a condensate," Kolomeisky said. "It's possible that future anti-cancer drugs will target the mutants in a way that suppresses the formation of these aggregates and allows wild-type p53 to do its job."

Credit: 
Rice University

Field study shows icing can cost wind turbines up to 80% of power production

image: This drone photo from a field study of icing on wind turbines shows how ice accumulated at the tip of a turbine blade during a winter storm.

Image: 
Photo courtesy of Hui Hu/Iowa State University

AMES, Iowa - Wind turbine blades spinning through cold, wet conditions can collect ice nearly a foot thick on the yard-wide tips of their blades.

That disrupts blade aerodynamics. That disrupts the balance of the entire turbine. And that can disrupt energy production by up to 80 percent, according to a recently published field study led by Hui Hu, Iowa State University's Martin C. Jischke Professor in Aerospace Engineering and director of the university's Aircraft Icing Physics and Anti-/De-icing Technology Laboratory.

Hu has been doing laboratory studies of turbine-blade icing for about 10 years, including performing experiments in the unique ISU Icing Research Tunnel. Much of that work has been supported by grants from the Iowa Energy Center and the National Science Foundation.

"But we always have questions about whether what we do in the lab represents what happens in the field," Hu said. "What happens over the blade surfaces of large, utility-scale wind turbines?"

We all know about one thing that recently happened in the field. Wind power and other energy sources froze and failed in Texas during last month's winter storm.

Searching for a field site

Hu wanted to quantify what happens on wind farms during winter weather and so several years ago began organizing a field study. But that was more complicated than he expected. Even in Iowa, where some 5,100 wind turbines produce more than 40% of the state's electricity (according to the U.S. Energy Information Administration), he wasn't given access to turbines. Energy companies usually don't want their turbine performance data to go public.

So Hu - who had made connections with researchers at the School of Renewable Energy at North China Electric Power University in Beijing as part of an International Research Experiences for Students program funded by the National Science Foundation - asked if Chinese wind farms would cooperate.

Operators of a 34-turbine, 50-megawatt wind farm on a mountain ridgetop in eastern China agreed to a field study in January 2019. Hu said most of the turbines generate 1.5 megawatts of electricity and are very similar to the utility-scale turbines that operate in the United States.

Because the wind farm the researchers studied is not far from the East China Sea, Hu said the wind turbines there face icing conditions more like those in Texas than in Iowa. Iowa wind farms are exposed to colder, drier winter conditions; when winter cold drops to Texas, wind farms there are exposed to more moisture because of the nearby Gulf of Mexico.

Measuring the ice

As part of their field work, the researchers used drones to take photos of 50-meter-long turbine blades after exposure to up to 30 hours of icy winter conditions, including freezing rain, freezing drizzle, wet snow and freezing fog.

The photographs allowed detailed measurement and analyses of how and where ice collected on the turbine blades. Hu said the photos also allowed researchers to compare natural icing to laboratory icing and largely validated their experimental findings, theories and predictions.

The photos showed that "while ice accreted over entire blade spans, more ice was found to accrete on outboard blades with the ice thickness reaching up to 0.3 meters (nearly 1 foot) near the blade tips," the researchers wrote in a paper recently published online by the journal Renewable Energy.

The researchers used the turbines' built-in control and data-acquisition systems to compare operation status and power production with ice on the blades against more typical, ice-free conditions.

"That tells us what's the big deal, what's the effect on power production," Hu said.

The researchers found that icing had a major effect:

"Despite the high wind, iced wind turbines were found to rotate much slower and even shut down frequently during the icing event, with the icing-induced power loss being up to 80%," the researchers wrote.

That means Hu will continue to work on another area of wind-turbine research - finding effective ways to de-ice the blades so they keep spinning, and the electricity keeps flowing, all winter long.

Credit: 
Iowa State University

Twistoptics--A new way to control optical nonlinearity

image: Two slabs of boron nitride crystals are dynamically twisted with respect to each other. At certain angles, the incoming laser light (orange beam) can be efficiently converted to higher energy light (pink beam), as a result of micromechanical symmetry breaking.

Image: 
Nathan R. Finney and Sanghoon Chae/Columbia Engineering

Columbia researchers engineer first technique to exploit the tunable symmetry of 2D materials for nonlinear optical applications, including laser, optical spectroscopy, imaging, and metrology systems, as well as next-generation optical quantum information processing and computing.

New York, NY--March 4, 2021--Nonlinear optics, a study of how light interacts with matter, is critical to many photonic applications, from the green laser pointers we're all familiar with to intense broadband (white) light sources for quantum photonics that enable optical quantum computing, super-resolution imaging, optical sensing and ranging, and more. Through nonlinear optics, researchers are discovering new ways to use light, from getting a closer look at ultrafast processes in physics, biology, and chemistry to enhancing communication and navigation, solar energy harvesting, medical testing, and cybersecurity.

Columbia Engineering researchers report that they developed a new, efficient way to modulate and enhance an important type of nonlinear optical process: optical second harmonic generation--where two input photons are combined in the material to produce one photon with twice the energy--from hexagonal boron nitride through micromechanical rotation and multilayer stacking. The study was published online March 3 by Science Advances.

"Our work is the first to exploit the dynamically tunable symmetry of 2D materials for nonlinear optical applications," said James Schuck , associate professor of mechanical engineering , who led the study along with James Hone , Wang Fong-Jen Professor of Mechanical Engineering.

A hot topic in the field of 2D materials has been exploring how twisting or rotating one layer relative to another can change the electronic properties of the layered system--something that can't be done in 3D crystals because the atoms are bonded so tightly together in a 3D network. Solving this challenge has led to a new research area termed "twistronics." In this new study, the team used concepts from twistronics to show that they also apply to optical properties.

"We are calling this new research area 'twistoptics,'" said Schuck. "Our twistoptics approach demonstrates that we can now achieve giant nonlinear optical responses in very small volumes--just a few atomic layer thicknesses--enabling, for example, entangled photon generation with a much more compact, chip-compatible foot print. Moreover, the response is fully tunable on demand."

Most of today's conventional nonlinear optical crystals are made of covalently bonded materials, such as lithium niobate and barium borate. But because they have rigid crystal structures, it is difficult to engineer and control their nonlinear optical properties. For most applications, though, some degree of control over a material's nonlinear optical properties is essential.

The group found that van der Waals multilayer crystals provide an alternative solution for engineering optical nonlinearity. Thanks to the extremely weak interlayer force, the researchers could easily manipulate the relative crystal orientation between neighboring layers by micromechanical rotation. With the ability to control symmetry at the atomic-layer limit, they demonstrated precise tuning and giant enhancement of optical second harmonic generation with micro-rotator devices and superlattice structures, respectively. For the superlattices, the team first used layer rotation to create "twisted" interfaces between layers that yield an extremely strong nonlinear optical response, and then stacked several of these "twisted" interfaces on top of one another.

"We showed that the nonlinear optical signal actually scales with the square of the number of twisted interfaces," said Kaiyuan Yao, a postdoctoral research fellow in Schuck's lab and co-lead author of the paper. "So this makes the already large nonlinear response of a single interface orders of magnitude stronger still."

The group's findings have several potential applications. Tunable second harmonic generation from micro-rotators could lead to novel on-chip transducers that convert micromechanical motion into sensitive optical signals. This is critical for many sensors and devices, such as atomic force microscopes.

Stacking multiple boron nitride thin films on top of each other with a controlled twist angle yielded a greatly enhanced nonlinear response. This could offer a new way to manufacture efficient nonlinear optical crystals with atomic precision, for use in a broad range of laser systems (such as green laser pointers) as well as optical spectroscopy, imaging, and metrology systems. And perhaps most significantly, such crystals could provide a compact means of generating entangled photons and single photons for next-generation optical quantum information processing and computing.

This work was a collaboration carried out at the Energy Frontier Research Center on Programmable Quantum Materials at Columbia, with theory collaborators at the Max Planck Institute for the Structure and Dynamics of Matter. The device fabrication was partially done in the cleanroom of the Columbia Nano Initiative.

"We hope," Schuck said, "that this demonstration provides a new twist in the ongoing narrative aimed at harnessing and controlling the properties of materials."

Credit: 
Columbia University School of Engineering and Applied Science

COVID-19 lockdown linked to uptick in tobacco use

March 4, 2021 -- Pandemic-related anxiety, boredom, and irregular routines were cited as major drivers of increased nicotine and tobacco use during the initial COVID-19 "lockdown," according to research just released by Columbia University's Mailman School of Public Health. The study highlights ways that public health interventions and policies can better support quit attempts and harm reduction, both during the COVID-19 pandemic and beyond. The findings are published in the International Journal of Drug Policy.

In April and May 2020, the researchers conducted telephone interviews with adults across the United States who use cigarettes and/or electronic nicotine delivery systems (ENDS), such as e-cigarettes. Participants in the study were recruited through an advertisement campaign on Facebook and Instagram. During this window, nearly 90 percent of the U.S. population experienced some form of state lockdown, with 40 states ordering non-essential businesses to close and 32 states enacting mandatory stay-at-home orders. At the time of their interviews, all participants were voluntarily isolating at home unless required to leave the house.

Nearly all participants reported increased stress related to COVID-19 - namely, fears about the virus, job uncertainty, and the psychological effects of isolation - and described this as the primary driver of increased nicotine and tobacco use. Decreased use, though less common, was concentrated among "social" tobacco users, who cited fewer interpersonal interactions during lockdown and a fear of sharing products.

At the community level, retail access impacted cigarette and ENDS use differently. While cigarettes were universally accessible in essential businesses, such as convenience stores and gas stations, access to preferred ENDS products was more limited, since "vape shops" and other specialty ENDS retailers were typically deemed non-essential and required to close or limit hours. This drove some ENDS users to order their products online, which often resulted in long wait times due to shipping delays, or product backorder as a result of high demand. As a result, some dual users of cigarettes and ENDS increased their use of readily available cigarettes.

"Pandemic response policies that intentionally or inadvertently restrict access to lower risk products - through availability, supply chains, or even postal service slowdowns - while leaving more harmful products widely accessible may have unintended consequences that should be considered during policy development," said Daniel Giovenco, PhD, assistant professor of sociomedical sciences at Columbia Mailman School, and the study's lead author.

Policy considerations

Given that tobacco use behaviors are expected to increase among some individuals during this sustained period of uneasiness, Giovenco and colleagues proposed several key policy recommendations: expansion of cessation resources and services, including their adaptation for remote delivery; establishment and enforcement of smoke-free home rules to protect household members; and enabling equivalent access to lower risk products - such as ENDS and nicotine replacement therapy - to facilitate harm reduction among those who cannot or do not want to quit using nicotine at this time.

"While quantitative, survey-based studies provide valuable insight into changes in tobacco use during lockdown periods, existing research has drawn mixed conclusions. Our approach was the first to qualitatively capture the complex drivers and mechanisms that may help explain varied behavioral shifts," noted Giovenco. "COVID-19 mitigation strategies to curb transmission will likely continue for the foreseeable future, with many permanently altering elements of the workplace, education, and consumer behaviors. Our findings can help tailor intervention and policy work to address multi-level determinants of tobacco use in the COVID era and the years ahead."

Credit: 
Columbia University's Mailman School of Public Health

Seagrass loss around the UK may be much higher than previously thought

The loss of seagrass in the waters around the UK is much higher than previously estimated. A new study published in Frontiers in Plant Science concludes that, with high certainty, at least 44% of the UK's seagrasses have been lost since 1936, of which 39% has been since the 1980s. This study is one of the first of its kind to bring together seagrass data from diverse sources and give a systematic estimate of the current and historic extent of seagrass, as well as seagrass loss in the UK.

The study was a collaboration between researchers at University College London, King's College London, and Swansea University.

Seagrasses as climate change superheroes
Nature-based solutions are essential to mitigate the effects of the climate crisis, and seagrasses are highly suitable candidates for the job. While they cover only 0.1% of the ocean floor worldwide, seagrass meadows are among the largest global carbon sinks, storing carbon in marine soils many times faster than terrestrial forests. Healthy seagrasses also support marine biodiversity, including commercially important species (such as bass) and charismatic species (such as seahorses), and provide ecosystem services such as nutrient cycling, increased shoreline stability, and support for coastal livelihoods. But human activities, such as industrial, agricultural, and coastal development, have led to worldwide declines.

Previous studies had estimated that the worldwide loss of seagrasses is at least 29%, but the current status of many seagrass meadows is unknown. A better knowledge of where losses have occurred would allow us to protect current seagrass meadows and re-plant and restore degraded or lost ones. Dr Alix Green, lead author of the study, says: "Raising the profile of this undervalued ecosystem will undoubtedly support its protection and rejuvenation."

Estimating the extent of seagrass meadows
The purpose of the study was to estimate the current areal extent of seagrass in the UK, as well as the recent and historic percentage loss of seagrass.

The two species of seagrass indigenous to the UK are Zostera marina and Zostera noltii, both protected under several EU Directives and the UK's Wildlife and Countryside Act 1981. The researchers collected datasets from a range of sources to assess the current areal extent of seagrasses. Any data collected since 1998 were categorized as 'contemporary', and data older than 1998 as 'historical'. The researchers used three methods to assess seagrass loss with high, medium, and low certainty.

Dr Green says: "Our paper establishes the best estimate of current seagrass extent in the UK, confirming at least 8,493 hectares, and documents a loss of at least 39% of areal cover in the last 30 years. Historically, we show that this loss could be as much as 92%, and that these meadows could have stored 11.4 Mt of carbon and supported approximately 400 million fish." Worryingly, these estimates likely under-represent the true loss.
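
As a rough, back-of-the-envelope illustration of what those percentages imply (the function name is ours; only the input figures come from the study), the current extent together with a fractional loss fixes the implied earlier extent:

```python
# Back-of-the-envelope arithmetic only; input figures are those quoted above.

def implied_historical_extent(current_ha: float, loss_fraction: float) -> float:
    """Earlier extent implied by the current extent and a fractional loss."""
    return current_ha / (1.0 - loss_fraction)

current = 8_493  # hectares, best estimate of current UK seagrass extent

print(f"39% loss implies ~{implied_historical_extent(current, 0.39):,.0f} ha some 30 years ago")
print(f"92% loss implies ~{implied_historical_extent(current, 0.92):,.0f} ha historically")
```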

Protecting what is left, restoring what is lost
These results show the urgency of protecting current seagrass meadows and restoring degraded or lost ones. They also show that not all seagrass meadows in the UK have suffered equally, and some sites show signs of recovery. The observation that seagrasses can recover from environmental degradation is encouraging and should motivate conservation initiatives. Co-author Dr Peter Jones, University College London, adds: "The next decade is a crucial window of opportunity to address the inter-related crises of biodiversity loss and climate change - the restoration of seagrass meadows would be an important contribution to this. This will involve restrictions such as reducing boat anchor damage, restricting damaging fishing methods and reducing coastal pollution, including through marine protected areas."

Dr Richard Unsworth, co-author, lecturer at Swansea University, and director of the marine conservation charity Project Seagrass, said: "Our systematic documentation of the loss of seagrass needs to be seen as a positive moment to start the rejuvenation of our coastal seas. Now is the time to create financial mechanisms that reduce the flow of nutrients into our coastal seas, and offsetting mechanisms that allow a pathway of carbon finance into the conservation and restoration of these systems. By reversing this loss, we can improve our fisheries, reduce coastal erosion and fight climate change."

Dr Green concluded: "The catastrophic losses documented in this research are alarming but offer a snapshot of the potential of this habitat if efforts are made to protect and restore seagrass meadows across the UK. We hope that this work will spur continued, systematic mapping and monitoring of seagrass meadows across the UK and encourage restoration and rehabilitation projects. These meadows have the potential to support our bountiful fisheries and help us win the fight against climate change and environmental degradation. The UK is lucky to have such a resource in our waters, and we should fight to protect it!"

Credit: 
Frontiers

Air pollution fell sharply during lockdown

image: The Innsbruck Atmospheric Observatory is located on the roof of the Bruno Sander House at the University of Innsbruck in the center of the Tyrolean capital.

Image: 
University of Innsbruck

The far-reaching mobility restrictions at the beginning of the coronavirus pandemic in March 2020 created a unique situation for the atmospheric sciences: "During the 2020 lockdown, we were able to directly investigate the actual effects of drastic traffic restrictions on the distribution of air pollutants and on the emission of greenhouse gases," says Innsbruck atmospheric scientist Thomas Karl. With his team, he has now published a detailed analysis of air quality during the first lockdown in the city of Innsbruck, Austria, in the journal Atmospheric Chemistry and Physics. "We find significantly greater decreases in air pollutants than in carbon dioxide, for example," the researcher says, summarizing the results.

Over the past year, some studies reported contradictory results, either because the influence of weather was not factored in or because a detailed comparison with emission data was not possible. Based on a unique measurement strategy combined with detailed source emission data, the Innsbruck researchers can now provide a reliable analysis. Their results confirm assumptions inferred from earlier work: "The decrease in nitrogen oxides and other pollutants due to reduced traffic is stronger than often assumed," emphasizes Thomas Karl. "We find that the proportion of nitrogen oxides emitted by traffic is higher than often assumed, while the proportion from domestic, commercial and public energy consumption is lower." The European energy transition, with its switch to cleaner combustion in the residential and industrial sectors, is having a positive effect on air quality that has in some cases been underestimated. Karl summarizes: "We project that in many European inner cities, comparable to Innsbruck, more than 90 percent of nitrogen oxide emissions are caused by traffic."

Emission models need to be adjusted

In urban regions across Europe, air quality thresholds for nitrogen oxides and other pollutants are regularly exceeded, and it is not always easy to determine which polluters are responsible for how much of the emissions. Until recently, the key method for quantifying emissions has relied on exhaust tests on test stands, whose results are then extrapolated in a model. However, the actual amount of air pollutants emitted by a vehicle or a heating appliance in everyday use can depend on many factors; the diesel scandal made clear how inconclusive test-stand measurements can be when assessing real-world environmental impact. Air quality management by environmental and health authorities depends heavily on atmospheric models, which in turn rely on accurate emission data, yet until now it has been very difficult to determine the air pollutants actually emitted in a specific region and to constrain their emission strengths. The team led by Thomas Karl from the Department of Atmospheric and Cryospheric Sciences at the University of Innsbruck closes this gap with the so-called eddy covariance method, which measures air composition and wind flow in detail and thus allows conclusions to be drawn about pollutant emission strengths. With the Innsbruck Atmospheric Observatory (IAO) set up at the University of Innsbruck, the air over Innsbruck is now being studied continuously.
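
For readers unfamiliar with the technique, the sketch below illustrates the core of the eddy covariance calculation with entirely synthetic numbers (nothing here comes from the IAO data): the vertical turbulent flux of a pollutant is the time average of the product of the fluctuations in vertical wind speed and in pollutant concentration.

```python
# Synthetic illustration of the eddy covariance calculation; all values
# are invented and stand in for fast wind/concentration measurements.
import numpy as np

rng = np.random.default_rng(0)
n = 36_000                                    # one hour of 10 Hz data
w = rng.normal(0.0, 0.3, n)                   # vertical wind speed (m/s)
c = 40.0 + 5.0 * w + rng.normal(0.0, 2.0, n)  # pollutant concentration (ug/m3)

# Flux = time average of the product of the fluctuating parts w' and c'.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)             # ug m^-2 s^-1; positive = upward

print(f"estimated surface emission flux: {flux:.2f} ug m^-2 s^-1")
```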

Credit: 
University of Innsbruck

New microcomb could help discover exoplanets and detect diseases

image: PhD student Óskar Bjarki Helgason demonstrates the chip and the experimental setup for generating the game-changing microcomb.

Image: 
Photo: Mia Halleröd Palmgren, Collage: Yen Strandqvist /Chalmers

Tiny photonic devices could be used to find new exoplanets, monitor our health, and make the internet more energy efficient. Researchers from Chalmers University of Technology, Sweden, now present a game-changing microcomb that could bring advanced applications closer to reality.

A microcomb is a photonic device capable of generating a myriad of optical frequencies - colours - in a tiny cavity known as a microresonator. These colours are uniformly distributed, so the microcomb behaves like a 'ruler made of light'. The device can be used to measure or generate frequencies with extreme precision.
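
That 'ruler' picture can be written in one line: the comb's teeth sit at evenly spaced frequencies f_n = f_0 + n × f_rep. The sketch below uses assumed offset and spacing values, not the parameters of the Chalmers device.

```python
# Illustrative values only; the offset and spacing are assumptions.
f_0 = 50e9      # comb offset frequency (Hz)
f_rep = 100e9   # spacing between neighbouring comb lines (Hz)

def comb_line(n: int) -> float:
    """Frequency of the n-th comb line: f_n = f_0 + n * f_rep."""
    return f_0 + n * f_rep

for n in (1930, 1931, 1932):   # lines near 193 THz, the optical telecom band
    print(f"line {n}: {comb_line(n) / 1e12:.4f} THz")
```

Consecutive lines differ by exactly f_rep, which is what makes the comb usable as a precise frequency ruler.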

In a recent article in the journal Nature Photonics, eight Chalmers researchers describe a new kind of microcomb on a chip, based on two microresonators. The new microcomb is a coherent, tunable and reproducible device with up to ten times higher net conversion efficiency than the current state of the art.

"The reason why the results are important is that they represent a unique combination of characteristics, in terms of efficiency, low-power operation, and control, that are unprecedented in the field," says Óskar Bjarki Helgason, a PhD student at the Department of Microtechnology and Nanoscience at Chalmers, and first author of the new article.

The Chalmers researchers are not the first to demonstrate a microcomb on a chip, but they have developed a method that overcomes several well-known limitations in the field. The key factor is the use of two optical cavities - microresonators - instead of one. This arrangement gives rise to the device's unique physical characteristics.

Placed on a chip, the newly developed microcomb is so small that it would fit on the end of a human hair. The gaps between the teeth of the comb are very wide, which opens great opportunities for both researchers and engineers.

A wide range of potential applications

Since almost any measurement can be linked to frequency, microcombs offer a wide range of potential applications. They could, for example, radically decrease the power consumption in optical communication systems, with tens of lasers being replaced by a single chip-scale microcomb in data centre interconnects. They could also be used in lidar for autonomous vehicles, for measuring distances.

Another exciting area where microcombs could be utilised is for the calibration of the spectrographs used in astronomical observatories devoted to the discovery of Earth-like exoplanets.

Extremely accurate optical clocks and health-monitoring apps for our mobile phones are further possibilities. By analysing the composition of our exhaled air, one could potentially diagnose diseases at earlier stages.

Providing answers to questions not yet asked

"For the technology to be practical and find its use outside the lab, we need to co-integrate additional elements with the microresonators, such as lasers, modulators and control electronics. This is a huge challenge, that requires maybe 5-10 years and an investment in engineering research. But I am convinced that it will happen," says Victor Torres Company, who leads the research project at Chalmers. He continues:

"The most interesting advances and applications are the ones that we have not even conceived of yet. This will likely be enabled by the possibility of having multiple microcombs on the same chip. What could we achieve with tens of microcombs that we cannot do with one?"

Credit: 
Chalmers University of Technology

Speeding up commercialization of electric vehicles

image: Co-free high capacity Li-rich materials for long-term cycles

Image: 
POSTECH

Professor Byoungwoo Kang's team has developed a high-energy-density cathode material by controlling the local structure of Li-rich layered materials.

Researchers in Korea have developed a high-capacity cathode material that can be stably charged and discharged for hundreds of cycles without using the expensive metal cobalt (Co). The day is fast approaching when electric vehicles can drive long distances on Li-ion batteries.

Professor Byoungwoo Kang and Dr. Junghwa Lee of POSTECH's Department of Materials Science and Engineering have successfully developed a high-energy-density cathode material that stably maintains its charge-discharge behavior for over 500 cycles without the expensive and toxic Co metal. The research team achieved this by controlling the local structure through a simple synthesis process they developed for the Li-rich layered material, which is attracting attention as a next-generation high-capacity cathode material. The findings were published in ACS Energy Letters, a journal of the American Chemical Society.

The mileage and the charge-discharge cycle life of an electric vehicle depend on the properties of the electrode materials in its rechargeable Li-ion battery. Electricity is generated when lithium ions flow back and forth between the cathode and anode. In Li-rich layered materials, the cycle life drops sharply when a large amount of lithium is repeatedly extracted and inserted. In particular, when a large amount of lithium is extracted and an oxygen reaction occurs in the highly charged state, structural collapse occurs, making it impossible to maintain the charge-discharge properties or the high energy density over long-term cycling. This deterioration of the cycle property has hampered commercialization.

The research team had previously shown that a homogeneous distribution of atoms between the transition metal layer and the lithium layer of a Li-rich layered material can be an important factor in activating the electrochemical reaction and improving the cycle property. The team then conducted follow-up research into synthesis conditions that increase the homogeneity of the atomic distribution in the structure. Building on the previously published solid-state reaction, the team developed a new, simple, and efficient process that produces a cathode material with an optimized atomic distribution.

As a result, the team confirmed that the synthesized Li-rich layered material has a local structure optimized for electrochemical activity and cycle stability, enabling a large amount of lithium to be used reversibly. The oxygen redox reaction was also confirmed to proceed stably and reversibly for several hundred cycles.

Under these optimized conditions, the synthesized Co-free Li-rich layered material delivered about 180% of the reversible energy density of conventionally commercialized high-nickel layered materials - 1,100 Wh/kg versus roughly 600 Wh/kg for, e.g., LiNi0.8Mn0.1Co0.1O2. In particular, even when a large amount of lithium was removed, the structure remained stable, retaining about 95% of its capacity over 100 cycles and 83% over 500 cycles - a breakthrough performance that points to stable, high-energy operation over hundreds of cycles.
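
A quick check of the arithmetic behind that comparison, using only the numbers quoted above:

```python
# Numbers from the text; everything else is simple arithmetic.
li_rich = 1100   # Wh/kg, Co-free Li-rich layered cathode
high_ni = 600    # Wh/kg, conventional high-nickel layered cathode (NMC811)

print(f"energy-density ratio: {li_rich / high_ni:.0%}")   # -> 183%, i.e. ~180%
print("capacity retention: ~95% after 100 cycles, 83% after 500 cycles")
```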

"The significance of these research findings is that the cycle property, which is one of the important issues in the next-generation high-capacity Li-rich layered materials, have been dramatically improved through relatively simple process changes," explained Professor Byoungwoo Kang of POSTECH. "This is noteworthy in that we have moved a step closer to commercializing the next generation Li-rich layered materials."

Credit: 
Pohang University of Science & Technology (POSTECH)

How 'green' are environmentally friendly fireworks?

March 5, 2021 Update: The authors have updated their funding statement to read: “The authors acknowledge funding from the National Natural Science Foundation of China, Guangdong Province Science and Technology Planning Project of China and Shenzhen Science and Technology Program.”

Fireworks are used in celebrations around the world, including Independence Day in the U.S., the Lantern Festival in China and the Diwali Festival in India. However, the popular pyrotechnic displays emit large amounts of pollutants into the atmosphere, sometimes causing severe air pollution. Now, researchers reporting in ACS' Environmental Science & Technology have estimated that, although so-called environmentally friendly fireworks emit 15-65% less particulate matter than traditional fireworks, they still significantly deteriorate air quality.

Fireworks displays can cause health problems, such as respiratory ailments, because they release high levels of air pollutants, including particulate matter (PM), sulfur dioxide, heavy metals and perchlorates. As a result, some cities have banned their use. But because the displays are an important aspect of many traditional celebrations, researchers and manufacturers have tried to develop more environmentally friendly pyrotechnics, including those with smokeless charges and sulfur-free propellants. Although research suggests that these fireworks emit fewer pollutants, their impact on air quality had not been evaluated. Ying Li and colleagues wanted to use data from a large fireworks display held in Shenzhen, China, on Chinese National Day on Oct. 1, 2019, to assess how "green" these fireworks really are.

The researchers estimated emissions of PM2.5 - PM with a diameter of 2.5 μm or smaller - from the 160,000 environmentally friendly fireworks set off during the display, as well as the emissions a display of traditional fireworks would have produced. They used information on the wind direction, wind speed, temperature and chemical composition of the fireworks to simulate the size, trajectory and peak PM2.5 values of the smoke plume resulting from the event. They then compared their simulated values with actual PM2.5 concentrations measured at 75 monitoring stations throughout the city following the fireworks display. In agreement with the team's predictions, the measured plume began as a narrow band that traveled northward before fully dispersing, with peak PM2.5 levels close to the simulated values.

The researchers used their validated simulation to estimate that environmentally friendly fireworks produce a much smaller, shorter-lasting plume, with 15-65% of the PM2.5 emissions of a display using traditional fireworks. However, the peak concentration of PM2.5 still greatly exceeded World Health Organization guidelines, leading the researchers to conclude that the number of "green" fireworks used at one time should be restricted.
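
For intuition about this kind of dispersion estimate, here is a toy Gaussian plume sketch. It is not the study's model, which used measured meteorology and the fireworks' chemical composition; the source strength, wind speed, and spread coefficients below are all assumed for illustration.

```python
# Toy Gaussian plume for intuition only; all parameters are assumed.
import math

def plume_concentration(q: float, u: float, x: float, y: float = 0.0) -> float:
    """Ground-level concentration (ug/m3) at downwind distance x (m) and
    crosswind offset y (m), for a continuous ground-level source of
    strength q (ug/s) in a steady wind of speed u (m/s)."""
    sigma_y = 0.08 * x   # horizontal plume spread (m), assumed
    sigma_z = 0.06 * x   # vertical plume spread (m), assumed
    return (q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-y**2 / (2 * sigma_y**2))

q, u = 1e7, 2.0          # assumed: 10 g/s of PM2.5 in a 2 m/s wind
for x in (500, 1000, 2000):
    print(f"{x:>4} m downwind: {plume_concentration(q, u, x):8.1f} ug/m3")
```

In this simple picture the centreline concentration falls roughly as 1/x², so doubling the downwind distance quarters the peak level - the basic behaviour behind a plume's rapid dilution as it travels.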

Credit: 
American Chemical Society