Tech

UArizona engineers demonstrate a quantum advantage

image: University of Arizona researchers demonstrate a quantum advantage

Image: 
University of Arizona

Quantum computing and quantum sensing have the potential to be vastly more powerful than their classical counterparts. Not only could a fully realized quantum computer take just seconds to solve equations that would take a classical computer thousands of years, but it could have incalculable impacts on areas ranging from biomedical imaging to autonomous driving.

However, the technology isn't quite there yet.

In fact, despite widespread theories about the far-reaching impact of quantum technologies, very few researchers have been able to demonstrate, using the technology available now, that quantum methods have an advantage over their classical counterparts.

In a paper published on June 1 in the journal Physical Review X, University of Arizona researchers experimentally show that quantum has an advantage over classical computing systems.

"Demonstrating a quantum advantage is a long-sought-after goal in the community, and very few experiments have been able to show it," said paper co-author Zheshen Zhang, assistant professor of materials science and engineering, principal investigator of the UArizona Quantum Information and Materials Group and one of the paper's authors. "We are seeking to demonstrate how we can leverage the quantum technology that already exists to benefit real-world applications."

How (and When) Quantum Works

Quantum computing and other quantum processes rely on tiny, powerful units of information called qubits. The classical computers we use today work with units of information called bits, which exist as either 0s or 1s, but qubits are capable of existing in both states at the same time. This duality makes them both powerful and fragile. The delicate qubits are prone to collapse without warning, making a process called error correction - which addresses such problems as they happen - very important.

The quantum field is now in an era that John Preskill, a renowned physicist from the California Institute of Technology, termed "noisy intermediate scale quantum," or NISQ. In the NISQ era, quantum computers can perform tasks that only require about 50 to a few hundred qubits, though with a significant amount of noise, or interference. Any more than that and the noisiness overpowers the usefulness, causing everything to collapse. It is widely believed that 10,000 to several million qubits would be needed to carry out practically useful quantum applications.

Imagine inventing a system that guarantees every meal you cook will turn out perfectly, and then giving that system to a group of children who don't have the right ingredients. It will be great in a few years, once the kids become adults and can buy what they need. But until then, the usefulness of the system is limited. Similarly, until researchers advance the field of error correction, which can reduce noise levels, quantum computations are limited to a small scale.

Entanglement Advantages

The experiment described in the paper used a mix of both classical and quantum techniques. Specifically, it used three sensors to classify the average amplitude and angle of radio frequency signals.

The sensors were equipped with another quantum resource called entanglement, which allows them to share information with one another and provides two major benefits: First, it improves the sensitivity of the sensors and reduces errors. Second, because they are entangled, the sensors evaluate global properties rather than gathering data about specific parts of a system. This is useful for applications that only need a binary answer; for example, in medical imaging, researchers don't need to know about every single cell in a tissue sample that isn't cancerous - just whether there's one cell that is cancerous. The same concept applies to detecting hazardous chemicals in drinking water.

The experiment demonstrated that equipping the sensors with quantum entanglement gave them an advantage over classical sensors, reducing the likelihood of errors by a small but critical margin.

"This idea of using entanglement to improve sensors is not limited to a specific type of sensor, so it could be used for a range of different applications, as long as you have the equipment to entangle the sensors," said study co-author Quntao Zhuang, assistant professor of electrical and computer engineering and principal investigator of the Quantum Information Theory Group"In theory, you could consider applications like lidar (Light Detection and Ranging) for self-driving cars, for example."

Zhuang and Zhang developed the theory behind the experiment and described it in a 2019 Physical Review X paper. They co-authored the new paper with lead author Yi Xia, a doctoral student in the James C. Wyant College of Optical Sciences, and Wei Li, a postdoctoral researcher in materials science and engineering.

Qubit Classifiers

There are existing applications that use a mix of quantum and classical processing in the NISQ era, but they rely on preexisting classical datasets that must be converted and classified in the quantum realm. Imagine taking a series of photos of cats and dogs, then uploading the photos into a system that uses quantum methods to label the photos as either "cat" or "dog."

The team is tackling the labeling process from a different angle, by using quantum sensors to gather their own data in the first place. It's more like using a specialized quantum camera that labels the photos as either "dog" or "cat" as the photos are taken.

"A lot of algorithms consider data stored on a computer disk, and then convert that into a quantum system, which takes time and effort," Zhuang said. "Our system works on a different problem by evaluating physical processes that are happening in real time."

The team is excited for future applications of their work at the intersection of quantum sensing and quantum computing. They even envision one day integrating their entire experimental setup onto a chip that could be dipped into a biomaterial or water sample to identify disease or harmful chemicals.

"We think it's a new paradigm for both quantum computing, quantum machine learning and quantum sensors, because it really creates a bridge to interconnect all these different domains," Zhang said.

Credit: 
University of Arizona College of Engineering

New algorithm could help enable next-generation deep brain stimulation devices

PROVIDENCE, R.I. [Brown University] -- By delivering small electrical pulses directly to the brain, deep brain stimulation (DBS) can ease tremors associated with Parkinson's disease or help relieve chronic pain. The technique works well for many patients, but researchers would like to make DBS devices that are a little smarter by adding the capability to sense activity in the brain and adapt stimulation accordingly.

Now, a new algorithm developed by Brown University bioengineers could be an important step toward such adaptive DBS. The algorithm removes a key hurdle that makes it difficult for DBS systems to sense brain signals while simultaneously delivering stimulation.

"We know that there are electrical signals in the brain associated with disease states, and we'd like to be able to record those signals and use them to adjust neuromodulation therapy automatically," said David Borton, an assistant professor of biomedical engineering at Brown and corresponding author of a study describing the algorithm. "The problem is that stimulation creates electrical artifacts that corrupt the signals we're trying to record. So we've developed a means of identifying and removing those artifacts, so all that's left is the signal of interest from the brain."

The research is published in the journal Cell Reports Methods. The work was co-led by Nicole Provenza, a Ph.D. candidate working in Borton's lab at Brown, and Evan Dastin-van Rijn, a Ph.D. student at the University of Minnesota who worked on the project while he was an undergraduate at Brown advised by Borton and Matthew Harrison, an associate professor of applied mathematics. Borton's lab is affiliated the Brown's Carney Institute for Brain Science.

DBS systems typically consist of an electrode implanted in the brain that's connected to a pacemaker-like device implanted in the chest. Electrical pulses are delivered at a consistent frequency, which is set by a doctor. The stimulation frequency can be adjusted as disease states change, but this has to be done manually by a physician. If devices could sense biomarkers of disease and respond automatically, it could lead to more effective DBS therapy with potentially fewer side effects.

There are several factors that make it difficult to sense and stimulate at the same time, the researchers say. For one thing, the frequency signature of the stimulation artifact can sometimes overlap with that of the brain signal researchers want to detect. So merely cutting out swaths of frequency to eliminate artifacts might also remove important signals. To eliminate the artifact and leave other data intact, the exact waveform of the artifact needs to be identified, which presents another problem. Implanted brain sensors are generally designed to run on minimal power, so the rate at which sensors sample electrical signals makes for fairly low-resolution data. Accurately identifying the artifact waveform with such low-resolution data is a challenge.

To get around that problem, the researchers came up with a way to turn low-resolution data into a high-resolution picture of the waveform. Even though sensors don't collect high-resolution data, they do collect a lot of data over time. Using some clever mathematics, the Brown team found a way to cobble bits of data together into a high-resolution picture of the artifact waveform.

"We basically take an average of samples recorded at similar points along the artifact waveform," Dastin-van Rijn said. "That allows us to predict the contribution of the artifact in those kinds of samples, and then remove it."

In a series of laboratory experiments and computer simulations, the team showed that their algorithm outperforms other techniques in its ability to separate signal from artifact. The team also used the algorithm on previously collected data from humans and animal models to show that they could accurately identify artifacts and remove them.

"I think one big advantage to our method is that even when the signal of interest closely resembles the simulation artifact, our method can still tell the difference between the two," Provenza said. "So that way we're able to get rid of the artifact while leaving the signal intact."

Another advantage, the researchers say, is that the algorithm isn't computationally expensive. It could potentially run in real time on current DBS devices. That opens the door to real-time artifact-filtering, which would enable simultaneous recording and stimulation.

"That's the key to an adaptive system," Borton said. "Being able to get rid of the stimulation artifact while still recording important biomarkers is what will ultimately enable a closed-loop therapeutic system."

Credit: 
Brown University

Researchers develop prototype of robotic device to pick, trim button mushrooms

image: To determine forces that needed to be programmed into the robotic picker, researchers took mushroom-picking-dynamics measurements using force sensors and an inertial measurement unit.

Image: 
Penn State

Researchers in Penn State's College of Agricultural Sciences have developed a robotic mechanism for mushroom picking and trimming and demonstrated its effectiveness for the automated harvesting of button mushrooms.

In a new study, the prototype, which is designed to be integrated with a machine vision system, showed that it is capable of both picking and trimming mushrooms growing in a shelf system.

The research is consequential, according to lead author Long He, assistant professor of agricultural and biological engineering, because the mushroom industry has been facing labor shortages and rising labor costs. Mechanical or robotic picking can help alleviate those problems.

"The mushroom industry in Pennsylvania is producing about two-thirds of the mushrooms grown nationwide, and the growers here are having a difficult time finding laborers to handle the harvesting, which is a very labor intensive and difficult job," said He. "The industry is facing some challenges, so an automated system for harvesting like the one we are working on would be a big help."

The button mushroom -- Agaricus bisporus -- is an important agricultural commodity. A total of 891 million pounds of button mushrooms valued at $1.13 billion were consumed in the U.S. from 2017 to 2018. Of this production, 91% were for the fresh market, according to the U.S. Department of Agriculture, and were picked by hand, one by one, to ensure product quality, shelf life and appearance. Labor costs for mushroom harvesting account for 15% to 30% of the production value, He pointed out.

Developing a device to effectively harvest mushrooms was a complex endeavor, explained He. In hand-picking, a picker first locates a mature mushroom and detaches it with one hand, typically using three fingers. A knife, in the picker's other hand, is then used to remove the stipe end. Sometimes the picker waits until there are two or three mushrooms in hand and cuts them one by one. Finally, the mushroom is placed in a collection box. A robotic mechanism had to achieve an equivalent picking process.

The researchers designed a robotic mushroom-picking mechanism that included a picking "end-effector" based on a bending motion, a "4-degree-of-freedom positioning" end-effector for moving the picking end-effector, a mushroom stipe-trimming end-effector, and an electro-pneumatic control system. They fabricated a laboratory-scale prototype to validate the performance of the mechanism.

The research team used a suction cup mechanism to latch onto mushrooms and conducted bruise tests on the mushroom caps to analyze the influence of air pressure and acting time of the suction cup.

The test results, recently published in Transactions of the American Society of Agricultural and Biological Engineers, showed that the picking end-effector was successfully positioned to the target locations and its success rate was 90% at first pick, increasing to 94.2% after second pick.

The trimming end-effector achieved a success rate of 97% overall. The bruise tests indicated that the air pressure was the main factor affecting the bruise level, compared to the suction-cup acting time, and an optimized suction cup may help to alleviate the bruise damage, the researchers noted. The laboratory test results indicated that the developed picking mechanism has potential to be implemented in automatic mushroom harvesting.

Button mushrooms for the study were grown in tubs at Penn State's Mushroom Research Center on the University Park campus. Fabrication and experiments were conducted at the Fruit Research and Extension Center in Biglerville. A total of 70 picking tests were conducted to evaluate the robotic picking mechanism. The working pressures of the pneumatic system and the suction cup were set at 80 and 25 pounds per square inch, respectively.

Credit: 
Penn State

Optic nerve firing may spark growth of vision-threatening childhood tumor

image: In a study of mice, NIH funded researchers showed that the neural activity associated seeing light may spark the growth of vision-threating optic nerve gliomas (see red spots).

Image: 
Courtesy of Monje lab, Stanford University, Palo Alto, California.

In a study of mice, researchers showed how the act of seeing light may trigger the formation of vision-harming tumors in young children who are born with neurofibromatosis type 1 (NF1) cancer predisposition syndrome. The research team, funded by the National Institutes of Health, focused on tumors that grow within the optic nerve, which relays visual signals from the eyes to brain. They discovered that the neural activity which underlies these signals can both ignite and feed the tumors. Tumor growth was prevented or slowed by raising young mice in the dark or treating them with an experimental cancer drug during a critical period of cancer development.

"Brain cancers recruit the resources they need from the environment they are in," said Michelle Monje, M.D., Ph.D., associate professor of neurology at Stanford University, Palo Alto, California, and co-senior author of the study published in Nature. "To fight brain cancers, you have to know your enemies. We hope that understanding how brain tumors weaponize neural activity will ultimately help us save lives and reduce suffering for many patients and their loved ones."

The study was a joint project between Dr. Monje's team and scientists in the laboratory of David H. Gutmann, M.D., Ph.D., the Donald O. Schnuck Family Professor and the director of the Neurofibromatosis Center at the Washington University School of Medicine in St. Louis.

In 2015, Dr. Monje's team showed for the first time that stimulation of neural activity in mice can speed the growth of existing malignant brain tumors and that this enhancement may be controlled by the secretion of a protein called neuroligin-3. In this new study, the researchers hoped to test out these ideas during earlier stages of tumor development.

"Over the years, cancer researchers have become more and more focused on the role of the tumor microenvironment in cancer development and growth. Until recently, neuronal activity has not been considered, as most studies have focused on immune and vascular cell interactions," said Jane Fountain, Ph.D., program director at the NIH's National Institute of Neurological Disorders and Stroke (NINDS), which partially funded the study. "This study is one of the first to show a definitive role for neurons in influencing tumor initiation. It's both scary and exciting to see that controlling neuronal activity can have such a profound influence on tumor growth."

Specifically, the researchers chose to study optic nerve gliomas in mice. Gliomas are formed from newborn cells that usually become a type of brain cell called glia. The tumors examined in this study are reminiscent of those found in about 15-20% of children who are born with a genetic mutation that causes NF1. About half of these children develop vision problems.

Dr. Gutmann helped discover the disease-causing mutation linked to NF1 and its encoded protein, neurofibromin, while working in a lab at the University of Michigan, Ann Arbor, which was then led by the current NIH director, Francis S. Collins, M.D., Ph.D. Since then, the Gutman team's pioneering work on NF1, and particularly NF1-brain tumors, has greatly shaped the medical research community's understanding of low-grade glioma formation and progression.

"Based on multiple lines of converging evidence, we knew that these optic nerve gliomas arose from neural precursor cells. However, the tumor cells required help from surrounding non-cancerous cells in the optic nerve to form gliomas," said Dr. Gutmann, who was also a senior author of this study. "While we had previously shown that immune cells, like T-cells and microglia, provide growth factors essential for tumor growth, the big question was: 'What role did neurons and neural activity play in optic glioma initiation and progression?'"

To address this, the researchers performed experiments on mice engineered by the Gutmann laboratory to generate tumors that genetically resembled human NF1-associated optic gliomas. Typically, optic nerve gliomas appear in these mice between six to sixteen weeks of age.

Initial experiments suggested that optic nerve activity drives the formation of the tumors. Artificially stimulating neural activity during the critical ages of tumor development enhanced cancer cell growth, resulting in bigger optic nerve tumors. In contrast, raising the mice in the dark during that same time completely prevented new tumors from forming.

Interestingly, the exact timing of the dark period also appeared to be important. For instance, two out of nine mice developed tumors when they were raised in the dark beginning at twelve weeks of age.

"These results suggest there is a temporal window during childhood development when genetic susceptibility and visual system activity critically intersect. If a susceptible neural precursor cell receives the key signals at a vulnerable time, then it will become cancerous. Otherwise no tumors form," said Yuan Pan, Ph.D., a post-doctoral fellow at Stanford and the lead author. "We needed to understand how this happens at a molecular level."

Further experiments supported the idea that neuroligin-3 may be a key player in this process. For instance, the scientists found high levels of neuroligin-3 gene activity in both mouse and human gliomas. Conversely, silencing the neuroligin-3 gene prevented tumors from developing in the neurofibromatosis mice.

Traditionally, neuroligin-3 proteins are thought to act like tie rods that physically brace neurons together at communication points called synapses. In this study, the researchers found that the protein may work differently. The optic nerves of neurofibromatosis mice raised under normal light conditions had higher levels of a short, free-floating version of neuroligin-3 than the nerves of mice raised in the dark.

"Previously our lab showed that neural activity causes shedding of neuroligin-3 and that this shedding hastens malignant brain tumor growth. Here our results suggest that neuroligin-3 shedding is the link between neural activity and optic nerve glioma formation. Visual activity causes shedding and shedding, in turn, transforms susceptible cells into gliomas," said Dr. Monje.

Finally, the researchers showed that an experimental drug may be effective at combating gliomas. The drug is designed to block the activity of ADAM10, a protein that is important for neuroligin-3 shedding. Treating the neurofibromatosis mutant mice with the drug during the critical period of six to sixteen weeks after birth prevented the development of tumors. Treatment delayed to twelve weeks did not prevent tumor formation but reduced the growth of the optic gliomas.

"These results show that understanding the relationship between neural activity and tumor growth provides promising avenues for novel treatments of NF-1 optic gliomas," said Jill Morris, Ph.D., program director, NINDS.

Dr. Monje's team is currently testing neuroligin-3-targeting drugs and light exposure modifications that may in the future help treat patients with this form of cancer.

Credit: 
NIH/National Institute of Neurological Disorders and Stroke

New method to improve durability of nano-electronic components, further semiconductor manufacturing

University of South Florida researchers recently developed a novel approach to mitigating electromigration in nanoscale electronic interconnects that are ubiquitous in state-of-the-art integrated circuits. This was achieved by coating copper metal interconnects with hexagonal boron nitride (hBN), an atomically-thin insulating two-dimensional (2D) material that shares a similar structure as the "wonder material" graphene.

Electromigration is the phenomenon in which an electrical current passing through a conductor causes the atomic-scale erosion of the material, eventually resulting in device failure. Conventional semiconductor technology addresses this challenge by using a barrier or liner material, but this takes up precious space on the wafer that could otherwise be used to pack in more transistors. USF mechanical engineering Assistant Professor Michael Cai Wang's approach accomplishes this same goal, but with the thinnest possible materials in the world, two-dimensional (2D) materials.

"This work introduces new opportunities for research into the interfacial interactions between metals and ångström-scale 2D materials. Improving electronic and semiconductor device performance is just one result of this research. The findings from this study opens up new possibilities that can help advance future manufacturing of semiconductors and integrated circuits," Wang said. "Our novel encapsulation strategy using single-layer hBN as the barrier material enables further scaling of device density and the progression of Moore's Law." For reference, a nanometer is 1/60,000 of the thickness of human hair, and an ångström is one-tenth of a nanometer. Manipulating 2D materials of such thinness requires extreme precision and meticulous handling.

In their recent study published in the journal Advanced Electronic Materials, copper interconnects passivated with a monolayer hBN via a back-end-of-line (BEOL) compatible approach exhibited more than 2500% longer device lifetime and more than 20% higher current density than otherwise identical control devices. This improvement, coupled with the ångström-thinness of hBN compared to conventional barrier/liner materials, allows for further densification of integrated circuits. These findings will help advance device efficiency and decrease energy consumption.

"With the growing demand for electric vehicles and autonomous driving, the demand for more efficient computing has grown exponentially. The promise of higher integrated circuits density and efficiency will enable development of better ASICs (application-specific integrated circuits) tailored to these emerging clean energy needs." explained Yunjo Jeong, an alumnus from Wang's group and first author of the study.

An average modern car has hundreds of microelectronic components, and the significance of these tiny but critical components has been especially highlighted through the recent global chip shortage. Making the design and manufacturing of these integrated circuits more efficient will be key to mitigating possible future disruptions to the supply chain. Wang and his students are now investigating ways to speed up their process to the fab scale.

"Our findings are not limited only to electrical interconnects in semiconductor research. The fact that we were able to achieve such a drastic interconnect device improvement implies that 2D materials can also be applied to a variety of other scenarios." Wang added.

Credit: 
University of South Florida

Forged books of seventeenth-century music discovered in Venetian library

image: The manuscripts include arias that were foundational in the history of opera -- a genre that emerged in the early seventeenth century.

Image: 
Michel Garrett, Penn State

UNIVERSITY PARK, Pa. -- In 1916 and 1917, a musician and book dealer named Giovanni Concina sold three ornately decorated seventeenth-century songbooks to a library in Venice, Italy. Now, more than 100 years later, a musicologist at Penn State has discovered that the manuscripts are fakes, meticulously crafted to appear old but actually fabricated just prior to their sale to the library. The manuscripts are rare among music forgeries in that the songs are authentic, but the books are counterfeit.

Uncovering deception was not what Marica Tacconi, professor of musicology and associate director of the School of Music at Penn State, set out to do when she began her research at the Biblioteca Nazionale Marciana of Venice in 2018. While on sabbatical there, she had planned to spend the fall semester studying 'echo effects' in seventeenth-century music -- phrases that are sung by the primary vocalist and then repeated 'in echo' by one or more additional singers.

While searching the library's database for songs incorporating echo effects, Tacconi stumbled upon a peculiar book. Catalogued as being from the seventeenth century, it certainly looked the part. It was bound in worn leather and embellished with brass bosses, or metal knobs that serve to elevate and protect the book from the table surface. Inside, the paper showed some signs of deterioration, including even an occasional worm hole. The first page revealed an elaborate letter 'T,' indicating the opening of the song "Tu mancavi a tormentarmi" by Antonio Cesti. The music itself was written with heart-shaped noteheads, and the bottom of the page displayed the coat of arms of the Contarini family, one of the most prominent and influential Venetian households.

"It was a beautiful, elegantly produced book," said Tacconi. "I was immediately intrigued. But I also sensed that something was off."

Additional research led to the discovery of two more manuscripts, also sold by Concina and very similar in format, design and content. Considered as a set, the three books preserve 61 compositions by 26 Italian composers, all written during the period from 1600 to 1678. According to Tacconi, an expert on the music, art and culture of early modern Italy, typical seventeenth-century music anthologies focus on just one or a few composers.

"The books comprised a strange conglomeration of composers, from very famous ones, like Giulio Caccini, Claudio Monteverdi and Francesco Cavalli, to lesser-known names. This was unusual for the seventeenth century when music anthologies tended to be more monographic in content," she said. "In addition, seventeenth-century scribes would not have had access to such a wide range of music, as many of those pieces had not yet been printed and existed only in manuscripts that did not circulate widely."

Despite her suspicions about the authenticity of the manuscripts, Tacconi was excited about the music itself.

"The manuscripts include arias that were foundational in the history of opera -- a genre that emerged in the early seventeenth century," she said. "They include musical gems that can tell us a lot about the origins and development of opera."

Upon further close investigation, she realized that much of the music in the manuscripts had been lifted, note for note, from a number of late nineteenth-/early twentieth-century books about music.

"The music copied in the manuscripts showed some strange editorial quirks that you can see in early twentieth-century editions, but that would not have appeared in seventeenth-century sources," said Tacconi, who proceeded to conduct a detailed comparison of the manuscripts with more modern books.

This type of painstaking comparison proved to be particularly fruitful in proving the manuscripts' fabricated nature. Tacconi's knowledge of a little-known twentieth-century book in particular, Hugo Riemann's "Handbuch der Musikgeschichte" (1912), provided verification of her suspicions. For example, one of the fabricated manuscripts included the song "Torna o torna pargoletto" by Jacopo Peri, which originally appeared in Piero Benedetti's "Musiche" -- a collection of songs published in 1611. Riemann included it in his "Handbuch," but with some alterations. Tacconi noticed these small but significant variants -- a wrong note, a misspelling of a word.

"It was obvious that the fabricator copied the music from Riemann's 1912 publication and not from the 1611 print," she said. "This was the 'smoking gun,' the confirmation that these books were indeed forgeries."

Tacconi noted that the books are unique among music forgeries in that most forgeries falsify the music itself.

"While the music preserved in these books is authentic, the manuscripts themselves are the handiwork of one or more fabricators who, working with several scribes and decorators, went through extraordinary means to make the volumes appear genuine," she said. "The books were clearly designed to look like those created for important Venetian households during the seventeenth century. It's not surprising that the library staff did not recognize them as fakes. At first glance they seem authentic, but once we look closely at the music and notice the editorial quirks, we detect the subtle traces of a twentieth-century fabricator."

Tacconi said that it is impossible to know whether Concina, who died in 1946, was the mastermind behind the forgeries or if he came into possession of the books with no knowledge of their fabricated nature.

Regardless of who generated the forgeries, an important question is "Why did they do it?"

"Monetary gain was probably not the main impetus," said Tacconi, explaining that the library paid Concina the equivalent of about $220 in today's money for one of the manuscripts. "That's a relatively modest sum, which does not really justify all the time and effort that went into producing these books. Instead, what we have is possibly an example of the fabricators engaging in a desire to hoodwink the experts."

In addition, she said, the forgers could have been motivated by a love for the music and the time period. "Imitation is the sincerest form of flattery," after all.

"Twentieth-century musicians and publishers often romanticized the music of the seventeenth century as being particularly elegant, and that elegance is something you see very clearly in the visual aspects of the three manuscripts," said Tacconi. "They're beautiful and ornate; their decorations include butterflies, birds and little cupids; the notes are heart shaped. The fact that the forgers went to such an effort to portray this elegance tells us something about the forgers' attitudes about the music of this time period. Knowing now that these books were created in the early twentieth century, the manuscripts and their contents actually provide an opportunity to study the late-Romantic tradition of so-called 'arie antiche' or 'gemme antiche,' which saw music collectors, musicians and audiences alike drawn to the antiquity of Italian Baroque solo vocal music."

Credit: 
Penn State

New Geology articles published online ahead of print in May

Boulder, Colo., USA: Article topics include Zealandia, Earth's newly recognized continent; the topography of Scandinavia; an interfacial energy penalty; major disruptions in North Atlantic circulation; the Great Bahama Bank; Pityusa Patera, Mars; the end-Permian extinction; and Tongariro and Ruapehu volcanoes, New Zealand. These Geology articles are online at https://geology.geoscienceworld.org/content/early/recent.

Mass balance controls on sediment scour and bedrock erosion in waterfall plunge pools
Joel S. Scheingross; Michael P. Lamb

Abstract: Waterfall plunge pools experience cycles of sediment aggradation and scour that modulate bedrock erosion, habitat availability, and hazard potential. We calculate sediment flux divergence to evaluate the conditions under which pools deposit and scour sediment by comparing the sediment transport capacities of waterfall plunge pools (Qsc_pool) and their adjacent river reaches (Qsc_river). Results show that pools fill with sediment at low river discharge because the waterfall jet is not strong enough to transport the supplied sediment load out of the pool. As discharge increases, the waterfall jet strengthens, allowing pools to transport sediment at greater rates than in adjacent river reaches. This causes sediment scour from pools and bar building at the downstream pool boundary. While pools may be partially emptied of sediment at modest discharge, floods with recurrence intervals >10 yr are typically required for pools to scour to bedrock. These results allow new constraints on paleodischarge estimates made from sediment deposited in plunge pool bars and suggest that bedrock erosion at waterfalls with plunge pools occurs during larger floods than in river reaches lacking waterfalls.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48881.1/598762/Mass-balance-controls-on-sediment-scour-and

Pace, magnitude, and nature of terrestrial climate change through the end-Permian extinction in southeastern Gondwana
T.D. Frank; C.R. Fielding; A.M.E. Winguth; K. Savatic; A. Tevyaw ...

Abstract: Rapid climate change was a major contributor to the end-Permian extinction (EPE). Although well constrained for the marine realm, relatively few records document the pace, nature, and magnitude of climate change across the EPE in terrestrial environments. We generated proxy records for chemical weathering and land surface temperature from continental margin deposits of the high-latitude southeastern margin of Gondwana. Regional climate simulations provide additional context. Results show that Glossopteris forest-mire ecosystems collapsed during a pulse of intense chemical weathering and peak warmth, which capped ~1 m.y. of gradual warming and intensification of seasonality. Erosion resulting from loss of vegetation was short lived in the low-relief landscape. Earliest Triassic climate was ~10-14 °C warmer than the late Lopingian and landscapes were no longer persistently wet. Aridification, commonly linked to the EPE, developed gradually, facilitating the persistence of refugia for moisture-loving terrestrial groups.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48795.1/598763/Pace-magnitude-and-nature-of-terrestrial-climate

Controls on andesitic glaciovolcanism at ice-capped volcanoes from field and experimental studies
R.P. Cole; J.D.L. White; T. Dürig; R. Büttner; B. Zimanowski ...

Abstract: Glaciovolcanic deposits at Tongariro and Ruapehu volcanoes, New Zealand, represent diverse styles of interaction between wet-based glaciers and andesitic lava. There are ice-confined lavas, and also hydroclastic breccia and subaqueous pyroclastic deposits that formed during effusive and explosive eruptions into meltwater beneath the glacier; they are rare among globally reported products of andesitic glaciovolcanism. The apparent lack of hydrovolcanically fragmented andesite at ice-capped volcanoes has been attributed to a lack of meltwater at the interaction sites because either the thermal characteristics of andesite limit meltwater production or meltwater drains out through leaky glaciers and down steep volcano slopes. We used published field evidence and novel, dynamic andesite-ice experiments to show that, in some cases, meltwater accumulates under glaciers on andesitic volcanoes and that meltwater production rates increase as andesite pushes against an ice wall. We concur with models for eruptions beneath ice sheets showing that the glacial conditions and pre-eruption edifice morphology are more important controls on the style of glaciovolcanism and its products than magma composition and the thermal properties of magmas. Glaciovolcanic products can be useful proxies for paleoenvironment, and the range of andesitic products and the hydrological environments in which andesite erupts are greater than hitherto appreciated.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48735.1/598764/Controls-on-andesitic-glaciovolcanism-at-ice

How cementation and fluid flow influence slip behavior at the subduction interface
J.N. Hooker; D.M. Fisher

Abstract: Much of the complexity of subduction-zone earthquake size and temporal patterns owes to linkages among fluid flow, stress, and fault healing. To investigate these linkages, we introduce a novel numerical model that tracks cementation and fluid flow within the framework of an earthquake simulator. In the model, there are interseismic increases in cohesion across the plate boundary and decreases in porosity and permeability caused by cementation along the interface. Seismogenic slip is sensitive to the effective stress and therefore fluid pressure; in turn, slip events increase porosity by fracturing. The model therefore accounts for positive and negative feedbacks that modify slip behavior through the seismic cycle. The model produces temporal clustering of earthquakes in the seismic record of the Aleutian margin, which has well-documented along-strike variations in locking characteristics. Model results illustrate how physical, geochemical, and hydraulic linkages can affect natural slip behavior. Specifically, coseismic drops in fluid pressure steal energy from large ruptures, suppress slip, moderate the magnitudes of large earthquakes, and lead to aftershocks.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48741.1/598765/How-cementation-and-fluid-flow-influence-slip

Conservative transport of dissolved sulfate across the Rio Madre de Dios floodplain in Peru
Emily I. Burt; Markus Bill; Mark E. Conrad; Adan Julian Ccahuana Quispe; John N. Christensen ...

Abstract: Mineral weathering plays a primary role in the geologic carbon cycle. Silicate weathering by carbonic acid consumes CO2 and stabilizes Earth's climate system. However, when sulfuric acid drives weathering, CO2 can be released to the atmosphere. Recent work has established that sulfuric acid weathering resulting from sulfide mineral oxidation is globally significant and particularly important in rapidly eroding environments. In contrast, if SO42- produced by sulfide oxidation is reduced during continental transit, then CO2 release may be negated. Yet, little is known about how much SO42- reduction takes place in terrestrial environments. We report oxygen and sulfur stable isotope ratios of SO42- in river waters and mass budget calculations, which together suggest that SO42- released from pyrite oxidation in the Peruvian Andes mountains is conservatively exported across ~300 km of the Amazon floodplain. In this system, floodplain SO42- reduction does not counteract the large SO42- flux from Andean pyrite weathering or measurably affect the stable isotope composition of riverine SO42-. These findings support the hypothesis that uplift and erosion of sedimentary rocks drive release of CO2 from the rock reservoir to the atmosphere.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48997.1/598766/Conservative-transport-of-dissolved-sulfate-across

First U-Pb dating of fossilized soft tissue using a new approach to paleontological chronometry
Heriberto Rochín-Bañaga; Donald W. Davis; Tobias Schwennicke

Abstract: Previous U-Pb dating of fossils has had only limited success because of low uranium content and abundance of common Pb as well as element mobility during late diagenesis. We report the first accurate U-Pb dating of fossilized soft tissue from a Pliocene phosphatized bivalve mold using laser ablation-inductively coupled mass spectrometry (LA-ICPMS). The fossilized soft tissue yields a diagenetic U-Pb age of 3.16 ± 0.08 Ma, which is consistent with its late Pliocene stratigraphy and similar to the oldest U-Pb age measured on accompanying shark teeth. Phosphate extraclasts give a distinctly older age of 5.1 ± 1.7 Ma, indicating that they are likely detrital and may have furnished P, promoting phosphatization of the mold. The U-Pb ages reported here along with stratigraphic constraints suggest that diagenesis occurred shortly after the death of the bivalve and that the U-Pb system in the bivalve mold remained closed until the present. Shark teeth collected from the same horizon show variable resetting due to late diagenesis. Data were acquired as line scans in order to exploit the maximum Pb/U variation and were regressed as counts, rather than ratios, in three-dimensional space using a Bayesian statistical method.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48386.1/598767/First-U-Pb-dating-of-fossilized-soft-tissue-using

Direct measurement of fungal contribution to silicate weathering rates in soil
Bastien Wild; Gwenaël Imfeld; Damien Daval

Abstract: Chemical weathering produces solutes that control groundwater chemistry and supply ecosystems with essential nutrients. Although microbial activity influences silicate weathering rates and associated nutrient fluxes, its relative contribution to silicate weathering in natural settings remains largely unknown. We provide the first quantitative estimates of in situ silicate weathering rates that account for microbially induced dissolution and identify microbial actors associated with weathering. Nanoscale topography measurements showed that fungi colonizing olivine [(Mg,Fe)2SiO4] samples in a Mg-deficient forest soil accounted for up to 16% of the weathering flux after 9 mo of incubation. A local increase in olivine weathering rate was measured and attributed to fungal hyphae of Verticillium sp. Altogether, this approach provides quantitative parameters of bioweathering (i.e., rates and actors) and opens new avenues to improve elemental budgets in natural settings.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48706.1/598768/Direct-measurement-of-fungal-contribution-to

Detrital chromites reveal Slave craton's missing komatiite
Rasmus Haugaard; Pedro Waterton; Luke Ootes; D. Graham Pearson; Yan Luo ...

Abstract: Komatiitic magmatism is a characteristic feature of Archean cratons, diagnostic of the addition of juvenile crust, and a clue to the thermal evolution of early Earth lithosphere. The Slave craton in northwest Canada contains >20 greenstone belts but no identified komatiite. The reason for this dearth of komatiite, when compared to other Archean cratons, remains enigmatic. The Central Slave Cover Group (ca. 2.85 Ga) includes fuchsitic quartzite with relict detrital chromite grains in heavy-mineral laminations. Major and platinum group element systematics indicate that the chromites were derived from Al-undepleted komatiitic dunites. The chromites have low 187Os/188Os ratios relative to chondrite with a narrow range of rhenium depletion ages at 3.19 ± 0.12 Ga. While these ages overlap a documented crust formation event, they identify an unrecognized addition of juvenile crust that is not preserved in the bedrock exposures or the zircon isotopic data. The documentation of komatiitic magmatism via detrital chromites indicates a region of thin lithospheric mantle at ca. 3.2 Ga, either within or at the edge of the protocratonic nucleus. This study demonstrates the applicability of detrital chromites in provenance studies, augmenting the record supplied by detrital zircons.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48840.1/598769/Detrital-chromites-reveal-Slave-craton-s-missing

Pityusa Patera, Mars: Structural analyses suggest a mega-caldera above a magma chamber at the crust-mantle interface
Hannes Bernhardt; David A. Williams

Abstract: Pityusa Patera is the southernmost of four paterae in the 1.2 × 106 km2 wrinkle-ridged plains-dominated Malea Planum region of Mars. Based on their texture, morphology, and uniqueness to Pityusa Patera, we interpret layered, folded massifs as pyroclastic deposits emplaced during patera formation as a collapse caldera. Such deposits would not be expected in a previously suggested scenario of patera formation by subsidence from lithospheric loading. Our structural measurements and modeling indicate that the folding and high relief of the massifs resulted from ~1.3%-6.9% of shortening, which we show to be a reasonable value for a central plug sagging down into an assumed piston-type caldera. According to a previously published axisymmetric finite-element model, the extent of shortening structures on a caldera floor relative to its total diameter is controlled by the roof depth of the collapsed magma chamber beneath it, which would imply Pityusa Patera formed above a chamber at 57.5-69 km depth. We interpret this value to indicate a magma chamber at the crust-mantle interface, which is in agreement with crust-penetrating ring fractures and mantle flows expected from the formation of the Hellas basin. As such, the folded massifs in Pityusa Patera, which are partially superposed by ca. 3.8 Ga wrinkle-ridged plains, should consist of primordial mantle material, a theory that might be assessed by future hyperspectral observations. In conclusion, we do not favor a formation by load-induced lithospheric subsidence but suggest Pityusa Patera to be one of the oldest extant volcanic landforms on Mars and one of the largest calderas in the solar system, which makes the folded, likely mantle-derived deposits on its floor a prime target for future exploration.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48903.1/598740/Pityusa-Patera-Mars-Structural-analyses-suggest-a

Evidence for iron-rich sulfate melt during magnetite(-apatite) mineralization at El Laco, Chile
Wyatt M. Bain; Matthew Steele-MacInnis; Fernando Tornos; John M. Hanchar; Emily C. Creaser ...

Abstract: The origins of Kiruna-type magnetite(-apatite) [Mt(-Ap)] deposits are contentious, with existing models ranging from purely hydrothermal to orthomagmatic end members. Here, we evaluate the compositions of fluids that formed the classic yet enigmatic Mt(-Ap) deposit at El Laco, northern Chile. We report evidence that ore-stage minerals crystallized from an Fe-rich (6-17 wt% Fe) sulfate melt. We suggest that a major component of the liquid was derived from assimilation of evaporite-bearing sedimentary rocks during emplacement of andesitic magma at depth. Hence, we argue that assimilation of evaporite-bearing sedimentary strata played a key role in the formation of El Laco and likely Mt(-Ap) deposits elsewhere.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48861.1/598741/Evidence-for-iron-rich-sulfate-melt-during

Surface-wave tomography of the Emeishan large igneous province (China): Magma storage system, hidden hotspot track, and its impact on the Capitanian mass extinction
Yiduo Liu; Lun Li; Jolante van Wijk; Aibing Li; Yuanyuan V. Fu

Abstract: Large igneous provinces (LIPs) are commonly associated with mass extinctions. However, the precise relations between LIPs and their impacts on biodiversity is enigmatic, given that they can be asynchronous. It has been proposed that the environmental impacts are primarily related to sill emplacement. Therefore, the structure of LIPs' magma storage system is critical because it dictates the occurrence and timing of mass extinction. We use surface-wave tomography to image the lithosphere under the Permian Emeishan large igneous province (ELIP) in southwestern China. We find a northeast-trending zone of high shear-wave velocity (Vs) and negative radial anisotropy (Vsv > Vsh; v and h are vertically and horizontally polarized S waves, respectively) in the crust and lithosphere. We rule out the possibilities of rifting or orogenesis to explain these seismic characteristics and interpret the seismic anomaly as a mafic-ultramafic, dike-dominated magma storage system of the ELIP. We further propose that the anomaly represents a hidden hotspot track that was emplaced before the ELIP eruption. A zone of higher velocity but less-negative radial anisotropy, on the hotspot track but to the northeast of the eruption center in the Panxi region, reflects an elevated proportion of sills emplaced at the incipient stage of the ELIP. Liberation of poisonous gases by the early sill intrusions explains why the mid-Capitanian global biota crisis preceded the peak ELIP eruption by 2-3 m.y.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G49055.1/598742/Surface-wave-tomography-of-the-Emeishan-large

Diverse marine fish assemblages inhabited the paleotropics during the Paleocene-Eocene thermal maximum
Sanaa El-Sayed; Matt Friedman; Tarek Anan; Mahmoud A. Faris; Hesham Sallam

Abstract: The Paleocene-Eocene thermal maximum (PETM) was a short interval (120-220 k.y.) of elevated global temperatures, but it is important for understanding biotic responses to climatic warming. Consequences of the PETM for marine fishes remain unclear, despite evidence that they might have been particularly vulnerable to increasing temperatures. Part of this uncertainty reflects a lack of data on marine fishes across a range of latitudes at the time. We report a new paleotropical (~12°N paleolatitude) fish fauna from the Dababiya Quarry Member of Egypt dating to the PETM. This assemblage--Ras Gharib A--is a snapshot of a time when tropical sea-surface temperatures approached limits lethal for many modern fishes. Despite extreme conditions, the Ras Gharib A fauna is compositionally similar to well-known, midlatitude Lagerstätten from the PETM or later in the Eocene. The Ras Gharib A fauna shows that diverse fish communities thrived in the paleotropics during the PETM, that these assemblages shared elements with coeval assemblages at higher latitudes, and that some taxa had broad latitudinal ranges substantially exceeding those found during cooler intervals.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48549.1/598743/Diverse-marine-fish-assemblages-inhabited-the

Facies control on carbonate δ13C on the Great Bahama Bank
Emily C. Geyman; Adam C. Maloof

Abstract: The carbon isotopic (δ13C) composition of shallow-water carbonates often is interpreted to reflect the δ13C of the global ocean and is used as a proxy for changes in the global carbon cycle. However, local platform processes, in addition to meteoric and marine diagenesis, may decouple carbonate δ13C from that of the global ocean. We present new δ13C measurements of benthic foraminifera, solitary corals, calcifying green algae, ooids, coated grains, and lime mud from the modern Great Bahama Bank. We find that vital effects, cross-shelf seawater chemistry gradients, and meteoric diagenesis produce carbonate with δ13C variability rivaling that of the past two billion years of Earth history. Leveraging Walther's Law, we illustrate how these local δ13C signals can find their way into the stratigraphic record of bulk carbonate.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48862.1/598744/Facies-control-on-carbonate-13C-on-the-Great

Quantifying bankfull flow width using preserved bar clinoforms from fluvial strata
Evan Greenberg; Vamsi Ganti; Elizabeth Hajek

Abstract: Reconstruction of active channel geometry from fluvial strata is critical to constrain the water and sediment fluxes in ancient terrestrial landscapes. Robust methods--grounded in extensive field observations, numerical simulations, and physical experiments--exist for estimating the bankfull flow depth and channel-bed slope from preserved deposits; however, we lack similar tools to quantify bankfull channel widths. We combined high-resolution lidar data from 134 meander bends across 11 rivers that span over two orders of magnitude in size to develop a robust, empirical relation between the bankfull channel width and channel-bar clinoform width (relict stratigraphic surfaces of bank-attached channel bars). We parameterized the bar cross-sectional shape using a two-parameter sigmoid, defining bar width as the cross-stream distance between 95% of the asymptotes of the fit sigmoid. We combined this objective definition of the bar width with Bayesian linear regression analysis to show that the measured bankfull flow width is 2.34 ± 0.13 times the channel-bar width. We validated our model using field measurements of channel-bar and bankfull flow widths of meandering rivers that span all climate zones (R2 = 0.79) and concurrent measurements of channel-bar clinoform width and mud-plug width in fluvial strata (R2 = 0.80). We also show that the transverse bed slopes of bars are inversely correlated with bend curvature, consistent with theory. Results provide a simple, usable metric to derive paleochannel width from preserved bar clinoforms.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48729.1/598745/Quantifying-bankfull-flow-width-using-preserved

Revisiting stepwise ocean oxygenation with authigenic barium enrichments in marine mudrocks
Guang-Yi Wei; Hong-Fei Ling; Graham A. Shields; Simon V. Hohl; Tao Yang ...

Abstract: There are current debates around the extent of global ocean oxygenation, particularly from the late Neoproterozoic to the early Paleozoic, based on analyses of various geochemical indices. We present a temporal trend in excess barium (Baexcess) contents in marine organic-rich mudrocks (ORMs) to provide an independent constraint on global ocean redox evolution. The absence of remarkable Baexcess enrichments in Precambrian (>ca. 541 Ma) ORMs suggests limited authigenic Ba formation in oxygen- and sulfate-deficient oceans. By contrast, in the Paleozoic, particularly the early Cambrian, ORMs are marked by significant Baexcess enrichments, corresponding to substantial increases in the marine sulfate reservoir and oxygenation level. Analogous to modern sediments, the Mesozoic and Cenozoic ORMs exhibit no prominent Baexcess enrichments. We suggest that variations in Baexcess concentrations of ORMs through time are linked to secular changes in the marine dissolved Ba reservoir associated with elevated marine sulfate levels and global ocean oxygenation. Further, unlike Mo, U, and Re abundances, significant Baexcess enrichments in ORMs indicate that the overall ocean oxygenation level in the early Paleozoic was substantially lower than at present.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48825.1/598746/Revisiting-stepwise-ocean-oxygenation-with

Widespread glacial erosion on the Scandinavian passive margin
Vivi K. Pedersen; Åsne Rosseland Knutsen; Gustav Pallisgaard-Olesen; Jane Lund Andersen; Robert Moucha ...

Abstract: The topography in Scandinavia features enigmatic high-elevation low-relief plateau regions dissected by deep valleys and fjords. These plateau regions have long been interpreted as relict landforms of a preglacial origin, whereas recent studies suggest they have been modified significantly by glacial and periglacial denudation. We used late Pliocene-Quaternary source-to-sink analyses to untangle this scientific conundrum. We compared glacier-derived offshore sediment volumes with estimates of erosion in onshore valleys and fjords and on the inner shelf. Our results suggest that onshore valley and fjord erosion falls 61%-66% short of the offshore sink volume. Erosion on the inner shelf cannot accommodate this mismatch, implying that the entire Scandinavian landscape and adjacent shelf have experienced significant glacial erosion.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48836.1/598226/Widespread-glacial-erosion-on-the-Scandinavian

Clay minerals modulate early carbonate diagenesis
N. Tanner Mills; Julia S. Reece; Michael M. Tice

Abstract: Early diagenetic precipitation of authigenic carbonate has been a globally significant carbon sink throughout Earth history. In particular, SO42- and Fe3+ reduction and CH4 production create conditions in pore fluids that promote carbonate mineral precipitation; however, these conditions may be modified by the presence of acid-base buffers such as clay minerals. We integrated the acid-base properties of clay minerals into a biogeochemical model that predicts the evolution of pore-water pH and carbonate mineral saturation during O2, Fe3+, and SO42- reduction and CH4 production. Key model inputs were obtained using two natural clay mineral-rich sediments from the Integrated Ocean Drilling Program as well as from literature. We found that clay minerals can enhance carbonate mineral saturation during O2 and SO42- reduction and moderate saturation during Fe3+ reduction and CH4 production if the pore-fluid pH and clay mineral pKa values are within ~2 log units of one another. We therefore suggest that clay minerals could significantly modify the environmental conditions and settings in which early diagenetic carbonate precipitation occurs. In Phanerozoic marine sediments--where O2 and SO42- have been the main oxidants of marine sedimentary organic carbon--clay minerals have likely inhibited carbonate dissolution and promoted precipitation of authigenic carbonate.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48713.1/598227/Clay-minerals-modulate-early-carbonate-diagenesis

The interfacial energy penalty to crystal growth close to equilibrium
Fred Gaidies; Freya R. George

Abstract: Understanding the origin of rock microstructure is critical for refining models of the geodynamics of the Earth. We use the geometry of compositional growth zoning of a population of garnet porphyroblasts in a mica schist to gain quantitative insight into (1) the relative growth rates of individual crystals, (2) the departure from equilibrium during their growth, and (3) the mobility of the porphyroblast-matrix interface. The driving force for garnet growth in the studied sample was exceedingly small and is comparable in magnitude to the interfacial energy associated with the garnet-matrix interface. This resulted in size-dependent garnet growth at macroscopic length scales, with a decrease in radial growth rates for smaller crystals caused by the penalty effect of the interfacial energy. The difference in growth rate between the largest and the smallest crystal is ~45%, and the interface mobility for garnet growth from ~535 °C, 480 MPa to 565 °C, 560 MPa in the phyllosilicate-dominated rock matrix ranged between ~10^-19 and 10^-20 m^4 J^-1 s^-1. This is the first estimation of interface mobility in natural rock samples. In addition to the complex structural and chemical reorganization associated with the formation of dodecahedral coordination polyhedra in garnet, the presence of abundant graphite may have exerted drag on the garnet-matrix interface, further decreasing its mobility.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48715.1/598228/The-interfacial-energy-penalty-to-crystal-growth

A hidden Rodinian lithospheric keel beneath Zealandia, Earth's newly recognized continent
R.E. Turnbull; J.J. Schwartz; M.L. Fiorentini; R. Jongens; N.J. Evans ...

Abstract: We present a data set of >1500 in situ O-Hf-U-Pb zircon isotope analyses that document the existence of a concealed Rodinian lithospheric keel beneath continental Zealandia. The new data reveal the presence of a distinct isotopic domain of Paleozoic-Mesozoic plutonic rocks that contain zircon characterized by anomalously low δ18O values (median = +4.1‰) and radiogenic εHf(t) (median = +6.1). The scale (>10,000 km2) and time span (>>250 m.y.) over which plutonic rocks with this anomalously low-δ18O signature were emplaced appear unique in a global context, especially for magmas generated and emplaced along a continental margin. Calculated crustal-residence ages (depleted mantle model, TDM) for this low-δ18O isotope domain range from 1300 to 500 Ma and are interpreted to represent melting of a Precambrian lithospheric keel that was formed and subsequently hydrothermally altered during Rodinian assembly and rifting. Recognition of a concealed Precambrian lithosphere beneath Zealandia and the uniqueness of the pervasive low-δ18O isotope domain link Zealandia to South China, providing a novel test of specific hypotheses of continental block arrangements within Rodinia.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48711.1/598229/A-hidden-Rodinian-lithospheric-keel-beneath

Immediate temperature response in northern Iberia to last deglacial changes in the North Atlantic
J.L. Bernal-Wormull; A. Moreno; C. Pérez-Mejías; M. Bartolomé; A. Aranburu ...

Abstract: Major disruptions in the North Atlantic circulation during the last deglaciation triggered a series of climate feedbacks that influenced the course of Termination I, suggesting an almost synchronous response in the ocean-atmosphere system. We present a replicated δ18O stalagmite record from Ostolo cave in the northern Iberian Peninsula with a robust chronological framework that continuously covers the last deglaciation (18.5-10.5 kyr B.P.). The Ostolo δ18O record, unlike other speleothem records in the region that were related to humidity changes, closely tracks the well-known high-latitude temperature evolution, offering important insights into the structure of the last deglaciation in the Northern Hemisphere. In addition, this new record is accompanied by a clear signal of the expected cooling events associated with the deglacial disruptions in North Atlantic deep convection during Heinrich event 1.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48660.1/598230/Immediate-temperature-response-in-northern-Iberia

Credit: 
Geological Society of America

Newly identified atmospheric circulation enhances heatwaves and wildfires around the Arctic

image: The relationships among CAW, heatwaves, wildfires, and pollution. Anomalous anticyclones characterize the atmospheric circulation that develops concomitantly over the three remote regions around the summertime Arctic (July and August). The authors named it the circum-Arctic wave (CAW) pattern. These anticyclones induce warm and dry conditions from the surface to the mid-troposphere. The CAW can drive heatwaves and wildfires; wildfire smoke also emits aerosols that increase PM2.5 in and around the Arctic (Teppei J. Yasunari, et al. Environmental Research Letters. May 17, 2021).

Image: 
Teppei J. Yasunari, et al. Environmental Research Letters. May 17, 2021.

Scientists have uncovered a summertime climate pattern in and around the Arctic that could drive the co-occurrence of European heatwaves and large-scale, air-polluting wildfires over Siberia and subpolar North America.

In recent summers, Europe has repeatedly experienced extremely high temperatures and heatwaves, while active wildfires in and around the Arctic, such as in Siberia and subpolar North America (Alaska and Canada), have caused widespread air pollution. In July 2019, for instance, satellites detected significant Alaskan wildfires. These unusual climate phenomena are of immense concern to many people living in these regions.

A team of scientists from Japan, South Korea, and the USA, including Hokkaido University's Assistant Professor Teppei J. Yasunari, have revealed relationships among wildfires, aerosols (air pollution), and climate patterns in and around the Arctic. They have published their discoveries in the journal Environmental Research Letters. Involved in this study were Professor Hisashi Nakamura, The University of Tokyo, Japan; Dr. Nakbin Choi and Professor Myong-In Lee, Ulsan National Institute of Science and Technology, Republic of Korea; and Professor Yoshihiro Tachibana, Mie University, Japan, and two scientists from the Goddard Space Flight Center, National Aeronautics and Space Administration (NASA), USA.

"Wildfires lead to extensive air pollution, primarily in the form of inhalable particulate matter with diameters of 2.5 micrometers or smaller (PM2.5). Arctic hazes during winter and spring are typical phenomena due to aerosols existing in the Arctic. In our scientific field, it is also known that deposition of light-absorbing aerosols onto snow surfaces can induce the so-called snow darkening effect, contributing to accelerated snow melting. For these reasons, long-term assessments of PM2.5 and aerosols in the Arctic and surrounding regions are required," said Yasunari.

For their investigations, the scientists used the MERRA-2 (Modern-Era Retrospective analysis for Research and Applications, version 2) dataset and satellite-based fire data, both produced by NASA, focusing on the period from 2003 to 2017. They assessed overall air pollution (i.e., PM2.5) in the Arctic across these 15 years, seeking to clarify the relationships among variations in PM2.5 and aerosols, wildfires, and the relevant climate patterns.

"We found 13 out of the 20 months with highest PM2.5 in the Arctic during the 15 year period were in summer. The elevated PM2.5 levels were highly correlated with relatively higher organic carbon aerosol concentrations, implying active wildfires. We concluded that the summertime wildfires contributed to those months with exceptionally high PM2.5 in the Arctic. In those months, the wildfires likely occurred under extremely warm and dry conditions. Those were due to concomitantly persistent or developed high-pressure systems over Europe, Siberia, and subpolar North America, namely, Alaska and Canada," explained Yasunari.

The scientists named this climate (atmospheric circulation) pattern the circum-Arctic wave (CAW) pattern and identified it as a driver enhancing the co-occurrence of heatwaves in Europe and wildfires in Siberia and subpolar North America. In fact, a CAW-like pattern also appeared in the early summer of 2019, outside the period covered by the MERRA-2 analyses.

Credit: 
Hokkaido University

New 'Swiss Army knife' cleans up water pollution

image: Co-authors Vinayak Dravid and Stephanie Ribet examine their phosphate elimination and recovery substrate

Image: 
Northwestern University

Phosphate pollution in rivers, lakes and other waterways has reached dangerous levels, causing algae blooms that starve fish and aquatic plants of oxygen. Meanwhile, farmers worldwide are coming to terms with a dwindling reserve of phosphate fertilizers that feed half the world's food supply.

Inspired by Chicago's many nearby bodies of water, a Northwestern University-led team has developed a way to repeatedly remove and reuse phosphate from polluted waters. The researchers liken the development to a "Swiss Army knife" for pollution remediation because they can tailor their membrane to absorb, and later release, other pollutants.

The research will be published during the week of May 31 in the Proceedings of the National Academy of Sciences.

Phosphorus underpins both the world's food system and all life on Earth. Every living organism on the planet requires it: phosphorus is in cell membranes, in the scaffolding of DNA and in our skeletons. Though other key elements like oxygen and nitrogen can be found in the atmosphere, phosphorus has no such atmospheric source. The small fraction of usable phosphorus comes from the Earth's crust, which takes thousands or even millions of years to weather away. And our mines are running out.

A 2021 article in The Atlantic by Julia Rosen cited Isaac Asimov's 1939 essay, in which the American writer and chemist dubbed phosphorus "life's bottleneck."

Given the shortage of this non-renewable natural resource, it is sadly ironic that many of our lakes are suffering from a process known as eutrophication, which occurs when too many nutrients enter a natural water source. As phosphate and other minerals build up, aquatic vegetation and algae become too dense, depleting oxygen from water and ultimately killing aquatic life.

"We used to reuse phosphate a lot more," said Stephanie Ribet, the paper's first author. "Now we just pull it out of the ground, use it once and flush it away into water sources after use. So, it's a pollution problem, a sustainability problem and a circular economy problem."

Ecologists and engineers have traditionally addressed the mounting environmental and public health concerns around phosphate by eliminating it from water sources. Only recently has the emphasis shifted from removing phosphate to recovering it.

"One can always do certain things in a laboratory setting," said Vinayak Dravid, the study's corresponding author. "But there's a Venn Diagram when it comes to scaling up, where you need to be able to scale the technology, you want it to be effective and you want it to be affordable. There was nothing in that intersection of the three before, but our sponge seems to be a platform that meets all these criteria."

Dravid is the Abraham Harris Professor of Materials Science and Engineering at Northwestern's McCormick School of Engineering, the founding director of the Northwestern University Atomic and Nanoscale Characterization Experimental Center (NUANCE), and director of the Soft and Hybrid Nanotechnology Experimental Resource (SHyNE). Dravid also serves as the director of global initiatives for Northwestern's International Institute of Nanotechnology. Ribet is a Ph.D. student in Dravid's lab and the paper's first author.

The team's Phosphate Elimination and Recovery Lightweight (PEARL) membrane is a porous, flexible substrate (such as a coated sponge, cloth or fibers) that selectively sequesters up to 99% of phosphate ions from polluted water. Coated with nanostructures that bind to phosphate, the PEARL membrane can be tuned by controlling the pH to either absorb or release nutrients, allowing phosphate recovery and reuse of the membrane over many cycles.

Current methods to remove phosphate rely on complex, lengthy, multi-step processes. Most of them do not also recover the phosphate during removal, and they ultimately generate a great deal of physical waste. The PEARL membrane provides a simple one-step process that removes phosphate and also efficiently recovers it. It is also reusable and generates no physical waste.

Using samples from Chicago's Water Reclamation District, the researchers tested their theory with the added complexity of real water samples.

"We often call this a 'nanoscale solution to a gigaton problem,'" Dravid said. "In many ways the nanoscale interactions that we study have implications for macrolevel remediation."

The team has demonstrated that the sponge-based approach is effective at scales ranging from milligrams to kilograms, suggesting promise in scaling even further.

This research builds on a previous development from the same team, the OHM (oleophilic hydrophobic multifunctional) sponge, which used the same sponge platform to selectively remove and recover oil from contaminated water; Vikas Nandwana, a member of the Dravid group and a co-author of the present study, was the first author of that work. By modifying the nanomaterial coating on the membrane, the team next plans to use their "plug-and-play"-like framework to go after heavy metals. Ribet also said multiple pollutants could be addressed at once by applying multiple materials with tailored affinities.

"This water remediation challenge hits so close to home," Ribet said. "The western basin of Lake Erie is one of the main areas you think of when it comes to eutrophication, and I was inspired by learning more about the water remediation challenges in our Great Lakes neighborhood."

Credit: 
Northwestern University

Using fossil plant molecules to track down the Green Sahara

image: The samples studied come from a core recovered by researchers in Morocco's Lake Tislit.

Image: 
Photo: Rachid Cheddadi, University of Montpellier

The Sahara has not always been covered by only sand and rocks. During the period from 14,500 to 5,000 years ago, large areas of North Africa were more heavily populated, and where there is desert today the land was green with vegetation. This is evidenced by various sites with rock paintings showing not only giraffes and crocodiles, but even people swimming in the "Cave of Swimmers". This period is known as the Green Sahara or African Humid Period. Until now, researchers have assumed that the necessary rain was brought from the tropics by an enhanced summer monsoon. The northward shift of the monsoon was attributed to the slow wobble (precession) of Earth's tilted axis, which produces higher levels of solar radiation over North Africa approximately every 25,000 years. However, climate models have not been able to simulate plant growth sufficient to create a Green Sahara with rainfall from the summer monsoon alone. Scientists are convinced that permanent vegetation at that time in North Africa cannot be explained by a single rainy season each year.

Dr. Enno Schefuß of MARUM and Dr. Rachid Cheddadi of the University of Montpellier (France), together with an international team of researchers, have analyzed pollen and leaf waxes extracted from a sediment core in order to reconstruct past vegetation cover and rainfall amounts. The core was retrieved from Lake Tislit in the High Atlas Mountains of Morocco. Fossil components of plants, such as pollen, and refractory plant molecules are deposited in lakes just as they are in marine sediments. These make it possible to identify the types of vegetation and the climate conditions of the past.

"Our results are very clear," explains Enno Schefuß, "While the leaf waxes indicate increased rainfall during the African Humid Period, the pollen explicitly reveal that the vegetation was Mediterranean, not subtropical or even tropical." Mediterranean plants can tolerate arid conditions in the summer as long as they receive sufficient rain in the winter. "This strongly suggests that the monsoon reconstructions of previous studies need to be reconsidered."

Based on these findings, Schefuß and his colleagues have developed a new concept to explain the Green Sahara. During the period of the Green Sahara, as the monsoon was intensifying and moving northward in the summer, there must have been a southward shift of the belt of westerlies in the winter that brought winter precipitation to North Africa. The team subsequently tested their past climate reconstructions from the Tislit record using a mechanistic vegetation model. "We have winter rain on the northern margin of the Sahara, the monsoon on the southern margin, and between the two areas an overlap of the two rain systems which provides rains there during both summer and winter, albeit rather sparsely," explains Rachid Cheddadi. The vegetation model simulations clearly showed that a Green Sahara was formed under this climate scenario. A continuous vegetation cover could only form with precipitation in two seasons; the plants would not survive a long dry phase after a short rainy period.

Schefuß and his colleagues describe their results as a paradigm shift in climate research on the cause of the Green Sahara. The implications include not only a better understanding of past climate conditions, but also improved predictions of future climate and vegetation trends in the region, as well as a contribution to archaeological studies of settlement patterns and migration routes.

A planned expedition using the Research Vessel METEOR to retrieve additional high-resolution sediment archives from the near-coastal deposits off Morocco was postponed due to the COVID-19 pandemic. However, it will be rescheduled as soon as possible in order to further strengthen this research and the German-Moroccan cooperation.

Credit: 
MARUM - Center for Marine Environmental Sciences, University of Bremen

Medical AI models rely on 'shortcuts' that could lead to misdiagnosis of COVID-19

Artificial intelligence promises to be a powerful tool for improving the speed and accuracy of medical decision-making to improve patient outcomes. From diagnosing disease, to personalizing treatment, to predicting complications from surgery, AI could become as integral to patient care in the future as imaging and laboratory tests are today.

But as University of Washington researchers discovered, AI models -- like humans -- have a tendency to look for shortcuts. In the case of AI-assisted disease detection, these shortcuts could lead to diagnostic errors if deployed in clinical settings.

In a new paper published May 31 in Nature Machine Intelligence, UW researchers examined multiple models recently put forward as potential tools for accurately detecting COVID-19 from chest radiography, otherwise known as chest X-rays. The team found that, rather than learning genuine medical pathology, these models rely instead on shortcut learning to draw spurious associations between medically irrelevant factors and disease status. Here, the models ignored clinically significant indicators and relied instead on characteristics such as text markers or patient positioning that were specific to each dataset to predict whether someone had COVID-19.

"A physician would generally expect a finding of COVID-19 from an X-ray to be based on specific patterns in the image that reflect disease processes," said co-lead author Alex DeGrave, who is pursuing his doctorate in the Paul G. Allen School of Computer Science & Engineering and a medical degree as part of the UW's Medical Scientist Training Program. "But rather than relying on those patterns, a system using shortcut learning might, for example, judge that someone is elderly and thus infer that they are more likely to have the disease because it is more common in older patients. The shortcut is not wrong per se, but the association is unexpected and not transparent. And that could lead to an inappropriate diagnosis."

Shortcut learning is less robust than genuine medical pathology and usually means the model will not generalize well outside of the original setting, the team said.

"A model that relies on shortcuts will often only work in the hospital in which it was developed, so when you take the system to a new hospital, it fails -- and that failure can point doctors toward the wrong diagnosis and improper treatment," DeGrave said.

Combine that lack of robustness with the typical opacity of AI decision-making, and such a tool could go from a potential life-saver to a liability.

The lack of transparency is one of the factors that led the team to focus on explainable AI techniques for medicine and science. Most AI is regarded as a "black box" -- the model is trained on massive datasets and it spits out predictions without anyone knowing precisely how the model came up with a given result. With explainable AI, researchers and practitioners are able to understand, in detail, how various inputs and their weights contributed to a model's output.

The team used these same techniques to evaluate the trustworthiness of models recently touted for appearing to accurately identify cases of COVID-19 from chest X-rays. Despite a number of published papers heralding the results, the researchers suspected that something else may have been happening inside the black box that led to the models' predictions.

Specifically, the team reasoned that these models would be prone to a condition known as "worst-case confounding," owing to the lack of training data available for such a new disease. This scenario increased the likelihood that the models would rely on shortcuts rather than learning the underlying pathology of the disease from the training data.

"Worst-case confounding is what allows an AI system to just learn to recognize datasets instead of learning any true disease pathology," said co-lead author Joseph Janizek, who is also a doctoral student in the Allen School and earning a medical degree at the UW. "It's what happens when all of the COVID-19 positive cases come from a single dataset while all of the negative cases are in another. And while researchers have come up with techniques to mitigate associations like this in cases where those associations are less severe, these techniques don't work in situations where you have a perfect association between an outcome such as COVID-19 status and a factor like the data source."

The team trained multiple deep convolutional neural networks on X-ray images from a dataset that replicated the approach used in the published papers. First they tested each model's performance on an internal set of images from that initial dataset that had been withheld from the training data. Then the researchers tested how well the models performed on a second, external dataset meant to represent new hospital systems.

While the models maintained their high performance when tested on images from the internal dataset, their accuracy was reduced by half on the second set. The researchers referred to this as a "generalization gap" and cited it as strong evidence that confounding factors were responsible for the models' predictive success on the initial dataset.
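
To make the idea concrete, here is a minimal, hypothetical PyTorch-style sketch of how such a generalization gap can be quantified; it uses plain accuracy as a stand-in metric and generic data loaders, and it is not a reproduction of the study's evaluation protocol.

import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    """Fraction of correct predictions over a DataLoader yielding (image, label) batches."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        preds = model(x).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

def generalization_gap(model, internal_loader, external_loader, device="cpu"):
    """Accuracy on held-out images from the training institution(s) minus accuracy on an
    unseen external hospital system; a large positive gap is the signature of shortcut
    learning / worst-case confounding described in the article."""
    acc_internal = accuracy(model, internal_loader, device)
    acc_external = accuracy(model, external_loader, device)
    return acc_internal, acc_external, acc_internal - acc_external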

The team then applied explainable AI techniques, including generative adversarial networks and saliency maps, to identify which image features were most important in determining the models' predictions.
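
For readers unfamiliar with these tools, the snippet below shows one of the simplest members of that family, a vanilla gradient saliency map; it is a generic illustration, not the authors' specific implementation (which also included generative adversarial networks).

import torch

def vanilla_saliency(model, image, target_class):
    """Gradient of the target-class score with respect to the input pixels; large
    magnitudes mark the image regions the prediction is most sensitive to
    (e.g., lung fields versus text markers or image borders)."""
    model.eval()
    x = image.clone().detach().unsqueeze(0).requires_grad_(True)  # add batch dimension
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().max(dim=1).values.squeeze(0)              # max over channels -> H x W map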

The researchers trained the models on a second dataset, which contained positive and negative COVID-19 cases drawn from similar sources, and was therefore presumed to be less prone to confounding. But even those models exhibited a corresponding drop in performance when tested on external data.

These results upend the conventional wisdom that confounding poses less of an issue when datasets are derived from similar sources. They also reveal the extent to which high-performance medical AI systems could exploit undesirable shortcuts rather than the desired signals.

"My team and I are still optimistic about the clinical viability of AI for medical imaging. I believe we will eventually have reliable ways to prevent AI from learning shortcuts, but it's going to take some more work to get there," said senior author Su-In Lee, a professor in the Allen School. "Going forward, explainable AI is going to be an essential tool for ensuring these models can be used safely and effectively to augment medical decision-making and achieve better outcomes for patients."

Despite the concerns raised by the team's findings, it is unlikely that the models the team studied have been deployed widely in the clinical setting, DeGrave said. While there is evidence that at least one of the faulty models - COVID-Net - was deployed in multiple hospitals, it is unclear whether it was used for clinical purposes or solely for research.

"Complete information about where and how these models have been deployed is unavailable, but it's safe to assume that clinical use of these models is rare or nonexistent," DeGrave said. "Most of the time, healthcare providers diagnose COVID-19 using a laboratory test, PCR, rather than relying on chest radiographs. And hospitals are averse to liability, making it even less likely that they would rely on a relatively untested AI system."

Researchers looking to apply AI to disease detection will need to revamp their approach before such models can be used to make actual treatment decisions for patients, Janizek said.

"Our findings point to the importance of applying explainable AI techniques to rigorously audit medical AI systems," Janizek said. "If you look at a handful of X-rays, the AI system might appear to behave well. Problems only become clear once you look at many images. Until we have methods to more efficiently audit these systems using a greater sample size, a more systematic application of explainable AI could help researchers avoid some of the pitfalls we identified with the COVID-19 models."

This group has already demonstrated the value of explainable AI for a range of medical applications beyond imaging. These include tools for assessing patient risk factors for complications during surgery and targeting cancer therapies based on an individual's molecular profile.

Credit: 
University of Washington

Scientists discover a new genetic form of ALS in children

image: NIH researchers discovered a new form of ALS that begins in childhood. The study linked the disease to a gene called SPTLC1. As part of the study, NIH senior scientist Carsten Bönnemann, M.D., (right) examined Claudia Digregorio (left), a patient from the Apulia region of Italy.

Image: 
Courtesy of the NIH/NINDS.

In a study of 11 medical-mystery patients, an international team of researchers led by scientists at the National Institutes of Health and the Uniformed Services University (USU) discovered a new and unique form of amyotrophic lateral sclerosis (ALS). Unlike most cases of ALS, the disease began attacking these patients during childhood, worsened more slowly than usual, and was linked to a gene, called SPTLC1, that is part of the body's fat production system. Preliminary results suggested that genetically silencing SPTLC1 activity would be an effective strategy for combating this type of ALS.

"ALS is a paralyzing and often fatal disease that usually affects middle-aged people. We found that a genetic form of the disease can also threaten children. Our results show for the first time that ALS can be caused by changes in the way the body metabolizes lipids," said Carsten Bönnemann, M.D., senior investigator at the NIH's National Institute of Neurological Disorders and Stroke (NINDS) and a senior author of the study published in Nature Medicine. "We hope these results will help doctors recognize this new form of ALS and lead to the development of treatments that will improve the lives of these children and young adults. We also hope that our results may provide new clues to understanding and treating other forms of the disease."

Dr. Bönnemann leads a team of researchers that uses advanced genetic techniques to solve some of the most mysterious childhood neurological disorders around the world. In this study, the team discovered that 11 of these cases had ALS that was linked to variations in the DNA sequence of SPTLC1, a gene responsible for manufacturing a diverse class of fats called sphingolipids.

In addition, the team worked with scientists in labs led by Teresa M. Dunn, Ph.D., professor and chair at USU, and Thorsten Hornemann, Ph.D., at the University of Zurich in Switzerland. Together they not only found clues as to how variations in the SPLTC1 gene lead to ALS but also developed a strategy for counteracting these problems.

The study began with Claudia Digregorio, a young woman from the Apulia region of Italy. Her case had been so vexing that Pope Francis imparted an in-person blessing on her at the Vatican before she left for the United States to be examined by Dr. Bönnemann's team at the NIH's Clinical Center.

Like many of the other patients, Claudia needed a wheelchair to move around and a surgically implanted tracheostomy tube to help with breathing. Neurological examinations by the team revealed that she and the others had many of the hallmarks of ALS, including severely weakened or paralyzed muscles. In addition, some patients' muscles showed signs of atrophy when examined under a microscope or with non-invasive scanners.

Nevertheless, this form of ALS appeared to be different. Most patients are diagnosed with ALS around 50 to 60 years of age. The disease then worsens so rapidly that patients typically die within three to five years of diagnosis. In contrast, initial symptoms, like toe walking and spasticity, appeared in these patients around four years of age. Moreover, by the end of the study, the patients had lived anywhere from five to 20 years longer.

"These young patients had many of the upper and lower motor neuron problems that are indicative of ALS," said Payam Mohassel, M.D., an NIH clinical research fellow and the lead author of the study. "What made these cases unique was the early age of onset and the slower progression of symptoms. This made us wonder what was underlying this distinct form of ALS."

The first clues came from analyzing the DNA of the patients. The researchers used next-generation genetic tools to read the patients' exomes, the sequences of DNA that hold the instructions for making proteins. They found that the patients had conspicuous changes in the same narrow portion of the SPTLC1 gene. Four of the patients inherited these changes from a parent. Meanwhile, the other six cases appeared to be the result of what scientists call "de novo" mutations in the gene. These types of mutations can occur spontaneously as cells rapidly multiply before or shortly after conception.

Mutations in SPTLC1 are also known to cause a different neurological disorder called hereditary sensory and autonomic neuropathy type 1 (HSAN1). The SPTLC1 protein is a subunit of an enzyme, called SPT, which catalyzes the first of several reactions needed to make sphingolipids. HSAN1 mutations cause the enzyme to produce atypical and harmful versions of sphingolipids.

At first, the team thought the ALS-causing mutations they discovered may produce similar problems. However, blood tests from the patients showed no signs of the harmful sphingolipids.

"At that point, we felt like we had hit a roadblock. We could not fully understand how the mutations seen in the ALS patients did not show the abnormalities expected from what was known about SPTLC1 mutations," said Dr. Bönnemann. "Fortunately, Dr. Dunn's team had some ideas."

For decades Dr. Dunn's team had studied the role of sphingolipids in health and disease. With the help of the Dunn team, the researchers reexamined blood samples from the ALS patients and discovered that the levels of typical sphingolipids were abnormally high. This suggested that the ALS mutations enhanced SPT activity.

Similar results were seen when the researchers programmed neurons grown in petri dishes to carry the ALS-causing mutations in SPTLC1. The mutation-carrying neurons produced higher levels of typical sphingolipids than control cells. This difference was enhanced when the neurons were fed the amino acid serine, a key ingredient in the SPT reaction.

Previous studies have suggested that serine supplementation may be an effective treatment for HSAN1. Based on their results, the authors of this study recommended avoiding serine supplementation when treating the ALS patients.

Next, Dr. Dunn's team performed a series of experiments which showed that the ALS-causing mutations prevent another protein called ORMDL from inhibiting SPT activity.

"Our results suggest that these ALS patients are essentially living without a brake on SPT activity. SPT is controlled by a feedback loop. When sphingolipid levels are high then ORMDL proteins bind to and slow down SPT. The mutations these patients carry essentially short circuit this feedback loop," said Dr. Dunn. "We thought that restoring this brake may be a good strategy for treating this type of ALS."

To test this idea, the Bönnemann team created small interfering strands of RNA designed to turn off the mutant SPTLC1 genes found in the patients. Experiments on the patients' skin cells showed that these RNA strands both reduced the levels of SPTLC1 gene activity and restored sphingosine levels to normal.

"These preliminary results suggest that we may be able to use a precision gene silencing strategy to treat patients with this type of ALS. In addition, we are also exploring other ways to step on the brake that slows SPT activity," said Dr. Bonnemann. "Our ultimate goal is to translate these ideas into effective treatments for our patients who currently have no therapeutic options."

Credit: 
NIH/National Institute of Neurological Disorders and Stroke

Genetic treasure trove for malaria researchers

image: A team of scientists at KAUST has sequenced the rodent-malaria parasite Plasmodium vinckei, which could help advance the development of malaria prevention and treatment strategies.

Image: 
© 2021 Morgan Bennett Smith

A new extensive genetic resource of rodent-infecting malaria parasites may help advance the development of malaria prevention and treatment strategies. This trove of genome and phenome information has been published by a team of KAUST researchers, along with colleagues in Japan, and the datasets have been made publicly available for malaria researchers.

Rodent malaria parasites are closely related to human parasites but are easier to study because they can be grown in laboratory mice. "Investigations on rodent malaria parasites have played a key role in revealing many aspects of fascinating biology across their life-cycle stages," says KAUST bioscientist Arnab Pain, who led the sequencing effort, in collaboration with Richard Culleton of Nagasaki University's Institute of Tropical Medicine.

Most research on these parasites to date has involved three specific species, but a fourth, called Plasmodium vinckei, hasn't received much attention.

Pain, Culleton and the team generated a comprehensive genetic resource for this species and also sequenced genomes of seven isolates belonging to two of the other species, P. yoelii and P. chabaudi.

Sequencing the genomes of ten isolates from five subspecies of P. vinckei from tropical Africa revealed that they have widely diverged from their common ancestor. The evolutionary pressures on each of the subspecies vary greatly according to the regions where they are mainly found.

The sequencing efforts clarified aspects of the evolutionary tree of rat malaria parasites and also led to the naming of three new subspecies: P. yoelii cameronensis, P. chabaudi esekanensis and P. vinckei baforti.

The research describes in detail genetic and phenotypic variations between the subspecies, which is likely to help studies that aim to understand the functions of malaria parasite genes.

The scientists were also able to genetically modify a subspecies of P. vinckei to carry a fluorescent protein. This demonstrates that investigations on gene function, which involve modifying or removing a target gene, can be conducted in this subspecies.

"We hope our resource will provide the research community with a diverse set of parasite models to play with. This resource can be put to use to identify genes that influence the malaria parasite's virulence, drug resistance and transmissibility in mosquitoes," says Abhinay Ramaprasad, the first author of this study, which he conducted during his Ph.D. at KAUST.

The resource has been well received by the research community: "What a rich set of resources," wrote Jane Carlton, Director of New York University's Center for Genomics & Systems Biology. "The development of a model system once took decades, but with the aid of next-generation sequencing ... and enhanced molecular biology techniques, Ramaprasad et al. have fast-tracked the establishment of P. vinckei as a useful additional experimental model for malaria."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Being born very preterm or very low birthweight is associated with continued lower IQ performance into adulthood

The average IQ of adults born very preterm or at very low birthweight was compared with that of adults born at term, using 8 longitudinal cohorts of people born in the 1970s to 1990s in 7 countries around the world

IQ was significantly lower for very preterm and very low birthweight adults than for those born at term, researchers from the University of Warwick have found

Action needs to be taken to ensure support is available for those born very preterm or very low birth weight

The average IQ of adults who were born very preterm (VP) or at a very low birth weight (VLBW) has been compared with that of adults born at full term by researchers from the Department of Psychology at the University of Warwick. The researchers found that VP/VLBW children may require special support in their education to boost their learning throughout childhood.

Birth before 32 weeks of gestation is classed as very preterm (VP) and those born weighing less than 1500g are classed as very low birthweight (VLBW).

Research has previously found that those who were born VP or VLBW had lower cognitive performance in childhood.

In the paper ‘Association of Very Preterm Birth or Very Low Birth Weight with Intelligence in Adulthood: An Individual Participant Meta-analysis’, published today, the 28th of May, in the journal JAMA Pediatrics, a consortium of researchers led by the Department of Psychology at the University of Warwick conducted an individual participant meta-analysis investigating IQ in adulthood.

Participants were 1068 VP/VLBW adults and 1067 term-born controls born between 1978 and 1995, drawn from 6 cohort studies in Europe and 2 from Australia and New Zealand, who had been studied from birth and had their IQ assessed in adulthood (at ages 18-30 years).

The average IQ score in the general population is 100. The researchers found that VP/VLBW individuals scored approximately 12 IQ points lower (i.e., around 88) than term-born adults (born at 37-41 weeks' gestation). Even when they removed those who had a childhood neurosensory impairment or learning disability (e.g., a childhood IQ score below 70), the adult IQ difference between VP/VLBW and term-born adults was still 9.8 IQ points on average.

The risk factors associated with lower IQ performance for VP/VLBW adults included severe neonatal lung problems (bronchopulmonary dysplasia), neonatal bleeding into the brain (intraventricular haemorrhage) and being born to mothers with lower levels of education.

Robert Eves, first author from the Department of Psychology at the University of Warwick, comments:

"We have found that being born very preterm or at a very low birthweight continues to have a highly significant long term impact on the average IQ as compared to their peers in 7 different countries. The multi cohort, international aspect of this research can especially give us confidence in this important finding"

Professor Dieter Wolke, senior author and project lead from the Department of Psychology at the University of Warwick adds: "While most born VP/VLBW show cognitive development within the normal range, many may benefit from better tailored early interventions. These may include reducing bronchopulmonary dysplasia and intraventricular haemorrhage in neonatal care and educational interventions of those born into socially disadvantaged families."

Credit: 
University of Warwick

Declining biodiversity in wild Amazon fisheries threatens human diet

image: Landing a catch along the Ucayali River in the Loreto department of the Peruvian Amazon. The boy is holding a boquichico, a commonly consumed species. (All photos: Sebastian Heilpern)

Image: 
Sebastian Heilpern

A new study of dozens of wild fish species commonly consumed in the Peruvian Amazon says that people there could suffer major nutritional shortages if ongoing losses in fish biodiversity continue. Furthermore, the increasing use of aquaculture and other substitutes may not compensate. The research has implications far beyond the Amazon, since the diversity and abundance of wild-harvested foods is declining in rivers and lakes globally, as well as on land. Some 2 billion people globally depend on non-cultivated foods; inland fisheries alone employ some 60 million people, and provide the primary source of protein for some 200 million. The study appears this week in the journal Science Advances.

The authors studied the vast, rural Loreto department of the Peruvian Amazon, where most of the 800,000 inhabitants eat fish at least once a day, or an average of about 52 kilograms (115 pounds) per year. This is their primary source not only of protein, but fatty acids and essential trace minerals including iron, zinc and calcium. Unfortunately, it is not enough; a quarter of all children are malnourished or stunted, and more than a fifth of women of child-bearing age are iron deficient.

Threats to Amazon fisheries, long a mainstay for both indigenous people and modern development, are legion: new hydropower dams that pen in big migratory fish (some travel thousands of miles from Andes headwaters to the Atlantic estuary and back); soil erosion into rivers from deforestation; toxic runoff from gold mines; and over-exploitation by fishermen themselves, who are struggling to feed fast-growing populations. In Loreto, catch tonnages are stagnating; some large migratory species are already on the decline, and others may be on the way. It is the same elsewhere; globally, a third of freshwater fish species are threatened with extinction, and 80 are already known to be extinct, according to the World Wildlife Fund.

Different species of animals and plants contain different ratios of nutrients, so biodiversity is key to adequate human nutrition, say the researchers. "If fish decline, the quality of the diet will decline," said the study's senior coauthor, Shahid Naeem, director of Columbia University's Earth Institute Center for Environmental Sustainability. "Things are definitely declining now, and they could be on the path to crashing eventually."

To study the region's fish, the study's lead author, then-Columbia Ph.D. student Sebastian Heilpern, made numerous shopping trips to the bustling Belén retail market in the provincial capital of Iquitos. He also visited the city's Amazon River docks, where wholesale commerce begins at 3:30 in the morning. He and another student bought multiple specimens of as many different species as they could find, and ended up with 56 of the region's 60-some main food species. These included modest-size scale fish known locally as ractacara and yulilla; saucer-shaped palometa (related to piranha); and giant catfish extending six feet or more. (The researchers settled for chunks of the biggest ones.)

The fish were flown on ice to a government lab in Lima, where each species was analyzed for protein, fatty acids and trace minerals. The researchers then plotted the nutritional value of each species against its probability of surviving various kinds of ongoing environmental degradation. From this, they drew up multiple scenarios of how people's future diet would be affected as various species dropped out of the mix.
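
The scenario logic can be sketched in a few lines (with entirely hypothetical nutrient densities and vulnerability scores, not the study's measurements): drop species from most to least vulnerable and watch how the nutrient profile of an evenly mixed catch of the remaining species shifts.

import numpy as np

rng = np.random.default_rng(1)
n_species = 56
NUTRIENTS = ["protein", "omega-3", "iron", "zinc"]

# Hypothetical per-species nutrient densities and extinction-vulnerability scores;
# in the study these came from lab assays and environmental-threat assessments.
density = rng.uniform(0.2, 1.0, size=(n_species, len(NUTRIENTS)))
vulnerability = rng.uniform(size=n_species)

# Remove species from most to least vulnerable and recompute the nutrient profile
# of an evenly mixed catch of whatever species remain.
order = np.argsort(vulnerability)[::-1]
for lost in (0, 20, 40):
    remaining = np.setdiff1d(np.arange(n_species), order[:lost])
    profile = density[remaining].mean(axis=0)
    print(lost, "species lost:", dict(zip(NUTRIENTS, profile.round(2))))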

Overall, the biomass of fish caught has remained stable in recent years. However, large migratory species, the most vulnerable to human activities, comprise a shrinking portion, and as they disappear, they are being replaced by smaller local species. Most fish contain about the same amount of protein, so this has not affected the protein supply. And, the researchers found, many smaller fish in fact contain higher levels of omega-3 fatty acids, so their takeover may actually increase those supplies. On the other hand, as species compositions lean more toward smaller fish, supplies of iron and zinc are already going down, and will continue to decline, they say.

"Like any other complex system, you see a tradeoff," said Heilpern. "Some things are going up while other things are going down. But that only lasts up to a point." Exactly which species will fill the gaps left when others decline is difficult to predict--but the researchers project that the overall nutritional value of the catch will nosedive around the point where 40 of the 60 food species become scarce or extinct. "You have a tipping point, where the species that remain can be really lousy," said Heilpern.

One potential solution: in many places around the world where wild foods including fish and bush meat (such as monkeys and lizards) are declining, people are turning increasingly to farm-raised chicken and aquaculture--a trend encouraged by the World Bank and other powerful organizations. This is increasingly the case in Loreto. But in a separate study published in March, Heilpern, Naeem and their colleagues show that this, too, is undermining human nutrition.

The researchers observed that chicken production in the region grew by about three quarters from 2010 to 2016, and aquaculture nearly doubled. But in analyzing the farmed animals' nutritional values, they found that they typically offer poorer nutrition than a diverse mix of wild fish. In particular, the move to chicken and aquaculture will probably exacerbate the region's already serious iron deficiencies, and limit supplies of essential fatty acids, they say. "Because no single species can offer all key nutrients, a diversity of species is needed to sustain nutritionally adequate diets," they write.

Besides this, chicken farming and aquaculture exert far more pressure on the environment than fishing. In addition to encouraging the clearing of forests to produce feed for the animals, animal farming produces more greenhouse gases and introduces fertilizers and other pollutants into nearby waters, says Heilpern.

"Inland fish are fundamental for nutrition in many low-income and food-deficit countries, and of course landlocked countries," said John Valbo Jørgensen, a Rome-based expert on inland fisheries with the UN Food and Agriculture Organization. "Many significant inland fisheries, including those of Peru, take place in remote areas with poor infrastructure and limited inputs. It will not be feasible to replace those fisheries with farmed animals including fish."

Heilpern is now working with the Wildlife Conservation Society to produce an illustrated guide to the region's fish, including their nutritional values, in hopes of promoting a better understanding of their value among both fishermen and consumers.

Credit: 
Columbia Climate School