Tech

New catalyst moves seawater desalination, hydrogen production closer to commercialization

image: A team of researchers led by Zhifeng Ren, director of the Texas Center for Superconductivity at the University of Houston, has reported an oxygen evolving catalyst that takes just minutes to grow at room temperature and is capable of efficiently producing both clean drinking water and hydrogen from seawater.

Image: 
University of Houston

Seawater makes up about 96% of all water on Earth, making it a tempting resource to meet the world's growing need for clean drinking water and carbon-free energy. And scientists already have the technical ability to both desalinate seawater and split it to produce hydrogen, which is in demand as a source of clean energy.

But existing methods require multiple steps performed at high temperatures over a lengthy period of time in order to produce a catalyst with the needed efficiency. That requires substantial amounts of energy and drives up the cost.

Researchers from the University of Houston have reported an oxygen evolving catalyst that takes just minutes to grow at room temperature on commercially available nickel foam. Paired with a previously reported hydrogen evolution reaction catalyst, it can achieve industrially required current density for overall seawater splitting at low voltage. The work is described in a paper published in Energy & Environmental Science.
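As context for why a low operating voltage matters (a back-of-envelope sketch using textbook electrochemistry, not figures from the paper; the voltages below are illustrative, not the study's values), the electricity consumed per kilogram of hydrogen scales linearly with cell voltage via Faraday's law:

```python
# Back-of-envelope: electricity needed per kilogram of H2 versus cell voltage.
# Illustrative only; the voltages below are NOT the values reported in the paper.

F = 96485.0          # Faraday constant, C per mole of electrons
M_H2 = 2.016e-3      # molar mass of H2, kg/mol
ELECTRONS_PER_H2 = 2 # 2 H+ + 2 e- -> H2

def kwh_per_kg_h2(cell_voltage):
    """Electrical energy (kWh) to produce 1 kg of H2 at a given cell voltage."""
    joules_per_mol = ELECTRONS_PER_H2 * F * cell_voltage
    joules_per_kg = joules_per_mol / M_H2
    return joules_per_kg / 3.6e6  # J -> kWh

for v in (1.23, 1.6, 2.0):  # thermodynamic minimum plus two example operating voltages
    print(f"{v:.2f} V  ->  {kwh_per_kg_h2(v):.1f} kWh per kg H2")
# ~32.7, ~42.5, ~53.2 kWh/kg: every 0.1 V shaved off the cell voltage
# saves roughly 2.7 kWh for each kilogram of hydrogen produced.
```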

Zhifeng Ren, director of the Texas Center for Superconductivity at UH (TcSUH) and corresponding author for the paper, said speedy, low-cost production is critical to commercialization.

"Any discovery, any technology development, no matter how good it is, the end cost is going to play the most important role," he said. "If the cost is prohibitive, it will not make it to market. In this paper, we found a way to reduce the cost so commercialization will be easier and more acceptable to customers."

Ren's research group and others have previously reported a nickel-iron-(oxy)hydroxide compound as a catalyst to split seawater, but producing the material required a lengthy process conducted at temperatures between 300 and 600 degrees Celsius (roughly 570 to 1,100 degrees Fahrenheit). The high energy cost made it impractical for commercial use, and the high temperatures degraded the structural and mechanical integrity of the nickel foam, making long-term stability a concern, said Ren, who also is M.D. Anderson Professor of physics at UH.

To address both cost and stability, the researchers developed a process to grow nickel-iron-(oxy)hydroxide doped with a small amount of sulfur directly on nickel foam, producing an effective catalyst at room temperature within five minutes. Working at room temperature both reduced the cost and improved mechanical stability, they said.

"To boost the hydrogen economy, it is imperative to develop cost-effective and facile methodologies to synthesize NiFe-based (oxy)hydroxide catalysts for high-performance seawater electrolysis," they wrote. "In this work, we developed a one-step surface engineering approach to fabricate highly porous self-supported S-doped Ni/Fe (oxy)hydroxide catalysts from commercial Ni foam in 1 to 5 minutes at room temperature."

In addition to Ren, co-authors include first author Luo Yu and Libo Wu, Brian McElhenny, Shaowei Song, Dan Luo, Fanghao Zhang and Shuo Chen, all with the UH Department of Physics and TcSUH; and Ying Yu from the College of Physical Science and Technology at Central China Normal University.

Ren said one key to the researchers' approach was the decision to use a chemical reaction to produce the desired material, rather than relying on the traditional, energy-intensive route of physical transformation.

"That led us to the right structure, the right composition for the oxygen evolving catalyst," he said.

Credit: 
University of Houston

Transportation investments could save hundreds of lives, billions of dollars

BOSTON - Investments in infrastructure to promote bicycling and walking could save as many as 770 lives and $7.6 billion each year across 12 northeastern states and the District of Columbia under the proposed Transportation and Climate Initiative (TCI), according to a new Boston University School of Public Health (BUSPH) and Harvard T.H. Chan School of Public Health study.

Published in the Journal of Urban Health, the analysis shows that the monetary benefit of lives saved from increased walking and cycling far exceeds the estimated annual investment for such infrastructure, without even considering the added benefits of reducing air pollution and tackling climate change.

"Our study suggests that if all the states joined TCI and collectively invested at least $100 million in active mobility infrastructure and public transit, the program could save hundreds of lives per year from increased physical activity. These benefits are larger than the estimated air quality and climate benefits for the TCI scenarios, highlighting the importance of leveraging investments in sustainable active mobility to improve health," says study lead author Matthew Raifman, a doctoral student in environmental health at BUSPH.

The TCI program, a partnership of 12 states and the District of Columbia currently under development, would implement a cap-and-invest program to reduce transportation sector emissions across the Northeast and Mid-Atlantic region, including substantial investment in cycling and pedestrian infrastructure as well as other sustainable transportation strategies like electric vehicle charging and public transit. In December, Massachusetts, Connecticut, Rhode Island, and D.C. became the first jurisdictions to formally join the TCI program.

Raifman and colleagues used an investment scenario model and the World Health Organization (WHO) Health Economic Assessment Tool methodology to estimate how many lives would be saved in each of the 378 counties in the Northeast and Mid-Atlantic regions thanks to increased physical activity (walking/running and cycling) and accounting for the potential for changes in traffic fatalities.
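The HEAT approach can be illustrated with a minimal sketch (all inputs here are hypothetical placeholders, not the study's data): lives saved are estimated by scaling a relative reduction in all-cause mortality by how much of a reference activity volume the new walking represents, then applying that reduction to the exposed population's baseline mortality.

```python
# Toy HEAT-style calculation of lives saved from added walking.
# All numbers are hypothetical placeholders, not inputs or outputs of the study.

def lives_saved(population, baseline_mortality, added_min_per_week,
                reference_min_per_week=168.0, max_risk_reduction=0.10):
    """
    population              adults exposed to the new infrastructure
    baseline_mortality      annual all-cause deaths per person in that group
    added_min_per_week      extra walking induced by the investment
    reference_min_per_week  activity volume at which the full risk reduction applies
    max_risk_reduction      relative mortality reduction at the reference volume
    """
    # Risk reduction scales linearly with added activity, capped at the reference volume.
    fraction = min(added_min_per_week / reference_min_per_week, 1.0)
    risk_reduction = max_risk_reduction * fraction
    return population * baseline_mortality * risk_reduction

# Hypothetical county: 500,000 adults, 0.8% annual mortality, 20 extra minutes walked per week.
print(round(lives_saved(500_000, 0.008, 20.0), 1))  # ~47.6 avoided deaths per year
```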

"These findings demonstrate how investments in climate-friendly transportation options like biking and walking can reap huge health and economic benefits at the local level," says study senior author Dr. Patrick Kinney, Beverly A. Brown Professor for the Improvement of Urban Health at BUSPH.

The team analyzed nine scenarios that differed in their greenhouse gas emission caps as well as how the proceeds from the program would be invested across a range of transportation options. The scenario with the largest health benefits assumed a 25% reduction in greenhouse gas emissions, and investment of $632 million of the proceeds in cycling and pedestrian infrastructure across the 12 states and D.C. The researchers estimated that this scenario could save 770 lives regionwide due to reduced cardiovascular mortality, accounting for changes in pedestrian accident fatality rates. The monetary value of the reduced health risk is $7.6 billion per year.

Health benefits across the other scenarios roughly scaled with the degree of investment in pedestrian and cycling infrastructure. For example, a more modest scenario highlighted in the December 2020 TCI memorandum of understanding (MOU) would invest $130 million in cycling and pedestrian infrastructure, saving around 200 lives regionwide due to increased physical activity, with a monetary value of $1.8 billion. The four jurisdictions that signed the MOU thus far would see 16 lives saved each year from biking and walking under the MOU scenario, with a monetized value of $154 million, the researchers estimated.
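As a quick consistency check, the monetary figures follow from the mortality estimates once a value of a statistical life (VSL) is applied; the VSL below is inferred from the article's own numbers rather than quoted from the study.

```python
# Back-of-envelope check of the monetized benefits quoted above.
# The implied value of a statistical life (VSL) is inferred from the article's figures.

scenarios = {
    "largest-benefit scenario": {"lives": 770, "benefit_usd": 7.6e9},
    "MOU scenario (regionwide)": {"lives": 200, "benefit_usd": 1.8e9},
    "MOU scenario (4 signatories)": {"lives": 16, "benefit_usd": 154e6},
}

for name, s in scenarios.items():
    implied_vsl = s["benefit_usd"] / s["lives"]
    print(f"{name}: implied VSL of about ${implied_vsl / 1e6:.1f} million per avoided death")
# All three scenarios imply a VSL of roughly $9-10 million,
# in line with values commonly used in U.S. regulatory analysis.
```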

The states with the largest estimated health benefits from active mobility under all policy scenarios are the populous states of New York, New Jersey, Pennsylvania, and Maryland.

"Investments in active mobility would not only increase physical activity but would also reduce air pollution levels and start to address the climate crisis. This study reinforces the importance of considering near-term health benefits when developing climate policy," says study co-author Dr. Jonathan Levy, professor and chair of environmental health at BUSPH.

"Given the legacy of inequitable investment in infrastructure in the United States, the opportunity exists to address racial disparities in access to sidewalks and cycling infrastructure through equity-focused project siting," Raifman says.

The study is part of the Transportation, Equity, Climate and Health (TRECH) Project, a multi-university research initiative independently analyzing TCI and other policy scenarios. TRECH is based at the Center for Climate, Health, and the Global Environment at Harvard T.H. Chan School of Public Health (Harvard Chan C-CHANGE).

"This study sheds light on potential health benefits from investments in biking and walking infrastructure. Actual outcomes will depend on how much funding exists and how it is invested. We hope this information is useful to policymakers and advocates as they consider how to best target transportation investments to gain greater and more equitable health benefits," says study co-author Kathy Fallon Lambert, senior advisor at Harvard Chan C-CHANGE.

Credit: 
Boston University School of Medicine

Malaria threw human evolution into overdrive on this African archipelago

DURHAM, N.C. -- Malaria is an ancient scourge, but it's still leaving its mark on the human genome. And now, researchers have uncovered recent traces of adaptation to malaria in the DNA of people from Cabo Verde, an island nation off the African coast.

An archipelago of ten islands in the Atlantic Ocean some 385 miles offshore from Senegal, Cabo Verde was uninhabited until the mid-1400s, when it was colonized by Portuguese sailors who brought enslaved Africans with them and forced them to work the land.

The Africans who were forcibly brought to Cabo Verde carried a genetic mutation, which the European colonists lacked, that prevents a type of malaria parasite known as Plasmodium vivax from invading red blood cells. Among malaria parasites, Plasmodium vivax is the most widespread, putting one third of the world's population at risk.

People who subsequently inherited the protective mutation as Africans and Europeans intermingled had such a huge survival advantage that, within just 20 generations, the proportion of islanders carrying it had surged, the researchers report.

Other examples of genetic adaptation in humans are thought to have unfolded over tens to hundreds of thousands of years. But the development of malaria resistance in Cabo Verde took only 500 years.

"That is the blink of an eye on the scale of evolutionary time," said first author Iman Hamid, a Ph.D. student in assistant professor Amy Goldberg's lab at Duke University.

It is unsurprising that a gene that protects from malaria would give people who carry it an evolutionary edge, the researchers said. One of the oldest known diseases, malaria continues to claim up to a million lives each year, most of them children.

The findings, published this month in the journal eLife, represent one of the speediest, most dramatic changes measured in the human genome, says a team led by Goldberg and Sandra Beleza of the University of Leicester.

The researchers analyzed DNA from 563 islanders. Using statistical methods they developed for people with mixed ancestry, they compared the island of Santiago, where malaria has always been a fact of life, with other islands of Cabo Verde, where the disease has been less prevalent.

The team found that the frequency of the protective mutation on Santiago is higher than expected today, given how much of the islanders' ancestry can be traced back to Africa versus Europe.

In other words, the chances of a person surviving and having a family thanks to their genetic code -- the strength of selection -- were so great that the protective variant spread above and beyond the contributions of the Africans who arrived on Santiago's shores. The same was not true elsewhere in the archipelago.
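The logic behind that comparison can be sketched with a toy calculation (the frequencies and ancestry fraction below are hypothetical, not the Cabo Verde data): under admixture without selection, a variant's expected frequency is simply the ancestry-weighted average of its frequencies in the source populations, so an observed frequency well above that expectation signals selection since admixture.

```python
# Toy illustration of detecting selection in an admixed population.
# Frequencies and ancestry fractions are hypothetical, not the Cabo Verde data.

def expected_frequency(african_ancestry, f_african, f_european):
    """Allele frequency expected from admixture alone (no selection)."""
    return african_ancestry * f_african + (1.0 - african_ancestry) * f_european

# Duffy-null-like scenario: the protective variant is near fixation in the African
# source population and essentially absent in the European source population.
f_afr, f_eur = 0.95, 0.0
ancestry_afr = 0.70        # hypothetical average African ancestry on the island

expected = expected_frequency(ancestry_afr, f_afr, f_eur)
observed = 0.85            # hypothetical observed frequency today

print(f"expected under neutrality: {expected:.2f}")   # ~0.66
print(f"observed:                  {observed:.2f}")   # 0.85
# The excess over the ancestry-based expectation is the signal of selection; it should
# also drag linked African-ancestry segments up in frequency ("hitchhiking").
```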

The team's analyses also showed that as the protective mutation spread, nearby stretches of African-like DNA hitchhiked along with it, but only on malaria-plagued Santiago and not on other Cabo Verdean islands.

Together, the results suggest that what they were detecting was the result of adaptation in the recent past, in the few hundred years since the islands were settled, and not merely the lingering imprint of processes that happened long ago in Africa.

Humans are constantly evolving, but evidence of recent genetic adaptation -- during the last 10 to 100 generations -- has been hard to find. Part of the problem is that, on such short timescales, changes in gene frequencies can be hard to detect using traditional statistical methods.

But by using patterns of genetic ancestry to help reconstruct the Cabo Verdean islanders' history, the researchers were able to detect evolutionary changes that previous techniques missed.

The authors hope to extend their methods to study other populations where mass migration means migrants are exposed to different diseases and environments than they were before.

"Humans are still evolving, and here we have evidence," Hamid said.

Credit: 
Duke University

Thick lithosphere casts doubt on plate tectonics in Venus's geologically recent past

image: Mead crater, the largest impact basin on Venus, is encircled by two rocky rings, which provide valuable information about the planet's lithosphere.

Image: 
NASA

PROVIDENCE, R.I. [Brown University] -- At some point between 300 million and 1 billion years ago, a large cosmic object smashed into the planet Venus, leaving a crater more than 170 miles in diameter. A team of Brown University researchers has used that ancient impact scar to explore the possibility that Venus once had Earth-like plate tectonics.

For a study published in Nature Astronomy, the researchers used computer models to recreate the impact that carved out Mead crater, Venus's largest impact basin. Mead is surrounded by two clifflike faults -- rocky ripples frozen in time after the basin-forming impact. The models showed that for those rings to be where they are in relation to the central crater, Venus's lithosphere -- its rocky outer shell -- must have been quite thick, far thicker than that of Earth. That finding suggests that a tectonic regime like Earth's, where continental plates drift like rafts atop a slowly churning mantle, was likely not happening on Venus at the time of the Mead impact.

"This tells us that Venus likely had what we'd call a stagnant lid at the time of the impact," said Evan Bjonnes, a graduate student at Brown and study's lead author. "Unlike Earth, which has an active lid with moving plates, Venus appears to have been a one-plate planet for at least as far back as this impact."

Bjonnes says the findings offer a counterpoint to recent research suggesting that plate tectonics may have been a possibility in Venus's relatively recent past. On Earth, evidence of plate tectonics can be found all over the globe. There are subduction zones, where swaths of crustal rock are driven down into the subsurface. Meanwhile, new crust is formed at mid-ocean ridges, sinuous mountain ranges where lava from deep inside the Earth flows to the surface and hardens into rock. Data from orbital spacecraft have revealed rifts and ridges on Venus that look a bit like tectonic features. But Venus is shrouded by its thick atmosphere, making it hard to make definitive interpretations of fine surface features.

This new study is a different way of approaching the question, using the Mead impact to probe characteristics of the lithosphere. Mead is a multi-ring basin similar to the huge Orientale basin on the Moon. Brandon Johnson, a former Brown professor who is now at Purdue University, published a detailed study of Orientale's rings in 2016. That work showed that the final position of the rings is strongly tied to the crust's thermal gradient -- the rate at which rock temperature increases with depth. The thermal gradient influences the way in which the rocks deform and break apart following an impact, which in turn helps to determine where the basin rings end up.

Bjonnes adapted the technique used by Johnson, who is also a coauthor on this new research, to study Mead. The work showed that for Mead's rings to be where they are, Venus's crust must have had a relatively low thermal gradient. That low gradient -- meaning a comparatively gradual increase in temperature with depth -- suggests a fairly thick Venusian lithosphere.

"You can think of it like a lake freezing in winter," Bjonnes said. "The water at the surface reaches the freezing point first, while the water at depth is a little warmer. When that deeper water cools down to similar temperatures as the surface, you get a thicker ice sheet."

The calculations suggest that the gradient is far lower, and the lithosphere much thicker, than what you'd expect for an active-lid planet. That would mean Venus may have been without plate tectonics for as long as a billion years, the earliest point at which scientists think the Mead impact occurred.

Alexander Evans, an assistant professor at Brown and study co-author, said that one compelling aspect of the findings from Mead is their consistency with other features on Venus. Several other ringed craters that the researchers looked at were proportionally similar to Mead, and the thermal gradient estimates are consistent with the thermal profile needed to support Maxwell Montes, Venus's tallest mountain.

"I think the finding further highlights the unique place that Earth, and its system of global plate tectonics, has among our planetary neighbors," Evans said.

Credit: 
Brown University

New study unravels Darwin's 'abominable mystery' surrounding origin of flowering plants

The origin of flowering plants famously puzzled Charles Darwin, who described their sudden appearance in the fossil record from relatively recent geological times as an "abominable mystery". This mystery has further deepened with an inexplicable discrepancy between the relatively recent fossil record and a much older time of origin of flowering plants estimated using genome data.

Now a team of scientists from Switzerland, Sweden, the UK, and China may have solved the puzzle. Their results show flowering plants indeed originated in the Jurassic or earlier, that is, millions of years before their oldest undisputed fossil evidence, according to a new study published in the scientific journal Nature Ecology & Evolution. The lack of older fossils, according to their results, might instead be the product of the low probability of fossilization and the rarity of early flowering plants.

"A diverse group of flowering plants had been living for a very long time shadowed by ferns and gymnosperms, which were dominating ancient ecosystems. This reminds me of how modern mammals lived for a long time laying low in the age of dinosaurs, before becoming a dominant component of modern faunas," said lead author Dr Daniele Silvestro, from the University of Fribourg in Switzerland.

Flowering plants are by far the most abundant and diverse group of plants globally in modern ecosystems, far outnumbering ferns and gymnosperms, and including almost all crops sustaining human livelihoods. The fossil record shows this pattern was established over the past 80-100 million years, while earlier flowering plants are thought to have been small and rare. The new results suggest that flowering plants had been around for as much as 100 million years before they finally rose to dominance.

"While we do not expect our study to put an end to the debate about angiosperm origin, it does provide a strong motivation for what some consider a hunt for the snark - a Jurassic flowering plant. Rather than a mythical artefact of genome-based analyses, Jurassic angiosperms are an expectation of our interpretation of the fossil record," said co-author Professor Philip Donoghue, from the University of Bristol in the UK.

The research conclusions are based on complex modelling using a large global database of fossil occurrences, which Dr Yaowu Xing and his team at the Xishuangbanna Tropical Botanical Garden compiled from more than 700 publications. These records, amounting to more than 15,000 occurrences, represent many groups of plants, including palms, orchids, sunflowers, and peas.

"Scientific debate has long been polarised between palaeontologists who estimate the antiquity of angiosperms based on the age of the oldest fossils, versus molecular biologists who use this information to calibrate molecular evolution to geologic time. Our study shows that these views are too simplistic; the fossil record has to be interpreted," said co-author Dr Christine Bacon, from the University of Gothenburg in Sweden.

"A literal reading of the fossil record cannot be used to estimate realistically the time of origin of a group. Instead, we had to develop new mathematical models and use computer simulations to solve this problem in a robust way."

Even 140 years after Darwin's conundrum about the origin of flowering plants, the debate has maintained a central place in the scientific arena. In particular, many studies based on phylogenetic analyses of modern plants and their genomes estimated that the group originated significantly earlier than indicated by the fossil record, a finding widely disputed in palaeontological research. The new study, which was based exclusively on fossils and did not include genome data or evolutionary trees, shows an earlier age of flowering plants is not an artifact of phylogenetic analyses, but is in fact supported by palaeontological data as well.

Co-author Professor Alexandre Antonelli, Director of Science at the Royal Botanic Gardens, Kew in the UK, added: "Understanding when flowering plants went from being an insignificant group into becoming the cornerstone of most terrestrial ecosystems shows us that nature is dynamic. The devastating human impact on climate and biodiversity could mean that the successful species in the future will be very different to the ones we are accustomed to now."

Credit: 
University of Bristol

Chemists settle battery debate, propel research forward

image: Brookhaven chemists Enyuan Hu (left, lead author) and Zulipiya Shadike (right, first author) are shown holding a model of 1,2-dimethoxyethane, a solvent for lithium metal battery electrolytes.

Image: 
Brookhaven National Laboratory

UPTON, NY--A team of researchers led by chemists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory has identified new details of the reaction mechanism that takes place in batteries with lithium metal anodes. The findings, published today in Nature Nanotechnology, are a major step towards developing smaller, lighter, and less expensive batteries for electric vehicles.

Recreating lithium metal anodes

Conventional lithium-ion batteries can be found in a variety of electronics, from smartphones to electric vehicles. While lithium-ion batteries have enabled the widespread use of many technologies, they still face challenges in powering electric vehicles over long distances.

To build a battery better suited for electric vehicles, researchers across several national laboratories and DOE-sponsored universities have formed a consortium called Battery500, led by DOE's Pacific Northwest National Laboratory (PNNL). Their goal is to make battery cells with an energy density of 500 watt-hours per kilogram, which is more than double the energy density of today's state-of-the-art batteries. To do so, the consortium is focusing on batteries made with lithium metal anodes.

Compared to lithium-ion batteries, which most often use graphite as the anode, lithium metal batteries use lithium metal as the anode.

"Lithium metal anodes are one of the key components to fulfill the energy density sought by Battery500," said Brookhaven chemist Enyuan Hu, leading author of the study. "Their advantage is two-fold. First, their specific capacity is very high; second, they provide a somewhat higher voltage battery. The combination leads to a greater energy density."

Scientists have long recognized the advantages of lithium metal anodes; in fact, lithium metal was the first anode material to be coupled with a cathode. But due to their lack of "reversibility," the ability to be recharged through a reversible electrochemical reaction, the battery community ultimately replaced lithium metal anodes with graphite anodes, creating lithium-ion batteries.

Now, with decades of progress made, researchers are confident they can make lithium metal anodes reversible, surpassing the limits of lithium-ion batteries. The key is the interphase, a solid material layer that forms on the battery's electrode during the electrochemical reaction.

"If we are able to fully understand the interphase, we can provide important guidance on material design and make lithium metal anodes reversible," Hu said. "But understanding the interphase is quite a challenge because it's a very thin layer with a thickness of only several nanometers. It is also very sensitive to air and moisture, making the sample handling very tricky."

Visualizing the interphase at NSLS-II

To navigate these challenges and "see" the chemical makeup and structure of the interphase, the researchers turned to the National Synchrotron Light Source II (NSLS-II), a DOE Office of Science user facility at Brookhaven that generates ultrabright x-rays for studying material properties at the atomic scale.

"NSLS-II's high flux enables us to look at a very tiny amount of the sample and still generate very high-quality data," Hu said.

Beyond the advanced capabilities of NSLS-II as a whole, the research team needed to use a beamline (experimental station) that was capable of probing all the components of the interphase, including crystalline and amorphous phases, with high energy (short wavelength) x-rays. That beamline was the X-ray Powder Diffraction (XPD) beamline.

"The chemistry team took advantage of a multimodal approach at XPD, using two different techniques offered by the beamline, x-ray diffraction (XRD) and pair distribution function (PDF) analysis," said Sanjit Ghose, lead beamline scientist at XPD. "XRD can study the crystalline phase, while PDF can study the amorphous phase."

The XRD and PDF analyses revealed exciting results: the existence of lithium hydride (LiH) in the interphase. For decades, scientists had debated if LiH existed in the interphase, leaving uncertainty around the fundamental reaction mechanism that forms the interphase.

"When we first saw the existence of LiH, we were very excited because this was the first time that LiH was shown to exist in the interphase using techniques with statistical reliability. But we were also cautious because people have been doubting this for a long time," Hu said.

Co-author Xiao-Qing Yang, a physicist in Brookhaven's Chemistry Division, added, "LiH and lithium fluoride (LiF) have very similar crystal structures. Our claim of LiH could have been challenged by people who believed we misidentified LiF as LiH."
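To see why the two phases are so hard to tell apart by diffraction alone, here is a rough comparison of their expected Bragg peak positions; the lattice constants are approximate literature values and the X-ray wavelength is an arbitrary illustrative choice, neither taken from the paper.

```python
# Rough comparison of Bragg peak positions for LiH vs. LiF (both rock-salt structure).
# Lattice constants are approximate literature values and the wavelength is an
# arbitrary illustrative choice; neither is taken from the paper.
from math import asin, sqrt, degrees

WAVELENGTH = 0.19  # angstroms, illustrative high-energy synchrotron wavelength

lattice = {"LiH": 4.08, "LiF": 4.03}      # approximate cubic lattice constants, angstroms
reflections = [(1, 1, 1), (2, 0, 0), (2, 2, 0)]

for (h, k, l) in reflections:
    line = []
    for name, a in lattice.items():
        d = a / sqrt(h*h + k*k + l*l)                        # d-spacing for a cubic cell
        two_theta = 2 * degrees(asin(WAVELENGTH / (2 * d)))  # Bragg's law
        line.append(f"{name} 2theta = {two_theta:.3f} deg")
    print(f"({h}{k}{l}): " + ", ".join(line))
# The corresponding peaks differ by less than about 0.1 degrees at this wavelength,
# which is why diffraction alone struggles to separate the two phases in a weak,
# nanometer-thin interphase signal.
```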

Given the controversy around this research, as well as the technical challenges differentiating LiH from LiF, the research team decided to provide multiple lines of evidence for the existence of LiH, including an air exposure experiment.

"LiF is air stable, while LiH is not," Yang said. "If we exposed the interphase to air with moisture, and if the amount of the compound being probed decreased over time, that would confirm we did see LiH, not LiF. And that's exactly what happened. Because LiH and LiF are difficult to differentiate and the air exposure experiment had never been performed before, it is very likely that LiH has been misidentified as LiF, or not observed due to the decomposition reaction of LiH with moisture, in many literature reports."

Yang continued, "The sample preparation done at PNNL was critical to this work. We also suspect that many people could not identify LiH because their samples had been exposed to moisture prior to experimentation. If you don't collect the sample, seal it, and transport it correctly, you miss out."

In addition to identifying LiH's presence, the team also solved another long-standing puzzle centered around LiF. LiF has been considered to be a favored component in the interphase, but it was not fully understood why. The team identified structural differences between LiF in the interphase and LiF in the bulk, with the former facilitating lithium ion transport between the anode and the cathode.

"From sample preparation to data analysis, we closely collaborated with PNNL, the U.S. Army Research Laboratory, and the University of Maryland," said Brookhaven chemist Zulipiya Shadike, first author of the study. "As a young scientist, I learned a lot about conducting an experiment and communicating with other teams, especially because this is such a challenging topic."

Hu added, "This work was made possible by combining the ambitions of young scientists, wisdom from senior scientists, and patience and resilience of the team."

Beyond the teamwork between institutions, the teamwork between Brookhaven Lab's Chemistry Division and NSLS-II continues to drive new research results and capabilities.

"The battery group in the Chemistry Division works on a variety of problems in the battery field. They work with cathodes, anodes, and electrolytes, and they continue to bring XPD new issues to solve and challenging samples to study," Ghose said. "That's exciting to be part of, but it also helps me develop methodology for other researchers to use at my beamline. Currently, we are developing the capability to run in situ and operando experiments, so researchers can scan the entire battery with higher spatial resolution as a battery is cycling."

The scientists are continuing to collaborate on battery research across Brookhaven Lab departments, other national labs, and universities. They say the results of this study will provide much-needed practical guidance on lithium metal anodes, propelling research on this promising material forward.

Credit: 
DOE/Brookhaven National Laboratory

Rumen additive and controlled energy benefit dairy cows during dry period

URBANA, Ill. - Getting nutrition right during a dairy cow's dry period can make a big difference to her health and the health of her calf. But it's also a key contributor to her milk yield after calving. New research from the University of Illinois shows diets containing consistent energy levels and the rumen-boosting supplement monensin may be ideal during the dry period.

"Many producers use a 'steam up' approach where you gradually increase the energy intake during the dry period to help adjust the rumen and adapt the cow to greater feed intakes after calving. Our work has shown that's really of questionable benefit for many farms, and it may be safer to just keep a constant level of feed intake before calving," says James Drackley, professor in the Department of Animal Sciences at Illinois and co-author on a study published in the Journal of Dairy Science.

To test their hypothesis, the researchers fed cows either a controlled-energy diet throughout the dry period or a variable energy diet containing greater energy during the close-up period. The two diets made no difference in how the cows performed or in any of their metabolic indicators after calving.

"Obviously, it's simpler if we don't have to feed an additional diet halfway through the dry period," Drackley says.

On top of the two feeding strategies, the researchers either added monensin to the prepartum diet or didn't. The supplement is typically fed during lactation to make fermentation in the rumen more efficient and convert nutrients into milk proteins. Some producers take the supplement out during the dry period to give rumen microbes a "rest" period.

"Our research showed if we took monensin out during the dry period, then the cows produced about 2 kilograms less milk in the next lactation," Drackley says. "The conclusion is it's better to leave it in and prevent that lost milk production. I'd guess the majority of dairy farms in the Midwest are feeding monensin during lactation, so this should be a fairly relevant piece of information."

The article, "Effects of prepartum diets varying in dietary energy density and monensin on early-lactation performance in dairy cows," is published in the Journal of Dairy Science [DOI: 10.3168/jds.2020-19414]. Authors include Joel Vasquez, Maris McCarthy, Bruce Richards, Kelly Perfield, David Carlson, Adam Lock, and James Drackley. Funding was provided in part by Elanco Animal Health and the Illinois Agricultural Experiment Station, part of the College of Agricultural, Consumer and Environmental Sciences at Illinois.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

New Geology articles published online ahead of print in January

Boulder, Colo., USA: Eleven new articles were published ahead of print for Geology in January 2021. They include new modeling, geochemical evidence of tropical cyclone impacts, transport of plastic in submarine canyons, and a porphyry copper belt along the southeast China coast. These Geology articles are online at http://geology.geoscienceworld.org/content/early/recent.

Episodic exhumation of the Appalachian orogen in the Catskill Mountains (New York State, USA)
Chilisa M. Shorten; Paul G. Fitzgerald

Abstract: Increasing evidence indicates the eastern North American passive margin has not remained tectonically quiescent since Jurassic continental breakup. The identification, timing, resolution, and significance of post-orogenic exhumation, notably an enigmatic Miocene event, are debated. We add insight by constraining the episodic cooling and exhumation history of the Catskill Mountains (New York, USA) utilizing apatite fission-track thermochronology and apatite (U-Th)/He data from a ~1 km vertical profile. Multi-kinetic inverse thermal modeling constrains three phases of cooling: Early Jurassic to Early Cretaceous (1-3 °C/m.y.), Early Cretaceous to early Miocene (~0.5 °C/m.y.), and since Miocene times (1-2 °C/m.y.). Previous thermochronologic studies were unable to verify late-stage cooling and/or exhumation (typically post-Miocene and younger) because late-stage cooling was commonly a spurious artifact of earlier mono-kinetic annealing algorithms. Episodic cooling phases are correlative with rifting, passive-margin development, and drainage reorganization causing landscape rejuvenation. Geomorphologic documentation of increased offshore mid-Atlantic sedimentation rates and onshore erosion support the documented accelerated Miocene cooling and exhumation.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48011.1/594234/Episodic-exhumation-of-the-Appalachian-orogen-in

A new model for the optimal structural context for giant porphyry copper deposit formation
José Piquer; Pablo Sanchez-Alfaro; Pamela Pérez-Flores

Abstract: Porphyry-type deposits are the main global source of copper and molybdenum. An improved understanding of the most favorable structural settings for the emplacement of these deposits is necessary for successful exploration, particularly considering that most future discoveries will be made under cover based on conceptual target generation. A common view is that porphyry deposits are preferentially emplaced in pull-apart basins within strike-slip fault systems that favor local extension within a regional compressive to transpressive tectonic regime. However, the role of such a structural context in magma storage and evolution in the upper crust remains unclear. In this work, we propose a new model based on the integration of structural data and the geometry of magmatic-hydrothermal systems from the main Andean porphyry Cu-Mo metallogenic belts and from the active volcanic arc of southern Chile. We suggest that the magma differentiation and volatile accumulation required for the formation of a porphyry deposit is best achieved when the fault system controlling magma ascent is strongly misoriented for reactivation with respect to the prevailing stress field. When magmas and fluids are channeled by faults favorably oriented for extension (approximately normal to σ3), they form sets of parallel, subvertical dikes and veins, which are common both during the late stages of the evolution of porphyry systems and in the epithermal environment. This new model has direct implications for conceptual mineral exploration.

View article: https://pubs.geoscienceworld.org/gsa/geology/article/doi/10.1130/G48287.1/594235/A-new-model-for-the-optimal-structural-context-for

A new model for the growth of normal faults developed above pre-existing structures
Emma K. Bramham; Tim J. Wright; Douglas A. Paton; David M. Hodgson

Abstract: Constraining the mechanisms of normal fault growth is essential for understanding extensional tectonics. Fault growth kinematics remain debated, mainly because the very earliest phase of deformation through recent syn-kinematic deposits is rarely documented. To understand how underlying structures influence surface faulting, we examined fault growth in a 10 ka magmatically resurfaced region of the Krafla fissure swarm, Iceland. We used a high-resolution (0.5 m) digital elevation model derived from airborne lidar to measure 775 fault profiles with lengths ranging from 0.015 to 2 km. For each fault, we measured the ratio of maximum vertical displacement to length (Dmax/L) and any nondisplaced portions of the fault. We observe that many shorter faults (<200 m) are vertically displaced along most of their surface length and have Dmax/L at the upper end of the global population for comparable lengths. We hypothesize that faults initiate at the surface as fissure-like fractures in resurfaced material as a result of flexural stresses caused by displacements on underlying faults. Faults then accrue vertical displacement following a constant-length model, and grow by dip and strike linkage or lengthening when they reach a bell-shaped displacement-length profile. This hybrid growth mechanism is repeated with deposition of each subsequent syn-kinematic layer, resulting in a remarkably wide distribution of Dmax/L. Our results capture a specific early period in the fault slip-deposition cycle in a volcanic setting that may be applicable to fault growth in sedimentary basins.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48290.1/594236/A-new-model-for-the-growth-of-normal-faults

Geochemical evidence of tropical cyclone controls on shallow-marine sedimentation (Pliocene, Taiwan)
Shahin E. Dashtgard; Ludvig Löwemark; Pei-Ling Wang; Romy A. Setiaji; Romain Vaucher

Abstract: Shallow-marine sediment typically contains a mix of marine and terrestrial organic material (OM). Most terrestrial OM enters the ocean through rivers, and marine OM is incorporated into the sediment through both suspension settling of marine plankton and sediment reworking by tides and waves under fair-weather conditions. River-derived terrestrial OM is delivered year-round, although sediment and OM delivery from rivers is typically highest during extreme weather events that impact river catchments. In Taiwan, tropical cyclones (TCs) are the dominant extreme weather event, and 75% of all sediment delivered to the surrounding ocean occurs during TCs. Distinguishing between sediment deposited during TCs and that redistributed by tides and waves during fair-weather conditions can be approximated using δ13Corg values and C:N ratios of OM. Lower Pliocene shallow-marine sedimentary strata in the Western Foreland Basin of Taiwan rarely exhibit physical evidence of storm-dominated deposition. Instead they comprise completely bioturbated intervals that transition upward into strata dominated by tidally generated sedimentary structures, indicating extensive sediment reworking under fair-weather conditions. However, these strata contain OM that is effectively 100% terrestrial OM in sediment that accumulated in estimated water depths

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48586.1/594237/Geochemical-evidence-of-tropical-cyclone-controls

Transport and accumulation of plastic litter in submarine canyons--The role of gravity flows
Guangfa Zhong; Xiaotong Peng

Abstract: Manned submersible dives discovered plastic litter accumulations in a submarine canyon located in the northwestern South China Sea, ~150 km from the nearest coast. These plastic-dominated litter accumulations were mostly concentrated in two large scours in the steeper middle reach of the canyon. Plastic particles and fragments generally occurred on the upstream-facing sides of large boulders and other topographic obstacles, indicating obstruction during down-valley transportation. Most of the litter accumulations were distributed in the up-valley dipping slopes downstream of the scour centers. This pattern is tentatively linked to turbidity currents, which accelerated down the steep upstream slopes of the scours and underwent a hydraulic jump toward the scour centers before decelerating on the upstream-facing flank. Associated seabed sediment consisted of clayey and sandy silts, with unimodal or bimodal grain-size distributions, which are typical for turbidites. The focused distribution of the litter accumulations is therefore linked to turbidity currents that episodically flush the canyon. Our findings provide evidence that litter dispersion in the deep sea may initially be governed by gravity flows, and that turbidity currents efficiently transfer plastic litter to the deeper ocean floor.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48536.1/594238/Transport-and-accumulation-of-plastic-litter-in

Revisiting Ediacaran sulfur isotope chemostratigraphy with in situ nanoSIMS analysis of sedimentary pyrite
Wei Wang; Yongliang Hu; A. Drew Muscente; Huan Cui; Chengguo Guan ...

Abstract: Reconstructions of ancient sulfur cycling and redox conditions commonly rely on sulfur isotope measurements of sedimentary rocks and minerals. Ediacaran strata (635-541 Ma) record a large range of values in bulk sulfur isotope difference (Δ34S) between carbonate-associated sulfate (δ34SCAS) and sedimentary pyrite (δ34Spy), which has been interpreted as evidence of marine sulfate reservoir size change in space and time. However, bulk δ34Spy measurements could be misleading because pyrite forms under syngenetic, diagenetic, and metamorphic conditions, which differentially affect its isotope signature. Fortunately, these processes also impart recognizable changes in pyrite morphology. To tease apart the complexity of Ediacaran bulk δ34Spy measurements, we used scanning electron microscopy and nanoscale secondary ion mass spectrometry to probe the morphology and geochemistry of sedimentary pyrite in an Ediacaran drill core of the South China block. Pyrite occurs as both framboidal and euhedral to subhedral crystals, which show largely distinct negative and positive δ34Spy values, respectively. Bulk δ34Spy measurements, therefore, reflect mixed signals derived from a combination of syndepositional and diagenetic processes. Whereas euhedral to subhedral crystals originated during diagenesis, the framboids likely formed in a euxinic seawater column or in shallow marine sediment. Although none of the forms of pyrite precisely record seawater chemistry, in situ framboid measurements may provide a more faithful record of the maximum isotope fractionation from seawater sulfate. Based on data from in situ measurements, the early Ediacaran ocean likely contained a larger seawater sulfate reservoir than suggested by bulk analyses.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48262.1/594239/Revisiting-Ediacaran-sulfur-isotope

Recognition of a Middle-Late Jurassic arc-related porphyry copper belt along the southeast China coast: Geological characteristics and metallogenic implications
Jingwen Mao; Wei Zheng; Guiqing Xie; Bernd Lehmann; Richard Goldfarb

Abstract: Recent exploration has led to definition of a Middle-Late Jurassic copper belt with an extent of ~2000 km along the southeast China coast. The 171-153 Ma magmatic-hydrothermal copper systems consist of porphyry, skarn, and vein-style deposits. These systems developed along several northeast-trending transpressive fault zones formed at the margins of Jurassic volcanic basins, although the world-class 171 Ma Dexing porphyry copper system was controlled by a major reactivated Neoproterozoic suture zone in the South China block. The southeast China coastal porphyry belt is parallel to the northeast-trending, temporally overlapping, 165-150 Ma tin-tungsten province, which developed in the Nanling region in a back-arc transtensional setting several hundred kilometers inboard. A new geodynamic-metallogenic model linking the two parallel belts is proposed, which is similar to that characterizing the Cenozoic metallogenic evolution of the Central Andes.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48615.1/594241/Recognition-of-a-Middle-Late-Jurassic-arc-related

Anisovolumetric weathering in granitic saprolite controlled by climate and erosion rate
Clifford S. Riebe; Russell P. Callahan; Sarah B.-M. Granke; Bradley J. Carr; Jorden L. Hayes ...

Abstract: Erosion at Earth's surface exposes underlying bedrock to climate-driven chemical and physical weathering, transforming it into a porous, ecosystem-sustaining substrate consisting of weathered bedrock, saprolite, and soil. Weathering in saprolite is typically quantified from bulk geochemistry assuming physical strain is negligible. However, modeling and measurements suggest that strain in saprolite may be common, and therefore anisovolumetric weathering may be widespread. To explore this possibility, we quantified the fraction of porosity produced by physical weathering, FPP, at three sites with differing climates in granitic bedrock of the Sierra Nevada, California, USA. We found that strain produces more porosity than chemical mass loss at each site, indicative of strongly anisovolumetric weathering. To expand the scope of our study, we quantified FPP using available volumetric strain and mass loss data from granitic sites spanning a broader range of climates and erosion rates. FPP in each case is ≥0.12, indicative of widespread anisovolumetric weathering. Multiple regression shows that differences in precipitation and erosion rate explain 94% of the variance in FPP and that >98% of Earth's land surface has conditions that promote anisovolumetric weathering in granitic saprolite. Our work indicates that anisovolumetric weathering is the norm, rather than the exception, and highlights the importance of climate and erosion as drivers of subsurface physical weathering.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48191.1/593942/Anisovolumetric-weathering-in-granitic-saprolite

"Missing links" for the long-lived Macdonald and Arago hotspots, South Pacific Ocean
L. Buff; M.G. Jackson; K. Konrad; J.G. Konter; M. Bizimis ...

Abstract: The Cook-Austral volcanic lineament extends from Macdonald Seamount (east) to Aitutaki Island (west) in the South Pacific Ocean and consists of hotspot-related volcanic islands, seamounts, and atolls. The Cook-Austral volcanic lineament has been characterized as multiple overlapping, age-progressive hotspot tracks generated by at least two mantle plumes, including the Arago and Macdonald plumes, which have fed volcano construction for ~20 m.y. The Arago and Macdonald hotspot tracks are argued to have been active for at least 70 m.y. and to extend northwest of the Cook-Austral volcanic lineament into the Cretaceous-aged Tuvalu-Gilbert and Tokelau Island chains, respectively. Large gaps in sampling exist along the predicted hotspot tracks, complicating efforts seeking to show that the Arago and Macdonald hotspots have been continuous, long-lived sources of hotspot volcanism back into the Cretaceous. We present new major- and trace-element concentrations and radiogenic isotopes for three seamounts (Moki, Malulu, Dino) and one atoll (Rose), and new clinopyroxene 40Ar/39Ar ages for Rose (24.81 ± 1.02 Ma) and Moki (44.53 ± 10.05 Ma). All volcanoes are located in the poorly sampled region between the younger Cook-Austral and the older, Cretaceous portions of the Arago and Macdonald hotspot tracks. Absolute plate motion modeling indicates that the Rose and Moki volcanoes lie on or near the reconstructed traces of the Arago and Macdonald hotspots, respectively, and the 40Ar/39Ar ages for Rose and Moki align with the predicted age progression for the Arago (Rose) and Macdonald (Moki) hotspots, thereby linking the younger Cook-Austral and older Cretaceous portions of the long-lived (>70 m.y.) Arago and Macdonald hotspot tracks.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48276.1/593943/Missing-links-for-the-long-lived-Macdonald-and

A detrital zircon test of large-scale terrane displacement along the Arctic margin of North America
Timothy M. Gibson; Karol Faehnrich; James F. Busch; William C. McClelland; Mark D. Schmitz ...

Abstract: Detrital zircon U-Pb geochronology is one of the most common methods used to constrain the provenance of ancient sedimentary systems. Yet, its efficacy for precisely constraining paleogeographic reconstructions is often complicated by geological, analytical, and statistical uncertainties. To test the utility of this technique for reconstructing complex, margin-parallel terrane displacements, we compiled new and previously published U-Pb detrital zircon data (n = 7924; 70 samples) from Neoproterozoic-Cambrian marine sandstone-bearing units across the Porcupine shear zone of northern Yukon and Alaska, which separates the North Slope subterrane of Arctic Alaska from northwestern Laurentia (Yukon block). Contrasting tectonic models for the North Slope subterrane indicate it originated either near its current position as an autochthonous continuation of the Yukon block or from a position adjacent to the northeastern Laurentian margin prior to >1000 km of Paleozoic-Mesozoic translation. Our statistical results demonstrate that zircon U-Pb age distributions from the North Slope subterrane are consistently distinct from the Yukon block, thereby supporting a model of continent-scale strike-slip displacement along the Arctic margin of North America. Further examination of this dataset highlights important pitfalls associated with common methodological approaches using small sample sizes and reveals challenges in relying solely on detrital zircon age spectra for testing models of terranes displaced along the same continental margin from which they originated. Nevertheless, large-n detrital zircon datasets interpreted within a robust geologic framework can be effective for evaluating translation across complex tectonic boundaries.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48336.1/593944/A-detrital-zircon-test-of-large-scale-terrane

Quantitative reconstruction of pore-pressure history in sedimentary basins using fluid escape pipes
Joe Cartwright; Chris Kirkham; Martino Foschi; Neil Hodgson; Karyna Rodriguez ...

Abstract: We present a novel method to reconstruct the pressure conditions responsible for the formation of fluid escape pipes in sedimentary basins. We analyzed the episodic venting of high-pressure fluids from the crests of a large anticlinal structure that formed off the coast of Lebanon in the past 1.7 m.y. In total, 21 fluid escape pipes formed at intervals of 50-100 k.y. and transected over 3 km of claystone and evaporite sealing units to reach the seabed. From fracture criteria obtained from nearby drilling, we calculated that overpressures in excess of 30 MPa were required for their formation, with pressure recharge of up to 2 MPa occurring after each pipe-forming event, resulting in a sawtooth pressure-time evolution. This pressure-time evolution is most easily explained by tectonic overpressuring due to active folding of the main source aquifer while in a confined geometry.

View article: https://pubs.geoscienceworld.org/gsa/geology/article-abstract/doi/10.1130/G48406.1/593946/Quantitative-reconstruction-of-pore-pressure

Credit: 
Geological Society of America

3D printing resins in dental devices may be toxic to reproductive health

3D-printable resins, such as those used in dental applications, are marketed as biocompatible

Clear tooth aligners, a multi-billion-dollar industry, use these resins

Many other consumer products use 3D-printable resins

CHICAGO --- Two commercially available 3D-printable resins, which are marketed as being biocompatible for use in dental applications, readily leach compounds into their surroundings. These compounds can induce severe toxicity in the oocyte, the immature precursor cell of the egg, reports a new Northwestern Medicine study conducted in mouse oocytes.

The research team made this unexpected discovery while validating the use of commercially available resins to 3D print materials to culture reproductive cells.

"Our results are important because they demonstrate leachates from commonly used materials in 3D printing slated as 'biocompatible' but may have adverse effects on reproductive health," said Francesca Duncan, co-corresponding author of the study and assistant professor of obstetrics and gynecology at Northwestern University Feinberg School of Medicine. "There is a critical need to better understand the identity and biological impact of compounds that leach from these materials."

The final study was published in the journal Chemosphere on January 26.

While there have been a few previous studies investigating potential toxicities due to exposure to 3D-printed materials, there have been no studies investigating the potential reproductive toxicities induced by these materials in mammalian models.

"Despite the revelations surrounding BPA almost 20 years ago, it is still rare that the potential impact new materials may have on reproductive health are rigorously and systematically studied despite their ubiquitous nature in our day-to-day lives," Duncan said.

The clear tooth aligner market that uses resins such as Dental SG (DSG) and Dental LT (DLT) has become a multi-billion-dollar business in recent years, Duncan said, with some companies utilizing 3D-printing techniques in manufacturing due to their ability to rapidly produce products.

Duncan and colleagues characterized the leachates of the resins using mass spectrometry and identified Tinuvin-292, a commercial light stabilizer that is commonly used in the production of plastic materials.

The results of this study potentially reach well beyond just the 3D-printing space, however, Duncan said, because Tinuvin-292 is a common additive used in the production of many different types of plastic consumer products.

But even beyond dental applications, 3D-printed materials are being used more often due to recent technological advancements that make them easy to produce.

While the results of the study only provide evidence for egg toxicity of these materials in an in vitro setting, whether there are similar effects in vivo still needs to be examined, the scientists said. This is especially the case for DLT resins, which are intended for making oral retainers that must stay in the mouth for long periods of time, leading to extended exposure in the body.

"The results demonstrate reproductive toxicity should be a priority when characterizing all materials humans may come into contact with either in a medical setting or in their day-to-day lives," Duncan said.

In terms of next steps, the scientists plan to investigate whether in vivo exposures to DSG and DLT resins cause egg toxicity similar to what occurs in vitro, examine whether there are sex differences in reproductive toxicity in response to DSG and DLT, and examine human exposure levels to Tinuvin-292.

Credit: 
Northwestern University

From heat to spin to electricity: Understanding spin transport in thermoelectric devices

image: Thermoelectric materials will allow the efficient conversion of waste industrial heat into electricity. But to create effective thermoelectric materials, their underlying physics must be well understood.

Image: 
Macrovector on Freepik

Thermoelectric materials, which can generate an electric voltage in the presence of a temperature difference, are currently an area of intense research; thermoelectric energy harvesting technology is among our best shots at greatly reducing the use of fossil fuels and helping prevent a worldwide energy crisis. However, there are various types of thermoelectric mechanisms, some of which are less understood despite recent efforts. A recent study from scientists in Korea aims to fill one such gap in knowledge. Read on to understand how!

One such mechanism is the spin Seebeck effect (SSE), which was discovered in 2008 by a research team led by Professor Eiji Saitoh of the University of Tokyo, Japan. The SSE is a phenomenon in which a temperature difference between a nonmagnetic and a ferromagnetic material creates a flow of spins. For thermoelectric energy harvesting purposes, the inverse SSE is especially important. In certain heterostructures, such as yttrium iron garnet--platinum (YIG/Pt), the spin flow generated by a temperature difference is converted into a charge current, offering a way to generate electricity from the inverse SSE.

Because this spin-to-charge conversion is relatively inefficient in most known materials, researchers have tried inserting an atomically thin layer of molybdenum disulfide (MoS2) between the YIG and Pt layers. Though this approach has resulted in enhanced conversion, the underlying mechanisms behind the role of the 2D MoS2 layer in spin transport remain elusive.

To tackle this knowledge gap, Professor Sang-Kwon Lee of the Department of Physics at Chung-Ang University, Korea, recently led an in-depth study on the topic, published in Nano Letters. Colleagues from Chung-Ang University, along with Professor Saitoh, joined the effort to understand the effect of 2D MoS2 on the thermoelectric power of YIG/Pt.

To this end, the scientists prepared two YIG/MoS2/Pt samples with different morphologies in the MoS2 layer, as well as a reference sample without MoS2 altogether. They built a measurement platform in which a temperature gradient could be imposed, a magnetic field applied, and the voltage generated by the ensuing spin flow monitored. Interestingly, they found that the inverse SSE, and in turn the thermoelectric performance of the whole heterostructure, can be either enhanced or diminished depending on the size and type of MoS2 used. In particular, using a holey MoS2 multilayer between the YIG and Pt layers yielded a 60% increase in thermoelectric power compared with YIG/Pt alone.
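
In such a measurement, the thermopower is essentially the slope of the measured voltage plotted against the applied temperature difference, and the enhancement is the ratio of the slopes for the two samples. The short sketch below illustrates that bookkeeping only; the data points and the resulting enhancement figure are invented for demonstration and are not the study's measurements.

# Illustrative only: estimating a spin Seebeck thermopower from voltage vs.
# temperature-difference data and comparing two samples. The numbers below
# are made up for demonstration and are not the study's measurements.
import numpy as np

def thermopower(delta_T, voltage):
    """Return the slope of a linear fit V = S * dT (thermopower S)."""
    slope, _intercept = np.polyfit(delta_T, voltage, 1)
    return slope

# Hypothetical data: temperature difference across the sample (K) and the
# inverse-SSE voltage measured across the Pt layer (microvolts).
dT = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
v_reference  = np.array([0.41, 0.79, 1.22, 1.60, 2.01])   # YIG/Pt
v_holey_mos2 = np.array([0.65, 1.28, 1.93, 2.57, 3.21])   # YIG/holey-MoS2/Pt

s_ref = thermopower(dT, v_reference)
s_mos2 = thermopower(dT, v_holey_mos2)
print(f"Reference thermopower:  {s_ref:.3f} uV/K")
print(f"With holey MoS2 layer:  {s_mos2:.3f} uV/K")
print(f"Enhancement: {100 * (s_mos2 / s_ref - 1):.0f}%")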

Through careful theoretical and experimental analyses, the scientists determined that this marked increase was caused by the promotion of two independent quantum phenomena that together account for the total inverse SSE: the inverse spin Hall effect and the inverse Rashba-Edelstein effect, both of which convert accumulated spin into a charge current. Moreover, they investigated how the holes and defects in the MoS2 layer altered the magnetic properties of the heterostructure, leading to a favorable enhancement of the thermoelectric effect. Excited about the results, Lee remarks: "Our study is the first to prove that the magnetic properties of the interfacial layer cause spin fluctuations at the interface and ultimately increase spin accumulation, leading to a higher voltage and thermopower from the inverse SSE."

The results of this work represent a crucial piece in the puzzle of thermoelectric materials technology and could soon have real-world implications, as Lee explains: "Our findings reveal important opportunities for large-area thermoelectric energy harvesters with intermediate layers in the YIG/Pt system. They also provide essential information to understand the physics of the combined Rashba-Edelstein effect and SSE in spin transport." He adds that their SSE measurement platform could be of great help to investigate other types of quantum transport phenomena, such as the valley-driven Hall and Nernst effects.

Let us hope that thermoelectric technology progresses rapidly so that we can make our dreams of a more eco-friendly society a reality!

Credit: 
Chung Ang University

Scholars reveal the changing nature of U.S. cities

image: Washington, D.C. and surrounding area.

Image: 
Johannes Uhl

Cities are not all the same, or at least their evolution isn't, according to new research from the University of Colorado Boulder.

These findings, out this week in Nature Communications Earth and Environment and Earth System Science Data, buck the historical view that most cities in the United States developed in similar ways. Using a century's worth of urban spatial data, the researchers found a long history of urban size (how big a place is) "decoupling" from urban form (the shape and structure of a city), leading to cities not all evolving the same--or even close.

The researchers hope that by providing this look at the past with this unique data set, they'll be able to glimpse the future, including the impact of population growth on cities or how cities might develop in response to environmental factors like sea level rise or wildfire risk.

"We can learn so much more about our cities about and urban development, if we know how to exploit these kinds of new data, and I think this really confirms our approach," said Stefan Leyk, a geography professor at CU Boulder and one of the authors on the papers.

"It's not just the volume of data that you take and throw into a washing machine. It's really the knowing how to make use of the data, how to integrate them, how to get the right and meaningful things out there."

It's projected that by 2050, more than two-thirds of humans will live in urban areas. What those urban areas will look like, however, is unclear, given limited knowledge of the history of urban areas, broadly speaking, prior to the 1970s.

This work and previous research, however, aim to fill that gap by studying property-level data from the real estate company Zillow, obtained through a data-sharing agreement.

This massive dataset, called the Zillow Transaction and Assessment Dataset (ZTRAX), contains about 374 million records, including the year existing buildings were built, going back more than 100 years. The researchers previously used these data to create the Historical Settlement Data Compilation for the United States (HISDAC-US), a unique set of time series data that is freely available for anyone to use.

For this new research, which was funded by the National Science Foundation, the Institute of Behavioral Sciences and Earth Lab, the researchers applied statistical methods and data-mining algorithms to the data, aiming to glean all available information on the nature of settlement development, particularly for metropolitan statistical areas, or high-density geographic regions.

They found that not only were they able to learn more about how to measure urban size, shape and structure (or form), including the number of built-up locations and their structures, but they were also able to see very clear trends in the evolution of these distinct categories of urban development.

In particular, the researchers found that urban form and urban size do not develop in tandem, as previously thought. While size generally moves in a single direction, especially in large cities, form can ebb and flow depending on constraints such as the geography of a place and environmental and technological factors.
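
As a rough illustration of what it means to track size and form separately, the sketch below computes two generic measures from a gridded snapshot of built-up land: a count of built-up cells (size) and a simple bounding-box compactness ratio (form). These stand-in metrics and the toy grids are assumptions chosen for illustration, not the measures used in the published studies.

# A minimal sketch of tracking "urban size" and "urban form" separately from
# gridded built-up data. The metrics (cell count, bounding-box compactness)
# are generic stand-ins, not the specific measures used in the papers.
import numpy as np

def size_and_form(built_up):
    """built_up: 2D boolean array, True where a cell contains built-up area.

    Returns (size, form): size = number of built-up cells;
    form = fraction of the footprint's bounding box that is filled,
    a crude compactness indicator (1.0 = fully compact block).
    """
    size = int(built_up.sum())
    if size == 0:
        return 0, 0.0
    rows, cols = np.nonzero(built_up)
    bbox_area = (rows.max() - rows.min() + 1) * (cols.max() - cols.min() + 1)
    return size, size / bbox_area

# Toy "snapshots" of the same place in two years: growth that sprawls
# outward can increase size while decreasing compactness (form).
year_a = np.zeros((10, 10), dtype=bool)
year_a[4:6, 4:6] = True
year_b = year_a.copy()
year_b[0, 0] = True
year_b[9, 9] = True

print(size_and_form(year_a))  # compact core: small size, high compactness
print(size_and_form(year_b))  # larger size, much lower compactness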

"This (the categorization) is something that is really novel about that paper because this could not be done prior to that because these data were just not available," said Johannes Uhl, the lead author of the paper and a research associate at CU Boulder.

It's remarkable, according to the researchers, that the two articles are being published by different high-impact journals on the same day. While the Nature Communications Earth and Environment piece discusses the substantive application of the data, the Earth System Science Data paper describes the data themselves, the methods used to create them, and their limitations.

"There's so much potential in this current data revolution, as we call it," Leyk commented. "The growth of so-called data journals is a good trend because it's becoming more and more systematic to publish formal descriptions of the data, to learn where the data can be found, and to inform the community what kind of publications are based on these data products. So, I like this trend and we try and make use of it."

This research, however, is far from finished. Next, the researchers hope to examine the categories further, in particular the different groups of cities that emerged in the course of this research, in order to determine a classification system for urban evolution, while also applying the data approach to more rural settings.

"The findings are interesting, but they can of course be expanded into greater detail," Uhl said.

The researchers are also working with colleagues in different fields across the university to explore applications of these data on topics as far-reaching as urban fuel models for nuclear war scenarios, the exposure of the built environment to wildfire risk, and settlement vulnerability to sea level rise.

"The context is a little different in each of these fields, but really interesting," Leyk said. "You realize how important that kind of new data, new information, can become for so many unexpected topics."

Credit: 
University of Colorado at Boulder

Engineers share model for ventilating two patients with one ventilator

image: Simple RC network model of ventilator system and patient, with linear resistance (Rv) and compliance (Cv) for the ventilator tubing system, and linear resistance (R) and compliance (C) for the patient.

Image: 
University of Bath

As Covid-19 continues to put pressure on healthcare providers around the world, engineers at the University of Bath have published a mathematical model that could help clinicians to safely allow two people to share a single ventilator.

Members of Bath's Centre for Therapeutic Innovation and Centre for Power Transmission and Motion Control have published a first-of-its-kind research paper on dual-patient ventilation (DPV), following work which began during the first wave of the virus in March 2020.

Professor Richie Gill, Co-Vice Chair of the Centre for Therapeutic Innovation and the project's principal investigator, says: "We are not advocating dual-patient ventilation, but in extreme situations in parts of the world, it may be the only option available as a last resort. The Covid-19 crisis presents a potential risk of hospitals running short of ventilators, so it is important we explore contingencies, such as how to maximise capacity."

Dual-patient ventilation presents several challenges: accurate identification of patients' lung characteristics over time; close matching of patients suitable to be ventilated together; and the risk of lung damage if airflow is not safely maintained. The BathRC model enables doctors to calculate the amount of restriction required to safely ventilate two patients using one ventilator.

As a practice, DPV is strongly advised against by healthcare bodies given the potential for lung damage, and the team stresses that their findings should only be used in extreme situations where patients outnumber available equipment.

Prof Gill adds: "This isn't something we'd envisage being needed for critical-care patients. However, one of the issues with Covid is that people can need ventilation for several weeks. If you could ventilate two recovering patients with one machine it could free up another for someone in critical need."

Accurate matching and correct resistance are key

No testing has been carried out on patients; instead, the research so far has used artificial lungs of the kind normally used to calibrate ventilators.

The model equates the ventilator circuit to an electrical circuit with resistance and compliance considered equivalent to electrical resistance and capacitance; this enabled a simple calculator to be created.

Prof Gill adds: "The BathRC model directly allows the restriction needed for safely ventilating two dissimilar patients using one ventilator.

"To reduce the risk of damage to a patient's lungs, you need to ensure the correct flow of gases around the circuit by adding resistance. The simplest and most successful method we tried was modelled on an electrical circuit, hence the BathRC model name - where RC stands for resistance-compliance."

While dual-patient ventilation has been attempted previously during the Covid-19 pandemic, the paper is the first to provide clinicians with the calculations needed to safely ventilate two patients with one machine. The model predicts tidal lung volumes to within 4%.
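
To see how an RC analogy of this kind can be turned into a calculator, the sketch below treats each patient as a single resistance-compliance compartment being "charged" by a fixed driving pressure and solves for the extra restriction that equalizes the delivered volumes. It is a simplified single-compartment illustration under stated assumptions (pressure-controlled ventilation, linear lungs, invented parameter values), not the published BathRC formulae.

# Conceptual sketch of the RC analogy: a patient with airway resistance R
# (cmH2O.s/L) and compliance C (L/cmH2O), ventilated in pressure-controlled
# mode, fills like a capacitor charging through a resistor. Illustrative
# only; this is not the published BathRC calculator.
import math

def tidal_volume(p_drive, r, c, t_insp):
    """Delivered tidal volume (L) for driving pressure p_drive (cmH2O)."""
    return c * p_drive * (1.0 - math.exp(-t_insp / (r * c)))

def added_restriction(p_drive, r, c, t_insp, v_target, r_max=200.0):
    """Extra series resistance that reduces the delivered volume to v_target.

    Solved by bisection; valid when v_target is below the unrestricted volume.
    """
    lo, hi = 0.0, r_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tidal_volume(p_drive, r + mid, c, t_insp) > v_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two dissimilar "patients" sharing one pressure setting (invented values):
p, t_i = 15.0, 1.0                                        # cmH2O, seconds
stiff = tidal_volume(p, r=10.0, c=0.03, t_insp=t_i)       # low compliance
compliant = tidal_volume(p, r=10.0, c=0.06, t_insp=t_i)   # high compliance
print(f"Unrestricted volumes: {stiff:.3f} L vs {compliant:.3f} L")

# Restrict the more compliant patient's limb until the volumes match:
r_add = added_restriction(p, r=10.0, c=0.06, t_insp=t_i, v_target=stiff)
print(f"Added restriction: {r_add:.1f} cmH2O.s/L")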

In addition to further testing, some hurdles remain before clinicians could safely attempt dual-patient ventilation using the BathRC model. The team plans to publish further research soon into how to create an adjustable airflow restrictor.

The paper, A simple method to estimate flow restriction for dual ventilation of dissimilar patients: The BathRC model, is published in PLOS ONE.

The University of Bath has several decades of world-leading expertise in the analysis and design of fluid systems, including modelling of ventilators.

Credit: 
University of Bath

Getting to net zero -- and even net negative -- is surprisingly feasible, and affordable

image: Regardless of the pathway we take to become carbon neutral by 2050, the actions needed in the next 10 years are the same.

Image: 
Jenny Nuss/Berkeley Lab

Reaching zero net emissions of carbon dioxide from energy and industry by 2050 can be accomplished by rebuilding U.S. energy infrastructure to run primarily on renewable energy, at a net cost of about $1 per person per day, according to new research published by the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), the University of San Francisco (USF), and the consulting firm Evolved Energy Research.

The researchers created a detailed model of the entire U.S. energy and industrial system to produce the first detailed, peer-reviewed study of how to achieve carbon-neutrality by 2050. According to the Intergovernmental Panel on Climate Change (IPCC), the world must reach zero net CO2 emissions by mid-century in order to limit global warming to 1.5 degrees Celsius and avoid the most dangerous impacts of climate change.

The researchers developed multiple feasible technology pathways that differ widely in remaining fossil fuel use, land use, consumer adoption, nuclear energy, and bio-based fuels use but share a key set of strategies. "By methodically increasing energy efficiency, switching to electric technologies, utilizing clean electricity (especially wind and solar power), and deploying a small amount of carbon capture technology, the United States can reach zero emissions," the authors write in "Carbon Neutral Pathways for the United States," published recently in the scientific journal AGU Advances.

Transforming the infrastructure

"The decarbonization of the U.S. energy system is fundamentally an infrastructure transformation," said Berkeley Lab senior scientist Margaret Torn, one of the study's lead authors. "It means that by 2050 we need to build many gigawatts of wind and solar power plants, new transmission lines, a fleet of electric cars and light trucks, millions of heat pumps to replace conventional furnaces and water heaters, and more energy-efficient buildings - while continuing to research and innovate new technologies."

In this transition, very little infrastructure would need "early retirement," or replacement before the end of its economic life. "No one is asking consumers to switch out their brand-new car for an electric vehicle," Torn said. "The point is that efficient, low-carbon technologies need to be used when it comes time to replace the current equipment."

The pathways studied have net costs ranging from 0.2% to 1.2% of GDP, with higher costs resulting from certain tradeoffs, such as limiting the amount of land given to solar and wind farms. In the lowest-cost pathways, about 90% of electricity generation comes from wind and solar. One scenario showed that the U.S. can meet all its energy needs with 100% renewable energy (solar, wind, and bioenergy), but it would cost more and require greater land use.
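
A rough back-of-the-envelope check shows how the headline figure of about $1 per person per day sits inside this range. The population and GDP values below are assumptions chosen for illustration, not numbers taken from the study.

# Back-of-the-envelope check that "$1 per person per day" corresponds to a
# fraction of a percent of GDP. Population and GDP figures are assumptions
# for illustration, not values from the study.
US_POPULATION = 330e6   # people (assumed)
US_GDP = 21e12          # dollars per year (assumed)

annual_cost = 1.0 * US_POPULATION * 365      # $1/person/day over a year
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion")
print(f"Share of GDP: {100 * annual_cost / US_GDP:.2f}%")   # roughly 0.6%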

"We were pleasantly surprised that the cost of the transformation is lower now than in similar studies we did five years ago, even though this achieves much more ambitious carbon reduction," said Torn. "The main reason is that the cost of wind and solar power and batteries for electric vehicles have declined faster than expected."

The scenarios were generated using new energy models complete with details of both energy consumption and production - such as the entire U.S. building stock, vehicle fleet, power plants, and more - for 16 geographic regions in the U.S. Costs were calculated using projections for fossil fuel and renewable energy prices from the DOE Annual Energy Outlook and the NREL Annual Technology Baseline report.

The cost figures would be lower still if they included the economic and climate benefits of decarbonizing our energy systems. For example, less reliance on oil will mean less money spent on oil and less economic uncertainty due to oil price fluctuations. Climate benefits include the avoided impacts of climate change, such as extreme droughts and hurricanes, avoided air and water pollution from fossil fuel combustion, and improved public health.

The economic costs of the scenarios are almost exclusively capital costs from building new infrastructure. But Torn points out there is an economic upside to that spending: "All that infrastructure build equates to jobs, and potentially jobs in the U.S., as opposed to sending money overseas to buy oil from other countries. There's no question that there will need to be a well-thought-out economic transition strategy for fossil fuel-based industries and communities, but there's also no question that there are a lot of jobs in building a low-carbon economy."

The next 10 years

An important finding of this study is that the actions required in the next 10 years are similar regardless of long-term differences between pathways. In the near term, we need to increase generation and transmission of renewable energy, make sure all new infrastructure, such as cars and buildings, is low carbon, and, for now, maintain current natural gas capacity for reliability.

"This is a very important finding. We don't need to have a big battle now over questions like the near-term construction of nuclear power plants, because new nuclear is not required in the next ten years to be on a net-zero emissions path. Instead we should make policy to drive the steps that we know are required now, while accelerating R&D and further developing our options for the choices we must make starting in the 2030s," said study lead author Jim Williams, associate professor of Energy Systems Management at USF and a Berkeley Lab affiliate scientist.

The net negative case

Another important achievement of this study is that it's the first published work to give a detailed roadmap of how the U.S. energy and industrial system can become a source of negative CO2 emissions by mid-century, meaning more carbon dioxide is taken out of the atmosphere than added.

According to the study, with higher levels of carbon capture, biofuels, and electric fuels, the U.S. energy and industrial system could be "net negative" to the tune of 500 million metric tons of CO2 removed from the atmosphere each year. (This would require more electricity generation, land use, and interstate transmission to achieve.) The authors calculated the cost of this net negative pathway to be 0.6% of GDP - only slightly higher than the main carbon-neutral pathway cost of 0.4% of GDP. "This is affordable to society just on energy grounds alone," Williams said.

When combined with increasing CO2 uptake by the land, mainly by changing agricultural and forest management practices, the researchers calculated that the net negative emissions scenario would put the U.S. on track with a global trajectory to reduce atmospheric CO2 concentrations to 350 parts per million (ppm) at some distance in the future. The 350 ppm endpoint of this global trajectory has been described by many scientists as what would be needed to stabilize the climate at levels similar to pre-industrial times.

Credit: 
DOE/Lawrence Berkeley National Laboratory

Germline whole exome sequencing reveals the potential role of hereditary predisposition and therapeutic implications in small cell lung cancer, a tobacco-related cancer

Note: This study is scheduled for publication in the journal Science Translational Medicine.

A study presented today by Dr. Nobuyuki Takahashi of the Center for Cancer Research (CCR), National Cancer Institute (NCI), Bethesda, Md. at the IASLC World Conference on Lung Cancer Singapore demonstrates that small cell lung cancer (SCLC) may have an inherited predisposition and lays the foundation for understanding the interaction between genotype and tobacco exposure in exacerbating SCLC risk as well as potential therapeutic implications. Because tobacco is the dominant carcinogen, secondary causes of lung cancer are often diminished in perceived importance, especially in SCLC, the most lethal lung cancer. SCLC is almost exclusively related to tobacco and comprises 15% to 20% of all lung cancers.

The study was conducted by researchers at the NCI CCR including Drs. Takahashi, Camille Tlemsani, and Lorinc Pongor, and led by Dr. Anish Thomas of the Developmental Therapeutics Branch. To explore the genetic basis of SCLC, they sequenced germline whole exomes of 87 patients (77 with SCLC, 10 with extrapulmonary small cell) and compared these with an independent SCLC cohort and with cancer-free non-Finnish European individuals from the Exome Aggregation Consortium (ExAC) cohort. They also evaluated clinical characteristics associated with the germline genotype.

Among 607 cancer-predisposition and SCLC-related genes, the researchers discovered 42 deleterious variants in 35 genes in 38 (43.7%) of the patients. Identification of the variants influenced medical management and family member testing for 9 (10.3%) patients. Six germline mutations were also found in the independent cohort of 79 patients with SCLC, including three of the same variants (MUTYH G386D, POLQ I421Rfs*7, and RNASEL E265X). Tumor whole exome sequencing confirmed loss of heterozygosity of the MLH1, BRCA2, and SMARCA4 genes.

Consistent with a contribution to cancer predisposition, patients with MLH1, BRCA2, and MUTYH germline mutations had personal and family histories of multiple cancers, including lung cancers and SCLC. Unselected patients with SCLC in the cohort were more likely to carry germline RAD51D, CHEK1, BRCA2, and MUTYH mutations than healthy controls in the ExAC cohort. Pathogenic germline mutations were significantly associated with the likelihood of having first-degree relatives with cancer or lung cancer (odds ratios: 1.82 and 2.60, respectively) and with longer recurrence-free survival following platinum-based chemotherapy (hazard ratio: 0.46, p = 0.002), independent of known prognostic factors including sex, stage, and age at diagnosis. Finally, the researchers tested the therapeutic relevance of these observations in an SCLC patient with a pathogenic BRIP1 mutation, who achieved a tumor response (a 64% decrease in tumor size) with a combination of a topoisomerase 1 inhibitor and a poly (ADP-ribose) polymerase inhibitor.

"The study opens new avenues for directed cancer screening for patients and their families, as well as subtyping and targeted therapies of SCLC, currently treated as a single entity", Dr. Takahashi reported.

Credit: 
International Association for the Study of Lung Cancer

Ions in molten salts can go 'against the flow'

video: The film shows how ions move in an electric field. The lithium ions attract fluoride and chloride towards the cathode, and only the iodide moves towards the anode.

Image: 
Walz, M.-M. and van der Spoel, D.

In a new article published in the scientific journal Communications Chemistry, a research group at Uppsala University show, using computer simulations, that ions do not always behave as expected. In their research on molten salts, they were able to see that, in some cases, the ions in the salt mixture they were studying affect one another so much that they may even move in the "wrong" direction - that is, towards an electrode with the same charge.

Research on next-generation batteries is under way in numerous academic disciplines. Researchers at the Department of Cell and Molecular Biology at Uppsala University have developed and studied a model for alkali halides, of which ordinary table salt (sodium chloride) is the best-known example. If these substances are heated to several hundred degrees Celsius, they become electrically conductive liquids known as "molten salts". Molten salts are already used in energy contexts: for concentrated solar power in the Sahara desert and as electrolytes in molten-salt batteries that can be used for large-scale storage of electricity.

Despite their widespread use, some of the basic properties of molten salts are not yet fully understood. When it comes to batteries, optimising conductivity is a frequent goal. To produce a battery that is as efficient as possible, knowing what happens to individual ions is vital. This is what the Uppsala researchers are now investigating with their simulations.

"In the long run, the purpose of this research is to develop physical models for biological molecules. But these salts are relatively simple and make a good test bed," says Professor David van der Spoel, the group leader for the modelling project.

However, the researchers' simulations show that the salts are not as simple as they may seem at first glance, and that they have some interesting properties, especially if various alkali halides are mixed together.

In a simplified theory, ions that move in an electric field (for example in a battery) do not interact with each other and are affected solely by the electric field. In their newly published study, the researchers were able to demonstrate that this is not always true. The study shows how, in a mixture of lithium ions with ions of fluoride, chloride and iodide, the lighter anions, fluoride and chloride, move towards the negative cathode along with the lithium ions in a (simulated) battery electrolyte.

"The negative ions are attracted both by the lithium ions and by the positive anode, and the net effect of these forces makes the lighter anions move slowly towards the cathode, since the positive lithium ions are also moving in that direction," says the first author of the study, Marie-Madeleine Walz.

In their continued research, the group will develop a water model to study the interaction of water molecules with ions. Their investigation will include, for example, how the properties of ions are affected by an electric field when there is water in the mixture.

Credit: 
Uppsala University