Tech

Holograms increase solar energy yield

image: A holographic light collector separates the colors of sunlight and directs them to the solar cells.

Image: 
R.K. Kostuk, University of Arizona

The energy available from sunlight is 10,000 times more than what is needed to supply the world's energy demands. Sunlight has two main properties that are useful in the design of renewable energy systems. The first is the amount of power falling on a fixed area, like the ground or a person's roof; this quantity varies with the time of day and the season. The second property is the colors, or spectrum, of the sunlight.

One way to capture solar energy is to use solar cells that directly turn sunlight into electricity. In a solar module like those that people place on their roofs, many cells are assembled on a rigid panel, connected to one another, sealed, and covered with protective glass. A solar cell works best when certain colors of sunlight fall on it, and a panel works best when its whole area is covered by photocells. However, some panel area is needed to connect the cells, and the cells' shape may not allow all of the remaining panel area to collect sunlight. These effects make the solar panel less efficient than it could be. Capturing as much of the sunlight falling on a solar panel as possible is critical to efficiently harnessing solar energy.

Researchers at the University of Arizona recently developed an innovative technique to capture the unused solar energy that illuminates a solar panel. As reported in the Journal of Photonics for Energy (JPE), they created special holograms that can be easily inserted into the solar panel package. Each hologram separates the colors of sunlight and directs them to the solar cells within the solar panel. This method can increase the amount of solar energy converted by the solar panel over the course of a year by about 5 percent. This will reduce both the cost and the number of solar panels needed to power a home, a city, or a country.

The research was supported by the QESST Engineering Research Center, which is sponsored by the US National Science Foundation and US Department of Energy to address the challenge of transforming electricity generation to sustainably meet growing demands for energy.

Low cost, sustainable design

Designed by PhD student Jianbo Zhao, under the supervision of Raymond K. Kostuk, professor of electrical and computer engineering and optical sciences, and in collaboration with fellow PhD student Benjamin Chrysler, the holographic light collector combines a low-cost holographic optical element with a diffuser. The optical element is situated symmetrically at the center of the photovoltaic module to obtain the maximum effective light collection.

The team computed the annual energy yield improvement for Tucson, Arizona, and presented a reproducible method for evaluating the power collection efficiency of the holographic light collector as a function of the sun angles at different times of day, in different seasons, and at different geographical locations.
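
The sun-angle bookkeeping such an evaluation requires can be sketched in a few lines. Below is a minimal solar-geometry calculation using the standard declination and hour-angle approximations; the latitude is Tucson's, but the sampling and day choices are illustrative rather than taken from the paper.

```python
import math

def solar_elevation(lat_deg, day_of_year, hour):
    """Solar elevation angle in degrees, from standard approximations.

    `hour` is local solar time; atmospheric refraction is ignored.
    """
    decl = -23.45 * math.cos(2 * math.pi * (day_of_year + 10) / 365)  # declination, deg
    hour_angle = 15 * (hour - 12)                                     # deg from solar noon
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(d)
                + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_elev))

# Compare sun paths for Tucson (latitude 32.2 N) at the two solstices.
for day, label in [(172, "summer solstice"), (355, "winter solstice")]:
    elevations = [solar_elevation(32.2, day, h) for h in range(24)]
    sunlit = [e for e in elevations if e > 0]
    print(f"{label}: {len(sunlit)} sunlit hours sampled, "
          f"peak elevation {max(sunlit):.1f} deg")
```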

According to JPE Editor-in-Chief Sean Shaheen at University of Colorado Boulder, the collector and associated method are especially noteworthy because they are low-cost and scalable as well as impactful: "The enhancement of approximately five percent in annual yield of solar energy enabled by this technique could have large impact when scaled to even a small fraction of the 100s of gigawatts of photovoltaics being installed globally. Professor Kostuk's team has demonstrated their holographic approach with a low-cost material based on gelatin, which is readily manufactured in large quantity. And while gelatin is normally derived from animal collagen, progress in lab-derived versions has made it likely that synthetic alternatives could be used at scale."

Credit: 
SPIE--International Society for Optics and Photonics

Skoltech researchers propose an attractive, cheap organic material for batteries

image: Cover of ACS Applied Energy Materials Volume 4 Issue 5

Image: 
ACS Applied Energy Materials

A new report by Skoltech scientists and their colleagues describes an organic material for the next generation of energy storage devices, whose structure follows an elegant molecular design principle. The work was recently published in ACS Applied Energy Materials and made the cover of the journal.

As the modern world relies ever more heavily on energy storage devices, it is becoming increasingly important to implement sustainable battery technologies that are friendlier to the environment, easy to dispose of, reliant only on abundant elements, and cheap. Organic batteries are desirable candidates for such purposes. However, organic cathode materials that simultaneously store a lot of energy per unit mass, charge quickly, endure many cycles, and can be easily produced on a large scale remain underdeveloped.

To address this problem, researchers from Skoltech proposed a simple redox-active polyimide. It was synthesized by heating a mixture of an aromatic dianhydride and meta-phenylenediamine, both easily accessible reagents. The material showed promising features in various types of energy storage devices, such as lithium-, sodium- and potassium-based batteries: high specific capacities (up to ~140 mAh/g), relatively high redox potentials, decent cycling stability (up to 1000 cycles), and the ability to charge quickly.

The new material's energy and power outputs were superior to those of its previously known isomer, which is derived from para-phenylenediamine. With the help of collaborators from the Institute of Problems of Chemical Physics of the Russian Academy of Sciences, the team showed that there were two reasons for the better performance of the new polyimide. First, it had smaller particles and a much higher specific surface area, which enabled easier diffusion of the charge carriers. Second, the spatial arrangement of the neighboring imide units in the polymer allowed a more energetically favorable binding of metal ions, which increased the redox potentials.

"This work is interesting not just because another organic cathode material was researched", - says Roman Kapaev, a Skoltech PhD student who designed this study, - "What we propose is a new molecular design principle for battery polyimides, which is using aromatic molecules with amino groups in meta positions as building blocks. For a long time, scientists have paid little attention to this structural motif, and used para-phenylenediamine or similar structures instead. Our results are a good hint for understanding how the battery polyimides should be designed on a molecular level, and it might lead to cathode materials with even better characteristics".

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Building a better LED bulb

LED lightbulbs offer considerable advantages over other types of lighting. Being more efficient, they require much less electricity to operate. They do not give off unwanted heat the way old-school incandescent bulbs do, and the best of them long outlast even fluorescent lightbulbs.

But LEDs are not problem-free. Questions linger over suspected links between overexposure to the blue-tinted light produced by today's standard LED bulbs and health concerns such as fatigue, mood disorders, and insomnia. Plus, higher prices can prompt lightbulb shoppers to weigh other options.

A University of Houston research team led by Jakoah Brgoch, associate professor of chemistry in the College of Natural Sciences and Mathematics and principal investigator in the Texas Center for Superconductivity, is developing an LED bulb that emits most of its energy from the safer violet segment of the visible light spectrum. Instead of just masking the blue light, they are developing a unique class of luminescent materials called phosphors that absorb a violet LED's single-color emission and convert the light to cover the majority of the visible spectrum.

"Our group is creating phosphors that operate, not with the conventional blue LED chip that nearly every LED light bulb uses today, but with a violet LED chip. This basically moves away from blue to violet as the base source and then converts the violet LED light into the broad-spectrum white light that we see," Brgoch explained. "Our ultimate goal is for this new violet-based bulb to be as energy efficient as possible and also cheap, eventually making new lighting technology marketable to consumers."

Results of their research were recently published in ACS Applied Materials and Interfaces, a journal of the American Chemical Society.

At this point, you might be looking at your favorite lamp's standard LED bulb and finding its white light to be just fine. But technically speaking, there actually is no such thing as pure white light.

Hold a prism up to that bulb, and you'll see its light separated into wavelengths that show a beautiful array of color bands ranging from violet to red; this is what scientists call the visible spectrum of light. (If your prism isn't handy, then imagine having your own tiny rainbow. It would look much the same.)

Your lamp light looks white because your eyes and brain work together to blend human perception of those separate bands of color into a white light that may at this moment be illuminating the words you read. Different types of lightbulbs emphasize different parts of the visible spectrum of light.

Engineers at lighting companies manipulate the balance to create a specific ambiance. A little more red yields a warm, mellow white light that is nice in a living room, while cool blue tones give off crisp white light better for office lighting. But outside the laboratory, the LEDs' tendency toward blue has been hard to avoid.

"Sometimes you recognize it - those are the cheapest LED lightbulbs. And then sometimes it looks like a nice warm white light. But even in the most expensive lightbulbs, if it's based on a blue LED, there is still a significant component of blue light sneaking through," the professor explained.

Lately scientists have been focusing on how light frequencies affect health.

"With the advent of LED lighting, companies have started trying to understand how humans interact with light and, more importantly, how light interacts with humans," Brgoch said. "As you sit in your office, the blue hues in your light are a great thing because they help you stay alert. But that same light at night might keep you awake. This is the balance you have to strike. It's about following a natural circadian cycle without disruption."

Sleep studies reveal that nighttime overexposure to blue frequencies can alter hormones like melatonin, sometimes leading to insomnia, disturbed sleep cycles, and other problems. Too much blue-light exposure also is suspected in cataract formation. Notably, urban dwellers living amid LED-based street lights, traffic lights, and lighted commercial signs experience more day-and-night LED exposure than suburbanites do.

"That's not to say we should just remove all the blue light from your lightbulbs. You need some of the blue spectrum. It's not about eliminating the blue, it's about keeping it to a reasonable level. That's what we're seeking with our work," said graduate research assistant Shruti Hariyani, an author of the paper.

Back in the lab, Brgoch and his team are focused on identifying phosphors and discovering which are most feasible, in energy efficiency and economy, to advance to prototype bulbs. "We look at finding new materials as a way to also help reduce the cost of these lights. Whenever you have more materials available, patent licensing costs go down and that makes the bulbs cheaper. So that's one of our driving forces," Brgoch said.

In the quest for what Hariyani calls a human-friendly light, the research team is busy testing those potential materials.

"Hearing myself say, 'this is different, this is new' when we find the right phosphor that can pair with violet - I guess that is my Eureka moment," she said.

For research not directly tied to the LED project, Brgoch and Hariyani recently were honored with the 2021 Chemistry of Materials Lectureship and Best Paper Award. The award, from the American Chemical Society Division of Inorganic Chemistry, recognizes outstanding influence across the field of materials chemistry and research conducted as a team endeavor.

Credit: 
University of Houston

Incentivized product reviews: Positive to a fault?

ITHACA, N.Y. - It stands to reason that the more one is compensated for performing a task, the greater the incentive to do a good job and the better one feels about doing it.

But what if the task is writing an objective review of a company or service? Does the compensation blur the lines of objectivity?

Kaitlin Woolley, assistant professor of marketing in the Samuel Curtis Johnson Graduate School of Management, Cornell SC Johnson College of Business, wondered the same thing.

"You often receive emails after a purchase, offering you a chance to win a gift card to the company in exchange for writing a review," she said. "That seemed problematic to me - you're buying reviews from your customers. I was interested in how incentives affect what customers write in their reviews."

Woolley found that offering direct compensation for posting written reviews results in a greater proportion of positive versus negative emotion across a variety of product and service experiences, which she tested using two natural language-processing software systems and human judges. Whether overly glowing reviews are always good for a company, however, is another question.
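
The relative-positivity measure itself is easy to picture. The toy scorer below counts positive versus negative emotion words; the tiny word lists are stand-ins for the two natural language-processing systems (and human judges) actually used in the study.

```python
# Toy relative-positivity scorer: fraction of emotion words that are positive.
# These minimal lexicons are illustrative stand-ins, not the study's tools.
POSITIVE = {"great", "love", "excellent", "enjoyed", "amazing", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "disappointing", "poor"}

def relative_positivity(review: str) -> float:
    words = review.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return pos / total if total else 0.5  # neutral when no emotion words found

incentivized = "Great product, love it! Excellent service."
organic = "Good value but the packaging was disappointing."
print(relative_positivity(incentivized))  # 1.0
print(relative_positivity(organic))       # 0.5
```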

Woolley is the lead author of "Incentives Increase Relative Positivity of Review Content and Enjoyment of Review Writing," which published April 23 in the Journal of Marketing Research. Woolley's co-author, Marissa Sharif, is an assistant professor of marketing at the Wharton School of the University of Pennsylvania.

Online reviews are critical for modern businesses; prior research has found that upward of 90% of consumers consult such reviews at least occasionally before making new purchases.

Statistically, these reviews, Woolley said, create what's known as a "J-shaped distribution," with some bad reviews on one side, a considerable number of good reviews on the other, and a large number of silent customers in the middle. Incentivizing reviews, Woolley said, is an attempt to get that silent middle to speak up - in a positive manner.

"The idea is that if companies pay them, maybe that silent group will have more motivation to actually write a review," Woolley said. "Incentives are generally great at increasing motivation and can increase the volume of people writing reviews. But I was curious how it might bias what people write."

Woolley and Sharif attempted to answer this question through a series of seven controlled experiments. Using different products and services (e.g., a video streaming service; a recent fast-food experience), the first four experiments confirmed the hypothesis that incentivizing review-writing increases both customers' enjoyment of the writing process and the relative positivity of review content. Additionally, the fourth showed that the effect is lessened when the incentive is less directly tied to the actual process of review-writing.

In the fifth experiment, participants (some incentivized, some not) were asked to write a review for a popular breakfast cereal, with some asked to read negative information about the company first. Overall, Woolley said, positivity increased in proportion to both the awareness and the type (monetary vs. nonmonetary) of the compensation. But compensation did not produce positive reviews for a company viewed in a negative light.

For the final two experiments, the researchers recruited Cornell students to complete a review of their current on-campus dining experience or of their spring semester. Half were told in advance that they'd be compensated for their review and others received compensation after the fact. "We found that when you paid students for their dining hall review, it increased positivity by 55%," Woolley said.

Incentivizing reviews can have a positive effect on a company's bottom line, of course, but the investment comes with risks, Woolley said.

"There could be welfare implications for consumers if they are exposed to information that might be overly positive, especially if it doesn't live up to the experience that they're expecting," she said. "I don't know that marketers should just say, 'Let's use incentives to boost review positivity,' because there may be unforeseen negative consequences, too."

Credit: 
Cornell University

How army ants' iconic mass raids evolved

video: Here, in a colony of 25 individually tagged ants, a scout leaves the nest in search of prey [a fire ant pupa, dyed blue]. After she locates it, she lays a pheromone trail home and recruits a raiding column of her nestmates for a collective attack.

Image: 
Chandra, Gal and Kronauer. Courtesy of PNAS

Army ants form some of the largest insect societies on the planet. They are famous in popular culture, most notably from a terrifying scene in Indiana Jones, but they are also ecologically important: they live in very large colonies and consume enormous quantities of arthropods. Because they eat so much of the animal life around them, they must keep moving in order not to run out of food, and this nomadic lifestyle and mass consumption give them a huge impact on arthropod populations across tropical rainforest floors.

Their mass raids are considered the pinnacle of collective foraging behavior in the animal kingdom. The raids are a coordinated hunting swarm of thousands and, in some species, millions of ants. The ants spontaneously stream out of their nest, moving across the forest floor in columns to hunt for food. The raids are one of the most iconic collective behaviors in the animal kingdom. Scientists have studied their ecology and observed their complex behavior extensively. And while we know how these raids happen, we know nothing of how they evolved.

A new study in Proceedings of the National Academy of Sciences led by Vikram Chandra, postdoctoral researcher, Harvard University, Asaf Gal, postdoctoral fellow, The Rockefeller University, and Daniel J.C. Kronauer, Stanley S. and Sydney R. Shuman Associate Professor, The Rockefeller University, combines phylogenetic reconstructions and computational behavioral analysis to show that army ant mass raiding evolved from a different form of coordinated hunting called group raiding through the scaling effects of increasing colony size.

The researchers discovered that the ancestral state of army ants' mass raids is the rather different-looking group raid that their non-army-ant relatives perform. The evolution of mass raids from group raids happened tens of millions of years ago, and the transition is perfectly correlated with a massive increase in colony size.

"All of these ants are within the subfamily Dorylinae," said Chandra. "The first doryline ants, which were not army ants, lived in small colonies of a few hundred workers. When army ants evolved their foraging behavior from group to mass raiding, they also massively expanded their colony sizes. Army ant colonies now have tens of thousands - and often millions - of ants."

Kronauer's Laboratory of Social Evolution and Behavior at The Rockefeller University studies the clonal raider ant Ooceraea biroi, a relative of army ants. Clonal raider ants are almost the only ant species that can be kept in a lab and experimented on indefinitely. They are also genetically tractable in that researchers can make mutants or transgenic lines to compare. But they are also poorly understood, understudied, and hard to find in the field. The mass raiding of army ants is well studied and described; however group raiding is not. And understanding group raiding is key to understanding the evolutionary trajectory to mass raiding.

"My goal has always been to study how social behavior evolves and is controlled, and how army ants have evolved," said Kronauer. "A few years ago we discovered that the way clonal raider ants forage is through raids that are similar to army ant raids."

To understand how the raids are structured and organized, the team collected a large number of video recordings of many colonies raiding under controlled conditions.

"Our goal was to understand what are the underlying behavioral rules the ants follow and how a raid emerges out of the behavior of individual ants," said Gal. "Tracking individuals in a dense colony is a challenge and does not have a generally applicable solution. This is especially true for small ants that like to form dense clusters such as raider ants."

The researchers overcame this challenge by using custom computer vision software developed in the lab, named anTraX, which tracked and identified the ants based on small color marks painted on their abdomens and thoraxes. The method allowed them to collect accurate trajectories for all the ants for several weeks with minimal human effort, and without requiring expensive high-resolution cameras.
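
anTraX is published software, so the sketch below is not its actual pipeline; it only shows the generic first step that color-mark trackers of this kind build on: thresholding one paint color in HSV space and returning blob centroids with OpenCV. The threshold values and input frame are assumptions.

```python
import cv2
import numpy as np

def find_color_marks(frame_bgr, hsv_lo, hsv_hi, min_area=5):
    """Return centroids of paint marks falling in one HSV color range.

    A generic color-threshold step, not anTraX's actual pipeline.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)              # binary mask of the color
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)     # label 0 is background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Illustrative thresholds for a blue mark; real values need calibration.
blue_lo = np.array([100, 120, 80], dtype=np.uint8)
blue_hi = np.array([130, 255, 255], dtype=np.uint8)
frame = cv2.imread("colony_frame.png")                   # hypothetical video frame
if frame is not None:
    print(find_color_marks(frame, blue_lo, blue_hi))
```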

In a small nest of 25 ants, the researchers used five sets of colors and painted each ant with a unique combination. They placed a single fire ant pupa (the prey) in the foraging arena outside of the nest. The nest sends out a scout to look for food. Once the scout finds the food, she lays a pheromone trail back home. Inside the nest she releases what the researchers believe to be a recruitment pheromone that attracts the ants to her. They spill out of the nest and follow her trail to the food in a group raid.

As the researchers increased the colony size, the number of scouts sent to forage also increased and they began to see more coordinated search activities. This same behavior is seen in army ants, but at a scale of tens of thousands or often millions of ants, with a very large increase in the number of scouts.

"Our behavioral analysis shows that group raids have stereotyped structure, and that they are conserved across the Dorylinae. Because the transition from group to mass raids over evolutionary time is perfectly associated with massive increases in colony size, we wondered whether this had something to do with the transition in raiding behavior," said Chandra. "We gradually scaled our colonies up from 10 through to 100 ants and we saw this very nice increase in coordinated search behavior as you increase colony size."

"Our experiments show that in larger colonies, the ants become more synchronized in their leaving of the nest to scout. In other words, when an ant leaves, the chances that more ants will follow her are higher in large colonies. While we cannot directly say much about the actual mechanism underlying this observation, we know from other complex systems that an increase in synchronicity is a result of stronger positive feedbacks between individuals," said Gal. "In the army ant size limit, this will result in what we know as a mass raid."

But the resemblance to army ant behavior seen as colony size increased was not limited to temporal synchronization. The experiments also showed that army ants and their relatives follow the same set of behavioral rules when searching for food. A few army ants leave the nest at first with no pheromone trail outside. They hesitantly step out, then appear to waver, turn around, and run back in. But other ants inside the nest want to leave, so they push the returning ants back out or take their place. Because each ant that leaves and returns lays a pheromone trail, the group slowly extends the trail from the nest in bursts. This 'pushing party' is how army ants create a column of ants leaving the nest and traveling quite far away. As the researchers expanded the clonal raider ants' colony size, they observed the same behavior.

"It's hard to see in the small colonies because there are so few ants," said Chandra. "But we show statistically that this really is happening and we have instances where it's quite dramatic. So, even in small colonies of clonal raider ants, each ant seems to be following very similar rules for search behavior compared to an army ant, although it might not look like it at first glance. And as you increase colony size, the interactions between these ants lead to greater coordination, you start to see more obvious 'pushing parties' and you start to actually see spontaneous columns of ants leaving the nest."

To test this further, the team experimented with two colonies of 5,000 workers each, an order of magnitude larger than natural colonies of this species. In these studies the raids displayed all four characteristic features of army ants' mass raids: a large number of participating ants, a bifurcating trail that enables raids on multiple prey sources, recruitment outside the nest at the raid front, and spontaneous raid initiation.

At small colony sizes, these rules manifest as group raids, and as colony size increases - either experimentally or over evolutionary time - these rules give rise to mass raids. The team concluded that expansions in colony size in the ancestors of army ants are sufficient to have caused the transition from group raiding to mass raiding behavior.

"Probably the most common pattern is that collective behavior evolves via natural selection acting on and tweaking the interaction rules that the individual animals follow," said Kronauer. "But our study is a nice example of a different mechanism: scaling effects associated with group size can give you dramatically different outcomes in terms of collective behavior, even though the individual rules don't change much."

Gal agreed, "Of course, it has been long known that changing group size can have a dramatic effect on emergent collective behavior. This has been shown both theoretically and experimentally. We have now shown that this effect can also be harnessed by evolution, and that collective behavior can be adapted over evolutionary timescales without actually modifying the behavior of individuals."

As far as is known, the coordinated behavior of clonal raider ants is one of the most complex social behaviors that can be induced or studied in the lab. The authors are currently working on a detailed study of how individual ants behave during the course of the raid, and how the structure of the raid responds to variation in environment and colony composition.

"We suspect that the ants specialize to some extent on specific tasks," said Chandra. "There's probably some very interesting division of labor going on, and there's also clearly complex communication - the ants use several different pheromones to talk to each other and to organize the raid. And there are several decisions the colony must make in the course of the raid. It's an incredibly rich behavior and there are many questions we could ask in the future and we're laying the groundwork for that."

Credit: 
Harvard University, Department of Organismic and Evolutionary Biology

Slender robotic finger senses buried items

image: MIT researchers developed a "Digger Finger" robot that digs through granular material, like sand and gravel, and senses the shapes of buried objects.

Image: 
Image courtesy of Radhen Patel, Edward Adelson, et al.

Over the years, robots have gotten quite good at identifying objects -- as long as they're out in the open.

Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

The research will be presented at the next International Symposium on Experimental Robotics. The study's lead author is Radhen Patel, a postdoc in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

Seeking to identify objects buried in granular material -- sand, gravel, and other types of loosely packed particles -- isn't a brand new quest. Previously, researchers have used technologies that sense the subterranean from above, such as Ground Penetrating Radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

"So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it's feeling," says Adelson. "That would be helpful if you're trying to find and disable buried bombs, for example." Making that idea a reality meant clearing a number of hurdles.

The team's first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane's pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
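
The shape-from-shading step can be illustrated with classic photometric stereo, which is the principle (though not GelSight's exact calibrated algorithm) behind recovering geometry from three differently colored, differently angled lights: each RGB channel gives one shading equation, and the per-pixel surface normal is the solution of a 3x3 linear system. The light directions below are invented.

```python
import numpy as np

# Toy photometric stereo: three known light directions (one per LED color),
# per-pixel RGB intensities, and we solve I = L @ n for the surface normal n.
L = np.array([
    [ 0.70,  0.0, 0.70],   # red LED direction (assumed)
    [-0.35,  0.6, 0.70],   # green LED direction (assumed)
    [-0.35, -0.6, 0.70],   # blue LED direction (assumed)
])

def normals_from_rgb(rgb):
    """rgb: (H, W, 3) float image -> (H, W, 3) unit surface normals."""
    h, w, _ = rgb.shape
    n = np.linalg.solve(L, rgb.reshape(-1, 3).T).T     # per-pixel n, up to albedo
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-9
    return n.reshape(h, w, 3)

# Synthetic check: render a flat membrane tilted toward +x and recover it.
true_n = np.array([0.3, 0.0, 0.954])                   # a known unit normal
img = np.clip(L @ true_n, 0, None) * np.ones((4, 4, 3))
print(normals_from_rgb(img)[0, 0])                     # ~ [0.3, 0.0, 0.954]
```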

For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. "That saved a lot of complexity and space," says Ouyang. "That's how we were able to get it into such a compact form." The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place, which makes them difficult to penetrate. So, the team added vibration to the Digger Finger's capabilities and put it through a battery of tests.

"We wanted to see how mechanical vibrations aid in digging deeper and getting through jams," says Patel. "We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations." They found that rapid vibrations helped "fluidize" the media, clearing jams and allowing for deeper burrowing -- though this fluidizing effect was harder to achieve in sand than in rice.

They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger's tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains' small size meant the Digger Finger could still sense the general contours of the target object.

Patel says that operators will have to adjust the Digger Finger's motion pattern for different settings "depending on the type of media and on the size and shape of the grains." The team plans to keep exploring new motions to optimize the Digger Finger's ability to navigate various media.

Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. "As we get better at artificial touch, we want to be able to use it in situations when you're surrounded by all kinds of distracting information," says Adelson. "We want to be able to distinguish between the stuff that's important and the stuff that's not."

Credit: 
Massachusetts Institute of Technology

Ancient fish bones reveal non-kosher diet of ancient Judeans, say researchers

Ancient Judeans commonly ate non-kosher fish around the time that such food was prohibited in the Bible, suggests a study published in the peer-reviewed journal Tel Aviv.

This finding sheds new light on the origin of Old Testament dietary laws that are still observed by many Jews today. Among these rules is a ban on eating any species of fish which lacks scales or fins.

The study reports an analysis of ancient fish bones from 30 archaeological sites in Israel and Sinai which date to the more than 2,000-year span from the Late Bronze Age (1550-1130 BCE) until the end of the Byzantine period (640 CE).

The authors say the results call for a rethink of assumptions that long-held traditions were the basis for the food laws outlined in the Pentateuch, the first five books of the Hebrew Bible.

"The ban on finless and scaleless fish deviated from longstanding Judean dietary habits", says Yonatan Adler from Ariel University.

"The Biblical writers appear to have prohibited this food despite the fact that non-kosher fish were often found on the Judean menu. There is little reason to think that an old and widespread dietary taboo lay at the root of this ban".

The Old Testament was penned at different times, beginning in the centuries before the destruction of Jerusalem in 586 BCE and into Hellenistic times (332-63 BCE). A set of passages repeated twice forbids the eating of certain species of fish.

The Book of Leviticus states: "Everything in the waters that does not have fins and scales is detestable to you," and Deuteronomy decrees that "...whatever does not have fins and scales you shall not eat; it is unclean for you."

In both, the references immediately follow a prohibition on 'unclean' pig which has received wide scholarly attention. However, the origins and early history of the seafood ban have not been explored in detail until now.

The authors in this study set out to discover when and how the fish prohibition first arose, and if it was predated by an earlier taboo practiced prior to the editing of the Old Testament passages. They also sought to establish the extent to which the rule was obeyed.

Adler's co-author Omri Lernau from Haifa University analysed thousands of fish remains from dozens of sites in the southern Levant. At many Judean sites dating to the Iron Age (1130-586 BCE), including at the Judean capital city of Jerusalem, bone assemblages included significant proportions of non-kosher fish remains. Another key discovery was evidence of non-kosher fish consumption in Jerusalem during the Persian era (539-332 BCE).

Non-kosher fish bones were mostly absent from Judean settlements dating to the Roman era and later. The authors note that sporadic non-kosher fish remains from this later time may indicate 'some degree of non-observance among Judeans'.

The authors now intend to analyse more fish from around this timeframe to establish when Judeans began to avoid eating scaleless fish and how strictly the prohibition was kept.

Credit: 
Taylor & Francis Group

ED visits for appendicitis, miscarriage fell sharply in first wave of COVID-19 pandemic

Emergency department visits for common conditions such as appendicitis, miscarriage, gallbladder attacks and ectopic pregnancy decreased markedly at the start of the COVID-19 pandemic, but patient outcomes were not worse, found research published in CMAJ (Canadian Medical Association Journal) https://www.cmaj.ca/lookup/doi/10.1503/cmaj.202821.

"These findings are reassuring, as patients who required emergency care in the first wave of the pandemic continued to present to the emergency department, received similar care and had similar outcomes to patients presenting in the prepandemic period," writes Dr. David Gomez, a trauma surgeon at St. Michael's Hospital, Unity Health Toronto and assistant professor of surgery, University of Toronto, with coauthors.

The researchers compared emergency department visits over 2 periods, from January 1 to July 1, 2019, and January 1 to June 30, 2020. During this period, there were 39 691 emergency department visits for abdominal and gynecological conditions, including 15 964 (40%) for appendicitis, 12 733 (32%) for miscarriage, 8457 (21%) for gallbladder attacks (cholecystitis) and 2537 (6%) for ectopic pregnancy.

Emergency department visits declined sharply at the start of the pandemic, with a 20% to 39% reduction in visits for appendicitis and miscarriage. This translates to 1087 fewer visits for appendicitis over 11 weeks of the pandemic period and 984 fewer patients seeking care for miscarriage over 14 weeks. Over the total study period, just over half of patients (52%) were hospitalized, most (80%) for appendicitis.

Despite fewer emergency department visits, there was no increase in adverse patient outcomes, such as sicker patients presenting or increased rates of death. In addition, among those who presented, management strategies were unchanged.

There are two common explanations for decreased visits to emergency departments: underuse by patients who need care, and an actual reduction in the incidence of these acute conditions. The authors, however, propose a third explanation, one with implications for the delivery of care.

"Our study suggests a third possibility: potential overusage of the emergency department before the pandemic," write the authors. "Avoidance of the emergency department during the pandemic may have resulted in miscarriages being managed through outpatient or virtual clinics without an emergency department visit. For some patients with mild symptoms of uncomplicated appendicitis, their symptoms may have resolved without presenting to the emergency department or they may have used virtual visits for conservative management."

Public messaging about when to seek emergency care and options for alternative care, such as telemedicine and after-hours clinics, could be employed in future to better use emergency department resources. Importantly, care during the pandemic was safe, and the findings suggest that patients who needed emergency care did seek help.

"These observations have direct relevance to the maintenance of care in future waves of the pandemic," write the authors. "Telemedicine, which became widely available early in the pandemic, may facilitate safe delivery of care outside the emergency department for certain conditions or may be used as part of a pre-emergency department triage strategy."

"A population-based analysis of the impact of the COVID-19 pandemic on common abdominal and gynecological emergency department visits" is published May 25, 2021.

Credit: 
Canadian Medical Association Journal

ORIENT-12 Study demonstrates adding sintilimab to gemcitabine/platinum has clinical benefit

(10 a.m. EDT May 25, 2021 Denver)-- Adding sintilimab to a regimen of gemcitabine and platinum demonstrates clinical benefit over gemcitabine and platinum alone as first-line therapy in patients with locally advanced or metastatic squamous cell non-small cell lung cancer, according to a study published in the Journal of Thoracic Oncology, the official journal of the International Association for the Study of Lung Cancer.

Platinum plus gemcitabine, which showed efficacy in the phase III ECOG 1594 study, is a standard chemotherapy regimen for squamous NSCLC (sqNSCLC) and is commonly used in Asia. Sintilimab, an anti-PD-1 antibody, combined with platinum/gemcitabine has shown encouraging efficacy as first-line therapy for sqNSCLC.

Led by Caicun Zhou, MD, Ph.D., from Shanghai Pulmonary Hospital in Shanghai, China, the researchers conducted a randomized, double-blind, phase 3 study to further compare the efficacy and safety of sintilimab with placebo, both in combination with gemcitabine/platinum.

Dr. Zhou and his co-researchers randomized patients with locally advanced or metastatic sqNSCLC and without EGFR-sensitive mutations or ALK rearrangements. Overall, researchers screened 543 patients from 42 centers throughout China. Of those, 357 patients were randomized into the sintilimab-gemcitabine/platinum group (n=179) and the placebo-gemcitabine/platinum group (n=178).

The primary endpoint was progression-free survival (defined as the time from randomization to the first disease progression or death from any cause), as assessed by the independent radiographic review committee.
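
For readers unfamiliar with this endpoint, the sketch below shows how a progression-free survival hazard ratio is typically estimated from time-to-event data, using the lifelines library on a handful of made-up records. It is not the trial's analysis code, and all numbers are invented.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Made-up PFS data: time to progression/death in months, event flag
# (1 = event observed, 0 = censored), and arm (1 = sintilimab + chemo,
# 0 = placebo + chemo).
df = pd.DataFrame({
    "months": [2.1, 4.3, 5.0, 6.2, 7.8, 9.5, 11.0, 12.9, 3.4, 8.8],
    "event":  [1,   1,   0,   1,   1,   0,   1,    0,    1,   1  ],
    "arm":    [0,   0,   0,   1,   1,   1,   1,    1,    0,   0  ],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
# exp(coef) for "arm" is the hazard ratio; values below 1 favor the new arm.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```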

After a median follow-up period of 12.9 months, patients in the sintilimab-gemcitabine/platinum group continued to demonstrate a meaningful improvement in progression-free survival over the placebo-gemcitabine/platinum group (HR 0.536 [95% CI 0.422-0.681]).

The incidence of treatment-emergent adverse events leading to death was 4.5% and 6.7% in the sintilimab-gemcitabine/platinum group and the placebo-gemcitabine/platinum group, respectively.

"In this study, the risk of disease progression or death was reduced by 37.9 % at interim analysis and by 46.4 % at updated analysis," Dr. Zhou said. "The results from ORIENT-12 could provide a new option for combination therapy in this patient population."

Credit: 
International Association for the Study of Lung Cancer

Harnessing next generation sequencing to detect SARS-CoV-2

image: 96-well plates

Image: 
Peter Duchek

Researchers at the Vienna BioCenter designed a testing protocol for SARS-CoV-2 that can process tens of thousands of samples in less than 48 hours. The method, called SARSeq, is published in the journal Nature Communications and could be adapted to many more pathogens.

The COVID-19 pandemic has lasted more than a year and continues to impact our lives tremendously. Although some countries have launched speedy vaccination campaigns, many still await large-scale immunization schemes and effective antiviral therapies - before that happens, the world urgently needs to regain a semblance of normalcy.

One way to bring us closer to that point is massive parallel testing. Molecular tests that detect the presence of SARS-CoV-2 have become the best way to isolate positive cases and contain the spread of the virus. Several methods have come forward, some that detect viral proteins from nasopharyngeal swabs (such as antigen tests), and some that detect the presence of viral RNA from swabs, gargle samples, or saliva samples (such as reverse transcription and polymerase chain reaction tests, or RT-PCR).

Although antigen tests facilitate some logistical aspects of mass testing, their detection power is relatively weak - infected individuals carrying low amounts of virus remain undetected and can continue to infect other people. PCR tests, on the other hand, are more sensitive because they multiply fragments of the viral genome before scanning samples for the virus. However, they rely on the detection of fluorescent labels that tag viral sequences, which means that pooling samples coming from different people makes the process rather inefficient: if a pool tests positive, all the samples within the pool must be tested again individually to identify the source of the fluorescent signal. Too many machines needed, too expensive, too slow.
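
The inefficiency of that retesting scheme is easy to quantify with the textbook two-stage (Dorfman) pooling calculation: each pool of size k costs one test, and a positive pool costs k individual retests, so the expected number of tests per sample at prevalence p is 1/k + 1 - (1 - p)^k. This is a standard calculation, not one from the SARSeq paper.

```python
# Expected tests per sample under two-stage (Dorfman) pooling: every pool of
# size k is tested once, and each positive pool triggers k individual retests.
def tests_per_sample(p, k):
    return 1 / k + 1 - (1 - p) ** k

for p in (0.001, 0.01, 0.05, 0.20):
    best_k = min(range(2, 101), key=lambda k: tests_per_sample(p, k))
    print(f"prevalence {p:>5.1%}: best pool size {best_k}, "
          f"{tests_per_sample(p, best_k):.3f} tests/sample")
```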

During the very first lockdown, scientists at the Vienna BioCenter were mulling over the situation: there had to be a way to scale up testing. Ulrich Elling, group leader at the Institute of Molecular Biotechnology of the Austrian Academy of Sciences (IMBA), and Luisa Cochella, group leader at the Research Institute of Molecular Pathology (IMP), decided to channel their frustration into an innovative solution. IMP group leader Alexander Stark and IMBA postdoc Ramesh Yelagandula joined their efforts, and the project took off.

Combining their expertise in genomics, RNA biochemistry and data analysis, they developed a method that could enable large groups to be tested for SARS-CoV-2 with the same sensitivity as regular PCR tests. SARSeq, or 'Saliva Analysis by RNA sequencing', achieves high sensitivity, specificity, and the power to process up to 36,000 samples in less than 48 hours. The method is now published in the journal Nature Communications.

The testing principle is conceptually simple: individual patient samples are collected into the wells of a testing plate - one well for each sample. Then, a fragment of viral RNA unique to SARS-CoV-2 - the nucleocapsid gene - is selectively converted to DNA and PCR-amplified in any well that contains it.

"Amplifying the viral material from individual samples to a maximum homogenizes its quantity across positive samples, making SARSeq highly sensitive," explains Luisa Cochella. "Within the thousands of samples that we could test simultaneously, some may contain up to 10 million times more coronavirus particles than others - if we pooled such samples before amplification, those with high amounts of viral material could mask other positive cases."

What distinguishes this first step from the usual PCR test is that each sample receives a unique set of short DNA sequences - or barcodes - that attach to the amplified viral DNA. In a second amplification step, all the samples from one plate are pooled into one well, which receives a second set of unique DNA barcodes. The contents of multiple plates can be pooled once more, as the DNA molecules from each sample carry a unique combination of two sets of barcodes. This pooling and barcoding strategy makes SARSeq highly specific and scalable.
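
A minimal sketch of what that two-level barcoding buys at the analysis stage: every sequencing read carries a well barcode and a plate barcode, and the pair maps the read back to one patient sample. The barcode sequences and read layout below are invented for illustration.

```python
# Sketch of SARSeq-style two-level demultiplexing: a well barcode (added in
# the first PCR) plus a plate barcode (added after pooling) uniquely
# identifies each patient sample. Sequences and layout are invented.
WELL_BC  = {"ACGT": "A1", "TTAG": "A2"}          # first-round barcodes
PLATE_BC = {"GGCA": "plate1", "CATG": "plate2"}  # second-round barcodes

def demultiplex(read):
    """Map a sequencing read to (plate, well); None if a barcode is unknown."""
    plate = PLATE_BC.get(read[:4])   # assumed layout: plate barcode first,
    well = WELL_BC.get(read[4:8])    # then well barcode, then viral sequence
    return (plate, well) if plate and well else None

counts = {}
for read in ["GGCAACGTATTAAAGGCCTT", "GGCAACGTATTAAAGGCCTT", "CATGTTAGATTAAAGGCCTT"]:
    sample = demultiplex(read)
    if sample:
        counts[sample] = counts.get(sample, 0) + 1
# Samples whose reads match the N-gene amplicon are called positive.
print(counts)   # {('plate1', 'A1'): 2, ('plate2', 'A2'): 1}
```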

"We combine the sensitivity of PCR with the high throughput of Next Generation Sequencing technology, or NGS, the same used to sequence the human genome. The NGS machine processes the pooled samples and tells us which samples contained any SARS-CoV-2 material. The barcodes allow us to distinguish each positive sample from the others, and trace it back to a patient," says Ramesh Yelagandula, first author of the study. Moreover, the NGS-based method allows to test several RNAs in parallel, including RNAs that control the sample quality or RNAs from other pathogens for differential diagnostics.

"The Next Generation Sequencing facility and other colleagues at the Vienna BioCenter were of tremendous help to develop and optimize the method," says Alexander Stark. "With our machines, home-made enzymes, and analysis pipeline, we expect each test to cost less than five Euro."

The testing procedure can run in parallel to existing diagnostics, while being independent of the bottlenecks in supply chains. Therefore, it does not compete with other testing methods for reagents or equipment.

"We developed SARSeq to try and circumvent the limitations of other tests, and to process thousands of samples in parallel. Not only is it an excellent method to detect SARS-CoV-2, but it can also be applied to other respiratory pathogens like the flu virus, the common cold rhinoviruses, and potentially many others," says Ulrich Elling.

The principles behind SARSeq are simple and adaptable to any respiratory pathogen. As the world's population skyrockets along with our proximity to animals, cutting-edge diagnostic methods like SARSeq will be crucial to prevent future diseases from spreading like wildfire.

Credit: 
IMBA- Institute of Molecular Biotechnology of the Austrian Academy of Sciences

Silver attacks bacteria, gets 'consumed'

image: E. coli causes several dramatic changes in silver morphology and structure, such as particle agglomeration and amorphization. These effects lead to clear modification of its electronic properties, as highlighted by the absorption bleaching/blue shift and faster electron dynamics upon exposure to bacteria.

Image: 
Giuseppe M. Paternò

WASHINGTON, May 25, 2021 -- For millennia, silver has been utilized for its antimicrobial and antibacterial properties. Although its use as a disinfectant is widely known, the effects of silver's interaction with bacteria on the silver itself are not well understood.

As antibiotic-resistant bacteria become more and more prevalent, silver has seen steep growth in its use in things like antibacterial coatings. Still, the complex chain of events that lead to the eradication of bacteria is largely taken for granted, and a better understanding of this process can provide clues on how to best apply it.

In Chemical Physics Reviews, by AIP Publishing, researchers from Italy, the United States, and Singapore studied the impacts an interaction with bacteria has on silver's structure.

When monitoring the interaction of silver nanoparticles with a nearby E. coli culture, the researchers found the silver undergoes several dramatic changes. Most notably, the E. coli cells caused substantial transformations in the size and shape of the silver particles.

It is often assumed the silver stays unmodified in this process, but the work done by the team shows this not to be true.

The electrostatic interaction between the silver and the bacteria causes some of the silver particles to dissolve as they release ions that penetrate the bacterial cells. This dissolution modifies the shape of the silver particles, shrinking and rounding them out from triangular shapes into circles.

These effects are even more pronounced if the E. coli cells are pretreated with a molecule to increase the permeability of their membranes before they meet the silver.

"It seems from this study that silver is 'consumed' from the interaction," said Guglielmo Lanzani, one of the authors on the paper and director of the Center for Nano Science and Technology of IIT-Instituto di Tecnologia.

Fortunately, this "consumption" likely does not impact silver's antimicrobial properties, because the effect is so small.

"We think this does not affect the efficiency of the biocidal process and, due to the tiny exchange of mass, the lifetime is essentially unlimited," said Giuseppe Paternò, a researcher at IIT and co-author of the study. "The structural modifications, however, affect the optical properties of the metal nanostructures."

Direct investigations of processes like these are difficult because laboratories are controlled environments that cannot fully capture the complexities of the biological settings in which bacterial cells live.

Nevertheless, the group is planning further experiments to explore the chemical pathways that lead to the structural changes in silver. They hope to uncover why silver works better than other materials as an antibacterial surface, and why bacterial membranes are particularly vulnerable to silver, while other cells remain less affected.

Credit: 
American Institute of Physics

For men, low testosterone means high risk of severe COVID-19

image: A new study from Washington University School of Medicine in St. Louis suggests that, among men, low testosterone levels in the blood are linked to more severe COVID-19. The study contradicts widespread assumptions that higher testosterone may explain why men, on average, develop more severe COVID-19 than women do.

Image: 
SARA MOSER

Throughout the pandemic, doctors have seen evidence that men with COVID-19 fare worse, on average, than women with the infection. One theory is that hormonal differences between men and women may make men more susceptible to severe disease. And since men have much more testosterone than women, some scientists have speculated that high levels of testosterone may be to blame.

But a new study from Washington University School of Medicine in St. Louis suggests that, among men, the opposite may be true: that low testosterone levels in the blood are linked to more severe disease. The study could not prove that low testosterone is a cause of severe COVID-19; low levels could simply serve as a marker of some other causal factors. Still, the researchers urge caution with ongoing clinical trials investigating hormonal therapies that block or lower testosterone or increase estrogen as a treatment for men with COVID-19.

The study appears online May 25 in JAMA Network Open.

"During the pandemic, there has been a prevailing notion that testosterone is bad," said senior author Abhinav Diwan, MD, a professor of medicine. "But we found the opposite in men. If a man had low testosterone when he first came to the hospital, his risk of having severe COVID-19 -- meaning his risk of requiring intensive care or dying -- was much higher compared with men who had more circulating testosterone. And if testosterone levels dropped further during hospitalization, the risk increased."

The researchers measured several hormones in blood samples from 90 men and 62 women who came to Barnes-Jewish Hospital with symptoms of COVID-19 and who had confirmed cases of the illness. For the 143 patients who were admitted to the hospital, the researchers measured hormone levels again at days 3, 7, 14 and 28, as long as the patients remained hospitalized over these time frames. In addition to testosterone, the investigators measured levels of estradiol, a form of estrogen produced by the body, and IGF-1, an important growth hormone that is similar to insulin and plays a role in maintaining muscle mass.

Among women, the researchers found no correlation between levels of any hormone and disease severity. Among men, only testosterone levels were linked to COVID-19 severity. A blood testosterone level of 250 nanograms per deciliter or less is considered low testosterone in adult men. At hospital admission, men with severe COVID-19 had average testosterone levels of 53 nanograms per deciliter; men with less severe disease had average levels of 151 nanograms per deciliter. By day three, the average testosterone level of the most severely ill men was only 19 nanograms per deciliter.

The lower the levels of testosterone, the more severe the disease. For example, those with the lowest levels of testosterone in the blood were at highest risk of going on a ventilator, needing intensive care or dying. Thirty-seven patients -- 25 of whom were men -- died over the course of the study.

The researchers noted that other factors known to increase the risk of severe COVID-19, including advanced age, obesity and diabetes, also are associated with lower testosterone. "The groups of men who were getting sicker were known to have lower testosterone across the board," said first author Sandeep Dhindsa, MD, an endocrinologist at Saint Louis University. "We also found that those men with COVID-19 who were not severely ill initially, but had low testosterone levels, were likely to need intensive care or intubation over the next two or three days. Lower testosterone levels seemed to predict which patients were likely to become very ill over the next few days."

In addition, the researchers found that lower testosterone levels in men also correlated with higher levels of inflammation and an increase in the activation of genes that allow the body to carry out the functions of circulating sex hormones inside the cells. In other words, the body may be adapting to less testosterone circulating in the bloodstream by dialing up its ability to detect and use the hormone. The researchers don't yet know the implications of this adaptation and are calling for more research.

"We are now investigating whether there is an association between sex hormones and cardiovascular outcomes in long COVID-19, when the symptoms linger over many months," said Diwan, who is a cardiologist. "We also are interested in whether men recovering from COVID-19, including those with long COVID-19, may benefit from testosterone therapy. This therapy has been used in men with low levels of sex hormones, so it may be worth investigating whether a similar approach can help male COVID-19 survivors with their rehabilitation."

Credit: 
Washington University School of Medicine

Egyptian fossil surprise: Fishes thrived in tropics in ancient warm period, despite high ocean temperatures

The Paleocene-Eocene Thermal Maximum, or PETM, was a short interval of highly elevated global temperatures 56 million years ago that is frequently described as the best ancient analog for present-day climate warming.

Fish are among the organisms thought to be most sensitive to warming climates, and by some estimates, tropical sea-surface temperatures during the PETM approached levels lethal to some modern marine fish species.

But newly discovered fish fossils from an eastern Egyptian desert site show that marine fishes thrived in at least some tropical areas during the PETM. The study, from a team of Egyptian scientists and a University of Michigan colleague, provides a snapshot of an ecosystem during an extreme warming event and may provide insights for the future.

"The impact of the PETM event on life at the time is of wide interest. But a major gap in our understanding is how life in the tropics responded, because this region is not well-sampled for many fossil groups," said U-M paleontologist Matt Friedman, co-author of a study published online May 17 in the journal Geology.

"On the basis of the scant evidence we have for fishes--remembering that this Egyptian site provides our first peek from the tropics--they seem to have weathered the PETM surprisingly well, and there are even hints that important diversification in the group might have happened around or just after this time," said Friedman, director of U-M's Museum of Paleontology and an associate professor in the Department of Earth and Environmental Sciences.

The lead author of the Geology paper is Sanaa El-Sayed of Egypt's Mansoura University, a paleontologist who begins doctoral studies at the University of Michigan this fall.

The newly discovered fossil assemblage, known as Ras Gharib A, was excavated from a site in Egypt's Eastern Desert, roughly 200 miles southeast of Cairo and west of the Gulf of Suez and the Sinai Peninsula.

The fossils provide the first clear picture of marine bony fish diversity in the tropics during the PETM. Previous studies estimated that sea surface temperatures in some parts of the tropics likely surpassed 95 degrees Fahrenheit (35 C) at that time, suggesting dire consequences for low-latitude ocean fishes.

But the Egyptian fossils capture an intact ecosystem with diverse fish lineages and a variety of ecologies. The composition of the Ras Gharib A fish community is similar to PETM-aged fish fossils from sites at higher latitudes.

"While the broader evolutionary consequences of the PETM for marine fishes remain little explored, the available paleontological evidence does not suggest a widespread crisis among marine fishes at that time," El-Sayed said. "In fact, the available records reveal that this time might have been a significant episode of evolutionary diversification among key modern fish groups, similar to patterns reported for land-living mammals."

Several factors might help explain why the Ras Gharib A fishes seem to have weathered the PETM.

First, previous estimates of sea-surface temperatures exceeding 95 degrees apply broadly to tropical regions, but temperature data specific to the Ras Gharib A site are not yet available. It's possible that the northern coast of Africa experienced an upwelling of cool water from deeper in the ocean, for example. Or perhaps fishes moved to deeper, cooler waters to avoid the warmest temperatures.

Another possibility is that marine fishes at that time were simply more resilient than researchers had thought. After all, they evolved early in the Cenozoic Era when climates were already several degrees warmer than today.

"A more detailed picture of the setting in which these fishes lived is a key part of the puzzle," Friedman said. "This report really marks the beginning of a research project, and there's much more to do when it comes to studying the fossils themselves and their broader environmental context."

Through international collaborations like this Egyptian project, paleontologists can flesh out the fossil record in important regions like the tropics, helping to fill gaps in the story of life on Earth, Friedman said.

The PETM-aged fish fossils from Ras Gharib A were found in a layer of dark-gray shale and include examples of more than a dozen groups of bony fishes typical of the Eocene, the geological epoch that began with the PETM. Whole fishes are relatively abundant, but many individuals are small, measuring an inch or less in total length.

Percomorph acanthomorphs--a group that includes familiar Michigan fishes like walleye, bass and bluegills--are the most diverse fishes at Ras Gharib A. Other fishes at the site include deep-sea hatchetfish and predatory species called bonytongues, whose relatives live in freshwater today.

The single most abundant fish type in the assemblage is a moonfish from the genus Mene, represented by more than 60 specimens. Still alive today, Mene is now restricted to tropical and subtropical regions of the Indian and Pacific oceans.

But during the PETM, these fish inhabited the tropics and were also found as far north as Denmark, showing how the warm period allowed some creatures to expand their ranges.

So, what insights can these ancient Egyptian fish fossils provide when considering how present-day life on Earth will likely respond to human-caused climate change? One lesson seems to be that different groups of organisms show contrasting responses to extreme warming events.

While the fishes from Ras Gharib A survived and may even have thrived during the PETM, coral reef ecosystems at low latitudes were practically wiped out; clams and snails showed a muted response, and some types of plankton seem to have diversified, according to Friedman.

"Impacts on ecosystems involve the interplay of multiple groups," he said. "The survival of one group in isolation shouldn't be taken as evidence that changing climates are something to brush off."

Also, it's important to keep in mind that while the PETM is the best ancient analog for modern climate change, it's still an imperfect comparison.

By some estimates, humans are now releasing carbon dioxide into the atmosphere at more than 10 times the rate that led to the PETM. During the PETM, global climate responded to the added carbon by warming 9 to 14 degrees Fahrenheit (5 to 8 C) over thousands of years. Today, realistic emissions scenarios put us on track for around half of that warming over just a few centuries, Friedman said.
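
To make that rate comparison concrete, here is a rough back-of-the-envelope calculation. The warming ranges come from the figures above; the durations (5,000 years for the PETM, 300 years for the modern scenario) are illustrative assumptions standing in for "thousands of years" and "a few centuries", not values from the study.

```python
# Back-of-the-envelope comparison of warming rates (illustrative only).
# The warming ranges come from the text; the durations are assumed round
# numbers ("thousands of years" vs. "a few centuries"), not study values.

petm_warming_c = (5.0, 8.0)       # deg C of PETM warming, from the text
petm_duration_yr = 5_000          # assumed duration of PETM warming

modern_warming_c = (2.5, 4.0)     # deg C, "around half" the PETM range
modern_duration_yr = 300          # assumed: "a few centuries"

petm_rate = [w / petm_duration_yr * 100 for w in petm_warming_c]
modern_rate = [w / modern_duration_yr * 100 for w in modern_warming_c]

print(f"PETM:   {petm_rate[0]:.2f}-{petm_rate[1]:.2f} deg C per century")
print(f"Modern: {modern_rate[0]:.2f}-{modern_rate[1]:.2f} deg C per century")
# With these assumptions, the modern warming rate works out roughly an order
# of magnitude faster than the PETM's, even though the total warming is less.
```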

"It's really a sign of how unprecedented the current situation is," he said.

Credit: 
University of Michigan

'Slow slip' earthquakes' hidden mechanics revealed

image: A special kind of antenna, used to image the Earth like an ultrasound scanner, trails behind the research vessel Marcus Langseth. Seismic investigators at the University of Texas Institute for Geophysics used supercomputers at the Texas Advanced Computing Center to analyze a seismic image of a subduction zone in unprecedented detail.

Image: 
UT Jackson School of Geosciences/UTIG

Slow slip earthquakes, a type of slow-motion tremor, have been detected at many of the world's earthquake hotspots, including those found around the Pacific Ring of Fire, but it is unclear how they are connected to the damaging quakes that occur there. Scientists at The University of Texas at Austin have now revealed the earthquakes' inner workings using seismic CT scans and supercomputers to examine a region off the coast of New Zealand known to produce them.

The insights will help scientists pinpoint why tectonic energy at subduction zones is sometimes released gently as slow slip and other times as devastating, high-magnitude earthquakes. The study focused on New Zealand's Hikurangi subduction zone, a seismically active region where the Pacific tectonic plate dives -- or subducts -- beneath the country's North Island.

The research was recently published in the journal Nature Geoscience as part of a special edition focused on subduction zones.

"Subduction zones are the biggest earthquake and tsunami factories on the planet," said co-author Laura Wallace, a research scientist at UT Austin's Institute for Geophysics (UTIG) and GNS Science in New Zealand. "With more research like this, we can really begin to understand the origin of different types of [earthquake] behavior at subduction zones."

The research used novel image processing techniques and computer modeling to test several proposed mechanisms for how slow slip earthquakes unfold, revealing which ones worked best.

The study's lead author, Adrien Arnulf, a UTIG research scientist, said this line of research is important because scientists can understand where and when a large subduction zone earthquake could strike only by first solving the mystery of slow slip.

"If you ignore slow slip, you will miscalculate how much energy is stored and released as tectonic plates move around the planet," he said.

Scientists know that slow slip events are an important part of the earthquake cycle because they occur in similar places and can release as much pent-up tectonic energy as a high magnitude earthquake, but without causing sudden seismic shaking. In fact, the events are so slow, unfolding over the course of weeks, that they escaped detection until only about 20 years ago.

New Zealand's Hikurangi subduction zone is an ideal site to study slow slip quakes because they occur at depths shallow enough to be imaged at high resolution, either by listening to the internal rumblings of the Earth, or by sending artificial seismic waves into the subsurface and recording the echo.

Turning seismic data into a detailed image is a laborious task, but by using techniques similar to those in medical imaging, geoscientists can pick apart the length, shape and strength of the seismic echo to figure out what's going on underground.
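
The article doesn't detail the team's processing, but the core idea of echo-based imaging can be sketched in a few lines: the two-way travel time of a reflected pulse, combined with an assumed wave speed, gives the depth of the reflecting layer. The velocity and travel times below are illustrative assumptions, not values from the Hikurangi survey.

```python
# Minimal sketch of the idea behind echo-based (reflection) seismic imaging:
# a pulse travels down, bounces off a rock boundary, and the two-way travel
# time of the echo gives the reflector's depth. The wave speed and times
# below are illustrative assumptions, not values from the Hikurangi survey.

def reflector_depth_m(two_way_time_s: float, velocity_m_s: float) -> float:
    """Depth of a reflector from the two-way travel time of its echo."""
    return velocity_m_s * two_way_time_s / 2.0  # halve: down trip plus return

# Example: echoes recorded 1.0 s and 2.5 s after the source fires, assuming
# an average seismic velocity of 2,000 m/s in the overlying sediments.
for t_s in (1.0, 2.5):
    depth = reflector_depth_m(t_s, 2_000.0)
    print(f"echo at {t_s:.1f} s -> reflector roughly {depth:,.0f} m deep")
```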

In the current study, Arnulf was able to extract even more information by programming algorithms on Lonestar5, a supercomputer at the Texas Advanced Computing Center, to look for patterns in the data. The results told Arnulf how weak the fault had become and where pressure was being felt within the Earth's joints.

He worked with UT Jackson School of Geosciences graduate student James Biemiller, who used Arnulf's parameters in a detailed simulation he had developed for modeling how faults move.

The simulation showed tectonic forces building in the crust, then releasing through a series of slow-motion tremors, just like the slow slip earthquakes detected at Hikurangi over the past two decades.

According to the scientists, the real success of the research was not that the model worked but that it showed them where the gaps are in the physics.

"We don't necessarily have the nail-in-the-coffin of how exactly shallow slow slip occurs," said Biemiller, "but we tested one of the standard nails (rate-state friction) and found it doesn't work as well as you'd expect. That means we can probably assume there are other processes involved in modulating slow slip, like cycles of fluid pressurization and release."

Finding those other processes is exactly what the team hopes their method will help uncover.

The study's seismic data was provided by GNS Science and the New Zealand Ministry of Economic Development. The research was funded by UTIG and an MBIE Endeavour fund for GNS Science. UTIG is a unit of the Jackson School of Geosciences.

Credit: 
University of Texas at Austin

Road verges provide opportunity for wildflowers, bees and trees

image: Road verge

Image: 
Ben Phillips

Road verges cover 1.2% of land in Great Britain - an area the size of Dorset - and could be managed to help wildlife, new research shows.

University of Exeter researchers used Google Earth and Google Street View to estimate that verges account for 2,579 km2 (almost 1,000 square miles) of land.

About 27% of these verges are frequently mown, 41% are wilder grassland, 19% are woodland, and the remaining 13% are scrub.
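
Translated into areas, those percentages break down roughly as follows (a quick sketch using the study's quoted total of 2,579 km2; the scrub share is inferred as the remainder):

```python
# Convert the quoted percentages into approximate areas, using the study's
# estimate of 2,579 km^2 of road verge in Great Britain. The scrub share is
# inferred as the remainder (100 - 27 - 41 - 19 = 13%).

total_km2 = 2_579

shares = {
    "frequently mown": 0.27,
    "wilder grassland": 0.41,
    "woodland": 0.19,
    "scrub (remainder)": 0.13,
}

for habitat, fraction in shares.items():
    print(f"{habitat:>18}: ~{total_km2 * fraction:,.0f} km^2")
```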

There are "significant opportunities" to improve verges by reducing mowing and planting trees, the researchers say.

"Our key message is that there's a lot of road verge in Great Britain and we could manage it much better for nature," said lead author Ben Phillips, of the Environment and Sustainability Institute on Exeter's Penryn Campus in Cornwall.

"About a quarter of our road verges are mown very regularly to make them look like garden lawns - this is bad for wildlife."

Previous research has shown that reducing mowing to just once or twice per year provides more flowers for pollinators, allows plants to set seed and creates better habitats for other animals.

Phillips said: "Some parts of verges need to be mown regularly for safety, but many verges could be mown much less, and this could save money due to reduced maintenance costs.

"We found that only a quarter of frequently mown verges had trees, so there's potential to add trees and shrubs, which will also help to capture carbon.

"But tree planting must be done carefully to avoid damaging flower-rich grass verges, and to prevent any impacts on visibility for drivers, or damage to infrastructure from roots and branches."

Planting trees in some verges could provide a wide range of benefits for people, nature and the environment, and contribute towards the UK government's tree-planting ambitions.

As well as estimating the land area of verges, the study found that 1.8% of Great Britain is covered by hard road surfaces.

The charity Plantlife is currently running a campaign called #NoMowMay, asking gardeners and councils to "lock up your lawnmower" for the month of May.

Credit: 
University of Exeter