Tech

The market advantage of a feminine brand name

Researchers from the University of Calgary, the University of Montana, HEC Paris, and the University of Cincinnati published a new paper in the Journal of Marketing that explores how linguistic features of a name can influence brand perceptions without people even realizing it.

The study, forthcoming in the Journal of Marketing, is titled "Is Nestlé a Lady? The Feminine Brand Name Advantage" and is authored by Ruth Pogacar, Justin Angle, Tina Lowrey, L. J. Shrum, and Frank Kardes.

What do iconic brands Nike, Coca-Cola, and Disney have in common? They all have linguistically feminine names. In fact, the highest-ranking companies on Interbrand's Global Top Brands list for the past twenty years have, on average, more feminine names than lower-ranked companies. How can you tell if a name is linguistically feminine? Easy--does it have two or more syllables and stress on the second or later syllable? Does it end in a vowel? If so, then it is a feminine name. Linguistically feminine names convey "warmth" (good-natured sincerity), which makes people like them better than less feminine names.

A brand's name is incredibly important. In most cases, the name is the first thing consumers learn about a brand. And a brand's name does the work of communicating what the brand represents. For instance, Lean Cuisine conveys the product's purpose. Others, like Reese's Pieces, have rhyming names that promise whimsy and fun. Making a good first impression is critical, so it is not surprising that the market for brand naming services is booming. Boutique naming fees can run as much as $5,000 - $10,000 per letter for brand names in high-stakes product categories like automobiles and technology.

Specifically, the number of syllables in a name, which syllable is stressed, and the ending sound all convey masculine or feminine gender. People automatically associate name length, stress, and ending sound with men's or women's names because most people's names follow certain rules. Women's names tend to be longer, have more syllables, have stress on the second or later syllable, and end with a vowel (e.g., Amánda). Men's names tend to be shorter with one stressed syllable, or with stress on the first of two syllables, and end in a consonant (e.g., Éd or Édward).
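
As a rough illustration of how these cues could be operationalized, the sketch below classifies a name as linguistically feminine when all three cues line up: two or more syllables, stress on the second or later syllable, and a vowel ending. The crude syllable counter, the all-cues-must-match rule, and the hand-supplied stress position are simplifying assumptions for illustration, not the coding scheme used in the paper.

```python
# Illustrative heuristic only, based on the cues described in the article;
# accented vowels and irregular spellings are not handled.
VOWELS = set("aeiouy")

def count_syllables(name: str) -> int:
    """Crude syllable count: number of contiguous vowel groups in the name."""
    name = name.lower()
    count, prev_was_vowel = 0, False
    for ch in name:
        is_vowel = ch in VOWELS
        if is_vowel and not prev_was_vowel:
            count += 1
        prev_was_vowel = is_vowel
    return max(count, 1)

def is_linguistically_feminine(name: str, stressed_syllable: int) -> bool:
    """Feminine cues: 2+ syllables, stress on the 2nd or later syllable,
    and a vowel ending (approximated here by the final letter)."""
    ends_in_vowel = name.lower().strip()[-1] in VOWELS
    return (count_syllables(name) >= 2
            and stressed_syllable >= 2
            and ends_in_vowel)

# Stress positions are supplied by hand, since detecting them automatically
# is non-trivial.
print(is_linguistically_feminine("Amanda", stressed_syllable=2))  # True
print(is_linguistically_feminine("Ed", stressed_syllable=1))      # False
```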

We often relate to brands like people--we love them, we hate them, we are loyal to certain brands but sometimes we cheat. We associate brands with masculine or feminine traits based on the linguistic cues in the name. So, attributes associated with gender - like warmth - become attached to a brand because of its name. "Warmth" is the quality of being good-natured, tolerant, and sincere. Researchers believe that warmth is incredibly important because deep in our evolutionary past, primitive people had to make a quick, critical judgment whenever they encountered someone new--is this stranger a threat or not? In other words--is this stranger dangerous or warm? If the newcomer was not warm, then a fight or flight decision might be called for. People still rely on warmth judgments every day to decide whether someone will be a good partner, employee, or friend.

So, it is no surprise that warmth is an important characteristic of brand personality. And because linguistically feminine names convey warmth, features like ending in a vowel are advantageous for brand names. As Pogacar explains, "We find that linguistically feminine brand names are perceived as warmer and are therefore better liked and more frequently chosen, an effect we term the Feminine Brand Name Advantage."

But does all this matter in terms of dollars and cents? Yes, according to the Interbrand Global Top Brand rankings, which is based on brand performance and strength. Angle says that "By analyzing the linguistic properties of each name on Interbrand's lists for the past twenty years, we find that brands with linguistically feminine names are more likely to make the list. And even more, the higher ranked a brand is, the more likely it is to have a linguistically feminine name."

After observing this feminine brand name advantage, the researchers conducted a series of experiments to better understand what is happening. Participants reported that brands with linguistically feminine names seemed warmer and this increased their purchase intentions. This pattern occurred with well-known brands and made-up brands that study participants had no prior experience with.

There are limitations to the feminine brand name advantage. When a product is specifically targeted to a male audience (e.g., men's sneakers), masculine and feminine brand names are equally well-liked. Furthermore, people like linguistically feminine names for hedonic products, like chocolate, but may prefer masculine names for strictly functional products like bathroom scales.

It is important to note that results may vary based on the linguistic patterns of name gender in the target market country. Lowrey summarizes the study's insights by saying "We suggest that brand managers consider linguistically feminine names when designing new brand names, particularly for hedonic products."

Credit: 
American Marketing Association

Climate change likely drove the extinction of North America's largest animals

image: The study's findings suggest that decreasing hemispheric temperatures and associated ecological changes were the primary drivers of the Late Quaternary megafauna extinctions in North America.

Image: 
Hans Sell

A new study published in Nature Communications suggests that the extinction of North America's largest mammals was not driven by overhunting by rapidly expanding human populations following their entrance into the Americas. Instead, the findings, based on a new statistical modelling approach, suggest that populations of large mammals fluctuated in response to climate change, with drastic decreases of temperatures around 13,000 years ago initiating the decline and extinction of these massive creatures. Still, humans may have been involved in more complex and indirect ways than simple models of overhunting suggest.

Before around 10,000 years ago, North America was home to many large and exotic creatures, such as mammoths, gigantic ground-dwelling sloths, larger-than-life beavers, and huge armadillo-like creatures known as glyptodons. But by around 10,000 years ago, most of North America's animals weighing over 44 kg, also known as megafauna, had disappeared. Researchers from the Max Planck Extreme Events Research Group in Jena, Germany, wanted to find out what led to these extinctions. The topic has been intensely debated for decades, with most researchers arguing that human overhunting, climate change, or some combination of the two was responsible. With a new statistical approach, the researchers found strong evidence that climate change was the main driver of extinction.

Overhunting vs. climate change

Since the 1960s, it has been hypothesized that, as human populations grew and expanded across the continents, the arrival of specialized "big-game" hunters in the Americas some 14,000 years ago rapidly drove many giant mammals to extinction. The large animals did not possess the appropriate anti-predator behaviors to deal with a novel, highly social, tool-wielding predator, which made them particularly easy to hunt. According to proponents of this "overkill hypothesis", humans took full advantage of the easy-to-hunt prey, devastating the animal populations and carelessly driving the giant creatures to extinction.

Not everyone agrees with this idea, however. Many scientists have argued that there is too little archaeological evidence to support the idea that megafauna hunting was persistent or widespread enough to cause extinctions. Instead, significant climatic and ecological changes may have been to blame.

Around the time of the extinctions (between 15,000 and 12,000 years ago), there were two major climatic changes. The first was a period of abrupt warming that began around 14,700 years ago, and the second was a cold snap around 12,900 years ago during which the Northern Hemisphere returned to near-glacial conditions. One or both of these important temperature swings, and their ecological ramifications, have been implicated in the megafauna extinctions.

"A common approach has been to try to determine the timing of megafauna extinctions and to see how they align with human arrival in the Americas or some climatic event," says Mathew Stewart, co-lead author of the study. "However, extinction is a process--meaning that it unfolds over some span of time--and so to understand what caused the demise of North America's megafauna, it's crucial that we understand how their populations fluctuated in the lead up to extinction. Without those long-term patterns, all we can see are rough coincidences."

'Dates as data'

To test these conflicting hypotheses, the authors used a new statistical approach developed by W. Christopher Carleton, the study's other co-lead author, and published last year in the Journal of Quaternary Science. Estimating population sizes of prehistoric hunter-gatherer groups and long-extinct animals cannot be done by counting heads or hooves. Instead, archaeologists and palaeontologists use the radiocarbon record as a proxy for past population sizes. The rationale is that the more animals and humans present in a landscape, the more datable carbon is left behind after they are gone, which is then reflected in the archaeological and fossil records. Unlike established approaches, the new method better accounts for uncertainty in fossil dates.

The major problem with the previous approach is that it blends the uncertainty associated with radiocarbon dates with the process scientists are trying to identify.

"As a result, you can end up seeing trends in the data that don't really exist, making this method rather unsuitable for capturing changes in past population levels. Using simulation studies where we know what the real patterns in the data are, we have been able to show that the new method does not have the same problems. As a result, our method is able to do a much better job capturing through-time changes in population levels using the radiocarbon record," explains Carleton.

North American megafauna extinctions

The authors applied this new approach to the question of the Late Quaternary North American megafauna extinctions. In contrast to previous studies, the new findings show that megafauna populations fluctuated in response to climate change.

"Megafauna populations appear to have been increasing as North American began to warm around 14,700 years ago," states Stewart. "But we then see a shift in this trend around 12,900 years ago as North America began to drastically cool, and shortly after this we begin to see the extinctions of megafauna occur."

And while these findings suggest that the return to near glacial conditions around 12,900 years ago was the proximate cause for the extinctions, the story is likely to be more complicated than this.

"We must consider the ecological changes associated with these climate changes at both a continental and regional scale if we want to have a proper understanding of what drove these extinctions," explains group leader Huw Groucutt, senior author of the study. "Humans also aren't completely off the hook, as it remains possible that they played a more nuanced role in the megafauna extinctions than simple overkill models suggest."

Many researchers have argued that it is an impossible coincidence that megafauna extinctions around the world often happened around the time of human arrival. However, it is important to scientifically demonstrate that there was a relationship, and even if there was, the causes may have been much more indirect (such as through habitat modification) than a killing frenzy as humans arrived in a region.

The authors end their article with a call to arms, urging researchers to develop bigger, more reliable records and robust methods for interpreting them. Only then will we develop a comprehensive understanding of the Late Quaternary megafauna extinction event.

Credit: 
Max Planck Institute for Chemical Ecology

USC biologists devise new way to assess carbon in the ocean

A new USC study puts ocean microbes in a new light with important implications for global warming.

The study, published Tuesday in the Proceedings of the National Academy of Sciences, provides a universal accounting method to measure how carbon-based matter accumulates and cycles in the ocean. While competing theories have often been debated, the new computational framework reconciles the differences and explains how oceans regulate organic carbon across time.

Surprisingly, most of the action involving carbon occurs not in the sky but underfoot and undersea. The Earth's plants, oceans and mud store five times more carbon than the atmosphere. It accumulates in trees and soil, algae and sediment, microorganisms and seawater.

"The ocean is a huge carbon reservoir with the potential to mitigate or enhance global warming," said Naomi Levine, senior author of the study and assistant professor in the biological sciences department at the USC Dornsife College of Letters, Arts and Sciences. "Carbon cycling is critical for understanding global climate because it sets the temperature, which in turn sets climate and weather patterns. By predicting how carbon cycling and storage works, we can better understand how climate will change in the future."

Processes governing how organic matter -- decaying plant and animal matter in the environment akin to the material gardeners add to soil -- accumulates are critical to the Earth's carbon cycle. However, scientists don't have good tools to predict when and how organic matter piles up. That's a problem because a better accounting of organic carbon can inform computer models that forecast global warming and support public policy.

New USC framework can gauge organic carbon increases in the ocean

In recent years, scientists have offered three competing theories to explain how organic matter accumulates, and each has its limitations. One idea is that some organic matter is intrinsically persistent, similar to an orange peel. Another is that carbon is sometimes too diluted for microbes to locate and eat it, as if they were trying to find a single yellow jellybean in a jar full of white ones. And a third is that sometimes the right microbe isn't in the right place at the right time to intercept organic matter due to environmental conditions.

While each theory explains some observations, the USC study shows how this new framework can provide a much more comprehensive picture and explain the ecological dynamics important for organic matter accumulation in the ocean. The solution has wide utility.

For example, it can help interpret data from any condition in the ocean. When linked into a full ecosystem model, the framework accounts for diverse types of microbes, water temperature, nutrients, reproduction rates, sunlight and heat, ocean depth and more. Through its ability to represent diverse environmental conditions worldwide, the model can predict how organic carbon will accumulate in various complex scenarios -- a powerful tool at a time when oceans are warming and the Earth is rapidly changing.

"Predicting why organic carbon accumulates has been an unsolved challenge," said Emily Zakem, a study co-author and postdoctoral scholar at USC Dornsife. "We show that the accumulation of carbon can be predicted using this computational framework."

Assessing the past -- and the future -- of Earth's oceans

The tool can also potentially be used to model past ocean conditions as a predictor of what may be in store for the Earth as the planet warms largely due to manmade greenhouse gas emissions.

Specifically, the model is capable of looking at how marine microbes can flip the world's carbon balance. The tool can show how microbes process organic matter in the water column throughout a given year, as well as at millennial timescales. Using that feature, the model confirms -- as has been previously predicted -- that microbes will consume more organic matter and rerelease it as carbon dioxide as the ocean warms, which ultimately will increase atmospheric carbon concentrations and increase warming. Moreover, the study says this phenomenon can occur rapidly, in a non-linear way, once a threshold is reached -- a possible explanation for some of the whipsaw climate extremes that occurred in Earth's distant past.

"This suggests that changes in climate, such as warming, may result in large changes in organic carbon stores and that we can now generate hypotheses as to when this might occur," Levine said.

Finally, the research paper says the new tool can model how carbon moves through soil and sediment in the terrestrial environment, too, though those applications were not part of the study.

Credit: 
University of Southern California

Photosynthetic bacteria-based cancer optotheranostics

image: Schematic illustration of photosynthetic bacteria-based cancer optotheranostics.

Image: 
Eijiro Miyako from JAIST.

Cancer is one of the most challenging healthcare problems in the world. The development of therapeutic agents with highly selective anti-cancer activities is increasingly attractive due to the lack of tumor selectivity of conventional treatments.

Scientists at the Japan Advanced Institute of Science and Technology (JAIST) have developed a photosynthetic bacteria-based cancer optotheranostic system (Figure 1).

Associate Professor Eijiro Miyako and his team at JAIST discovered that natural purple photosynthetic bacteria (PPSB) can play a key role as a highly active cancer immunotheranostic agent that uses near-infrared (NIR) light in bio-optical windows I and II, thanks to the light-harvesting nanocomplexes in the microbial membrane. NIR light-driven PPSB could serve as an effective "all-in-one" theranostic material for use in deep tumor treatments.

The present work has several advantages over other cancer treatments such as anticancer drugs, nanomedicines, antibodies, and conventional microbial therapies: 1) PPSB show high tumor specificity and are non-pathogenic; 2) strong anticancer activity and multiple functions, including NIR-I-to-NIR-II fluorescence (FL), photothermal conversion, reactive oxygen species (ROS) generation, and high-contrast photoacoustic (PA) imaging, can be expressed simultaneously under NIR light exposure without chemical functionalization or genetic manipulation; and 3) production requires no complicated or expensive procedures, because the bacteria proliferate spontaneously when cultured in inexpensive media.

The present experiments warrant further consideration of this novel theranostic approach for the treatment of refractory cancers. The team believes that the developed technology will advance cancer treatment and lead to more effective medicines.

Credit: 
Japan Advanced Institute of Science and Technology

Tapping into waste heat for electricity by nanostructuring thermoelectric materials

In our ongoing struggle to reduce the use of fossil fuels, technology to directly convert the world's waste heat into electricity stands out as very promising. Thermoelectric materials, which carry out this energy conversion process, have thus recently become the focus of intense research worldwide. Of the various potential candidates applicable over a broad range of temperatures, between 30 and 630 °C, lead telluride (PbTe) offers the best thermoelectric performance. Unfortunately, the outstanding qualities of PbTe are eclipsed by the toxic nature of lead, driving researchers to look into safer thermoelectric semiconductors.

Tin telluride (SnTe) could be an alternative. But it does not perform nearly as well as PbTe, and various methods to improve its thermoelectric performance are actively being studied. There are two main problems with SnTe that lower its figure of merit (ZT): its high thermal conductivity and its low Seebeck coefficient, which determines how much thermoelectric voltage is generated for a given temperature difference. Although researchers have managed to improve these parameters separately, it has proven difficult to do so for both simultaneously in the case of SnTe.
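
For reference, the dimensionless figure of merit combines these quantities as ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature. The short sketch below computes ZT from placeholder values; the numbers are illustrative and are not measurements from this study.

```python
def figure_of_merit(seebeck_v_per_k: float,
                    elec_conductivity_s_per_m: float,
                    thermal_conductivity_w_per_mk: float,
                    temperature_k: float) -> float:
    """Return the dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
    power_factor = seebeck_v_per_k ** 2 * elec_conductivity_s_per_m
    return power_factor * temperature_k / thermal_conductivity_w_per_mk

# Example with made-up but physically plausible numbers (not from the paper):
zt = figure_of_merit(seebeck_v_per_k=200e-6,           # 200 microvolts per kelvin
                     elec_conductivity_s_per_m=1.0e5,  # 10^5 S/m
                     thermal_conductivity_w_per_mk=3.0,
                     temperature_k=923)                # roughly 650 degrees Celsius
print(f"ZT = {zt:.2f}")
```

Lowering κ (for example, by scattering phonons at pores and grain boundaries) or raising S both push ZT upward, which is why the nanostructuring strategy described next targets both at once.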

In a recent study published in Chemical Engineering Journal, a pair of scientists from Chung-Ang University, Korea--Dr. Jooheon Kim and Hyun Ju--came up with an effective strategy to solve this problem. Their approach is based on nanostructuring--producing a material with desired structural properties at the nanometer scale. In this particular case, the scientists produced porous SnTe nanosheets. However, making nanosheets out of SnTe is remarkably complex using standard procedures, which prompted the scientists to devise an innovative synthesis strategy.

They took advantage of another semiconductor: tin selenide (SnSe). This material bears a layered structure that is relatively easy to exfoliate to produce SnSe nanosheets. The researchers submerged these nanosheets in a solution of tartaric acid (C4H6O6) and pure Te under a nitrogen atmosphere to prevent oxidation. What C4H6O6 does is extract Sn-Se pairs from the SnSe nanosheets, thereby allowing the dissolved Te²⁻ anions to naturally replace the Se²⁻ anions in the extracted pairs. Then, the Sn-Te pairs rejoin the original nanosheet in a slightly 'imperfect' way, creating pores and grain boundaries in the material. The result of this whole process is anion-exchanged porous SnTe nanosheets.

The scientists investigated the reaction mechanisms that made these SnTe nanosheets possible and carefully searched for the synthesis conditions that produced the optimal nanoscale morphology. "We found that the nanostructure of the optimal anion-exchanged porous SnTe nanosheets, composed of nanoparticles of only 3 nm in size with defective shapes, led to a substantial reduction in thermal conductivity and a higher Seebeck coefficient compared to conventional bulk SnTe," remarks Kim. "This is a direct result of the introduced nanointerfaces, pores, and defects, which help to 'dissipate' otherwise uniform vibrations in SnTe known as phonons, which compromise thermoelectric properties," he adds. The ZT of the best-performing SnTe nanosheets was 1.1 at a temperature of 650 °C; that is almost three times higher than that of bulk SnTe.

The overall results of the study are very promising in the field of high-performance thermoelectric materials, which is bound to find applications not only in energy generation, but also refrigeration, air conditioning, transportation, and even biomedical devices. Equally important, however, is the insight gained by exploring a new synthesis strategy, as Kim explains: "The unconventional method we employed to obtain porous SnTe nanosheets could be relevant for other thermoelectric semiconductors, as well as in the fabrication and research of porous and nanostructured materials for other purposes."

Most importantly, with thermal energy harvesting being the most sought-after application of thermoelectric materials, this study could help industrial processes become more efficient. Thermoelectric semiconductors will let us tap into the large amounts of waste heat produced daily and yield useful electrical energy, and further research in this field will hopefully pave the way to a more ecofriendly society.

Credit: 
Chung Ang University

Harnessing socially-distant molecular interactions for future computing

image: Lead author FLEET PhD student Marina Castelli (Monash) examines samples in a scanning tunnelling microscope (STM)

Image: 
FLEET

Could long-distance interactions between individual molecules forge a new way to compute?

Interactions between individual molecules on a metal surface extend for surprisingly large distances - up to several nanometers.

A newly published study of how these interactions change the shape of electronic states has potential future applications in the use of molecules as individually addressable units.

For example, in a future computer based on this technology, the state of each individual molecule could be controlled, mirroring binary operation of transistors in current computing.

MEASURING SOCIALLY-DISTANT MOLECULAR INTERACTIONS ON A METAL SURFACE

The Monash-University of Melbourne collaboration studied the electronic properties of magnesium phthalocyanine (MgPc) sprinkled on a metal surface.

MgPc is similar to the chlorophyll responsible for photosynthesis.

By careful, atomically-precise scanning probe microscopy measurements, the investigators demonstrated that the quantum mechanical properties of electrons within the molecules - namely their energy and spatial distribution - are significantly affected by the presence of neighbouring molecules.

This effect - in which the underlying metal surface plays a key role - is observed for intermolecular separation distances of several nanometres, significantly larger than expected for this kind of intermolecular interaction.

These insights are expected to inform and drive progress in the development of electronic and optoelectronic solid-state technologies built from molecules, 2D materials and hybrid interfaces.

DIRECTLY OBSERVING CHANGES IN MOLECULAR ORBITAL SYMMETRY AND ENERGY

The phthalocyanine (Pc) 'four leaf clover' ligand, when decorated with a magnesium (Mg) atom at its centre, closely resembles the chlorophyll pigment responsible for photosynthesis in living organisms.

Metal phthalocyanines are notable for the tunability of their electronic properties, achieved by swapping the central metal atom and peripheral functional groups, and for their ability to self-assemble into highly ordered single layers and nanostructures.

Cutting-edge scanning probe microscopy measurements revealed a surprisingly long-range interaction between MgPc molecules adsorbed on a metal surface.

Quantitative analysis of the experimental results and theoretical modelling showed that this interaction was due to mixing between the quantum mechanical orbitals - which determine the spatial distribution of electrons within the molecule - of neighbouring molecules. This molecular orbital mixing leads to significant changes in electron energies and electron distribution symmetries.

The long range of the intermolecular interaction is the result of the adsorption of the molecule on the metal surface, which "spreads" the distribution of the electrons of the molecule.

"We had to push our scanning probe microscope to new limits in terms of spatial resolution and complexity of data acquisition and analysis", says lead author and FLEET member Dr Marina Castelli.

"It was a big shift in thinking to quantify the intermolecular interaction from the point of view of symmetries of spatial distribution of electrons, instead of typical spectroscopic shifts in energy, which can be more subtle and misleading. This was the key insight that got us to the finish line, and also why we think that this effect was not observed previously."

"Importantly, the excellent quantitative agreement between experiment and atomistic DFT theory confirmed the presence of long-range interactions, giving us great confidence in our conclusions", says collaborator Dr Muhammad Usman from the University of Melbourne.

The outcomes of this study could have significant implications for the development of future solid-state electronic and optoelectronic technologies based on organic molecules, 2D materials and hybrid interfaces.

Credit: 
ARC Centre of Excellence in Future Low-Energy Electronics Technologies

How to improve gender equity in medicine

Gender equity and racial diversity in medicine can promote creative solutions to complex health problems and improve the delivery of high-quality care, argue authors in an analysis in CMAJ (Canadian Medical Association Journal).

"[T]here is no excuse for not working to change the climate and environment of the medical profession so that it is welcoming of diversity," writes lead author Dr. Andrea Tricco, Knowledge Translation Program, Unity Health, and the University of Toronto, with coauthors. "The medical profession should be professional, be collegial, show mutual respect, and facilitate the full potential and contribution of all genders, races, ethnicities, religions and nationalities for the benefit of patient care."

The authors describe the root causes of gender inequity in society as well as medicine, and how to improve equity based on current evidence. Gender inequity in medicine is a long-standing problem and the time to act is now, they urge.

"The history of gender inequity in Canadian medical leadership is long, despite women outnumbering men in medical schools now for over a quarter of a century," says coauthor Dr. Ainsley Moore, a family physician and associate professor, Department of Family Medicine, McMaster University, Hamilton, Ontario. "Only 8 of the past 152 presidents of the Canadian Medical Association were women, and it took 117 years for a woman to be appointed dean of a medical faculty, and only 8 have been appointed since. The time is ripe for addressing this systemic problem."

For racialized women, the issue of equity is even more pronounced. "The effects of systemic and structural racism have resulted in racialized women experiencing a double-jeopardy of race and gender bias, thereby exaggerating their underrepresentation in leadership positions in academic medicine," says coauthor Dr. Nazia Peer, research program manager of the Knowledge Translation Program, Unity Health.

Addressing gender equity requires a multi-pronged approach targeting the medical system as well as individual behaviours.

Solutions include

Ensuring core principles of equity, diversity, inclusion, mutual respect, collegiality and professionalism are embedded in all policies and all stages of medicine

Communicating gender statistics

Getting buy-in from professional organizations at the national, provincial and local levels

Championing structural and behavioural change from the top

Role modelling

Diverse search committees for hiring

Flexible schedules, non-gendered parental leave and family-friendly policies

Career support and peer mentoring

"Equity will only be realized when everyone -- regardless of gender and other differences -- experiences equity in pay, promotions and other opportunities. There is no better time than now to implement policies to advocate for and support equity in medicine," they conclude.

Credit: 
Canadian Medical Association Journal

Ferns in the mountains

image: In a new study in the Journal of Biogeography, an international team of researchers led by Harvard University assembled one of the largest global assessments of fern diversity. The study integrated digitized herbarium data, genetic data, and climatic data and discovered that 58% of fern species occur in eight principally montane hotspots that comprise only 7% of Earth's land area. Within these hotspots, patterns of heightened diversity were amplified at elevations above 1000 meters.

Image: 
Copyright 2021 Jacob Suissa.

Earth is home to millions of known species of plants and animals, but by no means are they distributed evenly. For instance, rainforests cover less than 2 percent of Earth's total surface, yet they are home to 50 percent of Earth's species. Oceans account for 71 percent of Earth's total surface but contain only 15 percent of Earth's species. What drives this uneven distribution of species on Earth is a major question for scientists.

In a paper published February 16 in the Journal of Biogeography an international team of researchers led by Jacob S. Suissa, Ph.D. Candidate in the Department of Organismic and Evolutionary Biology, Harvard University, and co-authors Michael A. Sundue, University of Vermont, Burlington, and Weston L. Testo, University of Gothenburg, Sweden, assembled the first global assessment of fern diversity. The study integrated digitized herbarium data, genetic data, and climatic data to determine where most fern species occur and why.

The researchers relied on recently digitized natural history collections to map the diversity of life on Earth, building a database of over one million fern specimens with latitude and longitude coordinates from all over the world. After extensive cleaning of the database to remove records with poor coordinates, roughly 800,000 occurrence records remained. They then divided the Earth into one-degree latitude by longitude grid cells and determined the number of species occurring within each cell. The researchers discovered that the majority of fern species occur in eight principally montane hotspots: the Greater Antilles, Mesoamerica, the tropical Andes, the Guianas, southeastern Brazil, Madagascar, Malesia and East Asia.
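
A minimal sketch of that gridding step might look like the following: snap each cleaned occurrence record to a one-degree latitude by longitude cell and count the distinct species per cell. The column names, the sample records, and the pandas-based approach are assumptions for illustration, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical cleaned occurrence records (species name, decimal coordinates)
records = pd.DataFrame({
    "species":  ["Cyathea arborea", "Cyathea arborea", "Elaphoglossum sp.",
                 "Hymenophyllum sp.", "Elaphoglossum sp."],
    "latitude":  [18.3, 18.7, -0.4, -0.6, 18.4],
    "longitude": [-66.5, -66.9, -78.5, -78.2, -66.6],
})

# Assign each record to a one-degree grid cell (floor of each coordinate)
records["cell"] = list(zip(records["latitude"].floordiv(1).astype(int),
                           records["longitude"].floordiv(1).astype(int)))

# Species richness = number of distinct species per grid cell
richness = records.groupby("cell")["species"].nunique().sort_values(ascending=False)
print(richness)
```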

"Natural history collections are the primary data for all biodiversity studies, and they are the backbone to this study," said Sundue. "Scientists have been making collections and curating them for hundreds of years. But only recently, the digitization of these records has allowed us to harness their collective power."

Testo agreed, "There has been a large effort over the last decade to digitize the impressive collection of specimens contributed by thousands of collectors and experts in the field and deposited in museums or natural history collections. For this study we used over 800,000 digitized occurrence records for nearly 8,000 fern species."

The research was conducted in multiple phases and each phase built upon the previous. "The first thing we wanted to know is where are the centers of ferns' biodiversity, and second, why?" Suissa said. "We wanted to understand the biogeographical patterns of fern diversity. Understanding these patterns in a major plant lineage like ferns allows us to take a step towards understanding why there is an uneven distribution of species around the world."

One major finding is that 58 percent of fern species occur in eight principally montane biodiversity hotspots that comprise only 7 percent of Earth's land area. They also found that within these hotspots, patterns of heightened diversity were amplified at elevations greater than 1000 meters above sea level.

"On a global scale we find a peak in species richness per area around 2000 to 3000 meters in elevation, roughly midway up some of these tropical mountains," explained Suissa. "And we think this is primarily due to a very unique ecosystem that occurs in this elevation band, which in the tropics is the cloud forest."

While ferns grow in a variety of ecosystems, including moist shaded forest understories and rocky desert outcrops, many species are actually epiphytic, meaning they grow on the branches of trees. Suissa and colleagues believe these epiphytes explain the mid to upper elevation peak in species richness of ferns in the tropics.

Once the researchers determined the biogeographical patterns of fern diversity, they investigated why these particular patterns exist. Examining ecological data, including climate and soil data, they showed that within each hotspot there was a strong correlation between increased climatic space and increased species richness and diversification, suggesting that ferns occurring in tropical mountains are forming new species more rapidly than those elsewhere.

"People tend to think of places such as the Amazon rainforest as biodiversity hotspots," said Suissa. "But for ferns, it is tropical and subtropical mountains that harbor a disproportionate number of rapidly diversifying species relative to the land area they occupy. Ferns may be speciating within these tropical mountain systems because of the variation in habitats that occur across elevational gradients. For instance, at the base of a tropical mountain it is hot all year round, and at the summit it is perennially cold. Essentially, these dynamics create many different ecosystems within a small geographic space."

Unlike mountains in temperate regions, the tropics have very low temperature seasonality. This means that each ecosystem across an elevational transect in a tropical mountain remains roughly the same temperature year-round. Effectively, it is harder for plant and animal species to move between elevational zones if they are adapted to one spot on the mountain. Researchers think that these dynamics between the difference in climates at different elevations and the climatic stability within each elevational site allow for plants and animals to diversify more rapidly in tropical mountains.

Going forward the researchers hope to conduct more small-scale population-based studies in young tropical mountains to physically test these hypotheses, and to hopefully also add more specimens and continue to expand digitization of museum collections for study.

Credit: 
Harvard University, Department of Organismic and Evolutionary Biology

Ageing offshore wind turbines could stunt the growth of renewable energy sector

The University of Kent has led a study highlighting the urgent need for the UK's Government and renewable energy industries to give vital attention to decommissioning offshore wind turbines approaching their end-of-life expectancy by 2025. The research reveals that the UK must decommission approximately 300 and 1,600 early-model offshore wind turbines by 2025 and 2030, respectively.

Urgent focus is needed now to make proactive use of the remaining years before turbines installed in the 1990s and early 2000s cease to be safely functional in 2025, in order to prevent safety lapses, potentially huge costs, and the irretrievable loss of the skills required for safe decommissioning.

The research shows that these original turbines have an approximate lifetime of 20 to 25 years, but this expectation is vulnerable to factors that occur whilst in use. Within each early-model turbine, there exist thousands of components and parts that have worn down, been replaced or repaired without records of when they were installed, and are nearing the end of their life expectancy.

There is no existing breakdown of the potential costs of the activities that would surround decommissioning offshore wind turbines, nor is there an alternative plan for their decommissioning.

As the turbines exceed their safety remit, the sector is also set to lose the unique skillset of the engineers who originally installed and maintained these early models, as they are now approaching professional retirement. To combat this loss of skills, researchers advise the urgent creation of a database of era-specific skills and operating techniques.

The study also finds that profitable operations can be established to counter the cost of decommission. Recycling of existing parts into new wind turbine operations has the potential to be hugely cost-effective for the sector, as well as ensuring that renewable means of production are at the forefront of future operations.

Dr Mahmoud Shafiee, Reader in Mechanical Engineering at Kent's School of Engineering and Digital Arts said: 'Without a dedicated effort from the UK Government and renewable energy sector to plan the safe and efficient decommissioning of these offshore wind turbines, there is a risk of enormous and potentially unsalvageable cost to the renewable energy sector. The cost of maintaining outdated turbines is multiple times that of new installations, so for the benefit of our future hopes of renewable energy, we call on the Government and sector leaders to act now.'

Credit: 
University of Kent

Reserve prices under scarcity conditions improve with a dynamic ORDC, new research finds

Historically, most electric transmission system operators have used heuristics (rules based on experience) to hold sufficient reserves to guard against unforeseen large outages and maintain system reliability. However, the expansion of competitive wholesale electricity markets has led to efforts to translate reserve heuristics into competitively procured services. A common approach constructs an administrative demand curve for valuing and procuring least cost reserve supply offers. The technical term for this is the operating reserve demand curve (ORDC). A new paper quantifies how better accounting for the temperature-dependent probability of large generator contingencies with time-varying dynamic ORDC construction improves reserve procurement.

The paper, "Dynamic Operating Reserve Procurement Improves Scarcity Pricing in PJM," by researchers at Carnegie Mellon University, was published in Energy Policy.

"A dynamic ORDC increases reserve prices when there is higher probability of scarcity conditions, but has minimal effects on total market payments," says Jay Apt, a Professor and Co-Director of the Carnegie Mellon Electricity Industry Center, who co-authored the paper. "The results are directly relevant to the modeled two-settlement electricity market in PJM, which is currently studying changes to its ORDC."

The researchers pulled their data from PJM Interconnection, the largest system operator by load in North America. PJM, which serves 65 million customers in 13 mid-Atlantic states, manages a competitive two-settlement wholesale electricity market, with day-ahead and real-time settlements. The researchers chose PJM due to its current policy relevance, though they write that the results are broadly relevant to any market operators with similar two-settlement market designs and proportion of conventional generation resources.

PJM and other system operators are aware that probabilistic methods will better quantify the optimal quantity and type of reserves to hold across different timeframes. But competitively procuring reserves to reliably serve electricity load given uncertainty requires either computationally intensive stochastic optimization methods or improved heuristics for reserve procurement demands. Traditionally, uncertainties have included large conventional generator failures and load forecast error, and developing heuristics to respond to these uncertainties has led wholesale market operators to integrate reserve procurement via an administrative demand curve. The ORDC has become a market-based method for reserve procurement as it recognizes that the optimal quantity of reserves to hold will depend on a probability distribution of near real-time deviations between forecast and actual load and generation availability.

Following previous research demonstrating that generator reliability depends on temperature, the researchers proposed a dynamic formulation of an ORDC to implement scarcity pricing in a wholesale electricity market. They validated their model's price formation during two historical weeks -- Jan. 4-10, 2014, and Oct. 19-25, 2017, representing high and low load weeks, respectively -- and compared its performance to three alternative approaches for procuring operating reserves, two of which reflect historical and proposed practices at PJM.

The researchers found that a dynamic ORDC increased reserve procurement and prices during very hot and cold hours with heightened risk of generator forced outages. Increased reserve procurement reduces the probability of a reserve shortage during extreme temperature events, such as the Jan. 2014 Polar Vortex, and also enables generators to realize benefits for enhancing reliability during these events through increased reserve prices. By using ORDCs that account for day-ahead generator failure probabilities conditioned on forecast temperature, operating reserves can reduce the need for out-of-market administrative interventions and improve market efficiency.
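
As a rough illustration of the idea, the sketch below prices reserves as an administrative shortage value multiplied by the probability that real-time deviations exceed the reserves held, and lets that probability grow at temperature extremes as a stand-in for temperature-dependent forced outages. The normal-uncertainty model, the shortage value, and all parameters are invented for illustration and do not reflect PJM's curves or the paper's model.

```python
from math import erf, sqrt

VOLL = 2000.0  # $/MWh, hypothetical administrative shortage value

def shortage_probability(reserves_mw: float, sigma_mw: float) -> float:
    """P(net deviation > reserves), assuming deviation ~ Normal(0, sigma)."""
    z = reserves_mw / (sigma_mw * sqrt(2.0))
    return 0.5 * (1.0 - erf(z))

def ordc_price(reserves_mw: float, forecast_temp_c: float) -> float:
    """Reserve price = VOLL x shortage probability, with uncertainty that grows
    at temperature extremes (a proxy for temperature-dependent forced outages)."""
    sigma = 1500.0 + 40.0 * max(0.0, abs(forecast_temp_c - 15.0) - 10.0)
    return VOLL * shortage_probability(reserves_mw, sigma)

# Prices at the same reserve level rise during very cold or very hot hours
for temp in (-10, 15, 35):
    print(f"{temp:>4} C: price at 3000 MW reserves = "
          f"${ordc_price(3000.0, temp):.2f}/MWh")
```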

Credit: 
Carnegie Mellon University

Experimental demonstration of measurement-dependent realities possible, researcher says

image: It is hard to know whether a quantum measurement is precise or not. Feedback compensation directly compares the measurement result with an "imprint" of the original left in a weak interaction. It is a bit like trying on shoes - you can tell whether the shoe fits or not, and a good fit confirms the result of your measurement. In quantum mechanics, it is possible to get a good fit from completely different measurements. The newly reported results show that the dependence of reality on the measurement is an experimentally testable fact.

Image: 
Holger Friedrich Hofmann, Hiroshima University

Shoe shops sell a variety of shoe sizes to accommodate a variety of foot sizes -- but what if both the shoe and foot size depended on how it was measured? Recent developments in quantum theory suggest that the available values of a physical quantity, such as a foot size, can depend on the type of measurement used to determine them. If feet were governed by the laws of quantum mechanics, foot size would depend on the markings on a foot measure to find the best fit -- at the time of measurement -- and even if the markings were changed, the measurement could still be precise.

In quantum mechanics, the "size" of a physical quantity is more elusive than foot length: because of the uncertainty principle, unavoidable uncertainties in the history of a quantum system make it difficult to confirm a measurement. Essentially, it is impossible to know the real properties that a quantum system had before the measurement. There was no way to try on the shoe after the measurement -- until now. A researcher at Hiroshima University may have found a solution to the problem, with possible implications for emerging quantum information technologies, such as quantum communication and quantum computing.

Holger F. Hofmann, professor in the Graduate School of Advanced Science and Engineering, Hiroshima University, published his approach on Feb. 3 in Physical Review Research, a journal of the American Physical Society.

According to Hofmann, a qubit -- the basic unit of quantum information -- can be used as an external probe to test the precision of a measurement of a physical property in its original quantum system. The probe interacts weakly, creating a memory of the physical property that is automatically encrypted by the qubit. This quantum-encrypted one-qubit memory can be used to evaluate the precision of a subsequent measurement. A feedback design allows the later measurement value to erase the quantum memory encoded on the probe qubit. If the memory is perfectly erased without any leftover traces, Hofmann said, the measurement outcomes must have been precise each and every time the measurement was performed.
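
A toy numerical illustration of this erasure logic, under simplifying assumptions and not Hofmann's actual protocol, is sketched below: a probe qubit is weakly rotated conditional on a system qubit's z-value, the system is then measured in the z basis, and a feedback rotation proportional to the outcome tries to erase the probe's record. Perfect erasure (zero residual) signals that the measurement was precise for the recorded property. The interaction strength and system state are arbitrary illustrative choices.

```python
import numpy as np

eps = 0.2                      # weak-interaction strength (arbitrary)
alpha, beta = 0.6, 0.8         # system state: alpha|0> + beta|1>

def ry(theta):
    """Rotation of a single qubit about the y axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

ket0 = np.array([1.0, 0.0])

# Joint state after the weak conditional rotation (probe kicked only if system is |1>)
branch0 = alpha * np.kron([1, 0], ket0)             # system |0>, probe untouched
branch1 = beta  * np.kron([0, 1], ry(eps) @ ket0)   # system |1>, probe rotated by eps
joint = branch0 + branch1

# Projective z-basis measurement of the system, then outcome-proportional feedback
for outcome in (0, 1):
    probe = joint.reshape(2, 2)[outcome]            # unnormalized conditional probe state
    prob = float(np.vdot(probe, probe).real)        # probability of this outcome
    probe = probe / np.sqrt(prob)
    probe = ry(-eps * outcome) @ probe              # feedback proportional to the outcome
    residual = 1.0 - abs(np.vdot(ket0, probe)) ** 2  # leftover trace in the probe memory
    print(f"outcome {outcome}: probability {prob:.2f}, residual trace {residual:.2e}")
```

Measuring the system in a different basis while keeping the same outcome-proportional feedback rule would generally leave a nonzero residual, which in this picture is the trace of an imprecise measurement.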

This experimental procedure to probe the amount of uncertainty in a measurement result allows researchers to demonstrate that different measurements can accurately determine the same physical property of a quantum system before the measurement happened -- even when the values of the physical property change based on the measurement procedure, according to Hofmann.

"Quantum mechanics describes physical systems as mysterious 'super positions' of possibilities that seemingly 'collapse' into reality only when a measurement distinguishes the different possibilities," Hofmann said, referring to the idea that mere observation fundamentally changes a system. "There have been many attempts to find out what is there when nobody is looking, and my work builds on these previous attempts."

Hofmann noted that these attempts involve unmeasurable, unobservable uncertainties, making it difficult to answer any questions about the fundamental nature of reality.

"There is still a lot to do, and I hope that many members of the quantum measurement community will join in to develop the necessary theoretical framework," Hofmann said. "Physics should be grounded in observable phenomena, but, strangely enough, the concepts used in quantum mechanics are not."

Credit: 
Hiroshima University

ASHP publishes reports exploring pharmacy's role in future of healthcare delivery

BETHESDA, Md. -- ASHP (American Society of Health-System Pharmacists) today announced the publication of two landmark reports that articulate a futuristic vision for pharmacy practice, including expanded roles for the pharmacy enterprise in healthcare organizations. The 2021 ASHP/ASHP Foundation Pharmacy Forecast Report and the Vizient Pharmacy Network High-Value Pharmacy Enterprise (HVPE) framework, published in AJHP, outline opportunities for pharmacy leaders to advance patient-centered care, population health, and the overall well-being of their organizations.

"During these unprecedented times, it is more important than ever for pharmacy leaders to demonstrate the value pharmacy services contribute to the health of patients and the healthcare system," said ASHP CEO Paul W. Abramowitz, Pharm.D., Sc.D. (Hon.), FASHP, and Karl Matuszewski, M.S., Pharm.D., Vizient Vice President of Member Connections. "The HVPE framework and the Pharmacy Forecast, used in conjunction with the ASHP Practice Advancement Initiative 2030 recommendations, are essential tools to help pharmacy leaders build a comprehensive, future-focused, patient-centered pharmacy enterprise that delivers optimal outcomes through safe and effective medication use."

ASHP and the ASHP Foundation issue the Pharmacy Forecast annually to identify emerging issues and serve as a tool for dynamic strategic planning for pharmacy departments and health systems. The 2021 report is based on a survey on the likelihood of 42 potential impactful events occurring within the next five years in healthcare. A 17-member committee of pharmacy practice executives and specialists advised on the content of the survey, which was sent to a national panel of 319 experts in health-system pharmacy. The Pharmacy Forecast includes sections on the global supply chain, access to healthcare, analytics and big data, healthcare financing and delivery, patient safety, the pharmacy enterprise, and the pharmacy workforce. The report offers actionable, strategic recommendations for organizations to prepare for and respond to these emerging trends and issues that impact patient care.

The survey responses and recommendations in the 2021 Pharmacy Forecast reflect the disruptions caused by the COVID-19 pandemic and related economic crisis. Asked to predict trends for the next five years, 91% of the surveyed pharmacy directors agreed that at least 25 states will enact provisions to expand pharmacists' scope of practice during public health emergencies. More than 90% of the survey respondents expect a significant expansion of patient access to telehealth in rural and other underserved locations. The Forecast recommends that pharmacy leaders build on the expanded use of telehealth during the coronavirus pandemic to implement permanent telepharmacy services to enhance patients' medication-related outcomes, particularly those in underserved areas.

The global nature of the U.S. drug supply chain was initially a concern during the COVID-19 pandemic. A vast majority of leaders in the survey believed that global issues such as trade restrictions, pandemics, or climate change increase the potential for drug shortages. The Pharmacy Forecast suggests that health systems collaborate with other organizations and local and state agencies to plan for pandemic-related surges or distribution of scarce resources such as vaccines.

The HVPE framework establishes eight domains that address an expansive list of topics, including patient care services in the ambulatory, specialty pharmacy, and inpatient settings; safety and quality; pharmacy workforce; information technology, including data analytics and information management; business practices; and leadership. Consensus participants set a goal of adopting the 94 evidence-based statements and 336 performance elements in health system-based medication-use processes and pharmacy practice by 2025.

Credit: 
ASHP (American Society of Health-System Pharmacists)

Hospital wastewater favors multi-resistant bacteria

image: Scientists from the University of Gothenburg, Sweden, present evidence that hospital wastewater, containing elevated levels of antibiotics, rapidly kills antibiotic-sensitive bacteria, while multi-resistant bacteria continue to grow. Hospital sewers may therefore provide conditions that promote the evolution of new forms of antibiotic resistance.

Image: 
Johan Wingborg

Scientists from the University of Gothenburg, Sweden, present evidence that hospital wastewater, containing elevated levels of antibiotics, rapidly kills antibiotic-sensitive bacteria, while multi-resistant bacteria continue to grow. Hospital sewers may therefore provide conditions that promote the evolution of new forms of antibiotic resistance.

It is hardly news that hospital wastewater contains antibiotics from patients. It has been assumed that hospital sewers could be a place where multi-resistant bacteria develop and thrive due to continuous low-level antibiotic exposure. However, direct evidence for selection of resistant bacteria from this type of wastewater has been lacking, until now.

A research group at the University of Gothenburg, Sweden, led by Professor Joakim Larsson, has sampled wastewater from Sahlgrenska University Hospital in Gothenburg, and at the inlet and outlet of the local municipal treatment plant for comparison. They first removed all bacteria from the wastewaters by filtering and tested how the filtered wastewater affected bacteria in different controlled test systems in the lab.

"The results were very clear," says Joakim Larsson. "In all assay, we could see that antibiotic-sensitive bacteria were rapidly killed by the hospital wastewater, while the multi-resistant ones continued to grow. The wastewater entering the municipal treatment plant, primarily made up of wastewater from households, showed a very slight effect, while we could not see any effect of the filtered wastewater."

"It is good news that the wastewater entering the Göta Älv river is not selecting for resistant bacteria, but the strong selection by hospital wastewater is concerning," says Larsson. "Strong selection pressure that favors multi-resistant bacteria is the most important driver behind the evolution of new forms of resistance in pathogens. We now know that hospital wastewater does not only contain pathogens, it can also favor resistant bacteria."

Sweden uses comparatively few antibiotics, so if selection occurs even in Swedish hospital wastewater, it is plausible that hospital wastewaters from other places in the world also favor resistant bacteria, but this remains to be investigated. The researchers found some antibiotics that could explain some of the effects on bacteria, but they say that more research is needed to clarify exactly what is favoring the multi-resistant ones.

"One possible way to reduce risks could involve pre-treatment of wastewater at hospitals, something that is done in certain countries already", explains Larsson. "To find the best ways to reduce risks, including designing possible treatment measures, it is critical to first figure out which antibiotics or other antibacterial chemicals explain selection for resistance. That is something we are working on right now."

Credit: 
University of Gothenburg

3D model shows off the insides of a giant permafrost crater

image: The C17 crater

Image: 
Evgeny Chuvilin

Researchers from the Oil and Gas Research Institute of the Russian Academy of Sciences and their Skoltech colleagues have surveyed the newest known 30-meter deep gas blowout crater on the Yamal Peninsula, which formed in the summer of 2020. The paper was published in the journal Geosciences.

Giant craters in the Russian Arctic, thought to be the remnants of powerful gas blowouts, first attracted worldwide attention in 2014, when the 20 to 40-meter wide Yamal Crater was found quite close to the Bovanenkovo gas field. The prevailing hypothesis is that these craters form after gas accumulates in cavities in the upper layers of permafrost, and increasing pressure ultimately unleashes an explosive force. Most of these craters are rather short-lived, as they apparently fill with water within several years and turn into small lakes. As of now, there are some 20 known and studied craters.

In 2020, researchers found and surveyed the latest crater, dubbed C17, about 25 meters in diameter. It was found by Andrey Umnikov, director of the non-profit partnership "Russian Center of Arctic Development," during a helicopter flight on July 16 in the central part of the Yamal Peninsula, close to three other craters including the famous Yamal Crater. OGRI deputy director Vasily Bogoyavlensky led the August 2020 expedition, which was possible thanks to the generous support of the government of Yamalo-Nenets Autonomous Area and Mr Umnikov's organization. Evgeny Chuvilin and Boris Bukhanov from the Skoltech Center for Hydrocarbon Recovery took part in the expedition.

"The new crater is impressive in its ideal state of preservation, primarily the cone-shaped top where ejecta was thrown from, the outer parts of the heaving mound that precipitated the crater, the walls of the crater itself which are incredibly well preserved, and, of course, the gas cavity in the icy bottom of the crater," Chuvilin says.

"Firstly, we got there in time to find the object in its almost pristine state, with no water filling it. Secondly, the giant underground cavity in the ice is unique in itself. A part of the icy dome of this cavity was preserved; before the explosion, it had this circular dome, and its bottom was elliptical, elongated to the north, with its axis ratio of approximately 1 to 4.5. From what we know we can say that the C17 crater is linked to a deep fault and an anomalous terrestrial heat flow," Bogoyavlensky notes.

Igor Bogoyavlensky, a certified pilot, flew the drone used for the crater survey. It was the first time a drone had flown inside such a crater for an "underground aerial survey," descending 10 to 15 meters below ground at the risk of losing the aircraft. The team used the footage from inside the crater to build a 3D model. This is the first time scientists have been able to study a "fresh" crater that had not yet eroded or filled with water, with a well-preserved ice cavity where gas had been accumulating. 3D modeling had been done earlier for the Yamal Crater, but only after it had already filled with water.

"Over the years we've gained a lot of experience with surveillance drones, yet this "underground aerial survey" of the C17 crater was the most difficult task I had ever faced, having to lie down on the edge of a 10-story deep crater and dangle down my arms to control the drone. Three times we got close to losing it, but succeeded in getting the data for the 3D model," Igor Bogoyavlensky, the drone pilot, says.

Vasily Bogoyavlensky says the 3D model allowed them to capture the extremely complex shape of the underground cavity. "We could not see everything from above, especially the grottos, possible caverns in the lower part of the crater. You can clearly see all that with the 3D model. Our results suggest unequivocally that the crater was formed endogenously, with ice melting, a heaving mound dynamically growing due to gas accumulation and the explosion," he adds.
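A 3D model like this is typically used to quantify the crater's geometry. As a toy illustration of that kind of post-processing (not the team's actual workflow), the Python snippet below estimates a crater's volume from a gridded depth map of the sort that can be exported from a photogrammetric model; the synthetic paraboloid shape only echoes the published figures of roughly 25 meters across and 30 meters deep.

import numpy as np

# Toy depth map of a crater (a paraboloid ~25 m across and ~30 m deep), standing
# in for a digital elevation model derived from a photogrammetric reconstruction.
step = 0.25                     # m, horizontal grid resolution
x = np.arange(-20.0, 20.0, step)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
radius, depth_max = 12.5, 30.0  # m, assumed rim radius and maximum depth
depth = np.where(r < radius, depth_max * (1.0 - (r / radius) ** 2), 0.0)

# Volume below the rim = sum over grid cells of (cell area * local depth).
volume = depth.sum() * step ** 2
print(f"Gridded volume estimate:   {volume:,.0f} m^3")
print(f"Analytic paraboloid check: {np.pi * radius**2 * depth_max / 2:,.0f} m^3")

The same grid-summation approach extends to irregular shapes like the grottos and caverns described above, which no simple closed-form formula would capture.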

The Skoltech researchers were able to study the cryogeological conditions of the crater, the composition of permafrost in this area as well as ejecta from the crater, temperature conditions at the crater floor and some other parameters. "This information will shed light on the conditions and formation of these unusual objects in the Arctic," Chuvilin points out.

In 2021, OGRI and Skoltech researchers are planning a new expedition to this crater to monitor its state and conduct further research into how it was formed.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Plants as superheroes during nuclear power plant accidents

image: The researchers' proposed model, in which ABCG33 and ABCG37 take up cesium into the plant cell in a potassium-independent manner

Image: 
Abidur Rahman

HBO's acclaimed, award-winning miniseries Chernobyl recently brought the horror of a nuclear power plant accident, the 1986 disaster in Ukraine, back into public view. Such catastrophes are not confined to the screen: in 2011, the tsunami that struck Japan triggered another one. Both accidents released large amounts of radioactive cesium into the environment, where it spread into the surrounding land and rivers, into plants and animal feed, and eventually into our food chain and ecosystems. What makes the contamination especially persistent is its half-life: 137Cs decays with a half-life of roughly 30 years, so without effective countermeasures it will remain a serious agricultural, economic, and health problem for decades.

Plant biologists address such contamination with phytoremediation, using plants, sometimes genetically modified ones, to take up toxic compounds from the soil or to make crop plants resilient to contaminated soil. Over the years, scientists searching for cesium transporters in plants have mostly turned up potassium transporters. That is not surprising to anyone who remembers the periodic table from high-school chemistry: cesium (Cs) sits in the same group as potassium (K). But potassium is abundant in soil and essential for plant growth and development, so manipulating potassium transporters to control cesium uptake tends to harm the plant overall. Cesium, by contrast, is scarce in soil and toxic to plants. A potassium-independent cesium transporter, one that moves cesium without affecting potassium, has therefore been long sought.

Recently, plant biologist Dr. Abidur Rahman's group at Iwate University, Japan, in collaboration with Dr. Keitaro Tanoi of the University of Tokyo and Dr. Takashi Akihiro of Shimane University, reported two potassium-independent cesium transporters that take up cesium into the plant without affecting potassium. The findings were published in Molecular Plant, a top-tier plant-science journal from Cell Press. The team showed that two ATP-binding cassette (ABC) transporter proteins, ABCG33 and ABCG37, members of a family found abundantly across kingdoms, carry cesium into the cell.
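To make the potassium-independent distinction concrete, the Python sketch below contrasts a hypothetical potassium-coupled carrier, in which external K+ competes with Cs+ for uptake, with a potassium-independent carrier of the kind proposed for ABCG33 and ABCG37. The Michaelis-Menten parameters are invented for illustration, not measured values from the paper.

def cs_uptake_via_k_transporter(cs_uM, k_uM, vmax=1.0, km=50.0, ki=20.0):
    """Cs+ uptake through a potassium transporter, with external K+ treated as a
    competitive inhibitor (classic Michaelis-Menten competition)."""
    return vmax * cs_uM / (km * (1.0 + k_uM / ki) + cs_uM)

def cs_uptake_k_independent(cs_uM, k_uM, vmax=1.0, km=50.0):
    """Cs+ uptake through a potassium-independent carrier: external K+ simply
    does not appear in the rate law."""
    return vmax * cs_uM / (km + cs_uM)

cs = 10.0                       # uM external cesium, assumed
for k in (0.0, 100.0, 1000.0):  # uM external potassium
    print(f"K+ = {k:6.0f} uM | via K transporter: {cs_uptake_via_k_transporter(cs, k):.3f}"
          f" | K-independent: {cs_uptake_k_independent(cs, k):.3f}")

The contrast is only qualitative: as soil potassium rises, the cesium flux through the potassium-coupled route collapses while the potassium-independent route is unchanged, which is why such transporters are attractive for phytoremediation that does not disturb potassium nutrition.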

"This study highlights how we can solve the agricultural issues around us with basic research and why the basic research should be funded". This study also demonstrates the power of collaborations to answer the scientific questions" said the group leader Dr. Abidur Rahman.

"Arif Ashraf,one of the lead authors who conducted the study as part of his graduate study in the Iwate University and currently a postdoc at the University of Massachusetts Amherst, said that how the basic plant biology research can solve real-life problem around us" He added, "In this study, we combined plant physiology, molecular biology, cell biology with in planta transport assay using radioactive cesium, and heterologous system such as yeast, as well".

These transporters are a first step toward a basic understanding of potassium-independent cesium transport in plants and hold promise for future bioremediation. The current findings and the proposed model suggest that more potassium-independent cesium transporters likely remain to be discovered in plants, and the Rahman lab is working to find them. Deciphering the underlying mechanism and identifying additional transporters will help make phytoremediation practical and allow the mechanism to be translated from model plants to crops, since the reported transporters are conserved in crop plants and other species.

Credit: 
Iwate University, Japan