
Cell removal as the result of a mechanical instability

image: Schematic illustration of cell extrusion from epithelial tissue.

Image: 
Kanazawa University

The outer or inner boundaries of organs in the human body are lined with so-called epithelial sheets. These are layers of epithelial cells that can individually change their 3D shape -- which is what happens during biological processes like organ development (morphogenesis), physiological equilibrium (homeostasis) or cancer formation (carcinogenesis). Of particular interest is the process of cell extrusion, where a single cell loses its 'top' or 'bottom' surface and is subsequently pushed out of the layer. A thorough understanding of this phenomenon from a mechanical point of view has been lacking, but now, Satoru Okuda and Koichi Fujimoto from Kanazawa University have discovered that there is a purely mechanical cause for cell extrusion.

Mechanically speaking, a simple (single-layer) epithelial sheet is analogous to a foam and can be represented as a layer of interconnected polyhedra. Okuda and Fujimoto used such a foam model to describe a monolayer of epithelial cells, with each cell a polyhedron of average volume V. Every cell is further characterized by the number of neighboring cells n, the area of its apical ('top') surface and the area of its basal ('bottom') surface. The model, taking into account mechanical forces between neighboring cells, leads to a formula for the total mechanical energy of an epithelial sheet as a function of only a few parameters, including V and n, as well as the in-plane density and a quantity called sharpness, which distinguishes between situations where basal and/or apical surfaces are present or not. (A vanished apical surface implies basal extrusion and vice versa.) By studying how the energy changes as these few parameters are varied, the researchers were able to obtain valuable insights into the mechanics of an epithelial sheet.
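
The release does not reproduce the actual energy function, but the approach it describes (writing a total mechanical energy in terms of a few cell-level parameters and scanning that energy for instabilities) can be sketched. The following Python toy is purely illustrative: the functional form, coefficients and threshold are invented placeholders, not the published model.

    import numpy as np

    # Hypothetical toy energy, NOT the published model. "Sharpness" s runs
    # from 0 (apical surface intact) to 1 (apical surface vanished, i.e.
    # basal extrusion). A double-well term separates the two states, and
    # in-plane crowding tilts the balance toward extrusion.
    def toy_energy(s, density):
        barrier = s**2 * (1.0 - s)**2          # intact vs. extruded wells
        tilt = (0.5 - 0.4 * density) * s       # surface cost vs. crowding gain
        return barrier + tilt

    s_grid = np.linspace(0.0, 1.0, 201)
    for density in (1.0, 1.2, 1.3, 1.5):
        s_star = s_grid[np.argmin(toy_energy(s_grid, density))]
        state = "extruded" if s_star > 0.99 else "in the sheet"
        print(f"density {density:.1f}: energy minimum at s = {s_star:.2f} ({state})")

Even in this caricature, a small change in a single parameter flips the energy minimum between the intact and extruded states with no extra applied force.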

The key finding of Okuda and Fujimoto is that the system exhibits an inherent mechanical instability: small changes in cell topology or cell density can cause cell extrusion without additional forces being applied. Furthermore, it turns out that a cell undergoing extrusion generates forces within the layer, which can direct the extrusion of other cells to either side of the layer.

The scientists also found many points of agreement between the outcomes of their model and observations of living systems, such as the occurrence of different epithelial geometries (e.g. 'rosette' or pseudostratified structures).

The model admittedly has limitations, for example the assumptions that the whole sheet and the individual cell surfaces are not curved but flat. However, quoting the researchers, "despite its limitations, [the] model provides a guide to understanding the wide range of epithelial physiology that occurs in morphogenesis, homeostasis, and carcinogenesis".

[Background]

Epithelial cells

Epithelial tissue, one of four kinds of human (or animal) tissue, is located on the outer surfaces of organs and blood vessels in the human body, and on the inner surfaces of 'hollow spaces' in various internal organs. A typical example is the outer layer of the skin, called the epidermis. Epithelial tissue consists of epithelial cells; it can be just one layer of epithelial cells (simple epithelium), or two or more (layered or stratified epithelium). Satoru Okuda and Koichi Fujimoto from Kanazawa University have now modeled a simple epithelium as an arrangement of polyhedra in order to study its mechanical properties and specifically the process of epithelial cell extrusion.

Cell extrusion

In epithelial tissue, cell extrusions happen -- the processes whereby epithelial cells are 'pushed out' of the epithelium. Cell extrusion is an important biological process, regulating for example the removal of apoptotic (dead) cells, tissue growth and the response to cancer. Okuda and Fujimoto looked at a simple epithelium from a mechanical point of view. Modeling the epithelium as a layer of interconnected polyhedra, they found that cell extrusion -- whereby the top or bottom surface of a polyhedron shrinks to a point and then vanishes -- can be considered a purely mechanical property. An inherent instability, present in homogeneous sheets, can lead to cells being extruded due to small changes in density or topology.

Credit: 
Kanazawa University

Memory impairment in mice reduced by soy derivative that can enter the brain intact

image: Research from Japan shows that a soy-derived protein fragment that reaches the brain after being ingested reduces memory degradation in mice with an induced cognitive impairment, providing a new lead for the development of functional foods that help prevent mental decline.

Image: 
William J. Potscavage Jr., Kyushu University

In a study that could help one day give a literal meaning to food for thought, researchers from Kyushu University in Japan have reported that a protein fragment that makes its way into the brain after being ingested can reduce memory degradation in mice treated to simulate Alzheimer's disease.

Derived by breaking apart the proteins in soybeans, the memory-affecting molecule is classified as a dipeptide because it contains just two of the protein building blocks known as amino acids. What makes the dipeptide used in the study unique is that it is currently the only one known to make the trip from a mouse's stomach to its brain intact, despite the odds against it.

"On top of the possibility of being broken down during digestion, peptides then face the challenge of crossing a highly selectively barrier to get from the blood into the brain," says Toshiro Matsui, professor in the Faculty of Agriculture at Kyushu University and leader of the study published in npj Science of Food.

"While our previous studies were the first to identify a dipeptide able to make the journey, our new studies now show that it can actually affect memory in mice."

Working in collaboration with researchers at Fukuoka University, the researchers investigated the effects of the dipeptide--named Tyr-Pro because it consists of the amino acids tyrosine and proline--by feeding it to mice for several days before and after injecting them with a chemical that is commonly used to simulate Alzheimer's disease by impairing memory functions.

In tests evaluating short-term memory by comparing a mouse's tendency to explore different arms of a simple maze, impaired mice that had ingested the dipeptide over the preceding two weeks fared better than those that had not, though both groups were outperformed overall by mice without induced memory impairment. The same trend appeared in long-term memory tests, which measured how long a mouse stayed in the lighted area of an enclosure to avoid the mild electrical shock it had experienced in the dark area during training a day earlier.

Though there have been other reports suggesting some peptides can reduce the decline of brain functions, this is the first case where evidence also exists that the peptide can enter the brain intact.

"We still need studies to see if these benefits carry over to humans, but we hope that this is a step toward functional foods that could help prevent memory degradation or even improve our memories," comments Matsui.

Credit: 
Kyushu University

Why businesses should offer free trials to existing customers

image: Everybody loves free stuff.

Image: 
by MoneyforCoffee from Pixabay (cc)

Everybody loves freebies, whether it's a tasty treat handed out at the supermarket or a month of Netflix. These campaigns are a great way to bring attention to a new product or service, and marketers use them to target new customers and grow their customer base.

But offering free trials to existing customers might seem like a waste of time; after all, your customers are already sold on your product.

However, a new study, published in Management Science, looked at what happened when a telco offered free mobile phone data to existing users, and found it was an effective way to increase sales, particularly if customers could share the offer with friends.

"It might seem like a waste of resources to provide a free trial to existing customers, but that is not what we found, and surprisingly, higher data usage customers were more likely to redeem the offer than lower usage customers," says marketing researcher Dr Hillbun Ho from the University of Technology Sydney.

"We expected low usage customers would be more likely to take up the offer and increase their usage. However, low usage customers were largely unresponsive to the free trial," he says. Many of the customers who took up the offer then continued their higher usage after the free trial campaign ended, increasing sales for the company.

And when the company gave some customers the option to forward the free trial to friends who were also existing customers, both the sharer and the recipient were more likely to redeem the offer, and continue the higher data usage.

These results have important implications, particularly for companies that offer online 'experience products' such as gaming, collaboration tools or music streaming services, where customers need to experience the product to appreciate its value, says Dr Ho.

"When software companies promote their products, they frequently use a "freemium" model, where the basic version is free but customers have to pay to get access to more advanced functions or features.

"These companies often face challenges in migrating customers from the free version to the paid version, because the free version customers have no experience in using the advanced functions that are only accessible to paid users.

"Our research suggests that providing free or lower cost trials to the paid version for a short promotion period is likely to increase the trial users' appreciation of the product, inducing take-up of the paid version," he says.

The research findings also suggest that to increase the impact of free trial campaigns, marketers can leverage the "power of sharing" by including a sharing feature in their offer.

Previous studies have shown that retaining existing customers can cost businesses up to five times less than acquiring new ones.

Credit: 
University of Technology Sydney

Coronavirus: A wake-up call to strengthen the global food system

Global food production is incredibly efficient, and the world's farmers produce enough to feed the global population. Despite this abundance, a quarter of the global population does not have regular access to sufficient and nutritious food. A growing and more affluent population will further increase global demand for food and create stresses on land, for example through deforestation.

Additionally, climate change is a major threat to agriculture. Increased temperatures have contributed to land degradation, and unpredictable rainy seasons can lead to crop failure. While climate extremes affect the ability to produce food, food security depends on more than agricultural productivity alone. Today's globalized food system consists of highly interconnected social, technical, financial, economic, and environmental subsystems. It is characterized by increasingly complex trade networks and an efficient supply chain, with market power concentrated in the hands of a few. A shock to the food system can lead to ripple effects in political and social systems. The 2010 droughts in wheat-producing countries such as China, Russia, and Ukraine led to major crop failures, pushing up food prices on global markets. This in turn was one of the factors behind deep civil unrest in Egypt, the world's largest wheat importer, where people faced food shortages, and it possibly contributed to the spread of the 2011 revolution across the country.

Not all shocks to the global food system are directly linked to agricultural productivity or climatic conditions. The vulnerability of the interconnected food system has become painfully evident in recent months following the appearance of a different type of shock: a global pandemic. Although it started as a health crisis, COVID-19 quickly filtered through the political, social, economic, technological, and financial systems. Business interruptions resulted in a chain reaction that is projected to contribute to food crises in many parts of the world.

"Although harvests have been successful and food reserves are available, global food supply chain interruptions led to food shortages in some places because of lockdown measures," writes the author of the commentary Franziska Gaupp, an IIASA researcher working jointly with the Ecosystems Services and Management (ESM) and Risk and Resilience (RISK) programs. "Products cannot be moved from farms to markets. Food is rotting in the fields as transport disruptions have made it impossible to move food from the farm to the consumer. At the same time, many people have lost their incomes and food has become unaffordable to them."

The World Food Program has warned that by the end of 2020, an additional 130 million people could face famine. In the fight against the global COVID-19 pandemic, borders have been closed, and a lack of local production has led to soaring prices in some countries. In South Sudan, for example, wheat prices have increased 62% since February 2020. Difficult access to food and the related stress could then lead to food riots and collective violence.

According to Gaupp, a systems approach is needed to address the challenges of a globally interconnected, complex food system. Systemic risks and systemic opportunities need to be incorporated into food-related policies. It is important to highlight that the threat to food security stems not just from potential disruptions of production, but also from shocks to distribution and shortfalls in consumers' incomes. COVID-19 has shown how interconnected our world is, and how a simultaneous shock such as a pandemic also affects our food system. She further points out that the problem lies in supply chain imbalances: there is enough for everyone, yet some countries are panic buying and others are banning exports, which is why the whole supply and demand system is under strain, making access to food more difficult, especially in poorer countries.

"There will likely be more shocks hitting our global food system in the future. We need global collaboration and transdisciplinary approaches to ensure that the food chains function even in moments of crises to prevent price spikes and to provide all people with safe access to food," concludes Gaupp.

Credit: 
International Institute for Applied Systems Analysis

Two quantum Cheshire cats exchange grins

Prof. LI Chuanfeng, XU Jinshi, and XU Xiaoye from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), collaborating with Prof. CHEN Jingling from Nankai University, realized the non-contact exchange of the polarizations of two photons, revealing the unique quantum characteristics of the "Quantum Cheshire Cat".

The study, published in Nature Communications, deepens the understanding of a fundamental question of physics: "What is physical reality?"

In the classical world, an object carries all of its physical properties with it. In the quantum world, however, an object may not behave this way -- it can temporarily leave some of its physical properties where it never appears. This phenomenon, first proposed in 2013 by Yakir Aharonov and colleagues, is known as the quantum Cheshire cat effect, after the grinning cat in "Alice in Wonderland" that can disappear while its grin remains hanging in the air.

In the following years, experimental physicists observed the separation of particle properties from the particles themselves in neutron and photon interference experiments. Scientists soon realized, however, that the results of these experiments could be explained by classical interference theory. Demonstrating the uniquely quantum effects of the "Quantum Cheshire Cat" required more complicated experiments.

Prof. LI's group, for the first time, used a two-photon system to demonstrate the unique quantum effect of two "quantum Cheshire cats" exchanging grins. Weak values are required to characterize the locations of the Cheshire cat and its grin in experiments; however, extracting weak values in multi-body quantum systems is a major challenge.

In this study, the scientists showed that the traditional weak measurement method can be bypassed by applying a perturbation to the system: the weak value can be obtained directly from the inherent relationship between the system's detection probability and the strength of the perturbation.
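
For context, the weak value referred to here has a standard definition in the weak measurement literature (going back to Aharonov, Albert and Vaidman): for a system pre-selected in the state |ψ_i⟩ and post-selected in the state |ψ_f⟩, the weak value of an observable A is

    A_w = \frac{\langle \psi_f | A | \psi_i \rangle}{\langle \psi_f | \psi_i \rangle}

Unlike an ordinary expectation value, A_w can lie outside the spectrum of A and can even be complex, which is what allows "where the photon is" and "where its polarization is" to receive different answers.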

They prepared a two-photon hyper-entangled state, in which the polarization and path degrees of freedom of the two photons are each in a maximally entangled state while the two degrees of freedom remain uncorrelated. An imaginary-time evolution then introduces the perturbation used to obtain the weak values of each photon's path and polarization.

Through these weak values, the scientists observed that each photon is separated from its polarization and ultimately acquires the polarization of the other photon, realizing the non-contact grin exchange of the two "quantum Cheshire cats".

Credit: 
University of Science and Technology of China

Geologists shed light on the Tibetan Plateau origin puzzle: an open-and-shut perspective

image: Scientists unraveled part of the mystery surrounding the complex geological structures of the southern Tibetan Plateau

Image: 
Earth Science Frontiers

Earth's geographical surfaces have formed over millions of years, and various theories aim to explain their formation. The most popular, the "plate tectonics" theory, states that Earth's outermost layer is a dynamic system of slowly moving "tectonic" plates. As these plates move, they collide with, drift away from, or slide past one another, causing tension or rupture along their boundaries. If two colliding plates face enormous compressive force along the rupture line, a slab of crust is uplifted, giving rise to geographical structures such as mountains and plateaus.

The Tibetan Plateau, the highest plateau in the world, is believed to have been formed through one such tectonic process, when the Indian and Eurasian continental plates collided with each other. Interestingly, the landscape of this enigmatic plateau consists of various unusual geological structures that have baffled geologists globally. For example, many independent geological units of different structures and ages are placed next to each other in a way that cannot be explained by a single tectonic event as per the existing theory. Intrigued by this, in a new study published in Earth Science Frontiers, a group of scientists at the China University of Geosciences, led by Dr Liu Demin, investigated in detail the geological structures of the southern Tibetan Plateau. Talking about their motivation, Dr Liu says, "The southern Tibetan Plateau has a complicated geological structure, which cannot be explained by the existing 'plate tectonics' theory. Our study uses a new idea to explain some unusual tectonic structures that are part of the southern Tibetan Plateau."

To begin with, the scientists analyzed ancient tectonic ruptures in the form of "boundaries" between the distinct geological regions. The "South Tibet detachment system" (STDS) is one such boundary that runs parallel to the Himalayan range for more than 2,000 km. The researchers analyzed the geological data of STDS and other structures in the region, such as the Rongbu Temple normal fault and the Main Central Thrust (MCT), to trace the possible chain of events related to the evolution of these boundaries. They speculated that instead of a single "collision-compression" process (as per the existing theory), these boundaries were created in different periods altogether, through a series of tectonic events that date back to the early Cenozoic era (a geological era that extends from 66 million years ago to the present day) and occurred in multiple stages.

According to this model, called the "opening-closing" theory, the upper layer or "crust" of a prehistoric ocean called the "Neo-Tethys" ocean expanded or "opened," and a part of the oceanic crust moved under the other, resembling a "closing" movement. The continental plates too followed a similar process of "opening and closing" as they moved towards and away from each other. This chain of events gave rise to the structures of the Tibetan Plateau. Using this model, the scientists were able to deduce that the Rongbu Temple normal fault and the MCT were formed earlier than the STDS was. Further, they revealed that two tectonic units, klippes and windows, in the Chomolungma region were actually the result of gravitational gliding (as opposed to compression, as previously believed) and thus should be characterized as extensions and slips, respectively. Dr Liu further explains, "Thermal energy and gravitational potential energy in the deep earth played a key role during this opening-closing evolutionary process."

The geology of Earth's surfaces has changed over millions of years through continuous evolutionary processes. In this study, scientists unraveled part of the mystery surrounding the complex geological structures of the southern Tibetan Plateau. Dr Liu concludes, "A deeper understanding of the 'opening-closing' process requires us to focus more on the detailed geological record for evidence of continuous rather than temporal processes." The research team now plans to study the differences between the opening-closing view and the plate tectonics theory in detail, to shed further light on the genesis of the Tibetan Plateau.

Credit: 
Cactus Communications

Fantastic muscle proteins and where to find them

image: Watching the sarcomeres contract - collage of myosin (green), actin and the Z-disk (red) and BioID (blue).

Image: 
Jacobo Lopez Carballo, Gotthardt Lab, MDC

Researchers at the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) developed a mouse model that enables them to look inside a working muscle and identify the proteins that allow the sarcomere to contract, relax, communicate its energy needs, and adapt to exercise. Specifically, they were able to map proteins in defined subregions of the sarcomere, starting from the "Z-disc," the boundary between neighboring sarcomeres. This in and of itself was a significant step forward in the study of striated muscle.

In the process, they made an unexpected discovery: myosin, one of the three main proteins that make up striated muscle fibers, appears to enter the Z-disc. Models of how myosin, actin and the elastic scaffold protein titin work together have largely ignored the possibility that myosin filaments penetrate the Z-disc structure. Only recently have German scientists theorized that they do, but no experimental evidence has validated the model, until now.

"This is going to be unexpected even for myosin researchers," says Professor Michael Gotthardt, who heads MDC's Neuromuscular and Cardiovascular Cell Biology Lab and led the research. "It gets to the very basics of how muscles generate force."

Who's there?

Gotthardt's team, including first authors Dr. Franziska Rudolph and Dr. Claudia Fink, with help from colleagues at the MDC and the University of Göttingen, never set out to validate this theory. Their primary goal was to identify the proteins in and near the Z-disc. To do this, they developed a mouse model with an artificial enzyme, called BioID, inserted into the giant protein titin. Titin-BioID then tagged proteins close to the Z-disc.

Sarcomeres are tiny molecular machines, packed with proteins that tightly interact. Until now it has been impossible to separate proteins specific to the different subregions, especially in live, functioning muscle. "Titin-BioID probes specific regions of the sarcomere structure in vivo," says Dr. Philipp Mertins, who heads MDC's Proteomics Lab. "This has not been possible before."

The team, the first to use BioID in live animals under physiological conditions, identified 450 proteins associated with the sarcomere, about half of which were already known. They found striking differences between heart and skeletal muscle, and between adult and neonatal mice, relating to sarcomere structure, signaling and metabolism. These differences reflect the need of adult tissue to optimize performance and energy production versus growth and remodeling in neonatal tissue.

"We wanted to know who's there, know who the players are," Gotthardt says. "Most were expected, validating our approach."

The surprise

The protein that they were not expecting to see in the Z-disc was myosin, which is integrated at the opposite side of the sarcomere. When a muscle is triggered to move, myosin walks along actin, bringing neighboring Z-discs closer together. This sliding of actin and myosin filaments creates the force that enables our heart to pump blood or our skeletal muscle to maintain posture or lift an object.

This so-called "sliding filament model" of the sarcomere describes force production and helps explain how force and sarcomere length relate. However, current models have trouble predicting the behavior of fully contracted sarcomeres. Those models have assumed myosin does not enter the Z-disc on its walk along actin. There have been some hints that maybe it keeps going. "But we didn't know if what we were seeing in stained tissue samples was an artefact or real life," Gotthardt says. "With BioID we can sit at the Z-disc and watch myosin pass by."

Gotthardt agrees with the proposed theory that myosin entering the Z-disc can limit or dampen the contraction. This might help resolve the long-standing difficulty of calculating how much force a muscle fiber can generate in relation to its length, lead to a refined model of the sarcomere, and possibly explain how muscle is protected from excessive contraction.

Why it matters

Understanding how muscle fibers extend and contract on the molecular level under normal conditions is important so researchers can then identify what is going wrong when muscles are damaged, diseased or atrophy with age. Identifying which proteins are causing problems could potentially help identify novel treatment targets for patients with heart disease or skeletal muscle disorders.

Gotthardt and his team plan to next use BioID to study animals with different pathologies, to see what proteins are involved in muscle atrophy, for example. "Maybe a protein that is not normally there goes into the sarcomere, and it is part of the pathology," Gotthardt says. "We can find it with BioID."

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Mortality rates higher following kidney injury, University of Cincinnati research finds

New research from the University of Cincinnati shows kidney failure resulting from acute kidney injury (AKI) leads to a higher risk of death in the first six months compared to kidney failure from diabetes or other causes, and that the risk is even higher for women.

AKI occurs when kidneys stop working properly and can range from minor loss of kidney function to complete failure. AKI often happens as a complication of another serious illness. The UC study highlights the need for developing customizable treatment strategies targeting factors that enhance kidney recovery.

The study, published in the Clinical Journal of the American Society of Nephrology, finds a kidney recovery rate of 35% in patients with kidney failure due to AKI. Compared to men, women had a 14% lower likelihood of kidney recovery. Blacks, Asians, Hispanics and Native Americans had lower likelihoods of kidney recovery when compared to whites.

"Kidney failure due to AKI is associated with significant morbidity and mortality," says Silvi Shah, MD, assistant professor in the division of nephrology, Kidney CARE Program at UC, and lead author of the study. "There is not much available data on the patterns of recovery from AKI and its impact on outcomes for dialysis patients. So, in this study, we examined the association of kidney failure due to AKI with the outcome of all causes of mortality, and the associations of sex and race with kidney recovery."

The study evaluated over 1 million dialysis patients between January 1, 2005, and December 31, 2014, using data from the United States Renal Data System. The mean age of the study cohort was 63 years, and 3% of patients starting long-term dialysis had kidney failure due to AKI. Compared to kidney failure due to diabetes, kidney failure due to AKI was associated with higher mortality in the first three months as well as in months three to six following the start of dialysis.
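
The release does not detail the statistical methods behind these time-windowed comparisons, but mortality over the first months of dialysis is the kind of question survival analysis is built for. A minimal, hypothetical sketch using the third-party lifelines library follows; the numbers and column names are invented for illustration and are not the study's data.

    import pandas as pd
    from lifelines import KaplanMeierFitter

    # Invented example records: months from dialysis start to death or
    # censoring, an event flag, and the cause of kidney failure.
    df = pd.DataFrame({
        "months": [2, 5, 7, 12, 3, 9, 14, 20],
        "died":   [1, 1, 0, 0, 0, 1, 0, 0],
        "cause":  ["AKI", "AKI", "AKI", "AKI",
                   "diabetes", "diabetes", "diabetes", "diabetes"],
    })

    kmf = KaplanMeierFitter()
    for cause, grp in df.groupby("cause"):
        kmf.fit(grp["months"], event_observed=grp["died"], label=cause)
        # Survival at 3 and 6 months mirrors the windows used in the study.
        print(cause, kmf.predict([3, 6]).round(2).to_dict())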

"AKI, defined as sudden deterioration in kidney function leading to kidney failure, is not uncommon and significantly increases the risk of morbidity and mortality" says Charuhas Thakar, director of the division of nephrology at the UC College of Medicine and senior author of the study.

Of the patients with kidney failure due to AKI, 35% eventually recovered their kidney function, 95% of them within 12 months.

"This study suggests the need for developing customizable treatment strategies for patients with kidney failure due to AKI; in particular, focusing on factors promoting kidney recovery," says Thakar. "This research significantly contributes to improving the current knowledge gap in this area."

Shah says the study is unique in that it addresses a comprehensive group of patients from a national database to better understand the outcome of kidney failure due to AKI. Additionally, the analysis of kidney recovery focused on the associations between sex and race and the chances of recovery.

"Our findings suggest lower kidney recovery rates in women and among minorities," says Shah. "Given the differences observed across sex and race, further studies of the possible cultural and social contributors and strategies to improve clinical monitoring of patients with kidney failure due to AKI for kidney recovery may have to be specifically directed to that population subgroup."

In addition to Thakar, assisting in the research were Annette Christianson and Karthikeyan Meganathan, research associates in the UC Department of Environmental Health; Anthony Leonard, PhD, research associate professor in the UC Department of Family and Community Medicine; and Kathleen Harrison, senior clinical researcher in the UC Division of Nephrology and Hypertension. Shah is supported by funds from the UC Division of Nephrology.

Credit: 
University of Cincinnati

Researchers attempt new treatment approach for blood cancer

(Boston)--In an effort to improve the survival of patients with myeloproliferative neoplasms, a group of blood cancers, researchers inhibited a specific protein (alpha5beta1 integrin) to decrease the number of large bone marrow cells (megakaryocytes) in an experimental model. An increase in megakaryocyte numbers is thought to be the cause of many problems observed in this disease. This type of treatment approach has never been attempted before.

Myeloproliferative neoplasms are a type of blood cancer that begin with a pathological mutation (change) in a stem cell in the bone marrow which causes too many red blood cells, white blood cells, or platelets to be produced. Most patients die of transformation of the disease to a more fatal leukemia or because of myelofibrosis, a scarring of bone marrow. There are currently no specific treatments for myelofibrosis.

"To date, most drug development efforts have been focused on the JAK2V617F mutation, but this approach has failed to fundamentally change the course of disease. Our study has taken a totally new approach for treatment of the disease, which, if successful, will present a complementary or even an alternative therapy to existing treatments," explained lead author Shinobu Matsuura, DVM, PhD, instructor of medicine at Boston University School of Medicine (BUSM).

Using two sets of experimental models, the researchers altered the JAK2V617F gene in one group to induce symptoms of myeloproliferative neoplasms; the second group served as the control. When both groups were exposed to an antibody against alpha5beta1 integrin, the number of megakaryocytes in the bone marrow decreased in the group with the altered gene, while no changes were seen in the control group.

According to the researchers, their ultimate objective is to find effective treatments for myelofibrosis, which can occur secondary to many diseases and is a terminal condition with no specific treatment available. "Myeloproliferative neoplasms are a chronic and debilitating disease. Safe and effective novel treatments of this disease will improve the quality of life of these patients."

Credit: 
Boston University School of Medicine

Higher rates of severe COVID-19 in BAME populations remain unexplained

Higher rates of severe COVID-19 infections in Black, Asian and Minority Ethnic (BAME) populations are not explained by socioeconomic or behavioural factors, cardiovascular disease risk, or by vitamin D status, according to new research led by Queen Mary University of London.

The findings, published in the Journal of Public Health, suggest that the relationship between COVID-19 infection and ethnicity is complex, and requires more dedicated research to explain the factors driving these patterns.

Growing international reports highlight higher risk of adverse COVID-19 infection in BAME populations. The underlying cause of this ethnicity disease pattern is not known. Variation in cardiovascular disease risk, vitamin D levels, socio-economic, and behavioural factors have been proposed as possible explanations. However, these hypotheses have not been formally studied in existing work.

Investigators from Queen Mary, in collaboration with the Medical Research Council Lifecourse Epidemiology Unit at the University of Southampton, used the comprehensive and unique UK Biobank cohort of over half a million people to investigate the role of a range of socioeconomic, biological, and behavioural factors in determining the ethnicity pattern of severe COVID-19. The dataset included 4,510 UK Biobank participants who were tested for COVID-19 in a hospital setting, of whom 1,326 had a positive test result.

The results demonstrate that BAME ethnicity, male sex, higher body mass index, greater material deprivation, and household overcrowding are independent risk factors for COVID-19. The higher rates of severe COVID-19 in BAME populations were not adequately explained by variations in cardiovascular disease risk, vitamin D levels, or socioeconomic or behavioural factors, suggesting that other factors not included in the analysis might underlie these differences.
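
Calling these "independent risk factors" implies each factor remains associated with severe COVID-19 after adjusting for the others, typically via multivariable logistic regression. A minimal sketch of that kind of adjustment with statsmodels is shown below; the data and variable names are simulated stand-ins, not the UK Biobank analysis.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    # Simulated covariates standing in for the study's exposures.
    df = pd.DataFrame({
        "bame": rng.integers(0, 2, n),
        "male": rng.integers(0, 2, n),
        "bmi": rng.normal(27, 4, n),
        "deprivation": rng.normal(0, 1, n),
        "overcrowding": rng.integers(0, 2, n),
    })
    # Simulated outcome with arbitrary effect sizes chosen for the demo.
    logit = -2 + 0.6 * df["bame"] + 0.4 * df["male"] + 0.05 * (df["bmi"] - 27)
    df["positive"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["bame", "male", "bmi", "deprivation", "overcrowding"]])
    fit = sm.Logit(df["positive"], X).fit(disp=0)
    print(np.exp(fit.params).round(2))  # adjusted odds ratios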

Dr Zahra Raisi-Estabragh, BHF Clinical Research Training Fellow at Queen Mary University of London, led the analysis. She said: "There is increasing concern over the higher rate of poor COVID-19 outcomes in BAME populations. Understanding potential drivers of this relationship is urgently needed to inform public health and research efforts. This work goes some way in addressing some of these pertinent questions".

Steffen Petersen, Professor of Cardiovascular Medicine at Queen Mary University of London, who supervised the work added: "The results of this analysis suggest that factors which underlie ethnic differences in COVID-19 may not be easily captured. In addition to assessment of the role of biological considerations such as genetics, approaches which more comprehensively assess the complex economic and sociobehavioural differences should now be a priority."

Nicholas Harvey, Professor of Rheumatology and Clinical Epidemiology at the MRC Lifecourse Epidemiology Unit, University of Southampton, was a key collaborator in the work. He comments: "The detailed participant characterisation in the UK Biobank and the rapid linkage of this data with COVID-19 test results from Public Health England permitted consideration of potential importance of a wide range of exposures".

The work was also supported by the National Institute for Health Research (NIHR) through the Barts Biomedical Research Centre, NIHR Southampton Biomedical Research Centre, and NIHR Oxford Biomedical Research Centre.

Credit: 
Queen Mary University of London

Measuring a tiny quasiparticle is a major step forward for semiconductor technology

TROY, N.Y. -- A team of researchers led by Sufei Shi, an assistant professor of chemical and biological engineering at Rensselaer Polytechnic Institute, has uncovered new information about the mass of individual components that make up a promising quasiparticle, known as an exciton, that could play a critical role in future applications for quantum computing, improved memory storage, and more efficient energy conversion.

Published today in Nature Communications, the team's work brings researchers one step closer to advancing the development of semiconductor devices by deepening their understanding of an atomically thin class of materials known as transition metal dichalcogenides (TMDCs), which have been eyed for their electronic and optical properties. Researchers still have a lot to learn about the exciton before TMDCs can successfully be used in technological devices.

Shi and his team have become leaders in that pursuit, developing and studying TMDCs, and the exciton in particular. Excitons are typically generated by energy from light and form when a negatively charged electron bonds with a positively charged hole particle.

The Rensselaer team found that within this atomically thin semiconductor material, the interaction between electrons and holes can be so strong that the two particles within an exciton can bond with a third electron or hole particle to form a trion.

In this new study, Shi's team was able to manipulate the TMDC material so that the crystalline lattice within would vibrate, creating another type of quasiparticle known as a phonon, which interacts strongly with a trion. The researchers then placed the material in a high magnetic field, analyzed the light emitted from the TMDC as a result of the phonon interaction, and were able to determine the effective masses of the electron and the hole individually.

Researchers previously assumed there would be symmetry in mass but, Shi said, the Rensselaer team found the two masses were significantly different.

"We have developed a lot of knowledge about TMDCs now," Shi said. "But in order to design an electronic or optoelectronic device, it is essential to know the effective mass of the electrons and holes. This work is one solid step toward that goal."

Credit: 
Rensselaer Polytechnic Institute

Old drug standards delay new drug approvals

image: Old drug standards delay new drug approvals.

Image: 
University of Texas at Austin.

AUSTIN, Texas - During the next year, the Food and Drug Administration will review many new drug applications for preventing and treating the new coronavirus. But early approvals could get delayed by the standards the agency used for older drugs, according to new research from the McCombs School of Business at The University of Texas at Austin.

In a forthcoming paper published online in advance by the Strategic Management Journal, Associate Professor of Management Francisco Polidoro Jr. reviewed 291 drugs approved over 35 years. He found that the more information the FDA had about existing drugs, the longer it took to OK new ones for the same conditions.

When there was more information about older drugs, more than half the newer drugs in the study took more than 20 months to win approval. By contrast, only 20% of new drugs took that long to get approved when less information was available about older drugs. Delays in drug approvals cost their creators an average of $1 million a day.

“Sometimes knowledge can become a hindrance, and too much of a good thing can become a bad thing,” Polidoro said.

But his findings also contain hopeful news for potential COVID-19 treatments. When the agency had to review several innovative drugs in a relatively short time – three years – delays got shorter. Polidoro defines an innovative drug as one that uses a new mechanism of action to attack a disease.

“As it struggles with innovations, the organization becomes better able to deal with them,” he said. “It gets more used to breaking routines and creating new ones.”

Although his topic is timely for the pandemic, Polidoro has been curious about the subject for a decade as he has researched pharmaceutical companies and organizational learning.

In minutes of FDA meetings, he found regulators debating whether to apply old standards to innovative drugs for treating dementia, HIV and macular degeneration. He reasoned that the more information regulators had on existing drugs, the greater the variety of efficacy and safety outcomes they needed to ponder – and the longer approval would take.

To test his theory, he used a variety of data sources, including the Freedom of Information Act, to get data on drug approvals from 1980 to 2014. He divided the drugs into 18 therapeutic classes, from controlling blood pressure to fighting viruses, and he singled out the innovative drugs in each class.

To quantify regulators’ embedded knowledge, Polidoro counted the number of papers on existing drugs that were published in top medical journals. The number of publications varied greatly from case to case – they averaged about 150 but sometimes totaled more than 1,000.

He found that the more papers there were for a class of drugs, the longer it took for innovative drugs in that class to win approval. When the measure of papers increased by 32% beyond the average, the result was a 75% longer approval time.
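
One way to read that pairing of numbers (assuming, as the article does not state, a log-log relationship between publication counts and approval time) is as an elasticity of roughly two:

    1.32^{\beta} = 1.75 \quad \Rightarrow \quad \beta = \frac{\ln 1.75}{\ln 1.32} \approx 2.0

In that reading, a given percentage increase in accumulated publications stretches approval times by about twice that percentage, at least for modest changes.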

A lesson for COVID-19 therapies, Polidoro said, is that regulators should be thinking ahead about new standards by which to judge them. Different drugs might require different criteria to measure their effectiveness, such as how tocilizumab prevents inflammation of lung tissue while remdesivir blocks the virus from reproducing.

“It will be difficult to compare these solutions with each other because they have different safety and efficacy profiles,” he said. “They’re not like apples to apples. Recognition of these differences can help ensure timely approvals.”

For more details about this research, read the McCombs Big Ideas feature story.

Journal

Strategic Management Journal

Credit: 
University of Texas at Austin

Simulating wind farm development

Wind farms are large, highly technical projects but their development often relies on personal decisions made by individual landowners and small communities. Recognizing the power of the human element in wind farm planning, Stanford University researchers have devised a model that considers how interactions between developers and landowners affect the success and cost of wind farms.

"I've been doing work on the costs of wind farms for about 10 years and I've found that the soft costs - basically the cost interactions between people - are overlooked," said Erin MacDonald, assistant professor of mechanical engineering at Stanford. "Existing models can tell us how to eke out a little more value by making a blade turn in a slightly different way but aren't focused on the reasons why a community accepts or rejects a wind farm."

In a paper published June 19 in the Journal of Mechanical Design, the researchers present a model that highlights three actions developers could take during this process of landowner acquisition - community engagement meetings, preliminary environmental studies and sharing plans for wind turbine layout with the landowner - and investigates how those actions would affect the eventual cost of the wind farm. The cost analysis suggests that these actions, while contributing to upfront costs, may end up saving developers money in the long run.

With additional input from real-life landowner acquisition case studies, the researchers hope to further refine this model to ultimately increase the success of project implementation and reduce the cost of overall wind farm development.

Quantifying interactions

During the process of planning a wind farm, a developer uses models to predict how much the project will cost versus how much energy it will produce. These models are mathematical formulas that map the relationships between different pieces of a project - such as materials, labor, land and, in this case, interactions between developers and landowners.

In previous work, MacDonald and her former graduate student and postdoctoral fellow, Le Chen, created a model where they integrated landowner decision-making into a wind farm layout optimization model - which otherwise focuses on what physical layout will produce the most energy. With this model, developers can anticipate and prioritize which landowners will have the most influence on the success of their project. This latest work adds details about other interpersonal interactions throughout the early development process.

"When I worked in the energy industry, the models I used often lacked human input," said Sita Syal, a graduate student in mechanical engineering and lead author of the paper. "We don't deny the rigor of economic or engineering analysis, but we encourage developers to consider the benefits of social analysis as well."

"This work gives developers a framework to evaluate different actions, whereas right now it's hard to compare potential impacts of those actions, for example, how investing in landowner relations stacks up against buying more efficient equipment," said Yiqing Ding, a graduate student in mechanical engineering and co-author of the paper.

To account for soft costs in their model, the researchers had to study and brainstorm different scenarios for the interactions that occur during wind farm development - and their outcomes - and then translate the most crucial details of those interactions into formulas that could integrate with more traditional project analysis models. Their model, an initial proof of concept, suggests that actions that increase landowner involvement in the planning process lead to more landowners accepting a development contract, and that this increase in acceptance translates to overall cost savings - particularly in cases where the actions prevent failure of the project.
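
The paper's equations are not given in this article, but the core trade-off just described (upfront "soft" spending versus a higher chance of landowner acceptance and a lower chance of project failure) can be sketched as an expected-cost comparison. Everything below is an invented placeholder, not the authors' model.

    # Hypothetical expected-cost comparison for a developer action such as
    # community engagement meetings. All values are illustrative placeholders.
    def expected_cost(action_cost, p_accept, hard_cost=100.0,
                      failure_loss=40.0, acceptance_needed=0.8):
        # If too few landowners sign, the project fails and sunk costs are lost.
        p_fail = max(0.0, (acceptance_needed - p_accept) / acceptance_needed)
        return action_cost + hard_cost + p_fail * failure_loss

    print(f"no action:     {expected_cost(0.0, p_accept=0.60):.1f}")
    print(f"with meetings: {expected_cost(5.0, p_accept=0.85):.1f}")

In this caricature the meetings cost 5 upfront but remove the expected failure loss, so the action pays for itself in expectation, which is the qualitative pattern the model suggests.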

"The model suggests that taking preemptive actions can improve landowners' acceptance but can also incur cost," said Ding. "Timing is also important: we found that when an action is taken can influence landowner acceptance."

While some developers conduct community meetings and preliminary environmental studies, sharing a layout plan with landowners is rare. Typically, all landowners involved in a wind farm are given a vague contract that does not actually specify how their land will be used by the final project or, relatedly, how much money they will be paid.

A co-design process

The researchers recognize that making the development process more transparent is challenging and adds to initial expenses. However, they are still optimistic about the potential for innovative, collaborative actions that could ultimately improve the success and value of wind power.

For example, MacDonald suggests that virtual reality mockups of turbine plans might increase landowner contract acceptance, given that previous studies have found that people tend to be more accepting of the appearance of turbines once they see them in place.

"It would almost be like a co-design process between the developers and landowners," said MacDonald. "The developer is including the landowner in the process in a collaborative way by showing them, not just where the turbines would be, but also explaining the advantages and disadvantages of different layouts."

Other options for increasing transparency and collaboration could include making contracts easier to read and giving landowners some choice, such as two alternatives for how their land could be used.

Meanwhile, the proof-of-concept model for landowner acceptance requires continued research and refinement. The researchers are hoping to see more studies of soft costs for wind farms in general and would like to gain more insight into developers' processes - which tend to be proprietary - in order to make the model useful to them.

The best outcome would be that all their painstaking efforts to distill and translate human interaction into mathematical relationships result in a program where a developer could, for example, input the amount of money they plan to spend on community meetings and receive a probability for landowner contracts that is customized to that community.

"We're thinking many steps down the line but someday this could be a tool for creating community-supported sustainable energy infrastructure," said Syal.

Credit: 
Stanford University

New research hints at the presence of unconventional galaxies containing 2 black holes

CLEMSON, South Carolina - A Clemson University scientist has joined forces with an international team of astronomers to identify periodic gamma-ray emissions from 11 active galaxies, paving the way for future studies of unconventional galaxies that might harbor two supermassive black holes at their centers.

Among astronomers, it has long been well established that most galaxies host a black hole at their center. But the existence of galaxies hosting a pair of black holes has remained theoretical.

The results of the team's research appeared in The Astrophysical Journal on June 19, 2020 in a paper titled "Systematic search for gamma-ray periodicity in active galactic nuclei detected by the Fermi Large Area Telescope."

"In general, supermassive black holes are characterized by masses of more than a million masses of that of our sun," said Pablo Peñil, lead author of the study and a Ph.D. student at Universidad Complutense de Madrid in Spain. "Some of these supermassive black holes, known as active galactic nuclei (AGN) have been found to accelerate particles to near the speed of light in collimated beams called jets. The emission from these jets is detected throughout the entire electromagnetic spectrum, but most of their energy is released in the form of gamma rays."

Gamma rays, which are the most extreme form of light, are detected by the Large Area Telescope onboard NASA's Fermi Gamma-ray Space Telescope. AGN are characterized by abrupt and unpredictable variations in brightness.

"Identifying regular patterns in their gamma-ray emission is like looking at the stormy sea and searching for the tiny regular set of waves caused by, say, the passage of a small boat," Peñil said. "It becomes very challenging very quickly."

The team has accomplished the difficult first step of identifying a large number of galaxies that emit periodically over timescales of years, and is now trying to address the question of what produces that periodic behavior in these AGN. Several of the potential explanations are fascinating.

"The next step will be the preparation of observational campaigns with other telescopes to closely follow up on these galaxies and hopefully unravel the reasons behind these compelling observations," said co-author Marco Ajello, an associate professor in the College of Science's department of physics and astronomy at Clemson University. "We have a few possibilities in mind - from lighthouse effects produced by the jets to modulations in the flow of matter to the black hole - but one very interesting solution would be that periodicity is produced by a pair of supermassive black holes rotating around each other. Understanding the relation of these black holes with their environment will be essential for a complete picture of galaxy formation."

Thanks to a decade of Fermi-LAT observations, the team was able to identify the repetition of gamma-ray signals over cycles of a few years. On average, these emissions repeated about every two years.

"Our study represents the most complete work to date on the search for periodicity in gamma rays, a study that will be instrumental in deriving insights about the origin of this peculiar behavior," said co-author Alberto Domínguez, Peñil's Ph.D. supervisor in Madrid and also a former postdoctoral researcher in Ajello's group at Clemson. "We have used nine years of continuous LAT all-sky observations. Among the more than two thousand AGN analyzed, only about a dozen stand out for this intriguing cyclical emission."

Enlarging the limited sample of periodic emitters constitutes an important leap forward for understanding the underlying physical processes in these galaxies.

"Previously only two blazars were known to show periodic changes in their gamma-ray brightness. Thanks to our study, we can confidently say that this behavior is present in 11 other sources," said co-author Sara Buson, a professor at University of Würzburg in Germany. "In addition, our study found 13 other galaxies with hints of cyclical emission. But to confidently confirm this, we need to wait for Fermi-LAT to collect even more data."

Credit: 
Clemson University

A new social role for echolocation in bats that hunt together

image: The research team tested whether M. molossus can use echolocation calls to discriminate among different group members.

Image: 
Irene Mendez Cruz

Searching for food at night can be tricky. To find prey in the dark, bats use echolocation, their "sixth sense." But to find food faster, some species, like Molossus molossus, may search within hearing distance of their echolocating group members, sharing information about where food patches are located. Social information encoded in their echolocation calls may facilitate this foraging strategy, according to a recent study by Smithsonian Tropical Research Institute (STRI) scientists and collaborating institutions published online in Behavioral Ecology.

Previous research has identified several ways in which echolocation can transfer social information in bats. For example, "feeding buzzes," the echolocation calls bats produce to home in on prey they've spotted, can serve as cues of prey presence to nearby eavesdropping bats. On the other hand, echolocation calls that bats produce while looking for food, called "search-phase" calls, were not known to transfer social information.

However, for group-foraging bats, coordinating flight in the dark with several other fast-flying individuals may require an ability to identify group members on the wing. If search-phase calls contain individual signatures the bats can perceive, it could allow them to know which individuals are flying near them without requiring specialized signals for communication.

Led by Jenna Kohles, STRI fellow and doctoral candidate at the Max Planck Institute of Animal Behavior in Germany, the research team tested whether search-phase echolocation calls contain information about a bat's identity, and whether M. molossus can use this information to discriminate among different group members. The team exposed bats to search-phase echolocation calls in a habituation-dishabituation paradigm, a method where an animal is exposed to a repeating stimulus until it no longer reacts to it. Then, it is exposed to a new but similar stimulus to see if it reacts, which would indicate that it perceives a difference between the two stimuli.
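
As a toy illustration of how such a paradigm is scored (the numbers below are invented, not the study's data): the response declines across repeated playbacks of one bat's calls, then recovers when a different group member's calls are played, and that recovery is the evidence of discrimination.

    import numpy as np

    # Hypothetical response scores across repeated playbacks of bat A's calls,
    # followed by a test playback of bat B (novel) or bat A again (control).
    habituation = np.array([0.9, 0.7, 0.4, 0.2, 0.1])  # response fades
    test_novel = 0.8    # response to bat B's calls
    test_same = 0.15    # control: bat A's calls again

    recovery = test_novel - habituation[-1]
    print(f"recovery after switching callers: {recovery:+.2f}")
    print("discrimination inferred" if recovery > 0.3 else "no clear evidence")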

"We played echolocation calls from two different bats that were both group members of the subject bat," Kohles said. "By measuring the responses of the subject bats as we switched between calls from different individuals, we could learn about whether the bats perceived differences and similarities between the calls."

They found that the bats indeed distinguish between different group members, likely by using individual signatures encoded in the calls. Their results could mean that search-phase calls serve a double function: they not only help bats detect prey, but also convey individual identities to nearby foraging group members. This fits with the finding that the majority of M. molossus' auditory cortex is tuned to these search-phase calls, indicating the importance of processing them.

This finding offers insight not only into the social strategies these bats may use to meet their energetic needs, but also into the evolution of echolocation signals and social communication in bats.

"This study suggests that we may be underestimating the crucial ways social information influences bat foraging success and ultimately survival," Kohles said.

Credit: 
Smithsonian Tropical Research Institute