Culture

Studying trust in autonomous products

While a certain level of trust is needed for autonomous cars and smart technologies to reach their full potential, these technologies are not infallible - which is why we are expected to keep our hands on the wheel of self-driving cars and to follow traffic laws even when they contradict our map app's instructions. Recognizing the significance of trust in devices - and the dangers when there is too much of it - Erin MacDonald, assistant professor of mechanical engineering at Stanford University, researches whether products can be designed to encourage more appropriate levels of trust among consumers.

In a paper published last month in The Journal of Mechanical Design, MacDonald and Ting Liao, her former graduate student, examined how altering people's moods influenced their trust in a smart speaker. Their results were so surprising that they conducted the experiment a second time, with more participants - but the results didn't change.

"We definitely thought that if people were sad, they would be more suspicious of the speaker and if people were happy, they would be more trusting," said MacDonald, who is a senior author of the paper. "It wasn't even close to that simple."

Overall, the experiments support the notion that a user's opinion of how well the technology performs is the biggest factor in whether or not they trust it. However, user trust also differed by age group, gender and education level. The most peculiar result was that participants who said the smart speaker met their expectations trusted it more if the researchers had tried to put them in either a positive or a negative mood - participants in the neutral-mood group did not exhibit this same elevation in trust.

"An important takeaway from this research is that negative emotions are not always bad for forming trust. We want to keep this in mind because trust is not always good," said Liao, who is now an assistant professor at the Stevens Institute of Technology in New Jersey and lead author of the paper.

Manipulating trust

Scientific models of interpersonal trustworthiness suggest that our trust in other people can rely on our perceptions of their abilities, but that we also consider whether they are caring, objective, fair and honest, among many other characteristics. Beyond the qualities of whoever you are interacting with, our own personal qualities have also been shown to affect our trust in any situation. But when studying how people interact with technology, most research concentrates on the influence of the technology's performance, overlooking the characteristics of the user.

In their new study, MacDonald and Liao decided to address this gap - by studying mood - because previous research has shown that emotional states can affect the perceptions that inform interpersonal trustworthiness, with negative moods generally reducing trust.

Over the span of two identical experiments, the researchers analyzed the interactions of 63 participants with a simulated smart speaker that consisted of a microphone and prerecorded answers concealed beneath a deactivated smart speaker. Before participants used the speaker, they were surveyed about their general trust of machines and shown images that, according to previous research, would put them in a good or a bad mood, or not alter their mood at all.

The participants asked the speaker 10 predetermined questions and received 10 prerecorded answers of varying accuracy or helpfulness. After each question, the participant would rate their satisfaction with the answer and report whether it met their expectations. At the end of the study, they described how much they trusted the speaker.

If participants didn't think the speaker delivered satisfactory answers, none of the variables measured or manipulated in the experiment - including age, gender, education and mood - changed their minds. However, among participants who said the speaker lived up to their expectations, men and people with less education were more likely to trust the speaker, while people over age 65 were significantly less likely to trust the device. The biggest surprise for the researchers was that, in the group whose expectations were met, mood priming led to increased trust regardless of whether the researchers tried to put them in a good mood or a bad mood. The researchers did not follow up on why this happened, but they noted that existing theory suggests that people may become more tolerant or empathetic with a product when they are emotional.

"Product designers always try to make people happy when they're interacting with a product," said Liao. "This result is quite powerful because it suggests that we should not only focus on positive emotions but that there is a whole emotional spectrum that is worth studying."

Proceed with caution

This research suggests there is a nuanced and complicated relationship between who we are and how we feel about technology. Parsing out the details will take further work, but the researchers emphasize that the issue of trust between humans and autonomous technologies deserves increased attention now more than ever.

"It bothers me that engineers pretend that they're neutral to affecting people's emotions and their decisions and judgments, but everything they design says, 'Trust me, I've got this totally under control. I'm the most high-tech thing you've ever used,'?" said MacDonald. "We test cars for safety, so why should we not test autonomous cars to determine whether the driver has the right level of knowledge and trust in the vehicle to operate it?"

As an example of a feature that might better regulate user trust, MacDonald recalled the more visible warnings she saw on navigation apps during the Glass Fire that burned north of Stanford in fall 2020, which instructed people to drive cautiously because the fire may have altered road conditions. Based on their findings, the researchers would also like to see design-based solutions that factor in the influence of users' moods, both good and bad.

"The ultimate goal is to see whether we can calibrate people's emotions through design so that, if a product isn't mature enough or if the environment is complicated, we can adjust their trust appropriately," said Liao. "That is probably the future in five to 10 years."

Credit: 
Stanford University

Novel gene variants that modify the risk of late-onset Alzheimer's disease discovered

Late-onset Alzheimer's disease (LOAD) is the most common form of dementia in the elderly, affecting almost 44 million people worldwide. Despite decades of research, genetic factors that predispose individuals to LOAD are not clearly understood. With a growing elderly population in the U.S. and many other developed countries, there is an urgent need for precise prognostic biomarkers and viable treatment options. Apart from advanced age, variants in the apolipoprotein E (APOE) gene are a strong predictor of disease risk. Those who carry the ε4 variant have a higher risk of developing Alzheimer's, while those who carry a different variant, ε2, are typically protected. Surprisingly, many individuals do not conform to these rules -- a puzzle that has confounded researchers. Some people with APOE ε4 remain disease-free, while some APOE ε2 carriers develop the disease.

Using a novel methodology, researchers at Baylor College of Medicine and Texas Children's Hospital examined the genomes of these "rule-breaker" individuals, which led to the identification of 216 new gene variants, many previously not suspected to play a role in this disease. This is an exciting first step toward understanding this paradox. The newly identified biomarkers could potentially be used in the future to refine risk assessment and patient prognosis in APOE ε2 and APOE ε4 carrier populations and act as therapeutic targets for this untreatable condition.

The study, led by Drs. Olivier Lichtarge and Juan Botas, professors of molecular and human genetics at Baylor College of Medicine and investigators at the Jan and Dan Duncan Neurological Research Institute at Texas Children's Hospital, was published in Alzheimer's & Dementia.

"We wanted to test if people who show paradoxical outcomes might have other genetic variants that blunted the risky or protective effects of the APOE genotype they carry," Lichtarge said. "To search for those variants, Young Won Kim, a graduate student in my lab, designed new algorithms to compare and contrast the genetic variants present in each patient. We were then fortunate to be able to test these with a unique high-throughput fruit fly screen developed by Juan and his team at the Duncan NRI. This is a good example of the power of comparative studies across species and only feasible through multidisciplinary collaborations."

A novel computational approach finds 216 genes with differential variants in paradoxical populations

The wide spectrum of clinical presentations and complex inheritance patterns of LOAD, and its co-occurrence with other neuropathologies, heart-related disorders and age-associated cognitive impairments, are some of the reasons why scientists have found the puzzle of paradoxical APOE ε2 and ε4 patients particularly challenging.

In this study, researchers tapped into the largest whole exome dataset available for this condition, called the Alzheimer's Disease Sequencing Project (ADSP).

"To screen for potential candidate variants, I focused on the coding regions of the human genome and estimated the functional impact of genetic variants by factoring in all the past evolutionary changes at the same gene positions. Through a series of regression analyses, we found 216 genes with significant differences in mutational load between the two paradoxical patient groups. Many of these genes showed involvement in known Alzheimer's disease pathways such as synaptic biology, dendritic spine pruning, and inflammatory responses in the brain. Also, we were able to use machine learning algorithms to predict which APOE?4 carriers would remain healthy and which APOE?2 carriers would develop Alzheimer's based on the variants they carry in the 216 genes," lead author, Young Won Kim, said.

High-throughput robotic screen of fruit flies validates candidate genes

The researchers' next step was to experimentally test the biological relevance of these 216 gene variants in modifying Alzheimer's disease pathogenesis.

"Testing so many genes for their impact on neuronal dysfunction in vivo is a daunting task. So, we leveraged fruit flies engineered in our lab to develop Alzheimer's pathology and used a high-throughput behavioral screen based on custom-made robots unique to the Duncan NRI to test the ability of each variant-associated gene to modify the disease pathology. This allowed us to validate many candidate genes and experimentally demonstrate their ability to modify disease pathology," Botas said.

"The robotic assays provided us with a quantitative assessment of the actual neuronal impairment. We used movement-specific behavioral assays as the main readout of nervous system function. Additionally, we obtained precise information of how variants in each gene modified its physiological function, meaning whether the impact on neuronal function was a result of the gene's under-performance (loss-of-function) or over-performance (gain-of-function). This information is critical for the future development of these biomarkers for therapeutic interventions, either by inhibiting or activating these genes in the future," Dr. Ismael Al-Ramahi, co-lead author and assistant professor at Baylor, added.

"To summarize, by focusing on paradoxical patient populations, we can now study the genetic modifiers that give them their special power to live a disease course different than expected. Here, in Alzheimer's disease, this may improve risk evaluation and point to gene targets for drugs that blunt the disease pathogenesis. Also, the massive use of evolutionary information seems to have made up for a relatively small number of patients (just 480) in this study, which is much less than a typical genome-wide association study. We think this makes our study design an attractive alternative way to study other health conditions with 'rule-breaking' patients. To conclude, this is the first proof-of-concept study to demonstrate the feasibility of integrating evolutionary biology and machine learning with high-throughput Drosophila genetics and neurobiology assays, to examine why some patients 'break free' of the genetic rules of disease," Lichtarge concluded.

Credit: 
Baylor College of Medicine

Disrupting the cellular process that promotes pancreatic cancer's deadly growth

WASHINGTON (December 8, 2020) -- Researchers say they've identified a way to disrupt a process that promotes the growth of pancreatic cancers -- one of the most difficult and deadly cancers to treat.

The team, led by scientists at Georgetown Lombardi Comprehensive Cancer Center and including investigators from Lawrence Livermore National Laboratory, STCube and Fluidigm, published their findings today in the journal Gastroenterology.

Pancreatic cancer is on course to be the second leading cause of cancer-related death in the country, according to the American Cancer Society, with most people surviving less than a year after diagnosis. The therapies approved to treat the cancer are only marginally effective, extending survival by just a few months.

One of the hallmarks of the disease is a build-up of scar tissue called fibrosis, in which the cancer-associated fibroblasts (CAF) that trigger fibrosis serve a tricky dual role -- depletion of CAF is just as likely to promote as to inhibit pancreatic cancer.

"We hypothesized that if we can alter the microenvironment where cross-talk between the fibroblasts, immune cells and cancer cells happens, then we can cause just enough disruption to push back tumor growth," says lead author, Ivana Peran, PhD, a research instructor of oncology at Georgetown Lombardi.

To do that, the team focused on adhesion molecule cadherin 11 (CDH11), which is expressed by CAFs in the pancreatic tumor environment and has been associated with other fibrotic disorders where it is expressed by activated fibroblasts. They examined inhibiting CDH11 in tumors growing in mice.

Their hypothesis turned out to be correct. For the first time, the team demonstrated that CDH11 is important in the natural history of pancreatic cancer. "Knocking out" the ability of mice to produce CDH11 reduced the growth of pancreatic tumors, increased the tumors' response to a common chemotherapy called gemcitabine, reversed immunosuppression and significantly extended the survival of the mice.

"Separately, we have identified a drug that can inhibit CDH11," said Stephen Byers, PhD, professor of oncology at Georgetown Lombardi. "This drug also inhibited the growth of pancreatic tumors and taken together with our earlier work, this finding warrants study in human clinical trials for people with pancreatic cancer whose tumors express CDH11."

Credit: 
Georgetown University Medical Center

'Pink tax' hurts female consumers, but electing more women combats it

HOUSTON - (Dec. 8, 2020) - The wage gap between men and women is no secret, but another form of gender discrimination directly and disproportionately affects women worldwide: the "pink tax" imposed by import tariffs that target female products.

But new Rice University research concludes the "pink tax" is generally lower in countries where more women are elected to political office.

"Women's Descriptive Representation and Gendered Import Tax Discrimination" will appear in an upcoming edition of the American Political Science Review. The article not only highlights a previously unacknowledged trade policy that penalizes women -- gender-biased import taxes -- but also provides evidence that political representation can have a substantial, direct impact on discriminatory policies.

Import tariffs, which can differ for otherwise identical gender-specific products, often impose direct penalties on women consumers, said researchers Diana O'Brien of Rice, Timm Betz of the Technical University of Munich and David Fortunato of the University of California, San Diego. When they compared nearly 200,000 tariff rates on identical men's and women's apparel products in 167 countries between 1995 and 2015, they found that the difference varied from country to country but that, on average, it disadvantaged women.

Importers ended up paying more for merchandise like jeans, boots, robes and T-shirts aimed at women than identical items sold to men. Women's and men's products were taxed at different levels nearly 40% of the time, and women's items were, on average, taxed 0.7% more than identical products aimed at men. The higher costs were passed on to wholesalers, then retailers and finally female consumers.

But the researchers found political representation is a key factor in lowering import tax penalties on women's goods.

"Just like in other areas of politics, inequalities in representation are reflected in inequalities in government policy," O'Brien said.

Across all democracies, women hold only about 25% of the seats in national legislatures, O'Brien said. In countries with more women in their legislatures, and in countries where women have been able to increase their share of legislative seats, the tax penalty on female consumers has been reduced.

Credit: 
Rice University

Johns Hopkins develops potential antibiotic for drug-resistant pathogen

Scientists from Johns Hopkins University and Medicine have developed a possible new antibiotic for a pathogen that is notoriously resistant to medications and frequently lethal for people with cystic fibrosis and other lung ailments.

The pathogen, called Mycobacterium abscessus, is related to the better-known bacteria that cause tuberculosis and leprosy, but has recently emerged as a distinct species presenting most often as a virulent lung infection. The team of scientists from the Krieger School of Arts & Sciences' Department of Chemistry and the School of Medicine's infectious diseases department published their findings in the journal Communications Biology.

The team has developed one of the first potential treatments for a bacterium that has no FDA-approved treatments and a cure rate of less than 50%. Before the compound, called T405, can move closer to becoming a clinical treatment, researchers need to improve its pharmacological potency using a preclinical animal model of the infection.

"People die of this in our hospitals every week," said Craig Townsend, a professor of chemistry who served as a principal investigator on the study along with Gyanu Lamichhane, an associate professor of medicine. "The data we have is very promising."

Despite years of urgent calls for more studies to understand the bacteria and to explore possible treatments, researchers have been wary of experimenting with the most dangerous member of the nontuberculous mycobacteria (NTM) family.

"It's still considered an emerging disease," Lamichhane said. "There are now more NTM than tuberculosis cases in the United States. And this is the most virulent of all of them."

The compound T405 has demonstrated a "superior potency against M. abscessus" over two commonly used antibiotics, the paper states. When combined with an existing medication called avibactam, T405 also demonstrated an ability to prevent the bacteria from developing resistance.

T405 was also well tolerated in mice and could be administered less intensively than current treatments, exposing patients to fewer toxic side effects such as deafness.

The infection is most frequently found in cystic fibrosis patients, but people with depressed immune systems and other lung diseases are also at risk. Transmission is not well understood, though the bacteria can be found in soil, dust and water. It causes infections of the lungs, soft tissue and skin.

Current therapeutic guidelines for the infection require 12 to 18 months of multidrug therapy, which has resulted in cure rates of only 30 to 50 percent, underscoring the "need for new antibiotics with improved activity," the paper states.

Credit: 
Johns Hopkins University

New cost-effective technique facilitates study of non-bacterial plant microbiomes

Thanks to a new technique developed by plant pathologists in Connecticut, scientists now have access to an affordable and effective tool to facilitate the study of the entire non-bacterial microbiomes of any plant species.

Like humans, plants have microbiomes. As plants grow, they recruit beneficial microorganisms from the surrounding soil to colonize the roots. Most plant microbiome research focuses on bacteria, though there are many other microorganisms that can impact plant health, such as nematodes, mites, protists, and fungi.

However, not much is known about the impact of these other microorganisms, in large part because of technical limitations. One such limitation is that samples from plant roots or tissues are full of plant DNA, which drowns out the DNA signal that scientists use to detect specific microorganisms.

To more accurately sequence and identify the DNA from nematodes, mites, protists, and fungi, scientists from The Connecticut Agricultural Experiment Station and the University of Connecticut adapted a technology known as "peptide nucleic acid (PNA) clamps" to block plant DNA. They published their results in Phytobiomes Journal.

"These clamps allowed us to examine the entire non-bacterial microbiomes with minimal contamination from the plant itself," explained first author Stephen Taerum. "By adding the clamp, we were able to get three times the sequence information from each plant sample and detect thousands of sequences from rare microorganisms that were otherwise undetectable."

The clamp they designed blocked DNA from corn, wheat, sorghum, and barley. With small changes, this method can be used to develop clamps that block DNA from other plant species.

"It's a technique that adds only small change to the type of methods many researchers are already using and reveals the presence of many new and different organisms, so we hope people will be encouraged to try it," added project leader Lindsay Triplett. "We are also very accessible, so we encourage researchers to contact us with any questions about PNA clamps."

Triplett, Taerum, and colleagues are now focusing their studies on the role of protists in the rhizosphere microbiome, with the hope to build upon the rich foundation of research laid by marine, soil, and plant protistologists around the world.

Credit: 
American Phytopathological Society

More support for induction at 41 weeks' pregnancy, especially for first-time mothers

image: Mårten Alkmark, Henrik Hagberg, Judit Keulen and Ulla-Britt Wennerholm

Image: 
Photo: University of Gothenburg, University of Amsterdam

There is growing evidence that pregnant women who go beyond term, especially first-time mothers, and their infants benefit from induction of labour at 41 weeks rather than expectant management with subsequent induction at 42 weeks if labour does not start spontaneously. This has become clearer now that researchers from Sweden and the Netherlands have appraised the results of three previous investigations.

The present study, an individual participant data meta-analysis, is published in the journal PLOS Medicine. Most of the researchers are connected to the University of Gothenburg and the University of Amsterdam.

In Sweden and the Netherlands, the risk of a baby dying before, during or shortly after birth ("perinatal death") is generally very low. The same is true of the risk of harm or injury to the baby in conjunction with the birth. However, these risks -- of perinatal death and morbidity (ill-health, trauma or other injury) alike -- are known to rise somewhat, from a low level, the longer a pregnancy goes on after the 40th week.

The purpose of the meta-analysis was to compare outcomes of induction at 41 weeks with those of expectant management until 42 weeks, with induction at 42 weeks if the woman had not yet delivered, by combining individual studies addressing the same question. To date, in some respects, it has been unclear which approach best protects the woman and child.

Three randomized studies of the same question have been published, all since the year 2000: SWEPIS (the SWEdish Post-term Induction Study), covering 2,760 women; the Dutch INDEX study (INDuction or Expectant management), covering 1,801 women; and a Turkish study of 600 women.

The Swedish and Dutch studies were able to contribute findings at individual level, and the Turkish study was also included in the aggregate appraisal of perinatal death and the proportion of cesarean deliveries. All the women had reached 41 weeks, were healthy and expecting one baby when they participated in the respective studies.

Of the 4,561 women included in the analysis of individual data, 2,281 were assigned to induction at 41 full weeks. In this group, 80 percent underwent induction; for the others, labour began spontaneously.

In the expectant management group of 2,280 women, spontaneous onset of labour was awaited until 42 weeks, at which point induction was planned. This has been the routine management of uncomplicated pregnancies at most birth centers in Sweden and the Netherlands. In this group, 30 percent of the women needed to be induced, while for the others labour began spontaneously.

In terms of the combined outcome of perinatal death and severe morbidity, 10 infants (0.4%) were affected in the group induced at 41 weeks and 23 (1.0%) in the 42-week group, a statistically significant difference. These results held for women delivering for the first time; for women who had already given birth, the numbers of perinatal deaths and cases of severe morbidity were too low to demonstrate any effect.

There was no difference in the women's state of health after birth between the groups. The proportions of cesarean sections and of instrumental births, using a ventouse (suction cup) or forceps, were also comparable.

Mårten Alkmark, a doctoral student in obstetrics and gynecology at Sahlgrenska Academy, University of Gothenburg, and senior consultant physician at the University Hospital, is one of the two first authors of the study.

"Being able to combine studies at individual level is a good, robust way of investigating questions where what we're studying is very unusual. It means that we've increased the number of women taking part, thereby also boosting the reliability of the results," Alkmark says.

"Our study shows, in agreement with previous research, that the risks of morbidity and perinatal death are lower when induction is carried out at 41 weeks than when it's done at 42 weeks, while it doesn't increase the risks of impaired health in the mothers."

Esteriek de Miranda, assistant professor at Amsterdam UMC, University of Amsterdam, and one of the two last authors: "This reduction in risk was only found for women having their first child. For women who had already given birth one or more times, earlier induction had no benefit for them or their babies."

Henrik Hagberg, professor of obstetrics and gynecology at Sahlgrenska Academy at the University of Gothenburg and senior consultant physician at the Sahlgrenska University Hospital, is one of the co-authors.

"If these results are extrapolated to Swedish conditions, where roughly 20,000 women a year are still pregnant at 41 weeks, one might prevent at least 100 cases a year of severe illness or death in the babies when they're induced at 41 weeks' gestation. The other side of the coin is that a lot of inductions then have to be done. To save one child from severe illness or death, statistically, 175 women have to undergo induction at 41 weeks," Hagberg says.

Judit Keulen, a doctoral student at Amsterdam UMC, University of Amsterdam, and one of the two first authors: "Choosing expectant management means an overall 99% chance of a good perinatal outcome for all women. For multiparous women choosing expectant management, the chance of a good outcome is no different than after induction of labour."

Ulla-Britt Wennerholm, senior clinical physician and associate professor of obstetrics and gynecology at Sahlgrenska Academy at the University of Gothenburg, is one of the two senior authors.

"Pregnant women whose pregnancies last 41 weeks should be informed about the advantages and disadvantages of induction, and those who then want to be induced should be offered this option," Wennerholm says.

Credit: 
University of Gothenburg

Why do elephants and tigers still roam in India? Study offers clues

Tropical Asia and Africa are the only regions on Earth that retain diverse populations of large, land-dwelling mammals, such as elephants, rhinos, and big cats. A new study co-authored by Yale researcher Advait M. Jukar suggests that the persistence of mammalian megafauna in the Indian Subcontinent is related to the great beasts' long coexistence there with Homo sapiens and other hominins.

The study, published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology -- and based on a novel dataset drawn from 51 fossil sites in present-day India -- documents a low-magnitude extinction that began about 30,000 years ago. That was about 30,000 years after modern humans arrived in the Indian Subcontinent.

The analysis provides the first direct and independent test of the "co-evolution hypothesis," a commonly held theory that the magnitude of an extinction correlates with the amount of time that large mammals coexist with humans and their hominin ancestors, the researchers said.

"During the past 100,000 years, people have been implicated in the extinction of large, land-dwelling mammals all over the world, but Indian megafauna proved more resilient and, as in Africa, have co-existed with humans for much longer periods than in other regions," said Jukar, a Gaylord Donnelly Postdoctoral Associate at the Yale Institute for Biospheric Studies and the study's lead author. "Our work supports the idea that some large species co-evolved with human ancestors, adapting to their presence and developing behaviors that helped them cope with how they altered the habitat."

Jukar co-authored the study with S. Kathleen Lyons and Peter J. Wagner of the University of Nebraska-Lincoln, and Mark D. Uhen of George Mason University.

Not all large mammals on the Indian Subcontinent survived, of course. The researchers document the extinctions of Palaeoloxodon namadicus and Stegodon namadicus, two species of elephant; Hexaprotodon sp., a hippopotamus; and Equus namadicus, a zebra-like horse. They also show the extirpation, or local extinction, of ostriches, which survive elsewhere, and the "pseudo-extinction" of the Indian aurochs -- the wild ancestor of the domestic zebu cattle that thrive in India today. The four extinctions represent about 4% of mainland India's mammalian fauna and 20% of its mammalian megafauna, animals weighing more than 50 kilograms, or 110 pounds. Human activity combined with the species' limited ranges and slow reproduction rates contributed to these extinctions, Jukar said.
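A quick back-of-the-envelope check puts those percentages in context; the totals below are implied by inverting the rounded shares reported above, not figures taken from the paper:

```python
extinctions = 4  # Palaeoloxodon, Stegodon, Hexaprotodon, Equus namadicus

share_of_all_mammals = 0.04  # ~4% of mainland India's mammalian fauna
share_of_megafauna = 0.20    # ~20% of its megafauna (animals over 50 kg)

# Implied sizes of the faunas being counted (approximate, since shares are rounded)
total_mammal_species = round(extinctions / share_of_all_mammals)
megafauna_species = round(extinctions / share_of_megafauna)

print(total_mammal_species, megafauna_species)
```

So the study's percentages imply a mainland fauna of roughly 100 mammal species, about 20 of which qualify as megafauna.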

The extinction rate in India over the past 50,000 years is comparable to that of eastern and southern Africa, but 2.5 times smaller than in South America and 4 times smaller than in North America, Europe, Madagascar, and Australia, according to the study. The researchers noted that India's extinction pattern is strikingly similar to that of Africa, where humans first evolved, lending support to the co-evolution hypothesis. (The first hominins -- a group that includes modern humans and all our immediate ancestors -- arrived in India about 1.7 to 1.5 million years ago.) The researchers conclude that, as in Africa, land-dwelling megafauna proved remarkably resilient to human pressures. They found that the presence of other hominins had little to no impact on the Indian Subcontinent's animal life and posit that early humans may have preferred to hunt smaller prey, such as primates or rodents, rather than megafauna.

The researchers also analyzed the role of concurrent climate trends -- including temperature fluctuations and varying monsoon intensity -- in the extinction pattern. While changes in climate may have elevated the extinction risk for species dependent on annual water sources, such as Hexaprotodon sp., the researchers found that climate change alone does not explain the low-magnitude but strongly size-biased extinction they documented. They noted that all of the extinct species they identified had survived previous periods of drought.

The researchers also note that Asian elephants, tigers, and other large mammals in India had extensive ranges extending from Turkey to Southeast Asia, which improved their chances of survival. The extinct species' ranges, however, were limited to the Indian Subcontinent, the researchers explain. They note that some species, including the Asian elephant, are known to inhabit refugia -- areas that offer protection during drought and other periods when conditions become unfavorable.

The fact that India's large mammals have proven resilient to the presence of humans is no excuse to become lax about conservation, Jukar cautioned.

"Today's mammals are facing many of the same pressures that these extinct mammals faced, but they are confined to smaller and smaller ranges," he said. "Climate change and the human activities that caused the extinctions we've documented are now accelerating at unprecedented rates. If we ignore these factors, we will lose the elephants, rhinos, and tigers that have survived."

Credit: 
Yale University

Researchers identify critical molecules that coronaviruses hijack to infect human cells

image: Melanie Ott co-led a study with researchers from Gladstone, the Chan Zuckerberg Biohub, UC San Francisco, and Synthego Corporation that points toward ways to treat not only COVID-19, but future coronaviruses that might emerge.

Image: 
Photo: Gladstone Institutes

SAN FRANCISCO, CA--December 8, 2020--When a coronavirus--including SARS-CoV-2, which causes COVID-19--infects someone, it hijacks the person's cells, co-opting their molecular machinery for its own survival and spread. Researchers at Gladstone Institutes and the Chan Zuckerberg Biohub, in collaboration with scientists at UC San Francisco (UCSF) and Synthego Corporation, have identified critical molecular processes in human cells that coronaviruses use to survive.

They report, in a study published in the journal Cell, that targeting these processes with drugs may treat not only COVID-19 infections, but other existing and future coronaviruses.

"What is unique about our study is that we didn't just look at SARS-CoV-2, but other coronaviruses at the same time," says one of the leaders of the study, Melanie Ott, MD, PhD, director of the Gladstone Institute of Virology. "This gives us a good idea of drug targets that could broadly suppress many coronaviruses."

Coronaviruses are a large family that includes common cold viruses as well as far more dangerous pathogens. The SARS-CoV virus that caused the deadly SARS epidemic in 2002 was a coronavirus, as is the MERS virus, which has caused outbreaks in the Middle East.

"There have now been multiple coronavirus outbreaks, so it's clear this virus family has high pandemic potential," says Andreas Puschnik, PhD, a principal investigator at the Chan Zuckerberg Biohub and the other leader of the study. "COVID-19 is not the last coronavirus infection we'll be dealing with."

Comparing and Contrasting Coronaviruses

Like all viruses, coronaviruses can only grow inside host cells; they rely on the host cell's molecules to multiply. Because of this, the team of researchers wanted to target the human molecules the viruses use to survive, rather than components of the viruses themselves.

In the new study, they infected human cells with either SARS-CoV-2 or two other coronaviruses that cause common colds--and all three viruses killed the cells. Next, the team of researchers mutated the cells using CRISPR-Cas9 gene-editing technology and studied which mutations made the cells less vulnerable to the coronaviruses.

"We reasoned that the few cells that could survive these infections presumably had mutations in host molecules that the viruses use to infect them or to multiply," explains Puschnik.
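The selection logic Puschnik describes can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration (the gene names and read counts are invented, not the study's data or pipeline): genes whose disrupting guides are over-represented among surviving cells, relative to the unselected population, emerge as candidate host factors.

```python
# Toy enrichment score for a CRISPR survival screen (hypothetical counts).
# Genes whose knockouts are enriched among survivors are candidate host
# factors the virus depends on; depleted or neutral genes score near or below 1.
def enrichment(survivor_counts, control_counts, pseudocount=1.0):
    """Per-gene fold enrichment of guide reads in survivors vs. controls."""
    s_total = sum(survivor_counts.values())
    c_total = sum(control_counts.values())
    scores = {}
    for gene in control_counts:
        s_frac = (survivor_counts.get(gene, 0) + pseudocount) / s_total
        c_frac = (control_counts[gene] + pseudocount) / c_total
        scores[gene] = s_frac / c_frac
    return scores

# Invented read counts: ACE2 knockouts strongly enriched among survivors,
# a neutral gene depleted because its cells were killed by the virus.
control = {'ACE2': 100, 'PIP_gene': 110, 'NEUTRAL': 105}
survivors = {'ACE2': 900, 'PIP_gene': 700, 'NEUTRAL': 40}
scores = enrichment(survivors, control)
top = max(scores, key=scores.get)
print(top, round(scores[top], 2))
```

Real screens use dedicated statistical tools rather than raw ratios, but the underlying comparison is the same.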

Some results were not surprising. For instance, the human ACE2 receptor is known to be required by SARS-CoV-2 to enter human cells. Accordingly, cells with a mutation in the ACE2 gene were no longer infected or killed by SARS-CoV-2.

But other findings were less expected. The researchers found that certain genetic mutations prevented all three coronaviruses from successfully infecting and killing the cells. These were mutations in genes known to control the balance of two types of lipid molecules in human cells, namely cholesterol and phosphatidylinositol phosphate (PIP).

Cholesterol is needed for some viruses to enter cells, but it hadn't been studied in the context of coronaviruses when this study started. Similarly, PIP is known to play a role in forming the small vesicles that viruses often use to travel into and around cells, but it had not been directly linked to SARS-CoV-2 before.

A Pathway toward Therapeutics

To verify the importance of the cholesterol and PIP genes for coronavirus infection, the researchers engineered human cells that lack these genes completely and infected them with the virus. Cells lacking the genes were protected from infection by all three coronaviruses. Similarly, when the team used existing compounds to disrupt the balance of PIP or cholesterol, the cells were less susceptible to infection by any of the viruses.

These results suggest that targeting cholesterol or PIP could be a promising strategy to combat multiple coronaviruses.

"For viruses, the traditional view has been that we design drugs against unique viral targets, and that means it takes time to develop a drug each time there's a new virus," says Ott, who is also a professor in the Department of Medicine at UCSF. "If we could develop a few broader antiviral drugs that target host cells' molecules, that would go a long way toward making us better prepared for future pandemic viruses."

Not all results were the same between the three studied viruses, however. Some human molecules required for SARS-CoV-2 infection weren't needed by the two common cold coronaviruses, and vice versa. These findings could help explain what makes SARS-CoV-2 more deadly than the other two viruses.

More work is needed to test the effectiveness of drugs targeting PIP and cholesterol, and whether they can effectively stop viral growth without causing dangerous side effects. The team would also like to repeat the screens using other coronaviruses--including the first SARS-CoV and MERS viruses--to determine just how universal the new targets they pinpointed are.

Ott and Puschnik agree that the current study was made possible by researchers from many labs coming together without hesitation. Puschnik has expertise in studying viral host factors, but didn't have access to a Biosafety Level 3 (BSL-3) lab required to work with SARS-CoV-2. Ott was spearheading Gladstone's effort to open such a lab earlier this year and offered to collaborate. Scientists at Synthego provided the engineered cells needed to study the viruses, and Gladstone Senior Investigator Nevan Krogan, PhD, helped analyze the results of the CRISPR-Cas9 screen.

"Everybody was completely willing to roll up their sleeves, pool resources, and work together to help contribute to better understanding COVID-19," says Puschnik.

Credit: 
Gladstone Institutes

Smartphone data shows real-time impact on health

ITHACA, N.Y. - Researchers at Cornell University are using smartphones to capture location and real-time survey data to examine how social environments encountered in everyday life may affect health.

Equipped with smartphones for a week, dozens of older New York City residents allowed a Cornell sociologist to track their movements and reported several times a day where they were, what they were doing and how they felt.

The 61 study participants, all age 55 or older, shared glimpses of their daily routines and activities: working at a bakery, being at home with a cat, walking in a park with grandchildren, visiting with friends, waiting in line for coffee and going to church. They also noted when they saw litter, vacant buildings, damaged sidewalks and other scenes they considered problematic, such as evidence of drug or alcohol use.

In the places they perceived as stressful or threatening, the older adults were significantly more likely to report momentary spikes in fatigue or pain, the study found. When observing at least two of those disorderly conditions, they were twice as likely to report feeling tired and about two-thirds more likely to report feeling pain.

"These fluctuations may have longer-term impacts on health and well-being for older adults who have to navigate demanding or distressing social environments on a regular basis," said Erin York Cornwell, lead author of the study.

In New York City, York Cornwell worked with community-based senior centers to recruit a diverse group of volunteers from four neighborhoods: East Harlem, Gramercy and the north and south sections of Bedford-Stuyvesant.

On average, about one-third of the study participants reported feeling some pain and about one-third some fatigue in any given survey, with those feelings being less prevalent in more orderly spaces. The risk increased with each additional condition of disorder observed.

The participants completed most of the surveys within 10 or 15 minutes of being pinged on their smartphones to complete one. Their overall response rate of nearly 99% is almost unheard of in survey research, York Cornwell said.

Researchers typically have relied on survey respondents to keep paper logs of their experiences, often recorded long after events occurred. Though still an emerging research method, York Cornwell said, smartphone-based data collection enables more immediate responses that promise to improve survey accuracy and reduce bias.

"It's an incredible tool for understanding how people experience and perceive their environments in real time, and then looking at how those environments may affect their health," she said. "We haven't been able to get that insight in such a clean way in the past, so this approach has really exciting implications."

Credit: 
Cornell University

Algorithms and automation: Making new technology faster and cheaper

image: Unlike traditional metal additive manufacturing, this process builds multiple printable volumes that start from different build surfaces with different orientations.

Image: 
Xinyi Xiao, Penn State

Additive manufacturing (AM) machinery has advanced over time; however, the software needed to drive new machines often lags behind. To help mitigate this issue, Penn State researchers designed automated process-planning software to save money, time and design resources.

Newer, five-axis machines move linearly along the x, y and z axes and can also rotate, allowing the machine to change an object's orientation. These machines are an advancement on traditional three-axis machines, which lack rotation capabilities and therefore require support structures.

Such a machine can potentially lead to large cost and time savings; however, five-axis AM lacks the same design planning and automation that three-axis machines have. This is where the creation of planning software becomes critical.

"Five-axis AM is a young area, and the software isn't there yet," said Xinyi Xiao, a summer 2020 Penn State doctoral recipient in industrial engineering, now an assistant professor in mechanical and manufacturing engineering at Miami University in Ohio. "Essentially, we developed a methodology to automatically map designs from CAD -- computer-aided design -- software to AM to help cut unnecessary steps. You save money by taking less time to make the part and by also using less materials from three-axis support structures."

Xiao conducted this work as part of her doctoral program in the Penn State Harold and Inge Marcus Department of Industrial and Manufacturing Engineering under the supervision of Sanjay Joshi, professor of industrial engineering. Their research was published in the Journal of Additive Manufacturing.

"We want to automate the decision process for manufacturing designs to get to 'push button additive manufacturing,'" Joshi said. "The idea of the software is to make five-axis AM fully automated without the need for manual work or re-designs of a product. Xinyi came to me when she needed guidance or had questions, but ultimately, she held the key."

The software's algorithm automatically determines a part's sections and their orientations. From this, the software designates when each section will be printed, and in which orientation, within the printing sequence. Through a decomposition process, the part's geometry is broken down into individual sections, each printable without support structures. As each piece is made in order, the machine can rotate about its axes to reorient the part and continue printing. Xiao compared it to working with Lego building blocks.
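As a rough illustration of this kind of sequencing logic (this is not the published algorithm; the section names, base relationships, and orientations below are invented), the constraint that each section can only be deposited onto already-printed material maps naturally to a topological sort:

```python
# Hypothetical sketch of support-free build-sequence planning: each decomposed
# section records the section it must be deposited onto and the build
# orientation it needs; a topological sort (Kahn's algorithm) yields a printing
# order in which every section rests on already-printed material.
from collections import deque

def plan_build_sequence(sections):
    """sections: dict name -> {'base': parent section or None, 'orientation': (rx, ry, rz)}"""
    children = {name: [] for name in sections}
    indegree = {name: 0 for name in sections}
    for name, info in sections.items():
        base = info['base']
        if base is not None:
            children[base].append(name)   # base must print before this section
            indegree[name] += 1
    ready = deque(sorted(n for n, d in indegree.items() if d == 0))
    order = []
    while ready:
        name = ready.popleft()
        order.append((name, sections[name]['orientation']))
        for child in children[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(sections):
        raise ValueError("cyclic support dependencies; part cannot be printed support-free")
    return order

# Invented three-section part: an overhanging arm printed after reorientation.
part = {
    'base_plate': {'base': None, 'orientation': (0, 0, 0)},
    'overhang_arm': {'base': 'base_plate', 'orientation': (0, 90, 0)},
    'tip': {'base': 'overhang_arm', 'orientation': (0, 90, 45)},
}
print(plan_build_sequence(part))
```

The real planner must also verify geometric feasibility (collision-free rotation, reachable build surfaces); the sketch covers only the ordering step.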

The algorithm can help inform a designer's process plan to manufacture a part. It allows designers opportunities to make corrections or alter the design before printing, which can positively affect cost. The algorithm can also inform a designer how feasible a part may be to create using support-free manufacturing.

"With an algorithm, you don't really need the expertise from the user because it's in the software," Joshi said. "Automation can help with trying out a bunch of different scenarios very quickly before you create anything on the machine."

Xiao said she intends to continue this research, as major application areas for the technology include aerospace and automotive manufacturing.

"Large metal components, using traditional additive manufacturing, can take days and waste lots of materials by using support structures," Xiao said. "Additive manufacturing is very powerful, and it can make a lot of things due to its flexibility; however, it also has its disadvantages. There is still more work to do."

Credit: 
Penn State

Understanding COVID-19 infection and possible mutations

image: This schematic shows the difference in interactions between SARS-CoV and human ACE2 versus SARS-CoV-2 and human ACE2.

Image: 
Ratul Chowdhury, Penn State

The binding of the SARS-CoV-2 spike protein -- a projection from the spherical virus particle -- to the human cell-surface protein ACE2 is the first step in the infection that may lead to COVID-19. Penn State researchers computationally assessed how changes to the spike's makeup can affect its binding with ACE2, and compared the results with those for the original SARS-CoV virus (SARS).

The researchers' original manuscript preprint, made available online in March, was among the first to computationally investigate SARS-CoV-2's high affinity, or tendency to bind, with human ACE2. The paper was published online on Sept. 18 in the Computational and Structural Biotechnology Journal. The work was conceived and led by Costas Maranas, Donald B. Broughton Professor in the Department of Chemical Engineering, and his former graduate student Ratul Chowdhury, who is currently a postdoctoral fellow at Harvard Medical School.

"We were interested in answering two important questions," said Veda Sheersh Boorla, doctoral student in chemical engineering and co-author on the paper. "We wanted to first discern key structural changes that give COVID-19 a higher affinity towards human ACE2 proteins when compared with SARS, and then assess its potential affinity to livestock or other animal ACE2 proteins."

The researchers computationally modeled the attachment of SARS-CoV-2 protein spike to ACE2, which is located in the upper respiratory tract and serves as the entry point for other coronaviruses, including SARS. The team used a molecular modeling approach to compute the binding strength and interactions of the viral protein's attachment to ACE2.

The team found that the SARS-CoV-2 spike protein is highly optimized to bind with human ACE2. Simulations of viral attachment to homologous ACE2 proteins of bats, cattle, chickens, horses, felines and canines showed the highest affinity for bats and human ACE2, with lower values of affinity for cats, horses, dogs, cattle and chickens, according to Chowdhury.

"Beyond explaining the molecular mechanism of binding with ACE2, we also explored changes in the virus spike that could change its affinity with human ACE2," said Chowdhury, who earned his doctorate in chemical engineering at Penn State in fall 2019.

Understanding the binding behavior of the virus spike with ACE2 and the virus tolerance of these structural spike changes could inform future research on vaccine durability and the potential for the virus to spread to other species.

"The computational workflow that we have established should be able to handle other receptor binding-mediated entry mechanisms for other viruses that may arise in the future," Chowdhury said.

Credit: 
Penn State

New treatment in development for irritable bowel syndrome with constipation

Patients suffering from irritable bowel syndrome with constipation (IBS-C) have long needed an upgrade in treatment. Rapid-release, cramp-inducing doses of chenodeoxycholic acid (CDC) have previously shown promise in treating constipation, but further development has been hampered by the abdominal pain associated with the sudden release of CDC. Researchers at Brigham and Women's Hospital and the Massachusetts Institute of Technology (MIT) devised a plan to deliver CDC in a bilayered capsule, finding that this mode of delivery could decrease colon cramping and thus produce a better patient experience. In preclinical studies, the team found evidence that this bilayered delivery system has the potential to reduce cramping and provide constipation relief. Findings are published in Clinical and Translational Gastroenterology.

"We know bile acids are capable of helping with motility, but what has been attempted in the past is giving a bolus -- a boatload of bile acid all at once. This manifests in increased bowel movements, but also pain," said Giovanni Traverso, MD, PhD, of the Brigham's Division of Gastroenterology, Hepatology and Endoscopy and the Department of Mechanical Engineering at MIT. "Could we take this endogenous, natural product and deliver it in a way that overcomes this risk of contractions?"

The liver produces bile acids to aid in the digestive process, regulating intestinal motility, fluid homeostasis and humoral activity. Bile acids such as CDC have previously been studied in patients for their pro-motility effects and are recognized to enhance water ingression and bowel motility. The challenge has been how to administer them in ways that minimize potential side effects; to accomplish this, the researchers developed a bilayered delivery system.

The bilayered delivery mechanism was tested in swine models for half-life, colon cramping and whether it caused a similar dosage spike to single-layered delivery. Immediate release of the pill's surface layer established a healthy local concentration of CDC, which in turn offered enough colonic fluid to initiate the pill's second, slower-release layer of CDC. This biphasic release of CDC established a low-dose, long-lasting presence of bile acid over time, avoiding the dosage spike and decreasing cramping.
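The release profile described above can be approximated with a toy one-compartment model. Every parameter below is invented for illustration and is not from the study; the point is only that splitting the dose into a small burst plus a slow zero-order release flattens the concentration peak that a single bolus produces.

```python
# Toy one-compartment model (all parameters hypothetical) contrasting a single
# bolus with a biphasic release: a small immediate dose plus a slow zero-order
# release keeps local concentration low and steady instead of spiking.
import math

def bolus_conc(t, dose, k_elim):
    """Concentration after releasing the full dose at t = 0."""
    return dose * math.exp(-k_elim * t)

def biphasic_conc(t, burst, infusion_rate, duration, k_elim):
    """Immediate 'burst' dose plus zero-order release over `duration` hours."""
    c = burst * math.exp(-k_elim * t)
    if t <= duration:
        c += (infusion_rate / k_elim) * (1 - math.exp(-k_elim * t))
    else:
        c += (infusion_rate / k_elim) * (1 - math.exp(-k_elim * duration)) \
             * math.exp(-k_elim * (t - duration))
    return c

k = 0.5       # elimination rate constant, 1/h (assumed)
dose = 10.0   # arbitrary units
# Peak concentrations over a 12-hour grid (0.1 h steps).
peak_bolus = max(bolus_conc(t / 10, dose, k) for t in range(0, 120))
peak_biphasic = max(biphasic_conc(t / 10, 2.0, 4.0, 2.0, k) for t in range(0, 120))
print(peak_bolus, peak_biphasic)  # the biphasic profile peaks far lower
```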

Several important limitations exist for this study, including lack of similar studies for comparison and shortcomings of the swine model; the team was not able to measure levels of abdominal pain in the swine model, and only rectal contractions were measured, as opposed to a full colonic evaluation.

Within the next 18 months, clinical trials will begin for the bilayered delivery of CDC to IBS-C patients, with pill production overseen by the team's newly founded company, Bilayer Therapeutics. Traverso envisions applications for this therapeutic far beyond IBS-C, as bile acids are also involved in metabolic diseases, including diabetes and liver cytopathy.

"While further studies are necessary to demonstrate the therapeutic potential of these systems in humans, our findings suggest that controlled delivery of bile acids to the colon may represent a novel approach to treating gastrointestinal diseases such as constipation," said co-author Joshua Korzenik, MD, of the Brigham's Gastroenterology and Hepatology Division.

Credit: 
Brigham and Women's Hospital

Colorado mountains bouncing back from 'acid rain' impacts

image: Meadows, forests and mountain ridges create the high alpine landscapes of Niwot Ridge in the Rocky Mountains, 25 miles northwest of Boulder, Colorado.

Image: 
William Bowman

A long-term trend of ecological improvement is appearing in the mountains west of Boulder. Researchers from the University of Colorado Boulder have found that Niwot Ridge--a high alpine area of the Rocky Mountains, east of the Continental Divide--is slowly recovering from increased acidity caused by vehicle emissions in Colorado's Front Range.

Their results show that nitric and sulfuric acid levels in the Green Lakes Valley region of Niwot Ridge have generally decreased over the past 30 years, especially since the mid-2000s. The findings, which suggest that alpine regions across the Mountain West may be recovering, are published in the Journal of Geophysical Research: Biogeosciences.

This is good news for the wildlife and wildflowers of Rocky Mountain National Park, north of Niwot Ridge, which depend on limited levels of acidity in water and soil to thrive. Colorado's Rocky Mountains also supply much of the water used by people living in the Mountain West, and the integrity of these ecosystems influences both the quantity and the quality of that water.

"It looks like we're doing the right thing. By controlling vehicle emissions, some of these really special places that make Colorado unique are going back to what they used to be," said Jason Neff, co-author on the paper and director of the Sustainability Innovation Lab at Colorado (SILC).

Almost every area in the world, including Colorado's Rocky Mountains, has been affected over the past 200 years by increased levels of acidifying nutrients, like nitrogen, deposited in rain and snow. Nitrogen oxides, like nitrate, come primarily from vehicles and energy production, while ammonium is a main ingredient in common agricultural fertilizers.

Nitrogen is a fundamental nutrient in ecosystems. But when nitrogen levels rise too high, the resulting changes in soil and water chemistry can make it difficult for native plants to thrive or even survive--leading to a cascade of negative consequences.

In the summer, the sun heats up the Eastern flanks of the Front Range, causing the warmer air to rise--bringing nitrogen from cars, industry and agriculture with it. As this air cools, it forms clouds over the Rocky Mountains and falls back down as afternoon thunderstorms--depositing contaminants, explained Neff.

In the 1970s, so-called "acid rain" hit East Coast ecosystems much harder than the Mountain West, famously wiping out fish populations and killing trees across large swaths of upstate New York. But scientists are still working to understand how increased levels of acidic nutrients affect the alpine region and how long these ecosystems take to recover.

To fill this knowledge gap, the researchers analyzed data from 1984 to 2017 on atmospheric deposition and stream water chemistry from the Mountain Research Station, a research facility of the Institute of Arctic and Alpine Research (INSTAAR) and CU Boulder located on Niwot Ridge. They found that levels of nitric and sulfuric acid stopped increasing in the Green Lakes Valley around the early 2000s and started decreasing in the mid-2000s.
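For intuition, the kind of turning-point trend analysis described here can be sketched with ordinary least squares on a synthetic series. The values below are invented; the study's water-quality models are far more rigorous.

```python
# Minimal trend illustration (synthetic, hypothetical values): fit least-squares
# slopes to annual nitrate deposition before and after a candidate turning
# point, mirroring the rise-then-decline pattern reported for Green Lakes Valley.
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

years = list(range(1984, 2018))
# Synthetic series: rising until the mid-2000s, then declining (illustrative only).
nitrate = [1.0 + 0.05 * (y - 1984) if y <= 2005 else 2.05 - 0.08 * (y - 2005)
           for y in years]

pre = [(y, v) for y, v in zip(years, nitrate) if y <= 2005]
post = [(y, v) for y, v in zip(years, nitrate) if y > 2005]
slope_pre = ols_slope(*zip(*pre))
slope_post = ols_slope(*zip(*post))
print(slope_pre, slope_post)  # positive before the turning point, negative after
```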

Their findings were not all good news, however. Levels of ammonium from fertilizer have more than doubled in rainfall in this area between 1984 and 2017, indicating a need to continue monitoring this agricultural chemical and its effects on the mountain ecosystem.

From field work to statistics

This work builds on decades of field work by Colorado researchers at CU Boulder and beyond.

Niwot Ridge is one of 28 Long Term Ecological Research (LTER) Network sites in the U.S., funded by the National Science Foundation. Its 4 square miles stretch from the Continental Divide down to the subalpine forest, 25 miles northwest of Boulder. Researchers at CU Boulder, as well as Colorado State University and the United States Geological Survey, have been collecting data here since the mid-1970s, hiking through snow, sleet and rain to get it.

In the '80s, '90s and 2000s, they worked to bring attention to increasing acidification in Colorado mountain ecosystems and to the need for pollution regulation in the Front Range.

This new research was made possible by these dedicated scientists, stresses Neff.

"We used water quality modeling and statistical approaches to analyze the long-term datasets that Niwot researchers have been collecting for decades," said Eve-Lyn Hinckley, a co-author on the paper and fellow of INSTAAR. "The data are available for anyone to download. Our modeling approaches allowed us to evaluate the patterns they hold in a rigorous way."

Since 1990, Bill Bowman, director of the Mountain Research Station and a professor of ecology and evolutionary biology, has been looking into how nutrients like nitrogen affect plants in mountain ecosystems. He's found that alpine environments are unique in how they respond to these nutrients.

"It's a system that is adapted to low nutrients, as well as a harsh climate and a very short growing season--and frost in the middle of the season. These are very slow growing plants. And they just simply can't respond to the addition of more nitrogen into the system," said Bowman, also a fellow in INSTAAR.

He has also found that these ecosystems recover quite slowly, even after acidic elements like nitrogen are no longer being added. But like Neff, who completed his undergraduate honors thesis with Bowman in 1993 using Niwot Ridge data, he sees this research as encouraging.

Even if it's slow going, they said, these results show that the ecosystem has a chance to recover.

"We still have air quality issues in the Front Range. But even with those air quality issues, this research shows that regulating vehicle and power plant emissions is having a big impact," said Neff.

Credit: 
University of Colorado at Boulder

'SCOUT' helps researchers find, quantify significant differences among organoids

video: SCOUT, a new MIT-developed pipeline for clearing, labeling, 3D imaging and rigorously analyzing cerebral organoids, or "minibrains."

Image: 
MIT Picower Institute

The ability to culture cerebral organoids or "minibrains" using stem cells derived from people has given scientists experimentally manipulable models of human neurological development and disease, but not without confounding challenges. No two organoids are alike and none of them resemble actual brains. This "snowflake" problem has held back the science by making scientifically meaningful quantitative comparisons difficult to achieve. To help researchers overcome those limitations, MIT neuroscientists and engineers have developed a new pipeline for clearing, labeling, 3D imaging and rigorously analyzing organoids.

Called "SCOUT" for "Single-Cell and Cytoarchitecture analysis of Organoids using Unbiased Techniques," the process can extract comparable features among whole organoids despite their uniqueness - a capability the researchers demonstrate via three case studies in their new paper in Scientific Reports. In one of the case studies, for example, the team reports new patterns of disruption in organoid development from Zika virus infection, providing new insights into why babies born to infected mothers can exhibit severe neurological deficits.

"When you are dealing with natural tissues you can always subdivide them using a standard tissue atlas, so it is easy to compare apples to apples," said study co-lead author Alexandre Albanese, a research scientist in the lab of the paper's senior author, Associate Professor Kwanghun Chung. "But when every organoid is a snowflake and has its own unique combination of features, how do you know when the variability you observe is because of the model itself rather than the biological question you are trying to answer? We were interested in cutting through the noise of the system to make quantitative comparisons."

Albanese co-led the research with former MIT chemical engineering graduate student Justin Swaney. The team has taken the added step of sharing their software and protocols on GitHub so that they can be freely adopted. Chung said that by sharing many of his lab's tissue processing, labeling and analysis innovations, he hopes to speed up biomedical progress.

"We are developing all these technologies to enable more holistic understanding of complex biological systems, which is essential to accelerate the pace of discovery and the development of therapeutic strategies," said Chung, an investigator in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science as well as a faculty member in Chemical Engineering and Brain and Cognitive Sciences. "Disseminating these technologies is as important as developing them to make a real-world impact."

Abstracting architecture

Several of the Chung lab's technologies are components of the SCOUT pipeline. The process starts by making organoids optically transparent so they can be imaged with their 3D structure intact--a key capability, Chung said, for studying whole organoids as developing systems. The next SCOUT step is to infuse the cleared organoids with antibody labels targeting specific proteins to highlight cellular identity and activity. With organoids cleared and labeled, Chung's team images them with a light-sheet microscope to gather a full picture of the whole organoid at single-cell resolution. In total, each organoid produces about 150 GB of data for automated analysis by SCOUT's software, principally coded by Swaney.

The high-throughput process allows for many organoids to be processed, ensuring that research teams can include many specimens in their experiments.

The team chose its antibody labels strategically, Albanese said. With a goal of discerning cell patterns arising during organoid development, the team decided to label proteins specific to early neurons (TBR1) and radial glial progenitor cells (SOX2) because their organization impacts downstream development of the cortex. The team imbued SCOUT with algorithms to accurately identify every distinct cell within each organoid.

From there, SCOUT could start to recognize common architectural patterns such as identifying locations where similar cells cluster or areas of greater diversity, as well as how close or far different cell populations were from ventricles, or hollow spaces. In developing brains and organoids alike, cells organize around ventricles and then migrate out radially. With the aid of artificial intelligence-based methods, SCOUT was able to track patterns of different cell populations outward from each ventricle. Working with the system, the team therefore could identify similarities and differences in the cell configurations, or cytoarchitectures, across each organoid.
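One family of SCOUT-style features -- how a cell population distributes radially outward from ventricles -- can be illustrated with a toy computation. The coordinates, ventricle centroids, and bin sizes below are invented; this is not the SCOUT implementation.

```python
# Toy sketch of a radial cytoarchitecture feature: for each detected cell,
# find the distance to the nearest ventricle and count cells in concentric
# distance bins, summarizing how a population spreads outward.
import math

def nearest_ventricle_distance(cell, ventricles):
    """Euclidean distance from a cell to its closest ventricle centroid."""
    return min(math.dist(cell, v) for v in ventricles)

def radial_histogram(cells, ventricles, bin_width=10.0, n_bins=5):
    """Count cells in concentric distance bins around the nearest ventricle."""
    counts = [0] * n_bins
    for cell in cells:
        d = nearest_ventricle_distance(cell, ventricles)
        b = min(int(d // bin_width), n_bins - 1)  # clamp far cells to last bin
        counts[b] += 1
    return counts

# Invented 3D coordinates (arbitrary units): two ventricles, four SOX2+ cells.
ventricles = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
sox2_cells = [(5.0, 0.0, 0.0), (12.0, 0.0, 0.0), (95.0, 3.0, 0.0), (60.0, 0.0, 0.0)]
print(radial_histogram(sox2_cells, ventricles))
```

Comparing such histograms across organoids is one way features like these become comparable despite each organoid's unique geometry.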

Ultimately, the researchers were able to build a set of nearly 300 features on which organoids could be compared, ranging from the single-cell to the whole-tissue level. Chung said that with further analysis and different molecular label choices, even more features could be developed. Notably, the features extracted by SCOUT are unbiased, because they are products of the software's analysis rather than preordained hypotheses about what is "supposed to be" meaningful.
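Once each organoid is summarized as a feature vector, comparing groups reduces to screening those features for large between-group differences. The sketch below flags features by standardized mean difference (a Cohen's d-style screen); this is illustrative only, and the study's actual statistical methodology differs.

```python
import numpy as np

def differing_features(group_a, group_b, names, threshold=1.0):
    """Flag features whose standardized mean difference between two
    groups of organoids exceeds a threshold. Rows = organoids,
    columns = features."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled = np.sqrt((a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)) / 2)
    d = (a.mean(axis=0) - b.mean(axis=0)) / pooled  # effect size per feature
    return [(name, round(float(val), 2))
            for name, val in zip(names, d) if abs(val) >= threshold]

# Hypothetical feature tables: e.g., total cell count, ventricle volume
young = [[100, 5.0], [110, 5.5], [105, 5.2]]
old   = [[200, 5.1], [210, 5.4], [205, 5.3]]
flagged = differing_features(young, old, ["cell_count", "ventricle_vol"])
```

Here only `cell_count` would be flagged, mirroring how SCOUT surfaced dozens of significant differences between organoid groups.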

Scientific comparisons

With the analytical pipeline set, the team put it to the test. In one case study they used it to discern trends of organoid development by comparing specimens of different ages. SCOUT highlighted dozens of significant differences not only in overall growth, but also changes in the proportions of cell types, differences in layering, and other changes in tissue architecture consistent with maturation.

In another case study, they compared different methods of culturing organoids. Harvard University co-authors Paola Arlotta and Silvia Velasco have developed a method that, according to single-cell RNA sequencing analysis, produces more consistent organoids than other protocols. The team used SCOUT to compare them with conventionally produced organoids to assess their consistency at the tissue scale. They found that the "Velasco" organoids show improved consistency in their architectures but still exhibit some variability.

Zika insights

The third case study, involving Zika virus, not only demonstrated the utility of SCOUT in detecting major changes but also led to the discovery of rare events. Chung's group collaborated with virus expert Lee Gehrke, Hermann L.F. von Helmholtz Professor in IMES, to determine how Zika infection changed organoid development. SCOUT spotted 22 major differences between infected and uninfected organoids, including some that had not been documented before.

"Overall this analysis provided a first-of-its kind comprehensive quantification of Zika-mediated pathology including loss of cells, reduction of ventricles and overall tissue reorganization," the authors wrote. "We were able to characterize the spatial context of rare cells and distinguish group-specific differences in cytoarchitectures. Infection phenotype reduced organoid size, ventricle growth and the expansion of SOX2 and TBR1 cells. Given our observation that SOX2 cell counts correlate with multiscale tissue features, it is expected that Zika-related loss of neural progenitors decreased in the complexity of tissue topography and cell patterning."

Chung said his lab is also collaborating with colleagues studying autism-like disorders to learn more about how development may differ.

Credit: 
Picower Institute at MIT