
New algorithm will prevent misidentification of cancer cells

Researchers from the University of Kent have developed a computer algorithm that can identify differences in cancer cell lines based on microscopic images, a unique development towards ending misidentification of cells in laboratories.

Cancer cell lines are cells isolated and grown as cell cultures in laboratories for study and developing anti-cancer drugs. However, many cell lines are misidentified after being swapped or contaminated with others, meaning many researchers may work with incorrect cells.

This has been a persistent problem since work with cancer cell lines began. Short tandem repeat (STR) analysis is commonly used to identify cancer cell lines, but is expensive and time-consuming. Moreover, STR cannot discriminate between cells from the same person or animal.

Based on microscopic images from a pilot set of cell lines and utilising computer models capable of 'deep learning', researchers from Kent's School of Engineering and Digital Arts (EDA) and School of Computing (SoC) trained the models on large volumes of cancer cell image data. From this, they developed an algorithm that allows computers to examine new microscopic digital images of cell lines and accurately identify and label them.
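As a rough illustration of how such a classifier can be built (a minimal sketch under assumed conditions, not the Kent team's actual model, data or training settings), a pretrained convolutional network can be retrained to label microscope images that have been sorted into one folder per cell line. The folder layout, the choice of PyTorch and ResNet-18, and the hyperparameters below are all assumptions made for illustration.

# Hypothetical sketch: retrain an off-the-shelf CNN to label cell-line images.
# Assumes a directory tree like images/train/<cell_line_name>/*.png -- not the
# authors' actual data, architecture or training procedure.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("images/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and retrain only the final classification layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()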

This breakthrough has the potential to provide an easy-to-use tool that enables the rapid identification of all cell lines in a laboratory without expert equipment and knowledge.

This research was led by Dr Chee (Jim) Ang (SoC) and Dr Gianluca Marcelli (EDA) with leading cancer cell lines experts Professor Martin Michaelis and Dr Mark Wass (School of Biosciences).

Dr Ang, Senior Lecturer in Multimedia/Digital Systems, said: 'Our collaboration has demonstrated tremendous results for potential future implementation in laboratories and within cancer research. Utilising this new algorithm will yield further results that can transform the format of cell identification in science, giving researchers a better chance of correctly identifying cells, leading to reduced error in cancer research and potentially saving lives.

'The results also show that the computer models can allocate exact criteria used to identify cell lines correctly, meaning that the potential for future researchers to be trained in identifying cells accurately may be greatly enhanced too.'

Credit: 
University of Kent

Preschool program linked with better social and emotional skills years later

UNIVERSITY PARK, Pa. -- A preschool enrichment program developed at Penn State helps boost social and emotional skills that still have positive effects years later during middle and high school, according to a new study.

The researchers found that students attending Head Start preschools that implemented the Research-based, Developmentally Informed (REDI) program were less likely to experience behavioral problems, trouble with peers and emotional symptoms like feeling anxious or depressed by the time they reached seventh and ninth grade.

Karen Bierman, Penn State Evan Pugh Professor of Psychology, said she was encouraged that the students were still showing benefits from the program years later.

"The program had an effect on internal benefits, including better emotion management and emotional well-being, as well as external benefits, such as reduced conduct problems," Bierman said. "So not only did the program result in fewer distressed adolescents, but it also resulted in less distress for their teachers and peers, as well. It's an important finding to know we can promote these long-term benefits by intervening early with a strategic prevention programming embedded in a well-established public program like Head Start."

According to the researchers, living in poverty is difficult for children and their families. The lack of resources and added stress increase the chance that a child may develop gaps in social, emotional and language skills by the time they begin school, placing them behind other children growing up in more well-resourced circumstances. Moreover, this gap tends to widen over time, placing children in low income families at risk for developing emotional and behavioral problems by the time they reach adolescence.

But Bierman said previous research has also shown that stronger early social-emotional and self-regulation skills can protect against these effects, creating an opportunity for preschool programs to help close some of these gaps.

The REDI program was developed at Penn State as a way to build upon the existing Head Start program, which provides preschool education to low-income children. The REDI program aims to improve social and emotional skills, as well as early language and literacy skills, by incorporating stories, puppets and other activities that introduce concepts like understanding feelings, cooperation, friendship skills and self-control skills.

Bierman said the program uses a classroom curriculum called Preschool PATHS, which stands for Promoting Alternative Thinking Strategies.

"It's a program that teaches skills like how to make friends, how to be aware of your and others' feelings, and how to manage strong feelings and conflict," Bierman said. "These programs are designed to enhance the child's ability to get along with others, regulate their emotions, and develop coping skills."

She added that REDI also promotes language development with daily interactive reading and discussion sessions that involve children in talking through the social and emotional challenges faced by story characters.

For the study, the researchers identified 25 preschool centers participating in Head Start. After the researchers obtained consent from the children's parents, 356 children were cleared to participate in the study. Classrooms were randomly assigned either to the intervention group -- which included the REDI program enhancements -- or to the comparison group, which was instructed to proceed with the school year as usual.

Students were assessed at the beginning and end of the preschool year, as well as at several checkpoints as they moved into elementary, middle, and high school. For this study, teachers rated students during seventh and ninth grade on such factors as conduct problems, emotional symptoms, hyperactivity and inattention, and peer problems.

"After the children left preschool, they moved on to many different schools and school districts," Bierman said. "Once they reached seventh and ninth grade, their teachers who provided ratings for this study didn't know who had been in the REDI classrooms and who hadn't, so it was very much a blind rating."

After analyzing the data, the researchers found that the number of students with clinically significant levels of conduct problems, emotional symptoms and peer problems was lower for children who had been in the Head Start classrooms implementing the REDI program compared to those in the Head Start classrooms without REDI enhancements.

By ninth grade, 6 percent of the REDI program students had ratings of very high conduct problems compared to 17 percent in the comparison group, and 3 percent of REDI program students had very high emotional symptoms compared to 15 percent in the comparison group. Additionally, 2 percent of REDI program students had very high peer problems compared to 8 percent in the comparison group.

"Teachers gave these ratings using clinical screening questionnaires, so students with very high difficulties may have problems significant enough to be referred for mental health treatment," Bierman said. "The main effect of the REDI program was to reduce the number of adolescents scoring in the highest risk category in adolescence and move them to a lower risk category."

The researchers said the results -- published today (Dec. 10) in the American Journal of Psychiatry -- suggest that programs like REDI can help reduce the gaps in school readiness and mental health that can come when early development is disadvantaged by financial hardship and lack of access to resources and supports.

"We found that the effects that lasted through adolescence weren't in the academic areas like literacy and math, but in the social-emotional areas," Bierman said. "Perhaps in the past, we've been too focused boosting academic learning in preschool and not paid enough attention to the value of enriching preschool with the social-emotional supports that build character and enhance school adjustment. We know from other research that these skills become very important in predicting overall success in graduating from high school, supporting future employment and fostering overall well-being in life."

Credit: 
Penn State

A genetic shortcut to help visualize proteins at work

image: A team led by Nevan Krogan at Gladstone and UCSF has demonstrated that a large-scale and systematic genetic approach can indeed yield reliable and detailed information on the structure of protein complexes.

Image: 
QBI, UCSF

SAN FRANCISCO, CA--December 10, 2020--One of biologists' most vexing tasks is figuring out how proteins, the molecules that carry the brunt of a cell's work, do their job. Each protein has a variety of knobs, folds, and clefts on its surface that dictate what it can do. Scientists can visualize these features fairly easily on individual proteins. But proteins don't act alone, and scientists also need to know the shape and composition--the structure, as they call it--of the complexes that proteins form when working together.

With precise information about the structure of protein complexes, scientists stand better chances of designing efficient drugs to block or boost the complexes' activity for therapeutic applications. They can also better anticipate how a mutation might disrupt a complex and lead to disease.

But determining the structure of protein complexes is a painstaking endeavor. Every complex is different, there is no one-size-fits-all way to determine their structure, and few means to speed up the process. Most importantly, the methods that yield the most precise structural information entail taking the complexes out of their natural context--the cell. As a result, scientists peering at a structure are faced with a nagging doubt: Does it really reflect how the complex looks and works when it's still in the cell?

Ultimately, proteins come from genes, and since genes have proven easier to work with than proteins, some scientists are looking to genes and a fast-growing arsenal of genetic tools to facilitate the determination of protein structures.

Now, a group at Gladstone Institutes and UC San Francisco (UCSF) has demonstrated that a large-scale and systematic genetic approach can indeed yield reliable and detailed information on the structure of protein complexes. Their findings are published in the journal Science.

"Our technique allows us to collect large amounts of structural data from live cells, reflecting how the proteins work in their normal environment rather than in artificial lab conditions," says Nevan Krogan, PhD, who led the study and is a senior investigator at Gladstone, as well as a professor of cellular and molecular pharmacology and the director of the Quantitative Biosciences Institute (QBI) at UCSF. "This has not been possible on such a scale before, and it should greatly speed up the process of determining the structure of protein complexes, including those that are difficult to tackle with traditional methods."

The approach builds on a technology that Krogan pioneered called genetic interaction mapping. It screens through thousands of combinations of gene mutations, in live cells and in relatively little time, and can reveal genes whose protein products work in common cellular processes. Krogan and his team cranked up the resolution power of these screens and successfully modeled two protein complexes in yeast cells, and one in bacterial cells.

Krogan sees this advance not as a replacement for other ways of determining the structure of proteins, but as a crucial complement, part of a strategy called "integrative modeling" pioneered by Andrej Sali, a professor at UCSF and a collaborator on this project.

"Combining the genetic data from our screens with other structural information improved the accuracy of our models," Krogan says. "Our study highlights the power of integrative modeling and the value of combining several data sets gathered in completely different ways."

From Yeast to Human Cells and Diseases

Proteins are chains of building blocks called amino acids. Deciphering the structure of a protein complex consists mainly in figuring out which stretches of amino acids end up close to one another when the complex is assembled. Most of the time, this is achieved through biochemistry.

Instead, Krogan and his team relied on genetics, and looked at how the amino acids of a complex behaved in their large-scale screens. The idea is that if two amino acids are close to each other--say, within the same knob or cleft at the complex's surface--they are likely to perform similar functions for the complex. Therefore, in a genetic screen, the two amino acids are expected to interact with the same genes. But protein complexes are indeed complex, with different regions potentially influencing each other or performing similar functions.

"So, if two amino acids in a complex interact with the same gene, they may or may not be close to each other," says Hannes Braberg, PhD, co-first author of the study and a scientist at QBI, an organized research unit under the School of Pharmacy at UCSF. "But if they interact with the same 50 genes out of 1,000 possibilities, then the chances are much greater that they are indeed close to each other in the complex."

The scientists decided to explore whether this reasoning could be used to determine the structure of protein complexes in their native environment--live, growing cells.

They started with two proteins called Histone H3 and Histone H4, which form a well-understood protein complex. They performed their screen in yeast cells and used the resulting information to model the structure of the histone H3-H4 complex.

"The structure we obtained was consistent with existing data about the protein complex," says Braberg. "And the performance of our method is comparable to that of a commonly used biochemistry approach, which is remarkable, given that the genetic interaction data is purely based on looking at how well cells grow!"

The success of their approach was not limited to the H3-H4 complex, as the researchers obtained similar results with two other protein complexes, one in yeast and one in bacterial cells. This bodes well for the widespread application of the approach to many more complexes, particularly complexes that do not yield easily to traditional techniques because, for instance, they are embedded in other cellular components, or are too large or too short-lived.

"Recent advances in CRISPR-Cas9 genome editing should also enable us to extend our approach to human cells," says Krogan. "This possibility opens exciting perspectives to investigate diseases caused by gene mutations or pathogens."

Combining CRISPR with genetic interaction screens, Krogan and his team will be able to precisely describe the impact of disease mutations on the structure of protein complexes, and identify the changes that are relevant to disease. His team recently used genetic interaction screens to study the interface between viruses and human cells. Building on this work, they can now introduce specific mutations into the genome of a pathogen, and use genetic interaction profiles of the human host proteins to understand the consequences on infection in live cells.

"This project, using genetic interaction screens to inform the structural understanding of protein complexes, started 15 years ago," Krogan says. "We have continued refining the approach and increasing its power over the years, and it's really gratifying to see the unprecedented resolution with which it can now inform us about biological phenomena as they take place inside live cells."

Credit: 
Gladstone Institutes

Hematoxylin as a killer of CALR mutant cancer cells

image: First author Ruochen Jia and Last author Robert Kralovics.

Image: 
First author Ruochen Jia and Last author Robert Kralovics.

Patients with myeloproliferative neoplasms (MPN), a group of malignant diseases of the bone marrow, often have a carcinogenic mutated form of the calreticulin gene (CALR). Scientists from the research group of Robert Kralovics, Adjunct Principal Investigator at the CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences and group leader at the Medical University of Vienna, have now identified hematoxylin as a novel CALR inhibitor. The study, published in the renowned journal Blood, shows how hematoxylin compounds affect a specific domain of CALR and selectively kill those CALR mutant cells that have been identified as the cause of disease in MPN patients. The discovery has enormous therapeutic potential and gives hope for new treatment options.

In medicine, a group of malignant diseases of the bone marrow is known as myeloproliferative neoplasms. This special type of blood cancer is characterised by increased formation of blood cells, vulnerability to thrombosis and frequent transformation to acute leukaemia. In the laboratory of Robert Kralovics it was discovered as early as 2013 that carcinogenic mutations of the gene calreticulin (CALR) were frequently found in affected patients and are now used clinically as diagnostic and prognostic markers. The mechanism by which the mutated CALR functions as an oncogene, which can lead to myeloid leukaemia, has also been scientifically identified since then. The carcinogenic effect of CALR mutations is based on the interaction of the N-glycan binding domain (GBD) of CALR with the thrombopoietin receptor. Ruochen Jia from the research group of Robert Kralovics at CeMM was looking for a way to stop this interaction and prevent one of the growth advantages of CALR mutated cells. It became evident that a group of chemicals, most notably hematoxylin, can selectively kill mutated CALR cells. The results thus provide extremely valuable information for potential treatment approaches for myeloproliferative neoplasms.

Hematoxylin compounds kill CALR mutated cells

Robert Kralovics, head of the study, explains: "In our study we tried to identify small molecules that might block the interaction between the mutated CALR and the receptor." The scientists used so-called in-silico docking studies for this purpose. "Basically, these are computer-based simulations of biochemical processes - virtual 'screenings' that enable increasingly accurate predictions," says study author Ruochen Jia. The results identified a group of chemicals that bind to a specific domain of calreticulin and selectively kill the mutated CALR cells. "Our data suggest that small molecules targeting the N-glycan binding domain of CALR can selectively kill CALR-mutated cells by disrupting the interaction between CALR and the thrombopoietin receptor and inhibiting oncogenic signal transmission," said the study author. A hematoxylin compound proved to be particularly efficient. So far, hematoxylin has been used mainly as a dye in histological staining processes.

Ray of hope for primary myelofibrosis therapy

"Our study demonstrates the enormous therapeutic potential of CALR inhibitor therapy," says Kralovics. "The treatment of patients with primary myelofibrosis (PMF) continues to produce poor clinical outcomes. They have the clearest tendency to develop acute myeloid leukaemia. Since about one third of PMF patients have a CALR mutation, they could particularly benefit from the new therapeutic approach."

Credit: 
CeMM Research Center for Molecular Medicine of the Austrian Academy of Sciences

Largest-ever study on children's soft contact lens safety shows low complication rates

image: Independent researcher Robin Chalmers, OD, FAAO, served as principal investigator for the largest-ever retrospective study of its kind, finding very low complication rates in children who wear soft contact lenses, similar to rates in adults.

Image: 
R Chalmers

SAN RAMON, Calif., December 10, 2020--The largest-ever retrospective study of its kind has found very low complication rates in children who wear soft contact lenses, similar to rates in adults.(1) The newly-published outcomes offer eye care professionals (ECPs) valuable real-world information to better counsel parents and caregivers as they consider proven myopia management options to help slow myopia progression.

Adverse Event Rates in The Retrospective Cohort Study of Safety of Paediatric Soft Contact Lens Wear: the ReCSS Study will appear in the January 2021 issue of Ophthalmic & Physiological Optics, the peer reviewed journal of the UK College of Optometrists. It is now available in pre-press form via open access.

Widely-respected independent researcher Robin Chalmers, OD, FAAO, served as principal investigator, co-authoring the paper with CooperVision scientists John McNally, OD, FAAO, and Paul Chamberlain, BSc(Hons), and Lisa Keay, PhD, Head of the School of Optometry and Vision Science for UNSW Sydney.

The work was initiated to support CooperVision's regulatory submissions of MiSight® 1 day contact lenses, which are currently available in 26 countries with more expected in 2021.

ReCSS measured the rate of adverse events (AE) in children who were prescribed soft contact lenses before they turned 13 years old to establish wearing safety among that age group. The review documented AE details from clinical practice charts and clinical trial data of nearly 1,000 children and observed 2,713 years of wear across 4,611 visits. Compared to AE results derived exclusively from clinical trials, these data are likely to be more generalizable to real world experiences as myopia control soft contact lenses are prescribed more widely to young patients.

Clinical records from office visits with potential AEs were independently reviewed by an adjudication panel to determine a consensus diagnosis. The current findings are very similar to, but slightly lower than, rates reported in previous studies of similar age groups, possibly attributable to the higher proportion of daily disposable lenses in the current study. The study found the annualized incidence rate of inflammatory events was less than 1 percent per year of wear. The majority of events were conjunctivitis or foreign body abrasions, reflective of this young population.

The authors note that the ReCSS study found a lower rate of microbial keratitis (7.4/10,000 years of wear) with a tighter confidence interval than other pediatric post-market studies, offering reassurance to clinicians and parents regarding the safety of myopia control soft contact lenses. That rate is comparable to established rates of microbial keratitis in adults. (2,3,4)
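For readers curious how such a figure is typically derived, the sketch below computes an event rate and an exact Poisson confidence interval per 10,000 wearer-years. The event count used is hypothetical, chosen only to be consistent with the reported 7.4/10,000 rate and the roughly 2,713 years of wear; it is not taken from the study's data.

# Illustrative calculation only: an exact (Garwood) Poisson confidence interval
# for an event rate per 10,000 wearer-years. The event count is hypothetical.
from scipy.stats import chi2

events = 2            # hypothetical number of microbial keratitis cases
wear_years = 2713     # years of soft contact lens wear observed in the study

rate = events / wear_years * 10_000
lower = chi2.ppf(0.025, 2 * events) / 2 / wear_years * 10_000
upper = chi2.ppf(0.975, 2 * (events + 1)) / 2 / wear_years * 10_000

print(f"{rate:.1f} per 10,000 wearer-years (95% CI {lower:.1f}-{upper:.1f})")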

"ReCSS is the most extensive compilation of 'real-world' data supporting safety of soft contact lens wear in children, complementing the effectiveness research from our groundbreaking, multi-year MiSight® 1 day clinical study," said McNally, who serves as CooperVision's Senior Director of Clinical Research. "Practitioners will appreciate the fact that the study included a range of eyecare practice types and locations and a variety of soft contact lens brands, modalities and designs. Parents should be even more confident in embracing the benefits of a soft contact lens-based approach to myopia management by knowing that the study evaluated the safety of contact lenses in children of the same age range as their own."

Credit: 
McDougall Communications

Archaeopteryx fossil provides insights into the origins of flight

image: Remnants of feather sheaths on the wings of the fossil bird Archaeopteryx show the earliest evidence of a complex moulting strategy. The white arrows indicate the feather sheaths. Scale bar is 1 cm.

Image: 
Kaye et al. 2020

Flying birds moult their feathers when the feathers become old and worn, because worn feathers inhibit flight performance, and the moult typically proceeds sequentially. Moulting is thought to have been unorganised in the first feathered dinosaurs because they had yet to evolve flight, so determining how moulting evolved can lead to a better understanding of flight origins.

However, evidence of the transition to modern moulting strategies is scarce in the fossil record. Recently, Research Assistant Professor Dr Michael PITTMAN from the Research Division for Earth and Planetary Science, as well as Vertebrate Palaeontology Laboratory, at the Faculty of Science of the University of Hong Kong (HKU), Thomas G KAYE of the Foundation for Scientific Advancement (Arizona, USA) and William R WAHL of the Wyoming Dinosaur Center (Wyoming, USA), jointly discovered the earliest record of feather moulting from the famous early fossil bird Archaeopteryx found in southern Germany in rocks that used to be tropical lagoons ~150 million years ago. The findings were published in Communications Biology.

Archaeopteryx moulting strategy used to preserve maximum flight performance

The most common moult strategy in modern birds is a sequential moult, where feathers are lost from both wings at the same time in a symmetrical pattern. The sequence of feather loss follows two different strategies. The first is a numerically sequential moult, where feathers are lost in numerical order; this is the most common strategy among passerine birds, also known as songbirds and perching birds. The second is a centre-out strategy, where a centre feather is lost first and subsequent feathers are then shed outwards from this centre point; this is more common in non-passerine birds such as falcons. The centre-out strategy minimises the size of the aerodynamic hole in the wing, which allows falcons to better maintain their flight performance during the moult for hunting.

Laser-Stimulated Fluorescence imaging co-developed at HKU revealed feather sheaths on the Thermopolis specimen of Archaeopteryx that are otherwise invisible under white light. "We found feather sheaths mirrored on both wings. These sheaths are separated by one feather and are not in numerical sequential order. This indicates that Archaeopteryx used a sequential centre-out moulting strategy, which is used in living falcons to preserve maximum flight performance," said Kaye. This strategy was therefore already present at the earliest origins of flight.

"The centre-out moulting strategy existed in early flyers and would have been a very welcome benefit because of their otherwise poor flight capabilities. They would have appreciated any flight advantage they could obtain," said Pittman. "This discovery provides important insights into how and when birds refined their early flight capabilities before the appearance of iconic but later flight-related adaptations like a keeled breastbone (sternum), fused tail tip (pygostyle) and the triosseal canal of the shoulder," added Pittman.

This study is part of a larger long-term project by Pittman and Kaye and their team of collaborators to better understand the origins of flight (see notes).

Credit: 
The University of Hong Kong

Social media messages help reduce meat consumption

Sending direct messages on social media informing people of the negative health and environmental impacts of consuming meat has proven successful at changing eating habits, a new study from Cardiff University has shown.

The study showed that sending direct messages twice a day through Facebook Messenger led to a significant reduction in the amount of red and processed meat the participants consumed over a 14-day period.

Participants reported eating, on average, between 7 and 8 portions of red or processed meat in the week before the Facebook messages were sent. This dropped to between 4 and 5 portions during the second week of the intervention and stayed at roughly the same level one month after the intervention.

Furthermore, the intervention led to an observed 'behavioural spillover' effect, in which participants indicated a desire to also reduce their future consumption of other types of meat, as well as of dairy products.

The study has been published in the journal Frontiers in Psychology.

The health impacts of eating too much red and processed meat are well established, with links to cardiovascular disease, stroke, and certain forms of cancer.

Meat is also a major driver of climate change, responsible for approximately 15% of global anthropogenic greenhouse gas emissions, with a growing consensus among scientists that reducing excess meat consumption will be necessary to meet climate change targets.

Yet, evidence suggests there is a lack of public awareness of the issue and that people tend to greatly underestimate the extent to which meat consumption leads to climate change.

"With Christmas approaching, it is a good time to consider how much meat we consume on a day-to-day basis and the impacts that this can have on the environment as well as our health," said Emily Wolstenholme from the School of Psychology, who led the study.

"Our study shows that making people aware of these climate impacts makes them think about their eating habits. It also shows that people are willing to make changes to help the climate."

A total of 320 participants were recruited for the study, assigned to one of three experimental conditions or to the control group, and sent messages through Facebook Messenger twice a day during the two-week intervention period.

Different messages were sent to participants in the experimental groups, each focussing on the environmental and/or health consequences of eating too much meat, for example: "If you eat only a small amount of red and processed meat, you will protect the environment by reducing the release of harmful greenhouse gases."

Participants were asked to complete a food diary every day during the two-week period to keep track of their diet.

Surveys were sent to participants at the end of the two-week intervention to measure their red and processed meat consumption, as well as other environmentally friendly behaviours. The same survey was repeated a month after the end of the intervention.

Over the two-week period the researchers observed a significant reduction in the amount of red and processed meat that was consumed by the participants receiving health messages, environmental messages and combined health and environmental messages - with no significant difference between each of these approaches.

Professor Wouter Poortinga, co-author of the study from the Welsh School of Architecture, said: "The results of the research are really encouraging. It shows that we can make changes to our diet, and if we all do, it can make a big difference for climate change."

Credit: 
Cardiff University

Engaged dads can reduce adolescent behavioral problems, improve well-being

In low-income families, fathers who are engaged in their children's lives can help to improve their mental health and behavior, according to a Rutgers University-New Brunswick study published in the journal Social Service Review.

The researchers found that adolescents in low-income families whose fathers are more frequently engaged in feeding, reading, playing and other activities and who provide necessities such as clothes and food throughout their childhood have fewer behavioral and emotional problems -- reducing a significant gap between poor families and those with higher socioeconomic status.

"On average, children in lower socioeconomic status families tend to have more behavior problems and their fathers have lower levels of overall involvement than those in higher socioeconomic status families," said lead author Lenna Nepomnyaschy, an associate professor at Rutgers University-New Brunswick's School of Social Work.

The researchers analyzed data on the long-term behavior of 5,000 children born between 1998 and 2000. They focused on the frequency of the fathers' engagement, from ages 5 to 15, through feeding, playing, reading and helping with homework, and through providing noncash items such as clothes, toys, food and other necessities. They reviewed the associations between such engagement and the children's internalizing and externalizing behaviors, including crying, worrying, fighting, bullying and skipping school.

According to Nepomnyaschy, fathers with lower education, lower skilled jobs and lower wages may find it difficult to engage in their children's lives due to social and economic changes over the last several decades. These changes have resulted in the loss of manufacturing jobs, a decline in union power and criminal justice policies linked to mass incarceration, particularly among men of color.

The researchers urged policymakers, scholars and the public to consider wage, employment and criminal justice policies that increase the opportunity for men to engage with their children to improve their well-being.

Credit: 
Rutgers University

A technique to sift out the universe's first gravitational waves

CAMBRIDGE, Mass. -- In the moments immediately following the Big Bang, the very first gravitational waves rang out. The product of quantum fluctuations in the new soup of primordial matter, these earliest ripples through the fabric of space-time were quickly amplified by inflationary processes that drove the universe to explosively expand.

Primordial gravitational waves, produced nearly 13.8 billion years ago, still echo through the universe today. But they are drowned out by the crackle of gravitational waves produced by more recent events, such as colliding black holes and neutron stars.

Now a team led by an MIT graduate student has developed a method to tease out the very faint signals of primordial ripples from gravitational-wave data. Their results will be published this week in Physical Review Letters.

Gravitational waves are being detected on an almost daily basis by LIGO and other gravitational-wave detectors, but primordial gravitational signals are several orders of magnitude fainter than what these detectors can register. It's expected that the next generation of detectors will be sensitive enough to pick up these earliest ripples.

In the next decade, as more sensitive instruments come online, the new method could be applied to dig up hidden signals of the universe's first gravitational waves. The pattern and properties of these primordial waves could then reveal clues about the early universe, such as the conditions that drove inflation.

"If the strength of the primordial signal is within the range of what next-generation detectors can detect, which it might be, then it would be a matter of more or less just turning the crank on the data, using this method we've developed," says Sylvia Biscoveanu, a graduate student in MIT's Kavli Institute for Astrophysics and Space Research. "These primordial gravitational waves can then tell us about processes in the early universe that are otherwise impossible to probe."

Biscoveanu's co-authors are Colm Talbot of Caltech, and Eric Thrane and Rory Smith of Monash University.

A concert hum

The hunt for primordial gravitational waves has concentrated mainly on the cosmic microwave background, or CMB, which is thought to be radiation that is leftover from the Big Bang. Today this radiation permeates the universe as energy that is most visible in the microwave band of the electromagnetic spectrum. Scientists believe that when primordial gravitational waves rippled out, they left an imprint on the CMB, in the form of B-modes, a type of subtle polarization pattern.

Physicists have looked for signs of B-modes, most famously with the BICEP Array, a series of experiments including BICEP2, which in 2014 scientists believed had detected B-modes. The signal turned out to be due to galactic dust, however.

As scientists continue to look for primordial gravitational waves in the CMB, others are hunting the ripples directly in gravitational-wave data. The general idea has been to try and subtract away the "astrophysical foreground" -- any gravitational-wave signal that arises from an astrophysical source, such as colliding black holes, neutron stars, and exploding supernovae. Only after subtracting this astrophysical foreground can physicists get an estimate of the quieter, nonastrophysical signals that may contain primordial waves.

The problem with these methods, Biscoveanu says, is that the astrophysical foreground contains weaker signals, for instance from farther-off mergers, that are too faint to discern and difficult to estimate in the final subtraction.

"The analogy I like to make is, if you're at a rock concert, the primordial background is like the hum of the lights on stage, and the astrophysical foreground is like all the conversations of all the people around you," Biscoveanu explains. "You can subtract out the individual conversations up to a certain distance, but then the ones that are really far away or really faint are still happening, but you can't distinguish them. When you go to measure how loud the stagelights are humming, you'll get this contamination from these extra conversations that you can't get rid of because you can't actually tease them out."

A primordial injection

For their new approach, the researchers relied on a model to describe the more obvious "conversations" of the astrophysical foreground. The model predicts the pattern of gravitational wave signals that would be produced by the merging of astrophysical objects of different masses and spins. The team used this model to create simulated data of gravitational wave patterns, of both strong and weak astrophysical sources such as merging black holes.

The team then tried to characterize every astrophysical signal lurking in these simulated data, for instance to identify the masses and spins of binary black holes. As is, these parameters are easier to identify for louder signals, and only weakly constrained for the softest signals. While previous methods only use a "best guess" for the parameters of each signal in order to subtract it out of the data, the new method accounts for the uncertainty in each pattern characterization, and is thus able to discern the presence of the weakest signals, even if they are not well-characterized. Biscoveanu says this ability to quantify uncertainty helps the researchers to avoid any bias in their measurement of the primordial background.

Once they identified such distinct, nonrandom patterns in gravitational-wave data, they were left with more random primordial gravitational-wave signals and instrumental noise specific to each detector.

Primordial gravitational waves are believed to permeate the universe as a diffuse, persistent hum, which the researchers hypothesized should look the same, and thus be correlated, in any two detectors.

In contrast, the rest of the random noise received in a detector should be specific to that detector, and uncorrelated with other detectors. For instance, noise generated from nearby traffic should be different depending on the location of a given detector. By comparing the data in two detectors after accounting for the model-dependent astrophysical sources, the parameters of the primordial background could be teased out.
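The intuition behind that cross-correlation step can be shown with a toy calculation (a minimal sketch, not the team's actual analysis or data): a weak signal common to two simulated detectors survives when their outputs are multiplied and averaged, while noise that is independent in each detector averages toward zero. The amplitudes and sample counts below are arbitrary assumptions.

# Toy demonstration: cross-correlating two detectors recovers a shared weak signal.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000                                # number of simulated samples

background = 0.1 * rng.standard_normal(n)    # weak "primordial" hum common to both detectors
det1 = background + rng.standard_normal(n)   # detector 1: hum plus its own noise
det2 = background + rng.standard_normal(n)   # detector 2: hum plus independent noise

cross_power = np.mean(det1 * det2)           # correlated part: estimates the hum's power
auto_power = np.mean(det1 * det1)            # single detector: dominated by its own noise

print(f"true background power:      {0.1**2:.4f}")
print(f"cross-correlation estimate: {cross_power:.4f}")
print(f"single-detector power:      {auto_power:.4f}")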

The researchers tested the new method by first simulating 400 seconds of gravitational-wave data, which they scattered with wave patterns representing astrophysical sources such as merging black holes. They also injected a signal throughout the data, similar to the persistent hum of a primordial gravitational wave.

They then split this data into four-second segments and applied their method to each segment, to see if they could accurately identify any black hole mergers as well as the pattern of the wave that they injected. After analyzing each segment of data over many simulation runs, and under varying initial conditions, they were successful in extracting the buried, primordial background.

"We were able to fit both the foreground and the background at the same time, so the background signal we get isn't contaminated by the residual foreground," Biscoveanu says.

She hopes that once more sensitive, next-generation detectors come online, the new method can be used to cross-correlate and analyze data from two different detectors, to sift out the primordial signal. Then, scientists may have a useful thread they can trace back to the conditions of the early universe.

Credit: 
Massachusetts Institute of Technology

A molecule like a nanobattery

video: The structure of the molecule under study at the University of Oldenburg. Titanium is shown in red, nitrogen in blue, carbon in grey. The basic body of the molecule is highlighted, whereas hydrogen atoms are hidden for simplification.

Image: 
Graphics: Ruediger Beckhaus / University of Oldenburg

How do molecular catalysts - molecules which, like enzymes, can trigger or accelerate certain chemical reactions - function, and what effects do they have? A team of chemists at the University of Oldenburg has come closer to the answers using a model molecule that functions like a molecular nanobattery. It consists of several titanium centres linked to each other by a single layer of interconnected carbon and nitrogen atoms. The seven-member research team recently published its findings, which combine the results of three multi-year PhD research projects, in "ChemPhysChem". The physical chemistry and chemical physics journal featured the basic research from Oldenburg on its cover.

To gain a better understanding of how the molecule works, the researchers, headed by first authors Dr. Aleksandra Markovic and Luca Gerhards and corresponding author Prof. Dr. Gunther Wittstock, performed electrochemical and spectroscopic experiments and used the university's high-performance computing cluster for their calculations. Wittstock sees the publication of the paper as a "success story" for both the Research Training Groups within which the PhD projects were conducted and for the university's computing cluster. "Without the high-performance computing infrastructure, we would not have been able to perform the extensive calculations required to decipher the behaviour of the molecule," says Wittstock. "This underlines the importance of such computing clusters for current research."

In the paper, the authors present the results of their analysis of a molecular structure, the prototype for which was the result of an unexpected chemical reaction first reported by the University of Oldenburg's Chemistry Department in 2006. It is a highly complex molecular structure in which three titanium centres (commonly referred to in high school lessons as titanium ions) are connected to each other by a bridging ligand consisting of carbon and nitrogen. Such a compound would be expected to be able to accept and release several electrons, in part through the exchange of electrons between the metal centres.

Gaining a proper understanding of these processes is of particular interest not only for basic research, but also for triggering or accelerating important reactions in which more than one electron is transferred. Such reactions remain a major challenge in technical systems, for which there is still no satisfactory solution. "Many efforts are currently focused on this objective," says Wittstock. One example is fuel cell technology, which requires the simultaneous transfer of four electrons to one oxygen molecule in order to achieve a flow of electrons from hydrogen to oxygen, he explains. "Such multi-electron reactions also have great potential for saving materials or energy in chemical production."

The model molecular compound consisting of the bridging ligand and the titanium centres was specifically designed to help the scientists gain a detailed understanding of how compounds with several metal centres are able to accept and release electrons. The scientists excited the molecule with light, to which it responded differently depending on the number of accepted and released electrons. Unfortunately, the molecule made in 2006 proved to be poorly soluble in most solvents and therefore difficult to study. Using chemical synthesis, Dr. Pia Sander, a co-author of the paper, added propeller-like molecular motifs to the compound to improve its solubility. This provided the basis for Markovic's experiments, which revealed that the model compound could accept a total of three electrons or release six electrons - an unusually high capacity for a single molecule. In each of these reactions, not only does the visible colour of the molecule change, but so does its absorption of light in spectral ranges invisible to the human eye. Initially, however, the precise changes in the molecule with different numbers of electrons could not be determined on the basis of those spectral ranges.

This is where Luca Gerhards and the university's computing cluster came into play. Although common explanations are based on the premise that in each transition excited by light only the energy of a single electron changes, co-author Gerhards avoided these simplifying assumptions in his quantum chemical equations. This made the calculations even more complex and kept the high-performance computing cluster busy for months. In the end, the result came as a surprise to everyone involved: Several electrons change their energy levels simultaneously when light hits the molecule under study. Moreover, this charge is not stored in the titanium, as would be expected, but mainly in the bridging ligand, the "link" between the titanium centres.

As Wittstock explains, the metal centres thus provide a positively charged "frame" for electron storage, as in a "nanobattery". The model molecule - and by extension a whole class of similar compounds - has turned out to be a "mini segment of an energy storage material". Although their full potential cannot be determined at this stage, Wittstock believes that such "frames" with molecular charge storage motifs could become a new design element in complex molecular catalysts for multi-electron reactions.

Credit: 
University of Oldenburg

Researchers discover treatment that suppresses liver cancer

image: Guangfu Li, PhD, DVM, Department of Surgery and Department of Molecular Microbiology and Immunology at the University of Missouri School of Medicine.

Image: 
Justin Kelley

Researchers from the University of Missouri School of Medicine have discovered a treatment combination that significantly reduces tumor growth and extends the life span of mice with liver cancer. This discovery provides a potentially new therapeutic approach to treating one of the leading causes of cancer-related death worldwide.

A cancer translational research team consisting of physicians and basic scientists created an integrative therapy that combined minimally invasive radiofrequency ablation (RFA) with the chemotherapy drug sunitinib. Individually, each treatment has a modest effect in the treatment of liver cancer. The team hypothesized that pairing the two treatments would have a profound effect by activating an immune response to target and destroy liver cancer cells. That's exactly what their research revealed.

"We treated tumor-bearing mice with sunitinib to suppress the cancer cells' ability to evade detection by the immune system, then the RFA acted as a spark that ignited the anti-tumor immune response," said principal investigator Guangfu Li, PhD, DVM, Department of Surgery and Department of Molecular Microbiology and Immunology.

The team tested this approach by dividing the mice into four groups: a control group, a group that received only sunitinib, a group that received only RFA, and a group that received both RFA and sunitinib. The researchers monitored tumor progression in each mouse via magnetic resonance imaging (MRI) over 10 weeks. They discovered the mice receiving combination therapy experienced significantly slowed tumor growth. The life span of the mice in the combination therapy group was significantly longer than that of the mice in all other groups.

"These results indicate that the sunitinib and RFA-integrated therapy functions as an effective therapeutic strategy that is superior to each individual therapy, significantly suppressing tumor growth and extending the lifetime of the treated mice," said co-author Eric Kimchi, MD, MBA, Chief of Division of Surgical Oncology and General Surgery, and Medical Director of Ellis Fischel Cancer Center.

The team plans to expand their research to study the effectiveness of this combination therapy on companion dogs and eventually on humans.

"Development of an effective sunitinib and RFA combination therapy is an important contribution to the field of liver cancer treatment and can be quickly translated into clinical applications, as both sunitinib and RFA are FDA approved and are readily available cancer therapies," said co-author Kevin F. Staveley-O'Carroll, MD, PhD, MBA, Chair, Department of Surgery, and Director of Ellis Fischel Cancer Center..

Credit: 
University of Missouri-Columbia

Reductive stress in neuroblastoma cells aggregates protein and impairs neurogenesis

image: Rajasekaran "Raj" Namakkal-Soorappan

Image: 
UAB

BIRMINGHAM, Ala. - Cells require a balance among oxidation-reduction reactions, or redox homeostasis. Loss of that balance to create oxidative stress is often associated with neurodegeneration. Less is known about how loss of that balance at the other end of the spectrum -- reductive stress, or RS -- may affect neurons.

Now Rajasekaran Namakkal-Soorappan, Ph.D., associate professor in the University of Alabama at Birmingham Department of Pathology, Division of Molecular and Cellular Pathology, and colleagues in the United States and India have shown for the first time that reductive stress promotes protein aggregation in neuroblastoma cells and impairs neurogenesis.

"Our data suggest that, despite the association of oxidative stress and neuronal damage, RS can play a crucial role in promoting proteotoxicity, and thereby lead to neurodegeneration," Namakkal-Soorappan said. "Moreover, this study adds to the emerging view that the regulation of redox homeostasis, and its impact on diverse diseases, is part of a complex process in which appropriate doses of antioxidants are required only in response to an oxidative or toxic challenge in cells or organisms."

Namakkal-Soorappan and colleagues have previously found that RS is pathogenic in a mouse-model of heart disease, and that RS impairs the regeneration of skeletal muscle in cultured mouse myoblast cells.

In the current study, the researchers used sulforaphane to establish RS in proliferating and differentiating Neuro 2a neuroblastoma cells grown in culture. Sulforaphane activates Nrf2/ARE signaling, leading to antioxidant augmentation. Specifically, they found that sulforaphane-mediated Nrf2 activation diminished reactive oxygen species in a dose-dependent manner leading to RS. The resulting RS abrogated oxidant signaling and impaired endoplasmic reticulum function, which promoted protein aggregation and proteotoxicity, and impaired neurogenesis. This included elevated Tau and α-synuclein and their co-localization with other protein aggregates in the cells.

Namakkal-Soorappan says they were also surprised to see that acute RS impaired neurogenesis, as measured by reduced neurite outgrowth and length, and that maintaining the cells in sustained RS conditions for five consecutive generations dramatically reduced differentiation and prevented the formation of axons.

This impairment of neurogenesis occurs through activation of the pathogenic GSK3β/Tau cascade to promote phosphorylation of Tau and create proteotoxicity.

Intriguingly, there have been reports of increased levels of enzymes that can promote RS, both in the brains of Alzheimer's patients and in the post-mortem brains of Alzheimer's and Parkinson's patients. Also, attempts to promote neurogenesis in neurodegenerative diseases using small molecule antioxidants have had poor outcomes.

"Therefore, clinical evidence warrants a closer investigation and further understanding of redox changes and their impact at the onset and progression of neurodegeneration," Namakkal-Soorappan said.

Neurodegenerative diseases, including Alzheimer's, Parkinson's and Huntington's, are a major health problem in aging populations throughout the world.

Credit: 
University of Alabama at Birmingham

Development of high-speed nanoPCR technology for point-of-care diagnosis of COVID-19

image: The RNA of the coronavirus was extracted from the patient sample, which then underwent reverse transcription, gene amplification, and detection through nanoPCR to diagnose the COVID-19 infection. For rapid gene amplification and detection, magneto-plasmonic nanoparticles (MPNs) were used to facilitate the temperature change cycle of the existing RT-PCR at high speed. Finally, a magnetic field separated the MPN, and the fluorescent signal of the amplified DNA was detected.

Image: 
Institute for Basic Science (IBS)

A "nanoPCR" technology was developed for the point-of-care (POC) diagnosis of coronavirus disease-19 (COVID-19). This new technology can diagnose the infection within ~20 minutes while retaining the accuracy of conventional reverse transcription polymerase chain reaction (RT-PCR) technology.

A team of researchers led by Professor CHEON Jinwoo, the director of the Center for Nanomedicine (CNM) within the Institute for Basic Science (IBS) in Seoul, South Korea, in collaboration with Professor LEE Jae-Hyun from Yonsei University and Professor LEE Hakho from Massachusetts General Hospital developed a novel nanoPCR technology that can be used for the decentralized, POC diagnosis of COVID-19. The technique uses the same underlying principle as the standard diagnosis method of RT-PCR to detect viral RNA, but it also features a vast improvement in speed using hybrid nanomaterials and a miniaturized form factor which allows portability.

The gold-standard test method for COVID-19 currently used is RT-PCR: a test that amplifies DNA after changing RNA genes into complementary DNAs for detection. RT-PCR has high accuracy, but it takes one to two hours to detect viruses at a centralized facility equipped with bulky instrumentation. The logistics of cold-chain transportation from the sampling sites to the testing facility make conventional RT-PCR diagnosis even slower, taking 1 - 2 days to get the results back to the patients.

To overcome the limitations of existing diagnostic methods, the research team utilized a magneto-plasmonic nanoparticle (MPN) that is composed of a magnetic core and a gold shell that exhibits plasmonic effects. By applying MPNs to PCR, they developed 'nanoPCR', which greatly improves the speed of RT-PCR while retaining highly accurate detection. The plasmonic property of MPN refers to its ability to convert light energy into thermal energy, and by using this property it was possible to shorten the thermocycling step of RT-PCR from 1 - 2 hours to within 11 minutes. In addition, the strong magnetic property of MPN allows an external magnetic field to clear MPNs from the PCR solution to allow for fluorescent detection of the amplified genes. The nanoPCR is capable of accurately detecting even small amounts of genetic material (~3.2 copies/μl) while simultaneously amplifying and detecting it with high sensitivity and specificity.
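A back-of-the-envelope calculation shows why faster thermal cycling matters. PCR roughly doubles the number of target DNA copies each cycle, so only a few dozen cycles separate the reported ~3.2 copies/μl detection limit from a readily detectable amount, and total run time is set almost entirely by how quickly each heat-cool cycle completes. The per-cycle times and detection threshold below are illustrative assumptions, not measurements from the study.

# Rough arithmetic sketch; the per-cycle durations and readout threshold are assumed.
import math

start_copies = 3.2          # copies per microlitre (reported detection limit)
detectable = 1e9            # assumed copies/ul needed for a clear fluorescence readout

cycles = math.ceil(math.log2(detectable / start_copies))

conventional_cycle_s = 120  # assumed seconds per cycle on a standard block thermocycler
photothermal_cycle_s = 20   # assumed seconds per cycle with nanoparticle photothermal heating

print(f"cycles needed: {cycles}")
print(f"conventional thermocycling: ~{cycles * conventional_cycle_s / 60:.0f} min")
print(f"photothermal thermocycling: ~{cycles * photothermal_cycle_s / 60:.0f} min")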

The researchers tested nanoPCR under clinical settings through patient specimen tests conducted with Professor CHOI Hyun-Jung's team at Chonnam National University Hospital. During the test, 150 subjects with or without COVID-19 infection were accurately diagnosed using this technology (75 positive and 75 negative samples; zero false negatives and false positives). The level of sensitivity and specificity was found to be equivalent to that of conventional RT-PCR (~99%). In addition to high reliability, the whole diagnostic process was considerably fast, taking on average about 17 minutes to diagnose one specimen.

In addition, the researchers showed the possibility of improving the analytical throughput by applying a Ferris wheel system to load multiple samples at once, which would allow for simultaneous testing of many samples from multiple patients. Importantly, the nanoPCR equipment is compact (15 × 15 × 18 cm) and lightweight (3 kg), which allows it to be portable. All of this would pave the way for rapid, decentralized testing of patients for POC diagnosis.

Director Cheon said, "Through the improvement and miniaturization of the PCR technology, we have shown that it is possible to perform PCR based POC diagnosis in the field quickly." The research is currently at a proof-of-concept stage and further developments are needed until it can be used in the field.

Credit: 
Institute for Basic Science

Researchers: drop the notion that more hours spent studying guarantees higher educational quality

Since 2018, Danish university students have had to report to the Danish Ministry of Higher Education and Science the annual number of hours they spend in lectures, studying and preparing for exams.

The weekly number of study hours is one of the parameters weighed when determining whether a university receives its entire subsidy or has up to five per cent shaved off its appropriation.

The reporting of study hours stems from the Ministry of Higher Education and Science's aim to ensure high-quality education.

However, according to the researchers behind a new study at the University of Copenhagen, an increased number of hours is no guarantee of higher educational quality.

"Universities have a financial incentive to get students to spend large amounts of time at study. However, this does not necessarily provide an authentic or fair view of educational quality. Instead, we should be interested in what students actually get out of the time that they spend," asserts Lars Ulriksen, a professor at the University of Copenhagen's Department of Science Education.

Together with department colleague Christoffer Nejrup, Professor Ulriksen observed second-year students in four different programmes at a Danish university and conducted interviews and a series of workshops with 31 students.

Work hours and learning are not inextricably linked

Among their findings, the researchers encountered numerous students who described how they could easily spend substantial time and effort on a task or exam without feeling that they had retained anything.

"Students report that they often spend considerable time preparing for an exam, without necessarily being particularly engaged or immersed in their preparations. At other times, they spend a short amount of time on something that leaves a deeper impression upon them," explains Professor Ulriksen.

Moreover, being a student is a way of life in which lectures and assignments marinate as students spend time with friends, exercise and so on.

As such, Ulriksen believes that it makes no sense to measure quality and engagement by way of a single time parameter:

"If we are curious about understanding how students learn, we ought to look at more than just the amount of time they spend studying. Instead, we need to find out what keeps the academic fire within them burning -- and what inspires them. We know that interest and immersion are fertile grounds for deep learning."

Swap out targets for qualitative surveys

Lars Ulriksen suggests that qualitative surveys could be implemented to allow students to verbalize and elaborate upon their experiences with the educational quality of their programmes.

"For now, we have a metric that probably satisfies the economists who use these results to generate statistics. In the meantime, we are left only a shade wiser about what study engagement and quality are all about. Obviously, qualitative surveys would require more resources. But perhaps more should be left up to the individual university, while dropping the measurements for a while."

Credit: 
University of Copenhagen

Most U.S. social studies teachers feel unprepared to teach civic learning

Only one in five social studies teachers in U.S. public schools reports feeling very well prepared to support students' civic learning, with teachers saying they need additional support with instructional materials, professional development and training, according to a RAND Corporation survey.

"These findings are concerning," said Laura Hamilton, lead author of the report and adjunct behavioral scientist at RAND, a nonprofit research organization. "Beyond being a component of social studies, civic education can teach students the skills and attitudes - a sense of civic duty, concern for the welfare of others, critical thinking - that are crucial in a democracy."

Between a third and just over half of elementary (K-5) and secondary (grades 6-12) teachers who taught social studies reported that they had not received any training on civic education, even though most still prioritized schools' role in promoting civic development and used various means to do so. The survey was conducted in late 2019 using the RAND American Educator Panels (AEP) - nationally representative samples of educators who provide feedback on important issues of educational policy and practice.

RAND examined how teachers are handling civics education today, in a partisan political landscape and with increasing distrust in institutions like the media, and what state standards and community support - or resistance - might influence teachers' approaches. The report is part of RAND's Truth Decay initiative, which is exploring the diminishing role of facts and analysis in American public life.

Elementary social studies teachers were less likely to highlight practices explicitly related to civic education but reported an emphasis similar to that of secondary teachers on practices such as social and emotional learning, improving school climate and conflict resolution. Teachers of color and those serving English-language learners reported more emphasis on civic-related topics.

Most teachers reported using civics instructional materials they found or created themselves.

"District materials were reported to be culturally appropriate and effective, but at least half of the teachers reported a need for better civics resources and instructional resources more culturally relevant and appropriate for English-language learners." said Julia Kaufman, co-author and a senior policy researcher at RAND. "Most reported a need for better civics instructional resources, as well as more nonteaching time and community partnerships to support their efforts to promote students' civic development."

Among the report's recommendations:

Teachers should receive training, encouragement and support to enact practices that promote civic development, especially at the elementary level. Teachers could especially benefit from guidance on how to integrate civic-learning opportunities into their academic instruction and other classroom activities so that these opportunities support, rather than detract from, teachers' other responsibilities.

Teachers need additional instructional materials to promote the full menu of civic skills, knowledge and dispositions and to provide instruction that is engaging, culturally relevant and tailored to the needs of all their students, particularly English-language learners.

Policy supports that focus more sustained attention on civic learning - such as an emphasis on civic education standards and recommended high-quality curricula for teaching civics - could foster environments that are more conducive to civic education.

Credit: 
RAND Corporation