
Moffitt researchers identify protein that causes epithelial cancers to spread

TAMPA, Fla. -- Cancer is complex and unpredictable. Despite successful treatment or years of remission, there is always a chance that a patient's cancer can return. It usually happens in the form of metastasis, which is when remaining cancer cells in the body spread to another location and grow. Moffitt Cancer Center researchers are working to better understand what happens at the cellular level of cancer to develop new strategies to prevent and treat metastasis. In a new article published in the July issue of Cancer Research, Elsa Flores, Ph.D., and her team discovered a key protein that oscillates its expression through microRNA regulation to facilitate cancer spread to distant organs. This protein is deltaNp63, a member of the p53 family of tumor suppressor genes.

The overarching goal of the Flores lab is to develop therapies for p53 mutant cancers. P53 is difficult to target therapeutically because of its wide range of important cellular functions. This work from the Flores lab expands on previous research focused on the p53 family and its role in inhibiting tumor development and growth. DeltaNp63, a protein and p53 family member, regulates the epithelial-to-mesenchymal transition (EMT), a process which in cancer can allow cells to lose contact with surrounding tumor cells, enter the bloodstream and spread to other areas of the body. This protein is overexpressed in many primary tumors and metastases. The Flores lab showed that deltaNp63 must be silenced for the cancer cells to undergo EMT and enter the bloodstream to spread.

To better understand deltaNp63 protein expression and its role in cancer metastasis, the Flores lab developed mouse models to modulate deltaNp63 expression during breast cancer metastasis. They found that oscillatory, or periodic, expression of the protein is required for efficient metastasis. They also discovered a network of four microRNAs, regulated by TGFβ, that can target and silence deltaNp63 expression.

"This finding is important because it gives us better insight into the regulation of deltaNp63 and TGFbeta;, key players in the metastatic process," said Flores, chair of the Department of Molecular Oncology and leader of the Cancer Biology and Evolution Program at Moffitt. "P53 is commonly mutated in human cancers, and we have found that deltaNp63 and p53 interact extensively in cancer. We can use this information to design personalized therapies for cancer patients with alterations in the p53/p63 pathways."

The Flores lab is developing microRNA-based therapies to keep proteins like deltaNp63 expressed at appropriate levels in the proper context in a growing cancer to block metastatic spread.

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

Researchers develop novel approach to modeling yet-unconfirmed rare nuclear process

image: Researchers have developed a new approach to model a yet-unconfirmed rare nuclear process. The binary code (1, 0) on the particles in the graphic symbolizes the computer simulations which will be performed to better understand neutrinoless double-beta decay. Certain nuclei decay by emitting electrons (e) and neutrinos (ν), but the existence of a neutrinoless double beta decay has been hypothesized.

Image: 
Facility for Rare Isotope Beams

EAST LANSING, Mich. - Researchers from the Facility for Rare Isotope Beams (FRIB) Laboratory (frib.msu.edu) at Michigan State University (MSU) have taken a major step toward a theoretical first-principles description of neutrinoless double-beta decay. Observing this yet-unconfirmed rare nuclear process would have important implications for particle physics and cosmology. Theoretical simulations are essential to planning and evaluating proposed experiments. The research team presented their results in an article recently published in Physical Review Letters.

FRIB theorists Jiangming Yao, research associate and the lead author of the study, Roland Wirth, research associate, and Heiko Hergert, assistant professor, are members of a topical collaboration on fundamental symmetries and neutrinoless double-beta decay. The U.S. Department of Energy Office of Science Office of Nuclear Physics is funding the topical collaboration. The theorists joined forces with fellow topical collaboration members from the University of North Carolina-Chapel Hill and external collaborators from the Universidad Autonoma de Madrid, Spain. Their work marks an important milestone toward a theoretical calculation of neutrinoless double-beta decay rates with fully controlled and quantified uncertainties.

The authors developed the In-Medium Generator-Coordinate Method (IM-GCM), a novel approach for modeling the interactions between nucleons that is capable of describing the complex structure of the candidate nuclei for this decay. The first application of IM-GCM to the computation of the neutrinoless double-beta decay rate for the nucleus of calcium-48 sets the stage for explorations of the other candidates with controllable theoretical uncertainty.

In neutrinoless double-beta decay, two neutrons simultaneously transform into protons without emitting the two neutrinos that appear in more typical weak-interaction processes. If it exists, this is an extremely rare decay that is expected to have a half-life greater than 10 septillion years (a 1 with 25 zeroes), meaning that only half of a sample of nuclei would have undergone neutrinoless double-beta decay over this extremely long period.

Its observation would demonstrate that neutrinos are their own antiparticles. Every subatomic particle has a corresponding antiparticle, which has the same mass but an equal and opposite charge. Particles and antiparticles can annihilate each other, leaving only energy. Hence, no neutrinos would be observed in neutrinoless double-beta decay. A neutrinoless double-beta decay observation would show that a fundamental law -- the conservation of lepton number -- is violated in nature. This could help explain why the universe contains more matter than antimatter, which consists of the aforementioned antiparticles. The observation would also direct efforts to complete the Standard Model of particle physics.

"The absence of neutrinos in this yet-unconfirmed decay makes it possible to determine the neutrino masses," said Hergert. "These masses are an important parameter in models of the evolution of the universe. The theoretical decay rate is a key ingredient in the extraction of the neutrino masses from the measured lifetime, or at least provides new upper limits on these quantities."

Theoretical calculations like those presented by the authors will also help determine the size of the detectors needed for large-scale neutrinoless double-beta decay experiments.

Developing and implementing tests of fundamental symmetries is an important element of FRIB's mission. FRIB experiments explore the structure of neutrinoless double-beta decay candidates and their neighboring isotopes, which affects the rate at which the decay might occur. The theoretical methods developed for this study can now be applied to other nuclei with complex structures that are studied at FRIB.

Credit: 
Michigan State University Facility for Rare Isotope Beams

When it comes to DNA repair, it's not one tool fits all

Our cells are constantly dividing, and as they do, the DNA molecule - our genetic code - sometimes gets broken. DNA has twin strands, and a break in both is considered especially dangerous. This kind of double-strand break can lead to genome rearrangements that are hallmarks of cancer cells, said James Daley, PhD, of the Long School of Medicine at The University of Texas Health Science Center at San Antonio.

Dr. Daley is the first author of research, published June 18 in the journal Nature Communications, that sheds light on a double-strand break repair process called homologous recombination. Joined by senior authors Patrick Sung, DPhil, and Sandeep Burma, PhD, and other collaborators, Dr. Daley found that the mechanisms that initiate homologous recombination are quite different from one another. Homologous recombination is initiated by a process called DNA end resection, in which one of the two strands of DNA at a break is chewed back by resection enzymes.

"What's exciting about this work is that it answers a long-held mystery among scientists," Dr. Daley said. "For a decade we have known that resection enzymes are at the forefront of homologous recombination. What we didn't know is why so many of these enzymes are involved, and why we need three or four different enzymes that seem to accomplish the same task in repairing double-strand breaks."

An array of tools, each one finely tuned

"On the surface of it, there seems to be quite a bit of redundancy," said Dr. Sung, who holds the Robert A. Welch Distinguished Chair in Chemistry at UT Health San Antonio. "Our study is significant in showing that the perceived redundancy is really a very naïve notion."

DNA resection pathways actually are highly specific, the findings show.

"It's like an engine mechanic who has a set of tools at his disposal," Dr. Sung said. "The tool he uses depends on the issue that needs to be repaired. In like fashion, each DNA repair tool in our cells is designed to repair a distinctive type of break in our DNA."

The research team studied complex breaks that featured double-strand breaks with other kinds of DNA damage nearby - such complex breaks are more relevant physiologically, Dr. Daley said. Studies in the field of DNA repair tend to look at simpler versions of double-strand breaks, he said. Dr. Daley found that each resection enzyme is tailored to deal with a specific type of complex break, which explains why a diverse toolkit of resection enzymes has evolved over millennia.

Cancer ramifications

Dr. Burma, the Mays Family Foundation Distinguished Chair in Oncology at UT Health San Antonio MD Anderson Cancer Center, said the fundamental understandings gleaned from this research could one day lead to improved cancer treatments.

"The cancer therapeutic implications are immense," Dr. Burma said. "This research by our team is timely because a new type of radiation therapy, called carbon ion therapy, is now being considered in the U.S. While being much more precisely aimed at tumors, this therapy is likely to induce exactly the sort of complex DNA damage that we studied. Understanding how specific enzymes repair complex damage could lead to strategies to dramatically increase the efficacy of cancer therapy."

Part of the research is funded by NASA. "These kinds of complex DNA breaks are also induced by space radiation," Dr. Burma said. "Therefore, the research is relevant not just to cancer therapy, but also to cancer risks inherent to space exploration."

Credit: 
University of Texas Health Science Center at San Antonio

Algorithm predicts risk for PTSD after traumatic injury

Researchers have developed an algorithm that can predict whether trauma survivors are likely to develop posttraumatic stress disorder (PTSD). The tool, which relies on routinely collected medical data, would allow clinicians to intervene early to mitigate the effects of PTSD.

The study was published online today in Nature Medicine.

30 Million Trauma Patients Each Year in United States

Each year, about 30 million patients in the United States are treated in an emergency department (ED) for traumatic injuries from car accidents, falls, firearms, and other causes.

Health experts estimate that 10% to 15% of trauma patients will develop long-lasting PTSD symptoms, usually within a year of the injury.

Though treatments that effectively reduce the risk of developing PTSD exist, early prevention strategies are typically not implemented due to the lack of established methods that can predict which patients are most likely to develop PTSD.

"For many trauma patients, the ED visit is often their sole contact with the health care system. The time immediately after a traumatic injury is a critical window for identifying people at risk for PTSD and arranging appropriate follow-up treatment," says lead author Katharina Schultebraucks, PhD, assistant professor of behavioral and cognitive sciences in the Department of Emergency Medicine at the Columbia University Vagelos College of Physicians and Surgeons. "The earlier we can treat those at risk, the better the likely outcomes."

Machine Learning Turns 70 Clinical Data Points into Single PTSD Risk Score

Numerous biological and psychological biomarkers -- including elevated stress hormones, increased inflammatory signals, high blood pressure, and hyperarousal (an abnormally heightened state of anxiety) -- often precede PTSD in trauma survivors. However, none of these measures, alone or in combination, has proved reliable at predicting PTSD.

In the new study, the multi-site research team used supervised machine learning to develop an algorithm that computes a single PTSD risk score from a combination of 70 clinical data points and a brief clinical assessment of a patient's immediate stress response. (Supervised machine learning is a form of artificial intelligence that gives a computer system the ability to recognize patterns from data inputs to make predictions about new observations without additional programming.)
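
The study does not publish its model code, but the workflow it describes -- fit a supervised classifier to routinely collected features, then read off a single risk score per patient -- can be sketched as below. Everything in the sketch (the synthetic data, the gradient-boosting model, the outcome construction) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 70 routinely collected clinical data points;
# cohort sizes mirror the article (377 development + 221 validation patients).
rng = np.random.default_rng(0)
n_patients, n_features = 598, 70
X = rng.normal(size=(n_patients, n_features))
# Made-up outcome: PTSD loosely driven by two features plus noise.
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_patients)) > 1.5).astype(int)

# A random split stands in for the independent Atlanta / New York City cohorts.
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=221, random_state=0)

# Any supervised learner could play this role; gradient boosting is one common choice.
model = GradientBoostingClassifier(random_state=0).fit(X_dev, y_dev)

risk_score = model.predict_proba(X_val)[:, 1]  # one PTSD risk score per patient
print("AUC on the held-out cohort:", round(roc_auc_score(y_val, risk_score), 3))
```

In a deployment like the one described below, such a score would be thresholded to flag at-risk patients for follow-up before discharge.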

"We selected measures that are routinely collected in the ED and logged in the electronic medical record, plus answers to a few short questions about the psychological stress response," Schultebraucks says. "The idea was to create a tool that would be universally available and would add little burden to ED personnel."

Algorithm Discriminated PTSD Risk with High Precision

The researchers developed the algorithm with data from 377 adult trauma survivors in Atlanta and then tested the algorithm in 221 adult trauma survivors in New York City.

Among patients who were categorized by the algorithm as PTSD risks, 90% developed long-lasting PTSD symptoms within a year. Only 5% of patients who were free of long-lasting PTSD symptoms had been identified as at risk. Of the patients predicted to have no or few PTSD symptoms, 29% developed long-lasting PTSD (false negatives).

More Testing Needed

"Because previous models for predicting PTSD risk have not been validated in independent samples like our model, they haven't been adopted in clinical practice," says Schultebraucks. "Testing and validation of our model in larger samples will be necessary for the algorithm to be ready-to-use in the general population."

The current algorithm was built using data from patients who had blood drawn, which may limit generalizability: the algorithm would only apply to patients who undergo blood testing, such as those with more severe injuries.

In future studies, the team plans to test whether the algorithm can predict PTSD in patients who experience other potentially traumatic health events, including heart attacks and strokes.

Soon, Schultebraucks predicts, the algorithm could be incorporated into electronic health records.

"Currently only 7% of level-1 trauma centers routinely screen for PTSD," she says. "We hope that the algorithm will provide ED clinicians with a rapid, automatic readout that they could use for discharge planning and the prevention of PTSD."

Credit: 
Columbia University Irving Medical Center

Children actively want to understand and express themselves regarding the coronavirus

To comprehend and process the social crisis and upheaval in everyday life that have resulted from the corona pandemic, we need research and new perspectives.

Researchers of early childhood education at Åbo Akademi University, University of Helsinki, University of Gothenburg, Örebro University and Umeå University have studied how children attending day-care or preschool comprehended the coronavirus at the time of its initial outbreak.

"Earlier research has shown that negative life experiences, such as pandemics, affect children's well-being in the short and long term, and children are especially vulnerable in crises. Thus, it is important to consider how the corona pandemic is being comprehended and expressed by children in their daily environment", explains Mia Heikkilä, Associate Professor in Early Childhood Education at Åbo Akademi University, who is leading the research team.

According to Heikkilä, children are active, participatory agents capable of contributing to the handling of a crisis with their ideas and actions, if they are given the opportunity to do so.

"It is vital to reinforce children's resilience, that is, their capacity to withstand adversities, both during and after a social crisis like the corona pandemic. Previously, it has been shown that supportive relations between adults and children, as well as children's opportunity to participate actively are significant in this respect. Here, early childhood education plays a key role", says Ann-Christin Furu, who works as a researcher at Åbo Akademi University, following a period at the University of Helsinki.

The present study was conducted as a questionnaire survey among personnel engaged in early childhood education in Finland and Sweden. The results show that children express themselves in multifaceted ways regarding the new, unfamiliar and often heavily changed daily life of the children themselves and of their families, relatives and friends, as well as the day-care or preschool personnel.

Children's expression and participation regarding the outbreak of the coronavirus are approached through four themes. The first theme concerns health: children both have knowledge of the virus and wish to know more about it and how to protect themselves against it. The second theme deals with worry and concern about those close to the child, for example, friends, parents or elderly relatives. The third theme is about how to cope with the changed routines in everyday life.

"The fourth theme relates to children's playing, creativity and humour as potential tools to cope with the situation. Children are playing, for example, 'corona tag' and 'being at hospital', or they come up with corona-related drawings, rhymes or songs of their own", explains Furu.

"These expressions can provide the personnel, as well as the parents, with tools to understand what the children are dealing with in the corona situation. It may offer a way to observe one's own group of children, and to create situations where the children can express themselves with regard to the coronavirus", says Furu.

"The abundant material we received in a very short time revealed that children have numerous and multifaceted expressions and reflections regarding the corona pandemic. Adults should be aware of this and act accordingly. Also, there are many children who need support in coping with and understanding the situation. Early childhood education should assume a clear role in developing pedagogical approaches that allow room for the various expressions of children, and offer tools to support the children's ability to face challenging situations", says Mia Heikkilä.

Credit: 
Åbo Akademi University

Does DNA in the water tell us how many fish there are?

image: Researchers "counted" Japanese jack mackerel (Trachurus japonicus) in Maizuru Bay, Japan, through quantitative measurements of environmental DNA concentration. Photo credit: Reiji Masuda, Kyoto University.

Image: 
Reiji Masuda

River water, lake water, and seawater contain DNA belonging to organisms such as animals and plants. Ecologists have begun to actively analyze such DNA molecules, called environmental DNA, to assess the distribution of macro-organisms. However, challenges remain in quantitative applications of environmental DNA.

In a research article published online in Molecular Ecology, researchers from the National Institute for Environmental Studies, Tohoku University, Shimane University, Kyoto University, Hokkaido University, and Kobe University, have reported a new method for estimating population abundance of fish species (or more generally, a target aquatic species), by means of measuring concentration of environmental DNA in the water. Their results suggest the potential of the proposed approach for quantitative, non-invasive monitoring of aquatic ecosystems.

DNA molecules are released from the organisms present, transported by the flow of water, and eventually degraded. In a natural environment, these processes can operate in a complex way.

"This complicates and limits the traditional approach of population quantification based on environmental DNA where the presence of a definite relationship between the concentration of environmental DNA and population abundance has been critical, "explained Keiichi Fukaya, research associate at the National Institute for Environmental Studies and the lead author of the paper.

"We thought that these fundamental processes of environmental DNA, the shedding, transport, and degradation, should be accounted for, when we estimate population abundance through environmental DNA," he said.

The authors implemented this idea by adopting a numerical hydrodynamic model that explicitly accounts for the processes to simulate the distribution of environmental DNA concentrations within an aquatic area. "By solving this model in the 'inverse direction', we can estimate fish population abundance based on the observed distribution of environmental DNA concentrations," Fukaya explained.
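
The authors' model is a full numerical hydrodynamic simulation, but the underlying logic -- a forward model maps fish abundance to eDNA concentrations via shedding, transport, and degradation, and that map is then inverted -- can be illustrated with a deliberately simple one-dimensional toy. All parameters and the downstream-transport assumption below are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Toy 1D coastline split into grid cells; eDNA is shed by fish, carried
# downstream by a constant current, and degraded at a first-order rate.
n_cells = 50
dx = 100.0   # cell length in metres (assumed)
u = 0.05     # current speed in m/s (assumed)
k = 1e-4     # eDNA degradation rate in 1/s (assumed)
shed = 1.0   # eDNA shed per fish, arbitrary units (assumed)

# Forward operator A: concentration at cell i contributed by one fish in cell j.
A = np.zeros((n_cells, n_cells))
for j in range(n_cells):
    for i in range(j, n_cells):          # only downstream cells are reached
        travel_time = (i - j) * dx / u
        A[i, j] = shed * np.exp(-k * travel_time)

# Made-up "true" fish distribution, used only to simulate observations.
true_fish = np.zeros(n_cells)
true_fish[10], true_fish[30] = 200.0, 80.0
observed = A @ true_fish + np.random.default_rng(1).normal(scale=5.0, size=n_cells)

# Solving the model "in the inverse direction": recover abundance from eDNA.
estimated, _ = nnls(A, observed)
print("estimated total abundance:", round(estimated.sum()))
```

The real study replaces this toy operator with a hydrodynamic simulation of Maizuru Bay and a statistical model of observation error, but the estimation principle is the same.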

A case study conducted in Maizuru Bay, Japan, confirmed that the estimate of the population abundance of Japanese jack mackerel (Trachurus japonicus), obtained by the proposed method, was comparable to that of a quantitative echo sounder method.

"The idea and framework presented in this study forms a cornerstone towards quantitative monitoring of ecosystems through environmental DNA analysis. By combining field observation, techniques of molecular biology, and mathematical/statistical modeling, the scope of the environmental DNA analysis will be broadened beyond the determination of the presence or absence of target species," explained Professor Michio Kondoh from Tohoku University, who led the 5.5-year environmental DNA research project, funded by the Japan Science and Technology Agency (CREST).

Credit: 
National Institute for Environmental Studies

University of Oregon scientists dissociate water efficiently with new catalysts

image: Research in a University of Oregon chemistry lab has advanced the effectiveness of the catalytic water dissociation reaction in bipolar membranes. A three-member team used a membrane-electrode assembly where the polymer bipolar membrane is compressed between two rigid porous electrodes, allowing them to make a large number of bipolar membranes with different water dissociation catalyst layers.

Image: 
Graphic by Sebastian Z. Oener

EUGENE, Ore. -- July 2, 2020 -- University of Oregon chemists have made substantial gains in enhancing the catalytic water dissociation reaction in electrochemical reactors, called bipolar membrane electrolyzers, to more efficiently rip apart water molecules into positively charged protons and negatively charged hydroxide ions.

The discovery, published online ahead of print in the journal Science, provides a roadmap to realize electrochemical devices that benefit from the key property of bipolar membrane operation -- to generate protons and hydroxide ions inside the device and supply the ions directly to the electrodes to produce the final chemical products.

The technology behind bipolar membranes, which are layered ion-exchange polymers sandwiching a water dissociation catalyst layer, emerged in the 1950s. While they've been applied industrially on a small scale, their performance is currently limited to low current-density operation, which hampers broader applications.

Among them are devices to produce hydrogen gas from water and electricity, capture carbon dioxide from seawater, and make carbon-based fuels directly from carbon dioxide, said co-author Shannon W. Boettcher, a professor in the UO's Department of Chemistry and Biochemistry and founding director of the Oregon Center for Electrochemistry.

"I suspect our findings will accelerate a resurgence in the development of bipolar-membrane devices and research into the fundamentals of the water-dissociation reaction," said Boettcher, who also is a member of the Materials Science Institute and an associate in the UO's Phil and Penny Knight Campus for Accelerating Scientific Impact.

"The performance we demonstrated is sufficiently high," he said. "If we can improve durability and manufacture the bipolar membranes with our industry partners, there should be important immediate applications."

Typically, water-based electrochemical devices such as batteries, fuel cells and electrolyzers operate at a single pH across the whole system -- that is, the system is either acidic or basic, said the study's lead author Sebastian Z. Oener, a postdoctoral scholar supported by a German Research Foundation fellowship in Boettcher's lab.

"Often, this leads either to using expensive precious metals to catalyze electrode reactions, such as iridium, one of the rarest metals on earth, or sacrificing catalyst activity, which, in turn, increases the required energy input of the electrochemical reactor," Oener said. "A bipolar membrane can overcome this trade-off by operating each electrocatalyst locally in its ideal pH environment. This increases the breadth of stable, earth-abundant catalyst availability for each half-reaction."

The three-member team, which also included graduate student Marc J. Foster, used a membrane-electrode assembly where the polymer bipolar membrane is compressed between two rigid porous electrodes. This approach allowed them to make a large number of bipolar membranes with different water dissociation catalyst layers and accurately measure the activity for each.

The team found that the exact position of each catalyst layer inside the bipolar membrane junction -- the interface between the hydroxide-conducting layer and the proton-conducting layer in the bipolar membrane -- dramatically affects the catalyst activity. This allowed them to use catalyst bilayers to realize record-performing bipolar membranes that dissociate water with negligible extra energy input.

"The biggest surprise was the realization that the performance could be improved substantially by layering different types of catalysts on top of each other," Boettcher said. "This is simple but hadn't been explored fully."

A second key finding, Oener said, is that the water dissociation reaction occurring inside the bipolar membrane is fundamentally related to that which occurs on electrocatalyst surfaces, such as when protons are extracted directly from water molecules when making hydrogen fuel in basic pH conditions.

"This is unique because it has not before been possible to separate the individual steps that occur during an electrochemical reaction," Oener said. "They are all linked, involving electrons and intermediates, and rapidly proceed in series. The bipolar membrane architecture allows us to isolate the water dissociation chemical step and study it in isolation."

That finding, he said, also could lead to improved electrocatalysts for reactions that directly make reduced fuels from water, such as making hydrogen gas or liquid fuel from waste carbon dioxide.

The discoveries, Boettcher said, provide a tentative mechanistic model, one that could open up the field and motivate many more studies.

"We are excited to see the response of the research community and see if these findings can be translated to products that reduce society's reliance on fossil fuels," he said.

Credit: 
University of Oregon

Science fiction becomes fact -- Teleportation helps to create live musical performance

image: Dr Alexis Kirke (right) and soprano Juliette Pochin during the first duet between a live singer and a quantum supercomputer

Image: 
University of Plymouth

Teleportation is most commonly the stuff of science fiction and, for many, would conjure up the immortal phrase "Beam me up Scotty".

However, a new study has described how its status in science fact could actually be employed as another, and perhaps unlikely, form of entertainment - live music.

Dr Alexis Kirke, Senior Research Fellow in the Interdisciplinary Centre for Computer Music Research at the University of Plymouth (UK), has for the first time shown that a human musician can communicate directly with a quantum computer via teleportation.

The result is a high-tech jamming session, through which a blend of live human and computer-generated sounds come together to create a unique performance piece.

Speaking about the study, published in the current issue of the Journal of New Music Research, Dr Kirke said: "The world is racing to build the first practical and powerful quantum computers, and whoever succeeds first will have a scientific and military advantage because of the extreme computing power of these machines. This research shows for the first time that this much-vaunted advantage can also be helpful in the world of making and performing music. No other work has shown this previously in the arts, and it demonstrates that quantum power is something everyone can appreciate and enjoy."

Quantum teleportation is the ability to instantaneously transmit quantum information over vast distances, with scientists having previously used it to send information from Earth to an orbiting satellite over 870 miles away.

In the current study, Dr Kirke describes how he used a system called MIq (Multi-Agent Interactive qgMuse), in which an IBM quantum computer executes Grover's algorithm.

Discovered by Lov Grover at Bell Labs in 1996, it was the second major quantum algorithm (after Shor's algorithm) and offers a quadratic speedup over classical search methods.

In this instance, it allows the dynamic solving of musical logical rules which, for example, could prevent dissonance or keep to ¾ instead of common time.

It is significantly faster than any classical computer algorithm, and Dr Kirke said that speed was essential because there is actually no way to transmit quantum information other than through teleportation.
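
For readers unfamiliar with Grover's algorithm, the sketch below simulates its core amplitude-amplification loop on a classical state vector. It is not the MIq system or Dr Kirke's code; the "marked" index, standing in for a chord choice that satisfies the musical rules, is purely illustrative.

```python
import numpy as np

# Search space of N = 2**n options; 'marked' stands in for the single option
# that satisfies the musical rules (a hypothetical stand-in).
n = 3
N = 2 ** n
marked = 5

state = np.ones(N) / np.sqrt(N)        # uniform superposition over all options

oracle = np.eye(N)
oracle[marked, marked] = -1.0          # oracle: phase-flip the rule-satisfying option

diffusion = 2.0 / N * np.ones((N, N)) - np.eye(N)   # inversion about the mean

# About (pi/4) * sqrt(N) iterations suffice -- the quadratic speedup over
# checking the N options one at a time.
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

probabilities = state ** 2
print(f"P(marked option) after {iterations} iterations: {probabilities[marked]:.3f}")
```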

The result was that when the theme from Game of Thrones was played on the piano, the computer - a 14-qubit machine housed at IBM in Melbourne - rapidly generated accompanying music that was transmitted back in response.

Dr Kirke, who in 2016 staged the first ever duet between a live singer and a quantum supercomputer, said: "At the moment there are limits to how complex a real-time computer jamming system can be. The number of musical rules that a human improviser knows intuitively would simply take a computer too long to solve in time for real-time music. Shortcuts have been invented to speed up this process in rule-based AI music, but using the quantum computer speed-up has not been tried before. So while teleportation cannot move information faster than the speed of light, if remote collaborators want to connect up their quantum computers - which they are using to increase the speed of their musical AIs - it is 100% necessary. Quantum information simply cannot be transmitted using normal digital transmission systems."

Credit: 
University of Plymouth

Prospective teachers misperceive Black children as angry

Prospective teachers appear more likely to misperceive Black children as angry than white children, which may undermine the education of Black youth, according to new research published by the American Psychological Association.

While previous research has documented this effect in adults, this is the first study to show how anger bias based on race may extend to teachers and Black elementary and middle-school children, said lead researcher Amy G. Halberstadt, PhD, a professor of psychology at North Carolina State University. The study was published online in the APA journal Emotion.

"This anger bias can have huge consequences by increasing Black children's experience of not being 'seen' or understood by their teachers and then feeling like school is not for them," she said. "It might also lead to Black children being disciplined unfairly and suspended more often from school, which can have long-term ramifications."

In the study, 178 prospective teachers from education programs at three Southeastern universities viewed short video clips of 72 children ages 9 to 13 years old. The children's faces expressed one of six basic emotions: happiness, sadness, anger, fear, surprise or disgust. The clips were evenly divided between boys and girls and between Black and white children. The sample was not large enough to determine whether the race or ethnicity of the teachers made a difference in how they perceived the children.

The prospective teachers were somewhat accurate at detecting the children's emotions, but they also made some mistakes that revealed patterns. Boys of both races were misperceived as angry more often than Black or white girls. Black boys and girls also were misperceived as angry at higher rates than white children, with Black boys eliciting the most anger bias.

Anger bias against Black children can have many negative consequences. Previous research that controlled for other factors has found that Black children are three times more likely to be suspended or expelled from school than white children. Black children's negative experiences at school also could contribute to the achievement gap between Black and white youth that has been documented across the United States, Halberstadt said.

Those in the study also completed questionnaires relating to their implicit and explicit racial bias, but their scores on those tests didn't affect the findings relating to Black children. However, those who displayed greater racial bias were less likely to misperceive white children as angry.

"Even when people are motivated to be anti-racist, we need to know the specific pathways by which racism travels, and that can include false assumptions that Black people are angry or threatening," Halberstadt said. "Those common racist misperceptions can extend from school into adulthood and potentially have fatal consequences, such as when police officers kill unarmed Black people on the street or in their own homes."

Previous research with adults in the United States has found that anger is perceived more quickly than happiness in Black faces, while the opposite effect was found for white faces. Anger also is perceived more quickly and for a longer time in young Black men's faces than young white men's faces.

"Over the last few weeks, many people are waking up to the pervasive extent of systemic racism in American culture, not just in police practices but in our health, banking and education systems," Halberstadt said. "Learning more about how these problems become embedded in our thought processes is an important first step."

Participants in the study were predominantly female (89%) and white (70%), mirroring the gender and race of most public-school teachers across the country. The study didn't include enough people of color from any single race or ethnicity (Hispanic 9%, Asian 8%, Black 6%, Biracial 5%, Native American 1%, and Middle Eastern 1%) to analyze separate findings based on the race or ethnicity of the participants.

Credit: 
American Psychological Association

Stemming the spread of misinformation on social media

The bad news, not that anyone needs more of it: The dangers of COVID-19 could worsen if misinformation on social media continues to spread unchecked. Essentially, what people choose to share on social media about the pandemic could become a life-or-death decision.

The good news? Though there is no practical way to fully stem the tide of harmful misinformation on social media, certain tactics could help improve the quality of information that people share online about this deadly disease.

New research reported in the journal Psychological Science finds that priming people to think about accuracy could make them more discerning in what they subsequently share on social media. In two studies that included more than 1,700 participants, researchers found that when people are asked directly about accuracy, they become better at distinguishing truth from falsehood than they otherwise would be.

"People often assume that misinformation and fake news is shared online because people are incapable of distinguishing between what is true and what is false," said Gordon Pennycook, with the University of Regina, Canada, and lead author on the paper. "Our research reveals that is not necessarily the case. Instead, we find that people tend to share false information about COVID-19 on social media because they simply fail to think about accuracy when making decisions about what to share with others."

This inattention to accuracy, he notes, is often compounded by what the researchers consider "lazy" thinking, at least as it pertains to considering the truth of news content on social media.

For their research, Pennycook and his team acquired a list of 15 false and 15 true headlines related to COVID-19. The veracity of the headlines was determined by using various fact-checking sites like snopes.com, health information from mayoclinic.com, and credible news sites like livescience.com. The headlines were presented to the participants in the form of Facebook posts. The participants were then asked if they thought the posts were accurate or if they would consider sharing them.

In the first of the two studies, Pennycook and his colleagues found that people often fail to consider accuracy when deciding what to share on social media, and they are more likely to believe and share falsehoods if they rely more on intuition or have less scientific knowledge than others.

In the second study, the researchers found that simply asking participants to rate the accuracy of one non-COVID-related headline at the beginning of the study -- subtly nudging them to think about the concept of accuracy later in the study -- more than doubled how discerning they were in sharing information.

These results, which are in line with previous studies on political fake news, suggest that a subtle mental nudge that primes the brain to consider the accuracy of information in general improves people's choices about what to share on social media.

"We need to change the way that we interact with social media," said Pennycook. "Individuals need to remember to stop and think about whether something is true before they share it with others, and social media companies should investigate potential ways to help facilitate this, possibly by providing subtle accuracy nudges on their platform."

Credit: 
Association for Psychological Science

New method reveals how the Parkinson's disease protein damages cell membranes

image: An illustration showing how the mitochondria-imitating lipid vesicles are damaged by the Parkinson's protein alpha-synuclein. The light scattering reveals how the membrane is destroyed even at very low, nanomolar, concentrations, where the proteins do not aggregate into clumps before they bind. The watercolour illustration was created by researcher Fredrik Höök.

Image: 
Fredrik Höök/Chalmers University of Technology

In sufferers of Parkinson's disease, clumps of α-synuclein (alpha-synuclein), sometimes known as the 'Parkinson's protein', are found in the brain. These destroy cell membranes, eventually resulting in cell death. Now, a new method developed at Chalmers University of Technology, Sweden, reveals how the composition of cell membranes seems to be a decisive factor for how small quantities of α-synuclein cause damage.

Parkinson's disease is an incurable condition in which neurons, the brain's nerve cells, gradually break down and brain functions become disrupted. Symptoms can include involuntary shaking of the body, and the disease can cause great suffering. To develop drugs to slow down or stop the disease, researchers try to understand the molecular mechanisms behind how α-synuclein contributes to the degeneration of neurons.

It is known that mitochondria, the energy-producing compartments in cells, are damaged in Parkinson's disease, possibly due to 'amyloids' of α-synuclein. Amyloids are clumps of proteins arranged into long fibres with a well-ordered core structure, and their formation underlies many neurodegenerative disorders. Amyloids or even smaller clumps of α-synuclein may bind to and destroy mitochondrial membranes, but the precise mechanisms are still unknown.

The new study, recently published in the journal PNAS, focuses on two different types of membrane-like vesicles, which are 'capsules' of lipids that can be used as mimics of the membranes found in cells. One type of vesicle is made of lipids that are often found in synaptic vesicles; the other contains lipids related to mitochondrial membranes.

The researchers found that the Parkinson's protein would bind to both vesicle types, but only caused structural changes to the mitochondrial-like vesicles, which deformed asymmetrically and leaked their contents.

"Now we have developed a method which is sensitive enough to observe how α-synuclein interacts with individual model vesicles. In our study, we observed that α-synuclein binds to - and destroys - mitochondrial-like membranes, but there was no destruction of the membranes of synaptic-like vesicles. The damage occurs at very low, nanomolar concentration, where the protein is only present as monomers - non-aggregated proteins. Such low protein concentration has been hard to study before but the reactions we have detected now could be a crucial step in the course of the disease," says Pernilla Wittung-Stafshede, Professor of Chemical Biology at the Department of Biology and Biological Engineering..

The new method from the researchers at Chalmers University of Technology makes it possible to study tiny quantities of biological molecules without using fluorescent markers. This is a great advantage when tracking natural reactions, since the markers often affect the reactions you want to observe, especially when working with small proteins such as α-synuclein.

"The chemical differences between the two lipids used are very small, but still we observed dramatic differences in how α-synuclein affected the different vesicles," says Pernilla Wittung-Stafshede.

"We believe that lipid chemistry is not the only determining factor, but also that there are macroscopic differences between the two membranes - such as the dynamics and interactions between the lipids. No one has really looked closely at what happens to the membrane itself when α-synuclein binds to it, and never at these low concentrations."

The next step for the researchers is to investigate variants of the α-synuclein protein with mutations associated with Parkinson's disease, and to investigate lipid vesicles which are more similar to cellular membranes.

"We also want to perform quantitative analyses to understand, at a mechanistic level, how individual proteins gathering on the surface of the membrane can cause damage" says Fredrik Höök, Professor at the Department of Physics, who was also involved in the research.

"Our vision is to further refine the method so that we can study not only individual, small - 100 nanometres - lipid vesicles, but also track each protein one by one, even though they are only 1-2 nanometres in size. That would help us reveal how small variations in properties of lipid membranes contribute to such a different response to protein binding as we now observed."

Credit: 
Chalmers University of Technology

Infant sleep problems can signal mental disorders in adolescents -- Study

Specific sleep problems among babies and very young children can be linked to mental disorders in adolescents, a new study has found.

A team at the University of Birmingham's School of Psychology studied questionnaire data from the Children of the 90s, a UK-based longitudinal study which recruited pregnant mothers of 14,000 babies when it was set up almost three decades ago.

They found that young children who routinely woke up frequently during the night and experienced irregular sleep routines were more likely to have psychotic experiences as adolescents. They also found that children who slept for shorter periods at night and went to bed later were more likely to develop borderline personality disorder (BPD) symptoms during their teenage years.

Lead researcher, Dr Isabel Morales-Muñoz, explained: "We know from previous research that persistent nightmares in children have been associated with both psychosis and borderline personality disorder. But nightmares do not tell the whole story - we've found that, in fact, a number of behavioural sleep problems in childhood can point towards these problems in adolescence."

The researchers examined questionnaire data from more than 7,000 participants reporting on psychotic symptoms in adolescence, and more than 6,000 reporting on BPD symptoms in adolescence. The data analysed is from the Children of the 90s study (also known as the Avon Longitudinal Study of Parents and Children (ALSPAC) birth cohort) which was set up by the University of Bristol.

Sleep behaviour among participants was reported by parents when the children were 6, 18 and 30 months, and assessed again at 3.5, 4.8 and 5.8 years old.

The results, published in JAMA Psychiatry, show that infants who tended to wake more frequently at night at 18 months old, and who had less regular sleep routines from 6 months old, were more likely to report psychotic experiences in adolescence. This supports existing evidence that insomnia contributes to psychosis, but suggests that these difficulties may already be present years before psychotic experiences occur.

The team also found that sleeping less during the night and going to bed later at the age of three-and-a-half years were related to BPD symptoms in adolescence. These results suggest a specific pathway from toddlerhood to adolescent BPD that is separate from the pathway linked with psychosis.

Finally, the researchers investigated whether the links between infant sleep and mental disorders in teenagers could be mediated by symptoms of depression in children aged 10 years old. They found that depression mediated the links between childhood sleep problems and the onset of psychosis in adolescents, but this mediation was not observed in BPD, suggesting the existence of a direct association between sleep problems and BPD symptoms.

Professor Steven Marwaha, senior author on the study, added: "We know that adolescence is a key developmental period to study the onset of many mental disorders, including psychosis or BPD. This is because of particular brain and hormonal changes which occur at this stage. It's crucial to identify risk factors that might increase the vulnerability of adolescents to the development of these disorders, identify those at high risk, and deliver effective interventions. This study helps us understand this process, and what the targets might be.

"Sleep may be one of the most important underlying factors - and it's one that we can influence with effective, early interventions, so it's important that we understand these links."

Credit: 
University of Birmingham

Elucidating how asymmetry confers chemical properties

Washington, DC-- You've heard the expression form follows function? In materials science, function follows form.

New research by Carnegie's Olivier Gagné and collaborator Frank Hawthorne of the University of Manitoba categorizes the causes of structural asymmetry, some surprising, which underpin useful properties of crystals, including ferroelectricity, photoluminescence, and photovoltaic ability. Their findings are published this week as a lead article in the International Union of Crystallography Journal.

"Understanding how different bond arrangements convey various useful attributes is central to the materials sciences" explained Gagné. "For this project, we were particularly interested in what variations in bond lengths mean for a material's most-exciting characteristics, and in how to create a framework for their optimization."

This was the fifth and final installment in a series of papers by Gagné and Hawthorne examining variability in bond lengths of crystalline structures. This time around they focused on compounds made up of oxygen and elements from the category called transition metals.

Picture the periodic table. The transition metals make up its central block--forming a bridge linking the taller towers of elements on the left and right sides.

Like all metals, they can conduct an electrical current. They also have a tremendous range of chemical and physical properties, including the emission of visible light, malleability, and magnetism. Many, like gold, platinum, and silver, are prized for their value. Others, including iron, nickel, copper, and titanium, are crucial for tools and technologies.

The transition metals' ability to form a variety of useful compounds is owed in large part to the particular three-dimensional configuration of their electrons. As such, the bonds they form in compounds can be widely asymmetrical. But Gagné and Hawthorne wanted to understand whether other causes for bond-length variation were in play.

"It's a century old problem" Gagné explained. "The likes of Linus Pauling and Victor Goldschmidt made this topic one of their prime research interests; however, the data simply weren't there at the time."

Gagné and Hawthorne analyzed data on the bond lengths of 63 different transition metal ions bonded to oxygen in 147 configurations from 3,814 crystal structures and developed two new indices for contextualizing asymmetrical bonding.

"These indices allow us to pinpoint the different reasons underlying asymmetrical bonding arrangements, which will hopefully allow us to harness the properties that they convey when predicting and synthesizing new materials," Hawthorne explained.

To their surprise, they found that the internal structure of crystals often spontaneously distorts as a sole function of the connectivity of its bond network, an effect which they show occurs more frequently than distortion caused by electronic effects or any other factors.

"We suspected some bond-length variation originated from crystal-structure controls, but we didn't expect it to be the primary factor underlying bond-length variation in inorganic solids," Gagné explained. "It's a mechanism that is entirely separate and unaccounted for by current notions of solid-state chemistry; it that has been overlooked since the early days of crystallography."

Credit: 
Carnegie Institution for Science

New system combines smartphone videos to create 4D visualizations

image: By combining video of the same scene from several cameras, Carnegie Mellon University researchers can create a "virtual camera" that enables users to view the scene from various angles, or to remove people from the scene.

Image: 
Carnegie Mellon University

PITTSBURGH--Researchers at Carnegie Mellon University have demonstrated that they can combine iPhone videos shot "in the wild" by separate cameras to create 4D visualizations that allow viewers to watch action from various angles, or even erase people or objects that temporarily block sight lines.

Imagine a visualization of a wedding reception, where dancers can be seen from as many angles as there were cameras, and the tipsy guest who walked in front of the bridal party is nowhere to be seen.

The videos can be shot independently from a variety of vantage points, as might occur at a wedding or birthday celebration, said Aayush Bansal, a Ph.D. student in CMU's Robotics Institute. It also is possible to record actors in one setting and then insert them into another, he added.

"We are only limited by the number of cameras," Bansal said, with no upper limit on how many video feeds can be used.

Bansal and his colleagues presented their 4D visualization method at the Computer Vision and Pattern Recognition virtual conference last month.

"Virtualized reality" is nothing new, but in the past it has been restricted to studio setups, such as CMU's Panoptic Studio, which boasts more than 500 video cameras embedded in its geodesic walls. Fusing visual information of real-world scenes shot from multiple, independent, handheld cameras into a single comprehensive model that can reconstruct a dynamic 3D scene simply hasn't been possible.

Bansal and his colleagues worked around that limitation by using convolutional neural nets (CNNs), a type of deep learning program that has proven adept at analyzing visual data. They found that scene-specific CNNs could be used to compose different parts of the scene.

The CMU researchers demonstrated their method using up to 15 iPhones to capture a variety of scenes -- dances, martial arts demonstrations and even flamingos at the National Aviary in Pittsburgh.

"The point of using iPhones was to show that anyone can use this system," Bansal said. "The world is our studio."

The method also unlocks a host of potential applications in the movie industry and consumer devices, particularly as the popularity of virtual reality headsets continues to grow.

Though the method doesn't necessarily capture scenes in full 3D detail, the system can limit playback angles so incompletely reconstructed areas are not visible and the illusion of 3D imagery is not shattered.

Credit: 
Carnegie Mellon University

Well packed

Biomacromolecules incorporated into tailored metal-organic frameworks using peptide modulators are well shielded but highly active thanks to carefully tuned nanoarchitecture. As scientists report in the journal Angewandte Chemie, this strategy can be used to synthesize an "artificial cell" that functions as an optical glucose sensor.

Biomacromolecules, such as enzymes, control reactions in cells with much higher efficiency, specificity, and selectivity than is achieved in synthetic systems. When used outside a cell, many of these sensitive molecules require a synthetic shell. Metal-organic frameworks (MOFs) are highly suited for this. These cage-like structures have metal ions as nodes, which are connected by organic ligands. Biomolecules can be incorporated within these frameworks easily during their self-assembly process. However, the limited accessibility of the biomolecules within the shells often causes the activity of these biohybrids to be disappointing.

A team led by Gangfeng Ouyang at Sun Yat-sen University in Guangzhou, China, has now introduced a simple strategy for tailoring such biohybrids to form nanoarchitectures with high activities. The key to their success lies in the addition of specific peptides that influence the structure as "modulators".

The researchers chose to work with horseradish peroxidase as their model biomolecule. This enzyme breaks down hydrogen peroxide and is used in industry for the environmentally friendly oxidation of aromatic amines. The nodes in the metal-organic framework are zinc ions, which are linked by 2-methylimidazole ligands. The modulator is γ-poly-L-glutamic acid, a natural biopolymer with multiple negative charges that binds to positive groups on the peroxidase and competitively coordinates with zinc ions. The modulator and peroxidase are, thus, both incorporated into the MOF.

Varying the amount of modulator yields different morphologies, such as three-dimensional polyhedra, which are like tiny "stars" made of interlaced two-dimensional spindle-shaped layers that are about 150 nm thick, or three-dimensional flower-like structures. Whereas the enzyme activity in the microporous 3D structures is low, the enzymes in the 2D MOFs are nearly as active as in the free state. This is a result of the large pores and relatively short channels in the 2D structures, which allow the substrate to quickly access the enzyme. At the same time, the enzyme is well protected from enzymes that degrade proteins, high concentrations of urea, elevated temperatures, and a number of organic solvents, which is advantageous for industrial applications.

The researchers were also able to build an "artificial cell" that mimics the cellular cascades involved in signal transduction and acts as a glucose sensor. For this, they incorporated several components into a 2D MOF: glucose oxidase (GOx) and protein-bound fluorescent gold nanoclusters that break down hydrogen peroxide catalytically. Addition of glucose initiates the cascade. The glucose is oxidized by the GOx, which forms hydrogen peroxide. This is then converted with a substrate by the gold nanoclusters, whereupon the substrate turns blue. In parallel, the gold nanoclusters are oxidized, which quenches the fluorescence. Both optical signals are proportional to the glucose concentration and are sensitive in two complementary concentration ranges.
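
As a rough guide to the cascade (the article does not name the chromogenic substrate, so TMB appears below only as a typical example), the chemistry is the familiar glucose oxidase/peroxidase-mimic pair:

```latex
\text{glucose} + \mathrm{O_2} + \mathrm{H_2O} \;\xrightarrow{\text{GOx}}\; \text{gluconic acid} + \mathrm{H_2O_2}
\qquad
\mathrm{H_2O_2} + \text{substrate (e.g., TMB)} \;\xrightarrow{\text{Au nanoclusters}}\; \text{oxidized substrate (blue)} + \mathrm{H_2O}
```

Both readouts scale with the amount of hydrogen peroxide generated, and hence with the glucose concentration, as described above.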

Credit: 
Wiley