Tech

Research breakthrough could transform clean energy technology

By some estimates, the amount of solar energy reaching the surface of the earth in one year is greater than the sum of all the energy we could ever produce using non-renewable resources. The technology necessary to convert sunlight into electricity has developed rapidly, but inefficiencies in the storage and distribution of that power have remained a significant problem, making solar energy impractical on a large scale. However, a breakthrough by researchers at UVA’s College and Graduate School of Arts & Sciences, the California Institute of Technology and the U.S. Department of Energy’s Argonne National Laboratory, Lawrence Berkeley National Laboratory and Brookhaven National Laboratory could eliminate a critical obstacle from the process, a discovery that represents a giant stride toward a clean-energy future.

One way to harness solar energy is by using solar electricity to split water molecules into oxygen and hydrogen. The hydrogen produced by the process is stored as fuel, in a form that can be transferred from one place to another and used to generate power upon demand. To split water molecules into their component parts, a catalyst is necessary, but the catalytic materials currently used in the process, also known as the oxygen evolution reaction, are not efficient enough to make the process practical.

Using an innovative chemical strategy developed at UVA, however, a team of researchers led by chemistry professors Sen Zhang and T. Brent Gunnoe have produced a new form of catalyst using the elements cobalt and titanium. The advantage of these elements is that they are much more abundant in nature than other commonly used catalytic materials containing precious metals such as iridium or ruthenium.

“The new process involves creating active catalytic sites at the atomic level on the surface of titanium oxide nanocrystals, a technique that produces a durable catalytic material and one that is better at triggering the oxygen evolution reaction,” Zhang said. “New approaches to efficient oxygen evolution reaction catalysts and enhanced fundamental understanding of them are key to enabling a possible transition to scaled-use of renewable solar energy. This work is a perfect example of how to optimize the catalyst efficiency for clean energy technology by tuning nanomaterials at the atomic scale.”

According to Gunnoe, “This innovation, centered on achievements from the Zhang lab, represents a new method to improve and understand catalytic materials with a resulting effort that involves the integration of advanced materials synthesis, atomic level characterization and quantum mechanics theory.”

“Several years ago, UVA joined the MAXNET Energy consortium, composed of eight Max Planck Institutes (Germany), UVA and Cardiff University (UK), which brought together international collaborative efforts focused on electrocatalytic water oxidation. MAXNET Energy was the seed for the current joint efforts between my group and the Zhang lab, which has been and continues to be a fruitful and productive collaboration,” Gunnoe said.

With the help of Argonne National Laboratory and Lawrence Berkeley National Laboratory and their state-of-the-art synchrotron X-ray absorption spectroscopy user facilities, which use radiation to examine the structure of matter at the atomic level, the research team found that the catalyst has a well-defined surface structure that allows them to clearly see how the catalyst evolves during the oxygen evolution reaction and to accurately evaluate its performance.

“The work used X-ray beamlines from the Advanced Photon Source and the Advanced Light Source, including a portion of a ‘rapid-access’ program set aside for a quick feedback loop to explore emergent or pressing scientific ideas,” said Argonne X-ray physicist Hua Zhou, a co-author on the paper. “We’re very excited that both national scientific user facilities can substantially contribute to such clever and neat work on water splitting that will provide a leap forward for clean energy technologies.”

Both the Advanced Photon Source and the Advanced Light Source are U.S. Department of Energy (DOE) Office of Science User Facilities located at DOE’s Argonne National Laboratory and Lawrence Berkeley National Laboratory, respectively. 

Additionally, researchers at Caltech, using newly developed quantum mechanics methods, were able to accurately predict the rate of oxygen production caused by the catalyst, which provided the team with a detailed understanding of the reaction’s chemical mechanism.

“We have been developing new quantum mechanics techniques to understand the oxygen evolution reaction mechanism for more than five years, but in all previous studies, we could not be sure of the exact catalyst structure. Zhang’s catalyst has a well-defined atomic structure, and we find that our theoretical outputs are, essentially, in exact agreement with experimental observables,” said William A. Goddard III, a professor of chemistry, materials science, and applied physics at Caltech and one of the project’s principal investigators. “This provides the first strong experimental validation of our new theoretical methods, which we can now use to predict even better catalysts that can be synthesized and tested. This is a major milestone toward global clean energy.”

“This work is a great example of the team effort by UVA and other researchers to work towards clean energy and the exciting discoveries that come from these interdisciplinary collaborations,” said Jill Venton, chair of UVA’s Department of Chemistry.

Credit: 
DOE/Argonne National Laboratory

Artificial intelligence classifies supernova explosions with unprecedented accuracy

image: Cassiopeia A, or Cas A, is a supernova remnant located 10,000 light years away in the constellation Cassiopeia, and is the remnant of a once massive star that died in a violent explosion roughly 340 years ago. This image layers infrared, visible, and X-ray data to reveal filamentary structures of dust and gas. Cas A is amongst the 10-percent of supernovae that scientists are able to study closely. CfA's new machine learning project will help to classify thousands, and eventually millions, of potentially interesting supernovae that may otherwise never be studied.

Image: 
NASA/JPL-Caltech/STScI/CXC/SAO

Cambridge, MA (December 17, 2020)--Artificial intelligence is classifying real supernova explosions without the traditional use of spectra, thanks to a team of astronomers at the Center for Astrophysics | Harvard & Smithsonian. The complete data sets and resulting classifications are publicly available for open use.

By training a machine learning model to categorize supernovae based on their visible characteristics, the astronomers were able to classify real data from the Pan-STARRS1 Medium Deep Survey for 2,315 supernovae with an accuracy rate of 82-percent without the use of spectra.

The astronomers developed a software program that classifies different types of supernovae based on their light curves, or how their brightness changes over time. "We have approximately 2,500 supernovae with light curves from the Pan-STARRS1 Medium Deep Survey, and of those, 500 supernovae with spectra that can be used for classification," said Griffin Hosseinzadeh, a postdoctoral researcher at the CfA and lead author on the first of two papers published in The Astrophysical Journal. "We trained the classifier using those 500 supernovae to classify the remaining supernovae where we were not able to observe the spectrum."
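The training setup described above, labeling the photometric-only supernovae using a model fit to the spectroscopically labeled subset, can be sketched in miniature. Everything here is illustrative: the summary features (color, decline rate, peak brightness) and the simple nearest-neighbor classifier are stand-ins, not the team's actual pipeline, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in light-curve summary features, e.g. color, decline rate, peak brightness.
n_labeled, n_unlabeled = 500, 2000
X_labeled = rng.normal(size=(n_labeled, 3))
y_labeled = rng.integers(0, 5, size=n_labeled)   # five hypothetical supernova classes
X_unlabeled = rng.normal(size=(n_unlabeled, 3))

def nearest_neighbor_classify(X_train, y_train, X_query):
    """Label each query point with the class of its nearest training point."""
    # Pairwise squared distances, shape (n_query, n_train).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[d2.argmin(axis=1)]

# Assign classes to the supernovae that lack spectra.
predicted = nearest_neighbor_classify(X_labeled, y_labeled, X_unlabeled)
print(predicted.shape)  # (2000,)
```

In the real problem the features would be extracted by fitting each light curve, and accuracy would be estimated by cross-validation on the 500 labeled objects.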

Edo Berger, an astronomer at the CfA, explained that by asking the artificial intelligence to answer specific questions, the results become increasingly accurate. "The machine learning looks for a correlation with the original 500 spectroscopic labels. We ask it to compare the supernovae in different categories: color, rate of evolution, or brightness. By feeding it real existing knowledge, it leads to the highest accuracy, between 80- and 90-percent."

Although this is not the first machine learning project for supernovae classification, it is the first time that astronomers have had access to a real data set large enough to train an artificial intelligence-based supernovae classifier, making it possible to create machine learning algorithms without the use of simulations.

"If you make a simulated light curve, it means you are making an assumption about what supernovae will look like, and your classifier will then learn those assumptions as well," said Hosseinzadeh. "Nature will always throw some additional complications in that you did not account for, meaning that your classifier will not do as well on real data as it did on simulated data. Because we used real data to train our classifiers, it means our measured accuracy is probably more representative of how our classifiers will perform on other surveys." As the classifier categorizes the supernovae, said Berger, "We will be able to study them both in retrospect and in real-time to pick out the most interesting events for detailed follow up. We will use the algorithm to help us pick out the needles and also to look at the haystack."

The project has implications not only for archival data, but also for data that will be collected by future telescopes. The Vera C. Rubin Observatory is expected to go online in 2023, and will lead to the discovery of millions of new supernovae each year. This presents both opportunities and challenges for astrophysicists, where limited telescope time leads to limited spectral classifications.

"When the Rubin Observatory goes online it will increase our discovery rate of supernovae by 100-fold, but our spectroscopic resources will not increase," said Ashley Villar, a Simons Junior Fellow at Columbia University and lead author on the second of the two papers, adding that while roughly 10,000 supernovae are currently discovered each year, scientists only take spectra of about 10-percent of those objects. "If this holds true, it means that only 0.1-percent of supernovae discovered by the Rubin Observatory each year will get a spectroscopic label. The remaining 99.9-percent of data will be unusable without methods like ours."
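The arithmetic behind Villar's 0.1-percent estimate can be checked directly: a 100-fold jump in discoveries against flat spectroscopic capacity shrinks the labeled fraction by the same factor.

```python
# Back-of-envelope check of the spectroscopic bottleneck described above.
current_discoveries = 10_000          # supernovae discovered per year today
spectra_fraction = 0.10               # ~10% currently get a spectrum
spectra_per_year = current_discoveries * spectra_fraction   # ~1,000 spectra/year

rubin_discoveries = current_discoveries * 100   # 100-fold increase: 1,000,000/year
# Spectroscopic resources stay flat, so the labeled fraction shrinks 100-fold:
labeled_fraction = spectra_per_year / rubin_discoveries
print(f"{labeled_fraction:.1%}")  # 0.1%
```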

Unlike past efforts, where data sets and classifications have been available to only a limited number of astronomers, the data sets from the new machine learning algorithm will be made publicly available. The astronomers have created easy-to-use, accessible software, and also released all of the data from Pan-STARRS1 Medium Deep Survey along with the new classifications for use in other projects. Hosseinzadeh said, "It was really important to us that these projects be useful for the entire supernova community, not just for our group. There are so many projects that can be done with these data that we could never do them all ourselves." Berger added, "These projects are open data for open science."

Credit: 
Center for Astrophysics | Harvard & Smithsonian

NIH researchers discover brain area crucial for recognizing visual events

image: fMRI scans reveal activity changes in the fSTS.

Image: 
Richard Krauzlis, Ph.D., National Eye Institute

Researchers at the National Eye Institute (NEI) report that a brain region in the superior temporal sulcus (fSTS) is crucial for processing and making decisions about visual information. The findings, which could provide clues to treating visual impairments caused by stroke, appear today in the journal Neuron. NEI is part of the National Institutes of Health.

"The human visual system recognizes, prioritizes, and categorizes visual objects and events to provide actionable information," said Richard Krauzlis, Ph.D., chief of the NEI Section on Eye Movements and Selective Attention and senior author of the study. "We were surprised to learn that the fSTS is a crucial link in this story-building process, passing information from an evolutionarily ancient region in the midbrain to highly specialized regions of the visual cortex."

While aspects of visual processing begin in the eye, crucial steps in visual attention start in the superior colliculus, a part of the midbrain that handles a variety of sensory input. Previous work in Krauzlis' lab showed that neuronal activity in the superior colliculus is necessary for the brain to notice an event in the visual field and decide that it is significant.

To study visual attention, the researchers work with monkeys trained to complete specific visual attention tasks. While fixing their eyes on a dot straight ahead, the monkeys pay attention to or specifically ignore events happening in the visual periphery - in this case, a patch of moving dots that changes direction, on either the right side or the left side of their visual field. The superior colliculus is strongly triggered when the monkeys are paying attention to the visual event, and less so when they're ignoring it.

Krauzlis and his colleagues described the discovery of the fSTS in a study published last year with David Leopold, Ph.D., chief of the Section on Cognitive Neurophysiology and Imaging at the National Institute of Mental Health (NIMH). Together they had the monkeys complete the visual attention tasks inside a functional magnetic resonance imaging (fMRI) machine. The fMRI scans revealed that a specific region in the temporal cortex--later named fSTS--was, like the superior colliculus, strongly activated during these attention tasks. This was surprising because this cortical region was not yet known to be important for visual attention.

Led by co-first authors Amarender Bogadhi, Ph.D., and Leor Katz, Ph.D., the research team designed a series of experiments to further uncover the role of the fSTS in the visual attention circuits.

The researchers directly measured fSTS neurons' firing in the areas previously revealed by fMRI. These direct measurements revealed that not only is a large proportion of fSTS neuronal activity dependent on the superior colliculus, these neurons use information from the superior colliculus to represent complex visual information.

The researchers were surprised that these regions of the cortex, which are involved in higher level processing, are so heavily dependent on input from the midbrain, Krauzlis said.

The fSTS neurons activated in response to "attended" events and changes in stimulus, and their activity strongly correlated with the likelihood that the monkey would report seeing an event. For "ignored" events, the fSTS neurons were much quieter. When the researchers dampened the superior colliculus, the fSTS neurons showed less distinction between attended and ignored events, with lowered activity to attended events and higher activity for ignored events. In other words, the fSTS depends on the superior colliculus to mark which events are important and which are not.

The researchers also found that some fSTS neurons fired in response to specific images, a property found only in areas of the brain that manage high-level processing. For example, some fSTS neurons would only fire in response to an image of a water bottle, but not a stereo or an abstract image. Without the contribution of the superior colliculus, many of these object-specific neurons in the fSTS failed to fire in response to their favored object.

"Even in an animal like a mouse, which has a pretty sophisticated visual system, there are a lot of shortcuts to interpret what things mean, handling much of that in the superior colliculus," Krauzlis said. "But in humans and other primates, that processing is spread out and delayed, passing information from the superior colliculus to the cortex through this fSTS region. And I think that lets us take advantage of a wider variety of visual features to help us figure out what a visual event means."

These findings are particularly relevant to a condition known as visual neglect, which can occur in people after a stroke or other brain injury that affects brain areas involved in visual attention. People with visual neglect can see all the objects and events in their visual field, but often aren't aware of the events on the affected side, especially when the visual field is cluttered.

"Visual attention has to do with the internal management of information," Krauzlis said. "The connection with the superior colliculus is important, because we think it could be acting like a spatial index, that helps you keep track of the information that you're trying to process."

Credit: 
NIH/National Eye Institute

Errant DNA boosts immunotherapy effectiveness

image: A UT Southwestern study discovered the molecular mechanism by which tumors defective in DNA mismatch repair respond to immunotherapy. This illustration depicts how cells use a programmed mismatch repair deficiency-activated system (robot) to detect and eliminate tumors (yellow cells).

Image: 
Illustration by Yipin Wu

DALLAS - Dec. 17, 2020 - DNA that ends up where it doesn't belong in cancer cells can unleash an immune response that makes tumors more susceptible to immunotherapy, the results of two UT Southwestern studies indicate. The findings, published online today in Cancer Cell, suggest that delivering radiation - which triggers DNA release from cells - before immunotherapy could be an effective way to fight cancers that are challenging to treat.

Nearly a decade ago, the Food and Drug Administration approved checkpoint inhibitors, a type of immunotherapy that removes defenses that allow cancer cells to masquerade as healthy cells, prompting the immune system to attack them. In 2015, researchers showed that these therapies had particular promise for cancers prompted by defects in cells' "mismatch repair" system, which proofreads DNA as it is copied. If this system is faulty, genetic mutations quickly build, spurring some cells to become malignant.

These copious mutations - which tend to make tumors difficult to treat with traditional therapies such as chemotherapy and radiation - were thought to be the reason why checkpoint inhibitors were effective against mismatch repair deficient (dMMR) tumors. However, recent research has shown that only about half of patients with dMMR tumors respond to these therapies, even though nonresponders have the same abundant mutations, explains study leader Guo-Min Li, Ph.D., professor of radiation oncology and a member of the Harold C. Simmons Comprehensive Cancer Center at UT Southwestern.

"Some mechanism beyond these many mutations must be at play to trigger an immune attack," says Li, who is also director of the Reece A. Overcash Jr. Center for Research on Colon Cancer, in Honor of Dr. Eugene Frenkel, at UTSW.

To better understand what triggers an immune response against these tumors, Li and Yang-Xin Fu, M.D., Ph.D., professor of pathology at UTSW and co-corresponding author of both studies, genetically manipulated both human and mouse cancer cells to remove Mlh1, a gene that's pivotal for mismatch repair. Compared to normal cells that were not manipulated, those without Mlh1 quickly accumulated DNA breaks and DNA in the cytosol, or intracellular fluid of the cell, rather than in its nucleus, where DNA normally resides. Treating these cells with radiation significantly enhanced how much cytosolic DNA was present.

The researchers reasoned that the increased DNA breaks from within the nucleus were prompted by overactivity of a gene called Exo1, which works closely with Mlh1 to cut mistakes out of DNA during replication and repair. Further investigation showed that when researchers removed this gene from cells without Mlh1 or disrupted the interaction between the proteins produced by Mlh1 and Exo1, the cells no longer accumulated DNA breaks and cytosolic DNA. But when Exo1 remained active in these cells, it appeared to cut DNA unabated, prompting this damaged genetic material to leak from the nucleus.

More experiments showed that this leaked DNA activated the cGAS-STING pathway, a part of the immune system that senses DNA outside of a cell nucleus and interprets it as either a sign of serious cellular damage or infection. This then triggers an immune response.

Indeed, when Li and his colleagues disrupted this pathway in cancer cells without Mlh1, tumors grew far faster than in cells with intact cGAS-STING pathways because they were spared from the immune system.

The researchers further showed that this DNA sensing pathway is pivotal for an immune response by treating animals carrying Mlh1 deficient tumors with checkpoint inhibitors. When these tumors had normal cGAS-STING pathways, the drugs were effective; but when the researchers disrupted any part of the pathway, the tumors resisted treatment.

Linking this research to human patients with dMMR cancers, the researchers tested tumor samples for their cGAS-STING function. Since mismatch repair can affect any gene, Li explains, genes in this pathway are also frequently mutated in patients with dMMR tumors. Medical records showed that patients with higher levels of proteins expressed, or activated, in the cGAS-STING pathway were more likely to survive their cancers longer and/or respond to checkpoint inhibitors than those with lower expression of the proteins, suggesting that DNA sensing is key to immunotherapy success and to an immune response to these cancers in general.

These findings, Li says, could eventually steer how dMMR tumors are treated in the future. For example, he says, evaluating how well tumors' cGAS-STING pathways work could help physicians decide whether patients will benefit from immunotherapy drugs, saving them time, money, and potential side effects. This research builds on the foundational work of Zhijian "James" Chen, Ph.D., a study co-author and professor of molecular biology at UT Southwestern, who won the 2019 Breakthrough Prize in Life Sciences for his discovery of the cGAS enzyme.

Researchers may also discover ways to manipulate downstream factors of the cGAS-STING pathway to improve its effectiveness in tumors that have lost this DNA sensor, making these cancers responsive to checkpoint inhibitors. Additionally, because radiation encourages DNA to leak into the cytoplasm, it could further enhance the effectiveness of these therapies.

"This strategy of delivering radiation before immunotherapy is already showing success in clinical trials, but the reason behind how it works was unknown," Li says. "The mechanism we report in these two papers adds insight that could someday lead to completely new ways to treat cancers."

Credit: 
UT Southwestern Medical Center

Stevens creates entangled photons 100 times more efficiently than previously possible

image: Yuping Huang and his colleagues at Stevens Institute of Technology demonstrated a quantum circuit that can readily be integrated with other optical components, paving the way for high-speed, reconfigurable, and multifaceted quantum devices.

Image: 
QuEST Lab, Stevens Institute of Technology

Super-fast quantum computers and communication devices could revolutionize countless aspects of our lives -- but first, researchers need a fast, efficient source of the entangled pairs of photons such systems use to transmit and manipulate information. Researchers at Stevens Institute of Technology have done just that, not only creating a chip-based photon source 100 times more efficient than previously possible, but bringing massive quantum device integration within reach.

"It's long been suspected that this was possible in theory, but we're the first to show it in practice," said Yuping Huang, Gallagher associate professor of physics and director of the Center for Quantum Science and Engineering.

To create photon pairs, researchers trap light in carefully sculpted nanoscale microcavities; as light circulates in the cavity, its photons resonate and split into entangled pairs. But there's a catch: at present, such systems are extremely inefficient, requiring a torrent of incoming laser light comprising hundreds of millions of photons before a single entangled photon pair will grudgingly drip out at the other end.

Huang and colleagues at Stevens have now developed a new chip-based photon source that's 100 times more efficient than any previous device, allowing the creation of tens of millions of entangled photon pairs per second from a single microwatt-powered laser beam.

"This is a huge milestone for quantum communications," said Huang, whose work will appear in the Dec. 17 issue of Physical Review Letters.

Working with Stevens graduate students Zhaohui Ma and Jiayang Chen, Huang built on his laboratory's previous research to carve extremely high-quality microcavities into flakes of lithium niobate crystal. The racetrack-shaped cavities internally reflect photons with very little loss of energy, enabling light to circulate longer and interact with greater efficiency.

By fine-tuning additional factors such as temperature, the team was able to create an unprecedentedly bright source of entangled photon pairs. In practice, that allows photon pairs to be produced in far greater quantities for a given amount of incoming light, dramatically reducing the energy needed to power quantum components.

The team is already working on ways to further refine their process, and say they expect to soon attain the true Holy Grail of quantum optics: a system that can turn a single incoming photon into an entangled pair of outgoing photons, with virtually no waste energy along the way. "It's definitely achievable," said Chen. "At this point we just need incremental improvements."

Until then, the team plans to continue refining their technology, and seeking ways to use their photon source to drive logic gates and other quantum computing or communication components. "Because this technology is already chip-based, we're ready to start scaling up by integrating other passive or active optical components," explained Huang.

The ultimate goal, Huang said, is to make quantum devices so efficient and cheap to operate that they can be integrated into mainstream electronic devices. "We want to bring quantum technology out of the lab, so that it can benefit every single one of us," he explained. "Someday soon we want kids to have quantum laptops in their backpacks, and we're pushing hard to make that a reality."

Credit: 
Stevens Institute of Technology

Scientists create a new phototoxic protein, SuperNova2

image: A single mutation, S10R, replacing the serine at position 10 with arginine (highlighted in blue on the protein structure at left), improved the properties of the phototoxic fluorescent proteins KillerRed and SuperNova, despite its distance from the chromophore (highlighted in raspberry)

Image: 
Skoltech

Scientists from Skoltech, the Institute of Bioorganic Chemistry of RAS, and the London Institute of Medical Sciences (LMS) have developed an enhanced version of SuperNova, a genetically encoded phototoxic photosensitizer, that helps control intracellular processes by light exposure. The research was published in the International Journal of Molecular Sciences.

An important research tool, phototoxic proteins are used as genetically encoded photosensitizers to generate reactive oxygen species under light irradiation. In contrast to common chemical photosensitizers, phototoxic proteins are genetically encoded and expressed by the cell itself, which makes them easy to control and direct to any selected compartment in the cell. Thanks to reactive oxygen species formed by the action of light, phototoxic proteins can create strictly localized oxidative stress, for example, to destroy a selected cell population or disable target proteins - a feature particularly sought after in the modeling of cellular processes.

The first phototoxic protein, KillerRed, was described by a team of Russian researchers led by Konstantin Lukyanov, a professor at the Skoltech Center of Life Sciences (CLS), in 2006. KillerRed was further enhanced by Japanese scientists and renamed SuperNova. In their recent study, professor Lukyanov's team has developed SuperNova2, an improved version of SuperNova, which displays high speed and completeness of maturation and is monomeric, which makes the new protein easily usable and suitable for a broad variety of molecular biology tasks.

"We expect that the genetically encoded photosensitizer SuperNova2 will find use in a wide range of experimental models," professor Lukyanov comments.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Machine learning boosts the search for 'superhard' materials

image: Researchers have developed a machine learning model that can accurately predict the hardness of new materials, allowing scientists to more readily find compounds suitable for use in a variety of applications.

Image: 
University of Houston

Superhard materials are in high demand in industry, from energy production to aerospace, but finding suitable new materials has largely been a matter of trial and error based on classical materials such as diamonds. Until now.

Researchers from the University of Houston and Manhattan College have reported a machine learning model that can accurately predict the hardness of new materials, allowing scientists to more readily find compounds suitable for use in a variety of applications. The work was reported in Advanced Materials.

Materials that are superhard - defined as those with a hardness value exceeding 40 gigapascals on the Vickers scale, meaning it would take more than 40 gigapascals of pressure to leave an indentation on the material's surface - are rare.

"That makes identifying new materials challenging," said Jakoah Brgoch, associate professor of chemistry at UH and corresponding author for the paper. "That is why materials like synthetic diamond are still used even though they are challenging and expensive to make."

One of the complicating factors is that the hardness of a material may vary depending on the amount of pressure exerted, a phenomenon known as load dependence. That makes testing a material experimentally complex, and modeling it computationally is, at present, almost impossible.

The model reported by the researchers overcomes that by predicting the load-dependent Vickers hardness based solely on the chemical composition of the material. The researchers report finding more than 10 new and promising stable borocarbide phases; work is now underway to design and produce the materials so they can be tested in the lab.
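At its core, the task described above is a regression from composition-derived descriptors plus the applied load to Vickers hardness, which a prediction then screens against the 40-gigapascal superhard threshold. The sketch below is a deliberately simplified stand-in: the descriptors, the synthetic data, and the least-squares fit are all assumptions for illustration, not the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in inputs: a few composition-derived descriptors plus the applied load.
n_samples = 560                        # compounds in the training database
X = rng.normal(size=(n_samples, 4))    # hypothetical descriptors
true_w = np.array([8.0, -3.0, 5.0, -1.5])
hardness_gpa = X @ true_w + 20 + rng.normal(scale=0.5, size=n_samples)

# Least-squares fit (with a bias column) as a minimal stand-in for the ML model.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(n_samples)], hardness_gpa, rcond=None)

# Screen a hypothetical new composition: superhard means > 40 GPa Vickers.
candidate = np.array([1.0, -1.0, 2.0, 0.0, 1.0])  # descriptors + bias term
predicted_hv = candidate @ w
print(predicted_hv > 40.0)
```

The published work uses a far richer feature set and model, but the screening logic, predict hardness, then keep only candidates above the superhard threshold, is the same.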

Based on the model's reported accuracy of 97%, the odds are good.

First author Ziyan Zhang, a doctoral student at UH, said the database built to train the algorithm is based on data involving 560 different compounds, each yielding several data points. Finding the data required poring over hundreds of published academic papers to find data needed to build a representative dataset.

"All good machine learning projects start with a good dataset," said Brgoch, who is also a principal investigator with the Texas Center for Superconductivity at UH. "The true success is largely the development of this dataset."

In addition to Brgoch and Zhang, additional researchers on the project include Aria Mansouri Tehrani and Blake Day, both with UH, and Anton O. Oliynyk from Manhattan College.

Researchers traditionally have used machine learning to predict a single variable of hardness, Brgoch said, but that doesn't account for the complexities of the property like load dependence, which he said still aren't well understood. That makes machine learning a good tool, despite earlier limitations.

"A machine learning system doesn't need to understand the physics," he said. "It just analyzes the training data and makes new predictions based on statistics."

Machine learning does have limitations, though.

"The idea of using machine learning isn't to say, 'Here is the next greatest material,' but to help guide our experimental search," Brgoch said. "It tells you where you should look."

Credit: 
University of Houston

Childhood intervention can prevent 'deaths of despair'

DURHAM, N.C. -- Mortality rates among young adults are rising in the U.S. due in part to "deaths of despair" -- preventable deaths from suicide, drug overdoses and alcohol-related liver disease. An intensive childhood intervention program called Fast Track could help reduce these deaths by reducing risky behaviors in adolescence and young adulthood, finds new research from Duke University and the Conduct Problems Prevention Research Group.

"To reduce deaths of despair, we must prevent the hopelessness and destructive behaviors that often lead to these deaths," says study co-author Kenneth A. Dodge, the William McDougall Distinguished Professor of Public Policy Studies at Duke's Sanford School of Public Policy. Dodge is a member of the Conduct Problems Prevention Research Group that created the Fast Track program.

"We knew that the Fast Track intervention was successful at reducing aggression in childhood and reducing criminal arrests in early adulthood," Dodge said. "What this latest study demonstrates is that this early intervention also has positive impact in increasing hope and reducing behaviors of despair."

Factors contributing to deaths of despair include hopelessness, cynicism, poor interpersonal skills and conflict and failure in social relationships. Many of these factors originate during childhood and are ripe for preventive intervention, Dodge said.

"We designed the Fast Track program to improve emotional awareness and interpersonal competence among children at high risk for peer conflict, antisocial and delinquent behaviors and life-course failure," Dodge said. "The intervention began when children were in first grade and continued for 10 years. Participants are now reaching their mid- to late thirties."

Participants were drawn from high-risk elementary schools in Durham, North Carolina; Nashville, Tennessee; rural Pennsylvania; and Seattle, Washington. Starting in first grade, students were randomly assigned either to receive Fast Track or to be followed as a control group.

The findings show lower rates of "behaviors of despair" in young adulthood for Fast Track participants than for the control group.

Among young people ages 15 to 25, the Fast Track intervention was linked with significantly lower rates of suicidal ideation, or thoughts of suicide. Within the control group, 24.3 percent reported suicidal ideation, compared with only 16.3 percent of Fast Track participants - a 45.1 percent difference.

Hazardous drinking rates were also lower among young people who took part in Fast Track. Among study participants ages 15 to 25, 14.9 percent of control group members reported hazardous drinking, compared with 8.9 percent of Fast Track participants - a difference of 45.3 percent.

In addition, opioid use was significantly lower among Fast Track participants. Within the control group, 4.1 percent reported at least weekly use of opioids. Among former Fast Track participants, 1.7 percent used opioids at least weekly - a difference of 61.2 percent.

"Our findings suggest that prevention programs aimed at facilitating the acquisition of social and behavioral competence in conduct-problem children could reverse the alarming rise in early and midlife diseases of despair," the study says.

"The breadth and magnitude of the positive impacts make a clear case for the value of early holistic, developmentally informed, psychological interventions that involve the child, family, and school in mitigating preventable self-inflicted mortality."

Credit: 
Duke University

Developing new classification criteria for improving antiphospholipid syndrome research

An international team of more than 80 collaborators led by Hospital for Special Surgery (HSS) investigators is developing new classification criteria for clinical research of antiphospholipid syndrome (APS), a life-threatening autoimmune clotting disorder.

In their paper, published online ahead of print on November 30, 2020, in Arthritis Care & Research, the investigators reported on the first two of four phases of criteria development. Phase I and II involved generating 261 candidate criteria for defining APS and reducing the list to 27 finalist criteria, grouped into six specific categories: laboratory tests, macrovascular, microvascular, obstetric, cardiac and hematologic. The ultimate goal is to develop a threshold-based scoring system that will allow researchers worldwide to determine which patients most likely have APS and should be included in research studies.

APS is an autoimmune disorder in which the immune system produces antibodies against specific normal proteins in the blood, causing clots in blood vessels, low platelet counts, heart valve abnormalities and pregnancy complications. It is the leading cause of stroke in young people, causing one in three strokes in people under the age of 50, one in five recurrent miscarriages and up to 20 percent of all deep vein thromboses. The disease is often found in patients with other autoimmune conditions, especially lupus.

The current APS Sapporo Classification System was published in 1999, validated in 2000, and revised in 2006. "The previous classification system was based on the available evidence at the time," says lead author Medha Barbhaiya, MD, MPH, rheumatologist at the Barbara Volcker Center for Women and Rheumatic Diseases at HSS and assistant professor of medicine and population health sciences at Weill Cornell Medicine. "In the last decade, researchers have identified additional features of APS that deserve inclusion. We also now have more rigorous methods for developing and validating disease classification criteria."

The international initiative is jointly funded by the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR), in recognition of the need for improving the method of classifying patients for APS research to be used worldwide. Doruk Erkan, MD, MPH, rheumatologist at the Barbara Volcker Center for Women and Rheumatic Diseases at HSS and associate professor of medicine at Weill Cornell Medicine, is the senior author and co-principal investigator together with Stephane Zuily, MD, MPH, PhD, a vascular medicine specialist and professor of medicine at the Université de Lorraine, Nancy, France.

Emphasizing the difference between classification versus diagnosis criteria for APS, Dr. Erkan says, "Classification criteria are intended to be used in a research setting whereas diagnostic criteria are used in a clinical setting for the purpose of managing diseases; the two sets of criteria may have overlapping elements, but they have separate goals. The goal of classification criteria is to ensure that patients with common characteristics, thought to be specific for a certain disease, are included in appropriate research studies. In contrast, the goal of diagnostic criteria is to identify, as accurately as possible, whether patients have that particular disease."

The next phase of the new APS classification criteria methodology involves defining, refining, and weighting the candidate criteria and assigning a threshold score for identifying APS using real-world cases and a computer-based decision tool. Following the final validation phase of the project, researchers plan to report their results in 2021. Dr. Barbhaiya is also supported by a separate but related grant, the 2018 Rheumatology Research Foundation Investigator Award, to investigate risk factors for APS using the data collected during new classification criteria development.

"We are excited to be developing validated classification criteria for APS as previously done for other rheumatologic diseases, such as lupus and rheumatoid arthritis," says Dr. Erkan, who is also co-chair of the AntiPhospholipid Syndrome Alliance for Clinical Trials and International Networking (APS ACTION). "A threshold score for identifying patients with the disease will go a long way to advancing APS research, with the ultimate goal of high-quality research to improve APS prevention and treatment options for patients."

Credit: 
Hospital for Special Surgery

Infant circumcision may lead to social challenges as an adult

Undergoing circumcision as an infant can have delayed psychological complications. This is the finding of an international study led by researchers from Aarhus University.

Researchers have long disagreed about the health implications, including for mental health, of circumcising small boys. A new study shows that infant circumcision, which a third of the world's male population has undergone, has consequences in adulthood. Alessandro Miani and Michael Winterdahl from Aarhus University and Aarhus University Hospital coordinated the study.

"We wanted to challenge the assumption that there are no delayed consequences of infant circumcision apart from the purely physical because of the absence of foreskin," says Michael Winterdahl about the background for the study.

Emotional stability

The researchers recruited 408 American men who had been circumcised within the first month of their lives and 211 American men who had not been circumcised. All participants completed six questionnaires focusing on the ability to bond with others and the handling of stress.

"The study showed that men who had been circumcised as infants found it more difficult to bond with others, for example a partner, and were more emotionally unstable, while the study found no differences in empathy or trust. Infant circumcision was also associated with a stronger sexual drive as well as a lower stress threshold," says Michael Winterdahl.

He elaborates: "We know from previous studies that the combination of attachment to a partner and emotional stability is important for maintaining a healthy relationship, and thus family structure; a lack of these may lead to frustration and possibly less restricted sexual behaviour."

Stressed infant

The results have been published in the journal Heliyon.

According to the researchers, the study links the state of stress that circumcision triggers in the infant with the altered behaviour that first becomes apparent in adulthood.

"Our findings are especially relevant for prospective parents who want to make an informed choice about circumcision on behalf of their child, but they are also directed at anyone who wishes to see more light shed on a taboo topic that often drowns in emotional debate," says Michael Winterdahl.

He stresses that the study does not, as such, point to any pathological changes among circumcised men.

"Our study says something about differences at population level, not about individuals. It's important to remember that as individuals, we vary enormously in virtually all parameters - also in how we bond with our partner, for example," he says.

Credit: 
Aarhus University

Fibrous protein finding may lead to improved bioprinting, tissue engineering

image: Collagen and fibrinogen in aqueous solutions form a solid layer on the surface of water, corrupting flow behavior measurements with rotational rheometers. The addition of a non-ionic surfactant in small volumes prevents the formation of the solid layer, allowing accurate estimation of flow behavior of the solutions.

Image: 
Hemanth Gudapati, Penn State

Fibrous proteins such as collagen and fibrinogen form a thin solid layer on the surface of an aqueous solution, similar to the "skin" that forms on warm milk, according to a team of Penn State researchers, who believe this finding could lead to more efficient bioprinting and tissue engineering.

In the human body, fibrous proteins provide structural support for cells and tissues and aid in biomechanics. Collagen makes up 80% of our skin and 10% of our muscles, while fibrinogen helps in blood clotting by forming the hydrogel fibrin.

"Collagen and fibrinogen protein solutions are widely used as precursors of collagen and fibrin hydrogels in tissue engineering applications," said Hemanth Gudapati, graduate student in engineering science and mechanics. "This is because collagen and fibrin, which are used as structural materials for tissue engineering similar to their role in the human body, are nontoxic and biodegradable and mimic the natural microenvironments of cells."

Gudapati and fellow researchers report in Soft Matter, for the first time, that fibrous proteins form a solid layer on the surface of water due to aggregation of proteins at the air/water interface. This solid layer interferes with accurate measurements of the solution's rheology, which is the study of fluid properties such as flow. Previously, such solid layers at the air/water interface had been demonstrated only for the other main class of proteins, globular proteins.

Accurate rheology measurements are vital for successful bioprinting. Measurement of viscosity is important for identifying what protein solutions are potentially printable, and for detecting inconsistencies in flow behavior among different batches of fibrous proteins.

"Collagen and fibrinogen are extracted from animals, and their flow behavior changes from batch to batch and with time," Gudapati said.

This in turn leads to a challenge for consistent bioprinting results.

"Accurate measurement of flow behavior helps in reliable or consistent delivery of the protein solutions during bioprinting," Gudapati said. "This helps in fabrication of things such as reliable organ-on-chip devices and disease models."

A potential solution for accurate measurement is to add a surfactant such as polysorbate 80 to prevent the formation of film at the air/water interface.

The research also identifies the concentrations of protein solutions which are potentially printable via inkjet bioprinting, along with identifying bioprinting operating parameters.

Gudapati said there were other findings in their research that will require further investigation. These included the possibility that the aggregated fibrous proteins at the air/water interface may get released from the interface and that these protein aggregates may cause further accumulation of the proteins in the solutions.

"The further bulk aggregation could be one of the reasons for poor alignment of collagen fibers or poor mechanical strength of fibrin outside the body, i.e., in vitro, which are the challenges facing tissue engineering applications at present," Gudapati said.

The work was done in the lab of Ibrahim Ozbolat, Hartz Family Career Development Associate Professor of Engineering Science and Mechanics, in collaboration with Ralph Colby, professor of materials science and engineering and chemical engineering.

"Dr. Colby's work with globular protein solutions influenced our work," Gudapati said. "For example, we realized that the fibrous proteins could be behaving similar to globular proteins at the air/water interface at the beginning of our research."

Credit: 
Penn State

Two thirds of people with lupus would take COVID-19 vaccine, shows LRA survey

New York, NY - December 16 - Two out of three people with lupus (64%) are willing to take a COVID-19 vaccine if it is free and determined safe by scientists, according to results of a survey conducted by the Lupus Research Alliance (LRA). However, it is important to note that 14% say they would not take the vaccine, while 22% are uncertain.

Conducted October 19 - November 17, the national survey included 703 people with lupus and 63 of their family members and friends. Respondents represented all 50 states, the District of Columbia, Puerto Rico and the Virgin Islands.

Those who would get the vaccine are just as motivated by wanting to protect others from the virus (97%) as by protecting themselves (98%). This finding is particularly poignant as people with lupus are considered more vulnerable to COVID-19 because their immune systems are weakened by the disease and many of its treatments.

Other reasons that would influence all respondents with lupus in deciding whether to take the vaccine include having information about the vaccine's safety specifically for people with lupus and a recommendation by their healthcare provider. As could be expected, a higher proportion of those who already usually take a flu vaccine are more likely to say they would get a COVID-19 vaccine.

Although many respondents are willing to take a COVID-19 vaccine, only half of all respondents surveyed have confidence in the process used to test vaccine safety and effectiveness. Of those who say they would not take the vaccine, 90% worry about side effects and 86% fear a lupus flare. In open-ended questions, many expressed similar concerns because they think safety has not been demonstrated over a long enough time. The fairness of vaccine distribution is also questioned by 59% of respondents with lupus and 43% of those without lupus.

Attitudes Differ by Race/Ethnicity

Black or African Americans with lupus are less likely to say they would definitely or probably get a COVID-19 vaccine (34%) versus 50% of all Black/African Americans in the general population.

Hispanics/Latinos are most likely (34%) to say they are undecided about taking the vaccine.

Confidence in safety and effectiveness testing varies by race and ethnic background with Blacks (75%) being the least confident.

The U.S. Food and Drug Administration (FDA) just granted emergency use authorization to Pfizer Inc. and BioNTech SE for their COVID-19 vaccine which uses a new technology, mRNA. With this designation, the FDA "may allow unapproved medical products or unapproved uses of approved medical products to be used in an emergency to diagnose, treat or prevent serious or life-threatening diseases or conditions caused by chemical, biological, radiological, or nuclear threat agents when there are no adequate, approved and available alternatives."

LRA President Kenneth M. Farber notes, "We are very pleased to see these vaccines become available. The experts on our Scientific Advisory Board have reviewed the data made public so far, and the vaccines do not seem to pose a particular concern for people with lupus. However, whether or not to take this or any vaccine is a decision that must be made between the patient and their healthcare provider."

The LRA is keeping abreast of the evolving vaccine status. To learn more about the different types of technologies used to develop these vaccines, watch this video of LRA President & CEO Kenneth Farber. Also, this Q&A provides answers to common questions we're hearing about vaccines amid the COVID-19 pandemic.

Credit: 
Lupus Research Alliance

Elite soccer players help define normal heart measures in competitive athletes

BOSTON - Clinicians are often asked to assess competitive athletes with cardiovascular symptoms and to screen asymptomatic athletes for hidden heart problems. This is especially common with soccer, the world's most popular sport. To provide guidance, a team led by investigators at Massachusetts General Hospital (MGH) conducted a study to determine what should be considered normal heart scan results in elite female and male soccer players. The findings are published in JAMA Cardiology.

When evaluating athletes, it's important for physicians to differentiate between normal exercise-induced cardiovascular adaptations and abnormal responses that are detrimental to health. Knowing the difference requires analyses of sport- and sex-specific data. To this end, researchers examined cardiovascular data from 122 female and 116 male athletes from screenings overseen by Aaron Baggish, MD, director of the Cardiovascular Performance Program at MGH and chief cardiologist for U.S. Soccer, at U.S. Soccer National Team training locations from 2015 through 2019.

The screenings included both electrocardiograms, which assess the heart's electrical activity, and echocardiograms (heart ultrasound), which show the heart's structure. "Electrocardiograms that met international criteria for being abnormal were more common in the female athletes, but none of these individuals had evidence of underlying abnormalities on their heart ultrasounds," says lead author Timothy Churchill, MD, an investigator in Medicine at MGH. "We also found that athletes of both sexes frequently exceeded the general-population-defined normal values for a number of important measures of heart size, likely reflective of the athletes' hearts adapting to their exercise training." Churchill stressed that none of the athletes had very worrisome findings or signs of heart muscle disease that would restrict them from competition.

The investigators hope that their study will provide clinicians with a reference that can be used when assessing athletes who are seen for either pre-participation screening or evaluation of heart-related symptoms. "These types of assessments arise frequently and are expected to become even more common as athletes return to competition in the setting of COVID-19 exposure or infection, given concerns that have emerged for potential cardiac involvement," says Baggish, who was senior author of the study. "We hope our data can contextualize the athletes' cardiac findings and help clinicians determine what is normal and what may suggest possible underlying disease."

Credit: 
Massachusetts General Hospital

International study reveals the effects of COVID-19 on the experience of public transport

image: Tauri Tuvikene, a Senior Research Fellow at Tallinn University

Image: 
Tallinn University

A team of European researchers working on a project about public transport as public space have recently completed a study on the perception and use of public transport during the first wave of COVID-19.

The study employed an extensive questionnaire along with 49 interviews conducted in Tallinn, Stockholm, Brussels, Munich, Berlin and Dresden from April to July 2020. The results reveal how the coronavirus pandemic changed more than just the number of people using public transport--the outbreak also altered the emotions, sensations, and overall experience of using public transport. In other words, it influenced the ways in which public transport is used and perceived as public space.

The study diversifies and complicates the dominant fear-centric narrative connected to public transport throughout the outbreak. It shows that using public transport during the COVID-19 pandemic actually encompassed a vast array of experiences. The researchers also highlight the unequal opportunities and constraints that influenced individuals and communities during the state of emergency in spring.

The study revealed a strong divide along economic and educational lines. People in better financial circumstances and with a higher level of education were more likely to give up public transport, that is, to work from home or drive to work in a personal car. Importantly, those with the opportunity to avoid public transport were also the ones most afraid of it. In contrast, continuing users of public transport, particularly frequent ones, found it safer than those who had stopped using it completely.

"Almost half of the people who avoided using public transport entirely found that it is much more dangerous compared with other public spaces," says Tauri Tuvikene, a Senior Research Fellow at Tallinn University, about one of the study's findings. "However, those who continued to use public transport found it as safe as or even safer than grocery stores and shopping centres."

One of the most important observations in the study was that the conditions on public transport were described more neutrally than was expected. When participants were asked to describe the atmosphere in public transport, they commonly referred to distancing, a calm environment and an emptier space that made their ride more comfortable. The narratives of study participants were mostly pragmatic, neutral and calm rather than being overshadowed by fear.

However, the desire to reduce physical contact does introduce new feelings of uncertainty. The necessity to keep one's distance from others raises questions about how to engage with others. For instance, is helping other passengers in need still the right thing to do? Or, is the opposite true now that there is a risk of infection? The new normal of keeping distance has not yet struck a balance with the already existing norms of public transport as a social space.

"Perhaps COVID-19 has made it more obvious than before that public transport is also a public space where people meet a lot of strangers, which can undoubtedly cause some tensions," says Tuvikene. "Many interviewees undoubtedly considered public transport a public space. However, it was considered less social than before the pandemic, as if public transport had been a remarkably social space before that."

Credit: 
Estonian Research Council

A non-destructive method for analyzing Ancient Egyptian embalming materials

image: Researchers analyzed embalming material from the neck of this Ancient Egyptian mummy, which was acquired by a French museum in 1837.

Image: 
Frédérique Vincent, ethnographic conservator

Ancient Egyptian mummies have many tales to tell, but unlocking their secrets without destroying delicate remains is challenging. Now, researchers reporting in ACS' Analytical Chemistry have found a non-destructive way to analyze bitumen -- the compound that gives mummies their dark color -- in Ancient Egyptian embalming materials. The method provides clues to the bitumen's geographic origin and, in one experiment, revealed that a mummy in a French museum could have been partially restored, likely by collectors.

The embalming material used by Ancient Egyptians was a complex mixture of natural compounds such as sugar gum, beeswax, fats, coniferous resins and variable amounts of bitumen. Also known as asphalt or tar, bitumen is a black, highly viscous form of petroleum that arises primarily from fossilized algae and plants. Researchers have used various techniques to analyze Ancient Egyptian embalming materials, but they typically require preparation and separation steps that destroy the sample. Charles Dutoit, Didier Gourier and colleagues wondered if they could use a non-destructive technique called electron paramagnetic resonance (EPR) to detect two components of bitumen formed during the decomposition of photosynthetic life: vanadyl porphyrins and carbonaceous radicals, which could provide information on the presence, origin and processing of bitumen in the embalming material.

The researchers obtained samples of black matter from an Ancient Egyptian sarcophagus (or coffin), two human mummies and four animal mummies (all from 744-30 B.C.), which they analyzed by EPR and compared to reference bitumen samples. The team discovered that the relative amounts of vanadyl compounds and carbonaceous radicals could differentiate between bitumen of marine origin (such as from the Dead Sea) and land-plant origin (from a tar pit). Also, they detected vanadyl compounds that likely formed from reactions between the vanadyl porphyrins and other embalming components. Intriguingly, the black matter taken from a human mummy acquired by a French museum in 1837 didn't contain any of these compounds, and it was very rich in bitumen. This mummy could have been partially restored with pure bitumen, probably by a private collector to fetch a higher price before the museum acquired it, the researchers say.

Credit: 
American Chemical Society