
Self-healing concrete for regions with high moisture and seismic activity

image: The slab of self-healing concrete is measured for its compressive strength.

Image: 
FEFU press office

While preparing regular concrete, scientists replaced ordinary water with a water concentrate of the bacteria Bacillus cohnii, which survive in the pores of the cement stone. The cured concrete was tested for compression until it cracked, and the researchers then observed how the bacteria repaired the gaps, restoring the concrete's strength. Engineers of the Polytechnic Institute of Far Eastern Federal University (FEFU), together with colleagues from Russia, India, and Saudi Arabia, reported the results in the journal Sustainability.

During the experiment, the bacteria activated once they gained access to oxygen and moisture, which occurred after the concrete cracked under the pressure of the test setup. The "awakened" bacteria completely repaired fissures 0.2 to 0.6 mm wide within 28 days. They did so by releasing calcium carbonate (CaCO3), a product of their metabolism that crystallized under the influence of moisture. After 28 days of self-healing, the experimental concrete slabs regained their original compressive strength, and the bacteria in the renewed concrete "fell asleep" again.

"Concrete remains the world's number one construction material because it is cheap, durable, and versatile. However, any concrete gets cracked over time because of various external factors, including moisture and repetitive freezing/thawing cycles, the quantity of which in the Far East of Russia, for example, is more than a hundred per year. Concrete fissuring is almost irreversible process that can jeopardize the entire structure." Says engineer Roman Fediuk, FEFU professor. "What we have accomplished in our experiment aligns with international trends in construction. There is pressing demand for such "living" materials with the ability to self-diagnose and self-repair. It's very important that bacteria healed small fissures-forerunners of serious deep cracks that would be impossible to recover. Thanks to bacteria working in the concrete, one can reduce or avoid at all technically complex and expensive repair procedures."

Spores of Bacillus cohnii are capable of staying alive in concrete for up to two hundred years and, theoretically, can extend the lifespan of a structure by the same period, almost four times the conventional concrete service life of 50-70 years.

Self-healing concrete is most relevant for construction in seismically risky areas, where small fissures appear in buildings after earthquakes of even modest magnitude, and in areas with high humidity and heavy rainfall, where a lot of oblique rain falls on the vertical surfaces of buildings. The bacteria also fill the pores of the cement stone, making them smaller so that less water gets inside the concrete structure.

The scientists cultivated the bacteria Bacillus cohnii in the laboratory using a simple agar pad and culture medium, forcing them to survive in the conditions of the cement stone's pores and to release the desired "repair" composition. Fissure healing was assessed under a microscope, and the chemical composition of the bacteria's repair product was studied via electron microscopy and X-ray imaging.

Next, the scientists plan to develop reinforced concrete, further enhancing its properties with the help of different types of bacteria. That should speed up the processes of material self-recovery.

A scientific school of geomimetics runs at FEFU. Its engineers follow the principle of mimicking nature in the development of composites for special structures and civil engineering. Concrete, as conceived by the developers, should have the strength and properties of natural stone. The foundations of geomimetics were laid by Professor Valery Lesovik from V.G. Shukhov BSTU, Corresponding Member of the Russian Academy of Architecture and Construction Sciences.

Credit: 
Far Eastern Federal University

Vets' depression, social support & psychological resilience play role in later well-being

(Boston)--Veterans who experienced the combination of low depression, high social support and high psychological resilience as they left military service were most likely to report high well-being a year later.

Neither demographic and military characteristics nor trauma history emerged as strong predictors of veterans' well-being when considered in the context of other factors. Although most predictors were similar for women and men, depression was a stronger predictor of women's well-being.

Every year, more than 200,000 U.S. service members transition out of the military. Although most military veterans can be expected to successfully navigate this transition, a substantial number of individuals struggle to adapt to post-military life.

In an effort to identify the combination of background factors that best predict U.S. military veterans' post-military well-being, the researchers sampled more than 7,100 veterans.

Neither demographic and military characteristics nor traumatic life experiences emerged as important predictors of veterans' well-being in the context of other factors, suggesting that their impact on veterans' well-being is likely carried through the other factors addressed in the study. Instead, the researchers found that those who reported the combination of high depression, low social support, and low psychological resilience shortly after leaving service were the most likely to report poor well-being a year later.

"These findings support the value of screening for these risk factors and intervening with veterans who report this combination of factors at the time they leave military service to reduce veterans' risk of poor post-military well-being," explained corresponding author Dawne S. Vogt, PhD, research scientist in the Women's Health Sciences Division, National Center for PTSD at VA Boston Healthcare System and professor of psychiatry at Boston University School of Medicine.

Although prior research has examined the role that depression, social support and psychological resilience play in U.S. veterans' post-military mental health and functioning, this study provides new insight into how these factors work together to impact veterans' well-being. "The fact that the worst well-being was reported by veterans with high depression and low social support underscores the particularly important role that the combination of these two factors is likely to play in veterans' post-military well-being. Moreover, the finding that higher psychological resilience moderated the impact of depression on veterans' well-being suggests that those individuals who are most resilient may be better able to cope with depression when it is experienced," added Vogt.
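The moderation finding described above corresponds, statistically, to an interaction term in a regression model. As a rough illustration only (the variable names, data file, and model below are hypothetical stand-ins, not the study's actual measures or code), such an analysis might look like this:

```python
# Hypothetical sketch of a moderation (interaction) analysis like the one
# described above. Variable names and data are illustrative, not the study's.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("veterans_survey.csv")  # hypothetical data file

# Well-being predicted by depression, resilience, and social support,
# with a depression x resilience interaction capturing "moderation".
model = smf.ols(
    "well_being ~ depression * resilience + social_support",
    data=df,
).fit()
print(model.summary())

# A depression:resilience coefficient that weakens depression's negative
# effect would indicate that more resilient veterans cope better with
# depression, as the quote above describes.
```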

Credit: 
Boston University School of Medicine

IU study finds unintended consequences of state, opioid policies

image: Byungkyu Lee

Image: 
Indiana University

In response to the increase in opioid overdose deaths in the United States, many states have implemented supply-controlling and harm-reduction policy measures aimed at reducing those deaths. But a recent study from Indiana University found the policies may have had the unintended consequence of motivating those with opioid use disorders to switch to alternative illicit substances, leading to higher overdose mortality.

"Literature from public health to social sciences has presented mixed and contradictory findings on the impact of opioid policies on various opioid adverse outcomes," said Byungkyu Lee, assistant professor of sociology at IU and co-author of the study. "Our findings suggest that the so-called opioid paradox -- the rise of opioid-related deaths despite declines in opioid prescriptions -- may arise from the success, not the failure, of state interventions to control opioid prescriptions."

Researchers used the National Vital Statistics System and the Optum Clinformatics DataMart to look at drug overdose mortality data from 50 states and claims data from 23 million commercially insured patients in the U.S. between 2007 and 2018. They then evaluated the prevalence of indicators of prescription opioid abuse, opioid use disorder and overdose diagnosis, as well as prescriptions for medication-assisted treatment and drug overdose deaths, before and after the implementation of six state-level policies targeting the opioid epidemic.

Policies included prescription drug monitoring program access, mandatory prescription drug monitoring programs, pain clinic laws, prescription limit laws, naloxone access laws and Good Samaritan laws.

The study, published in JAMA Network Open, found that supply-controlling policies were associated with a lower proportion of patients who take opioids, have overlapping claims, receive higher opioid doses, or visit multiple providers and pharmacies. The researchers also found that harm-reduction policies were associated with modest increases in the proportion of patients with overdose and opioid use disorder. Additionally, the proportion of patients receiving medication-assisted treatment drugs increased following the implementation of supply-controlling policies.
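Before/after policy associations of this kind are commonly estimated by comparing states that adopted a policy with those that had not yet done so. The sketch below is a minimal difference-in-differences setup under assumed column names and data; it is not the study's actual pipeline:

```python
# Minimal difference-in-differences sketch for a state policy analysis.
# Column names and data are hypothetical, not the study's actual pipeline.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("state_quarter_panel.csv")  # hypothetical file with
# columns: state, quarter, overdose_rate, and policy_active
# (1 for state-quarters after the policy takes effect, else 0).

# Two-way fixed effects: state and quarter dummies absorb level differences,
# so the policy_active coefficient estimates the pre/post association.
model = smf.ols(
    "overdose_rate ~ policy_active + C(state) + C(quarter)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})
print(model.params["policy_active"])
```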

Brea Perry, professor of sociology at IU and co-author of the study, said these findings demonstrate the power of big data to provide insights into the opioid epidemic and how to best reverse it.

"Our work reveals the unintended and negative consequences of policies designed to reduce the supply of opioids in the population for overdose," Perry said. "We believe that policy goals should be shifted from easy solutions such as dose reduction to more difficult fundamental ones, focusing on improving social conditions that create demand for opioids and other illicit drugs."

In terms of overdose mortality, the study found that all overdose deaths increased following the implementation of naloxone access laws, especially deaths attributable to heroin, synthetic opioids and cocaine. Good Samaritan laws were also associated with increases in overall overdose deaths.

Furthermore, mandatory prescription drug monitoring programs were associated with a reduction in overdose deaths from natural opioids and methadone, and the implementation of pain clinic laws was associated with an increase in the number of overdose deaths from heroin and cocaine. However, having a prescription limit law was associated with a decrease in overdose deaths from synthetic opioids.

"Our work demonstrates that there is no easy policy solution to reverse the epidemic of opioid dependence and mortality in the U.S.," Lee said. "To resolve the opioid paradox, it is imperative to design policies to address the fundamental causes of overdose deaths, such as lack of economic opportunity, persistent physical, and mental pain, and enhance treatment for drug dependence and overdose rather than focusing on opioid analgesic agents as the cause of harm."

Credit: 
Indiana University

Global mapping projects aid humanitarian organisations

image: Left: Spatial distribution of all buildings added to OpenStreetMap since 2008; right: spatial distribution of buildings added through humanitarian mapping efforts using the HOT Tasking Manager.

Image: 
Benjamin Herfort

In recent years, free digital world maps like OpenStreetMap (OSM) have become a vital instrument for supporting humanitarian missions all over the world. In disaster management as well as in the implementation of the United Nations Sustainable Development Goals (SDGs), geodata compiled by the volunteer mapper community open up new possibilities to coordinate aid interventions and carry out sustainability projects. The mapping data are collected either locally, using a smartphone and GPS device, or on the basis of satellite images. An international team of researchers led by geoinformation scientists based at Heidelberg University is studying the evolution of humanitarian mapping and its repercussions on OpenStreetMap.

Digital services and platforms like the "HOT Tasking Manager" are used to coordinate the activities of volunteer mappers in all parts of the world, as well as by aid organisations to better respond to humanitarian disasters. "In recent years, the HOT Tasking Manager has become a vital tool for humanitarian mapping. Until now, however, there has been no comprehensive study on its effects or how it influences the overall development of OpenStreetMap and the make-up of the incorporated data," explains Benjamin Herfort, a doctoral candidate in the Geoinformatics Department at the Institute of Geography of Heidelberg University and research assistant at the Heidelberg Institute for Geoinformation Technology (HeiGIT), which is supported by the Klaus Tschira Foundation.

The first long-term study of all humanitarian mapping projects in the "HOT Tasking Manager" indicates that between January 2008 and May 2020 more than 60 million buildings and over four million roads were added to OpenStreetMap, primarily in regions with low and medium human development. This form of humanitarian mapping thus actively contributes to the diversification of mapping activities and of the geodata in OpenStreetMap. "However, a strong focus on the Global North still persists," states Benjamin Herfort. "In spite of the progress over recent years, regions which, according to the Human Development Index, are considered to have low and medium human development account for only 28 percent of the buildings labelled on OpenStreetMap and only 16 percent of the mapped streets, although half of the world's population lives there."
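Regional breakdowns like the 28 and 16 percent figures above come from joining mapped-feature counts to country-level Human Development Index classes and aggregating. A toy version of that aggregation (file names and columns are hypothetical; this is not the study's pipeline):

```python
# Toy aggregation of OpenStreetMap feature counts by HDI class.
# File names and columns are hypothetical, not the study's pipeline.
import pandas as pd

features = pd.read_csv("osm_feature_counts.csv")  # country, buildings, roads_km
hdi = pd.read_csv("hdi_classes.csv")              # country, hdi_class

merged = features.merge(hdi, on="country")
totals = merged.groupby("hdi_class")[["buildings", "roads_km"]].sum()

# Each HDI class's share of globally mapped features
# (cf. the 28 percent of buildings quoted above).
print((totals / totals.sum()).round(3))
```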

This inequality is largely rooted in socio-economic and demographic factors. At the same time, events like sudden natural disasters play a major role: they can launch a spate of humanitarian mapping activity, as seen for the first time in 2010 after the earthquake in Haiti. On the other hand, local and regional factors can impede humanitarian mapping, especially where there is no Internet access.

According to the study, humanitarian mapping with the "HOT Tasking Manager" has an overall positive impact on the geographical distribution of global mapping activities in OpenStreetMap. Additional studies are already being planned to bring to light the development of all humanitarian mapping activities in OSM, including those beyond the "HOT Tasking Manager". Based on their current analyses, however, the researchers have already identified opportunities for further development. They recommend that the humanitarian mapper community and participating organisations shift their priority from documenting completed mapping activities to specifically identifying regions where more data are needed. They also encourage taking a long-term view of mapping projects with a humanitarian focus, beyond acute events like natural disasters. Local mapper communities would then form in affected or vulnerable regions and continually maintain data in OpenStreetMap, keeping these regions visible on the map. Further, the researchers support simplifying local data collection by strengthening the technical infrastructure and engaging as many different social groups as possible in map-making. Benjamin Herfort: "This approach supports participation and opens up new local perspectives, thereby creating potential for development."

Credit: 
Heidelberg University

New highly radioactive particles found in Fukushima

image: A map showing the location of the Fukushima Daiichi Nuclear Power Plant (FDNPP) and the sampling site against the radiation dose at 1 m above the ground as of November, 2017. The red star represents the location of the soil sample containing the highly radioactive particles.

Image: 
Satoshi Utsunomiya et al.

The 10-year anniversary of the Fukushima Daiichi nuclear accident occurs in March. Work just published in the journal Science of the Total Environment documents new, large (>300 micrometers), highly radioactive particles that were released from one of the damaged Fukushima reactors.

Particles containing radioactive cesium (¹³⁴⁺¹³⁷Cs) were released from the damaged reactors at the Fukushima Daiichi Nuclear Power Plant (FDNPP) during the 2011 nuclear disaster. Small (micrometer-sized) particles (known as CsMPs) were widely distributed, reaching as far as Tokyo. CsMPs have been the subject of many studies in recent years. However, it recently became apparent that larger (>300 micrometers) Cs-containing particles, with much higher levels of activity (~10⁵ Bq), were also released from reactor unit 1, which suffered a hydrogen explosion. These particles were deposited within a narrow zone stretching ~8 km north-northwest of the reactor site. To date, little is known about the composition of these larger particles and their potential environmental and human health impacts.

Now, work just published in the journal Science of the Total Environment characterizes these larger particles at the atomic scale and reports levels of activity exceeding 10⁵ Bq.

The particles, reported in the study, were found during a survey of surface soils 3.9 km north-northwest of reactor unit 1 (Fig. 1).

Of the 31 Cs particles collected during the sampling campaign, two gave the highest particle-associated ¹³⁴⁺¹³⁷Cs activities ever measured for materials emitted from the FDNPP (6.1 × 10⁵ and 2.5 × 10⁶ Bq, respectively, after decay-correction to the date of the FDNPP accident).

The study involved scientists from Japan, Finland, France, the UK, and the USA, and was led by Dr. Satoshi Utsunomiya and graduate student Kazuya Morooka (Department of Chemistry, Kyushu University). The team used a combination of advanced analytical techniques (synchrotron-based nano-focus X-ray analysis, secondary ion mass spectrometry, and high-resolution transmission electron microscopy) to fully characterize the particles. The particle with a ¹³⁴⁺¹³⁷Cs activity of 6.1 × 10⁵ Bq was found to be an aggregate of smaller, flaky silicate nanoparticles with a glass-like structure. This particle likely came from reactor building materials damaged during the Unit 1 hydrogen explosion; as the particle formed, it likely adsorbed Cs that had been volatilized from the reactor fuel. The ¹³⁴⁺¹³⁷Cs activity of the other particle exceeded 10⁶ Bq. This particle had a glassy carbon core and a surface embedded with other micro-particles, including a Pb-Sn alloy, fibrous Al-silicate, Ca-carbonate / hydroxide, and quartz (Fig. 2).

The composition of the surface-embedded micro-particles likely reflects the composition of airborne particles within the reactor building at the moment of the hydrogen explosion, thus providing a forensic window into the events of March 2011 (Fig. 3). Utsunomiya added, "The new particles from regions close to the damaged reactor provide valuable forensic clues. They give snap-shots of the atmospheric conditions in the reactor building at the time of the hydrogen explosion, and of the physico-chemical phenomena that occurred during the reactor meltdown." He continued, "Whilst nearly ten years have passed since the accident, the importance of scientific insights has never been more critical. Clean-up and repatriation of residents continues, and a thorough understanding of the contamination forms and their distribution is important for risk assessment and public trust."

Professor Gareth Law (co-author, University of Helsinki) added, "Clean-up and decommissioning efforts at the site face difficult challenges, particularly the removal and safe management of accident debris that has very high levels of radioactivity. Therein, prior knowledge of debris composition can help inform safe management approaches."

Given the high radioactivity associated with the new particles, the project team were also interested in understanding their potential health / dose impacts.

Dr. Utsunomiya stated, "Owing to their large size, the health effects of the new particles are likely limited to external radiation hazards during static contact with skin. As such, despite the very high level of activity, we expect that the particles would have negligible health impacts for humans, as they would not easily adhere to the skin. However, we do need to consider possible effects on other living creatures, such as filter feeders, in habitats surrounding Fukushima Daiichi. Even though ten years have nearly passed, the half-life of ¹³⁷Cs is ~30 years. So, the activity in the newly found highly radioactive particles has not yet decayed significantly. As such, they will remain in the environment for many decades to come, and this type of particle could occasionally still be found in radiation hot spots."
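To put the decay argument in numbers: activity falls by a factor of two per half-life. Using the rounded figures from the quote above (about ten years elapsed, ~30-year half-life), not values from the paper:

```latex
A(t) = A_0 \cdot 2^{-t/t_{1/2}}, \qquad
\frac{A(10\,\mathrm{yr})}{A_0} = 2^{-10/30} \approx 0.79
```

That is, roughly 79 percent of the original ¹³⁷Cs activity remains a decade after the accident, which is why the newly found particles are still highly radioactive.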

Professor Rod Ewing (co-author from Stanford University) stated, "This paper is part of a series of publications that provide a detailed picture of the material emitted during the Fukushima Daiichi reactor meltdowns. This is exactly the type of work required for remediation and an understanding of long-term health effects."

Professor Bernd Grambow (co-author from IMT Atlantique) added, "The present work, using cutting-edge analytical tools, gives only a very small insight into the very large diversity of particles released during the nuclear accident; much more work is necessary to get a realistic picture of the highly heterogeneous environmental and health impact."

Credit: 
University of Helsinki

Study shows how some neurons compensate for death of their neighbors

image: All motor neurons and interneurons in the larval spinal cord are labeled with horseradish peroxidase (magenta). The subset of motor neurons used in this study is labeled with a transgenic reporter (green). 

Image: 
Robert Carrillo, PhD, and Yupu Wang

Our brains are complicated webs of billions of neurons, constantly transmitting information across synapses, and this communication underlies our every thought and movement.

But what happens to the circuit when a neuron dies? Can other neurons around it pick up the slack to maintain the same level of function?

Indeed they can, but not all neurons have this capacity, according to new research from the University of Chicago. By studying several neuron pairs that innervate distinct muscles in a fruit fly model, researchers found that some neurons compensate for the loss of a neighboring partner.

The results, published February 17, 2021, in the Journal of Neuroscience, are a step in the direction of understanding the plasticity of the brain and using that knowledge to better understand not only normal development, but also neurodegenerative diseases.

"Now that we know that some neurons can compensate when other neurons die, we can ask whether this process can also happen in neurological diseases," said Robert Carrillo, PhD, assistant professor of Molecular Genetics and Cell Biology and corresponding author of the paper.

Because the human brain is incredibly complex, researchers use the comparatively simple fruit fly model to investigate fundamental neuroscience concepts that could potentially translate to our higher-order brains.

To better understand how the brain adapts to structural and functional changes, Carrillo and graduate student Yupu Wang examined the fruit fly's neuromuscular system, where each muscle is innervated by two motor neurons. While it is known that neurons can alter their activity when perturbations happen at their own synapses, a process known as synaptic plasticity, they wondered what would happen if one neuron was removed from the system. Would the other neurons respond and compensate for this loss?

It's not an easy question to answer: Removing single neurons without simultaneously destroying other neurons is difficult, and it is also difficult to measure a single neuron's baseline activity. The researchers solved this by expressing cell death-promoting genes in a very specific subset of motor neurons. They then used imaging and electrophysiological recordings to isolate the activity of the single remaining neuron in the pair.

In one muscle, they found that the remaining neuron expanded its synaptic arbor and compensated for both the spontaneous and evoked neurotransmission of its missing neighbor. When the researchers performed the same procedure on two other muscles, however, they found that the remaining neuron did not compensate for the loss of its neighbor.

"It appears that some neurons have the ability to detect and compensate for their neighboring neuron, and others do not," said Wang, who is doing his graduate studies in the Committee on Development, Regeneration and Stem Cell Biology.

That could be because, as the researchers found, each neuron has different functional properties. The neuron that compensated for the loss of its neighbor also contributed most to the overall activity of the muscle under baseline conditions.

This still left the researchers with an intriguing question: How does the remaining neuron know how much to compensate? They hypothesized that the neuron pairs work together to establish a "set point" for activity upon circuit formation. Indeed, they found that if the neuron's neighbor never forms synapses (that is, if the system never knew it was supposed to get information from two neurons), then the remaining neuron will not compensate.

That leaves hope that further studies could help illuminate whether neurons show similar synaptic plasticity when their neighbors are affected by neurodegenerative diseases like amyotrophic lateral sclerosis (ALS), which causes progressive neuron death and loss of muscle function.

Next, the researchers are studying the mechanism that causes the compensation. They hope to better understand how the signal that a neuron has died is sent, and how that signal in turn causes the other neuron to compensate.

Credit: 
University of Chicago Medical Center

Online tool helps estimate COVID's true toll on sub-Saharan Africa

One early feature of reporting on the coronavirus pandemic was the perception that sub-Saharan Africa was largely being spared the skyrocketing infection and death rates that were disrupting nations around the world.

While the toll still seems mild, the true impact of the novel coronavirus, SARS-CoV-2, on the countries of sub-Saharan Africa may be obscured by tremendous variability in risk factors combined with surveillance challenges, according to a study published in the journal Nature Medicine by an international team led by Princeton University researchers and supported by Princeton's High Meadows Environmental Institute (HMEI).

"Although reports of the toll of SARS-CoV-2 in sub-Saharan Africa have to date been generally low in comparison to other regions, we must account for the extreme national and subnational variability in drivers of the pandemic across this region," said first author Benjamin Rice, a Presidential Postdoctoral Research Fellow in ecology and evolutionary biology at Princeton.

Precise projections for sub-Saharan African countries remain difficult given a lack of information on the prevalence of risk factors such as chronic diseases and access to healthcare, Rice said. But he and his co-authors synthesized a wide range of information on risk factors and trends in infection for sub-Saharan Africa from Feb. 25 to Dec. 20, 2020.

The researchers then developed an interactive online tool that shows the impact that different risk factors -- such as rates of chronic disease, the local population density of physicians, and the percentage of an urban population living in crowded housing -- might have on the trajectory of the pandemic.

The researchers also developed a set of simulations to evaluate the role of different drivers of viral spread. Their results showed that climatic variation between sub-Saharan African population centers had little effect on early outbreak trajectories.
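Simulations of this kind typically vary one driver at a time in a compartmental model and compare the resulting trajectories. The sketch below is a deliberately minimal SIR model, not the authors' model, with illustrative parameter values; the "scale" factor stands in for a transmission-modifying driver such as connectivity or household crowding:

```python
# Minimal SIR sketch: vary a transmission-scaling driver and compare early
# outbreak trajectories. Illustrative only; not the study's actual model.
import numpy as np

def sir_epidemic(beta, gamma=1 / 7, n_days=180, n_pop=1_000_000, i0=10):
    """Simulate a basic SIR model with daily time steps."""
    s, i, r = n_pop - i0, i0, 0
    infected = []
    for _ in range(n_days):
        new_inf = beta * s * i / n_pop   # new infections this day
        new_rec = gamma * i              # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return np.array(infected)

# A hypothetical driver (e.g., connectivity) scaling the transmission rate:
for scale in (0.8, 1.0, 1.2):
    traj = sir_epidemic(beta=0.25 * scale)
    print(f"scale={scale}: peak on day {traj.argmax()}, "
          f"peak infections {traj.max():,.0f}")
```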

That finding is consistent with research that Rice's co-authors Rachel Baker, an associate research scholar in HMEI, and C. Jessica Metcalf, a Princeton associate professor of ecology and evolutionary biology and public affairs and associated faculty in HMEI, have published suggesting that climate conditions in the summer and winter would have a minimal effect on coronavirus during the pandemic phase. Baker, Metcalf and many of the current study's authors are affiliated with the HMEI Climate Change and Infectious Disease initiative.

The Nature Medicine paper also found that differences in connectivity, although rarely considered, are likely an important contributor to differences in how the virus has spread across sub-Saharan Africa, said co-author Fidisoa Rasambainarivo, an HMEI postdoctoral research associate based in Madagascar.

"These results could allow officials to anticipate what might happen in their countries given the varying contexts of human-movement networks and health across sub-Saharan Africa," Rasambainarivo said.

The researchers developed national and sub-national analyses that indicated specific settings where strengthening coronavirus surveillance could yield the greatest returns, Metcalf said. An urgent focus in sub-Saharan Africa is developing a better understanding of the intersection between the pace of the epidemic and the likelihood of disruptions to local and national health systems, which in many areas are already fragile.

"These results underscore the importance of developing tools such as serology to better measure susceptibility in order to directly evaluate the current situation and landscape of risk," Metcalf said.

Credit: 
Princeton University

Identifying "ugly ducklings" to catch skin cancer earlier

Melanoma is by far the deadliest form of skin cancer, killing more than 7,000 people in the United States in 2019 alone. Early detection of the disease dramatically reduces the risk of death and the costs of treatment, but widespread melanoma screening is not currently feasible. There are about 12,000 practicing dermatologists in the US, and each would need to see 27,416 patients per year to screen the entire US population of roughly 329 million for suspicious pigmented lesions (SPLs) that can indicate cancer.

Computer-aided diagnosis (CAD) systems have been developed in recent years to try to solve this problem by analyzing images of skin lesions and automatically identifying SPLs, but so far have failed to meaningfully impact melanoma diagnosis. These CAD algorithms are trained to evaluate each skin lesion individually for suspicious features, but dermatologists compare multiple lesions from an individual patient to determine whether they are cancerous - a method commonly called the "ugly duckling" criteria. No CAD systems in dermatology, to date, have been designed to replicate this diagnosis process.

Now, that oversight has been corrected thanks to a new CAD system for skin lesions based on convolutional deep neural networks (CDNNs) developed by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology (MIT). The new system successfully distinguished SPLs from non-suspicious lesions in photos of patients' skin with ~90% accuracy, and for the first time established an "ugly duckling" metric capable of matching the consensus of three dermatologists 88% of the time.

"We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination," said the study's first author Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute who is also a Venture Builder at MIT. "This innovation allows photos of patients' skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level."

The technology is described in Science Translational Medicine, and the CDNN's source code is openly available on GitHub.

Bringing ugly ducklings into focus

Melanoma is personal for Soenksen, who has watched several close friends and family members suffer from the disease. "It amazed me that people can die from melanoma simply because primary care doctors and patients currently don't have the tools to find the 'odd' ones efficiently. I decided to take on that problem by leveraging many of the techniques I learned from my work in artificial intelligence at the Wyss and MIT," he said.

Soenksen and his collaborators discovered that all the existing CAD systems created for identifying SPLs only analyzed lesions individually, completely omitting the ugly duckling criteria that dermatologists use to compare several of a patient's moles during an exam. So they decided to build their own.

To ensure that their system could be used by people without specialized dermatology training, the team created a database of more than 33,000 "wide field" images of patients' skin that included backgrounds and other non-skin objects, so that the CDNN would be able to use photos taken with consumer-grade cameras for diagnosis. The images contained both SPLs and non-suspicious skin lesions that were labeled and confirmed by a consensus of three board-certified dermatologists. After training on the database and subsequent refinement and testing, the system was able to distinguish suspicious from non-suspicious lesions with 90.3% sensitivity and 89.9% specificity, improving upon previously published systems.

But this baseline system was still analyzing the features of individual lesions, rather than features across multiple lesions as dermatologists do. To add the ugly duckling criteria into their model, the team used the extracted features in a secondary stage to create a 3D "map" of all of the lesions in a given image, and calculated how far away from "typical" each lesion's features were. The more "odd" a given lesion was compared to the others in an image, the further away it was from the center of the 3D space. This distance is the first quantifiable definition of the ugly duckling criteria, and serves as a gateway to leveraging deep learning networks to overcome the challenging and time-consuming task of identifying and scrutinizing the differences between all the pigmented lesions in a single patient.
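Concretely, a lesion's "oddness" can be read as its distance from the centroid of all lesion features extracted from the same patient image. The sketch below illustrates that final step only, with a hypothetical feature array; the authors' full pipeline is in their open-source GitHub repository:

```python
# Sketch of an "ugly duckling" score: each lesion's distance from the
# centroid of all lesion features in the same patient image. The feature
# array here is hypothetical; the authors' full code is on GitHub.
import numpy as np

def ugly_duckling_scores(features: np.ndarray) -> np.ndarray:
    """features: (n_lesions, n_dims) array, one embedding row per lesion.

    Returns one score per lesion: Euclidean distance from the centroid of
    this patient's lesions. Farther from "typical" = more suspicious.
    """
    centroid = features.mean(axis=0)
    return np.linalg.norm(features - centroid, axis=1)

# Toy example: three similar lesions and one outlier.
feats = np.array([[0.10, 0.20], [0.12, 0.19], [0.09, 0.22], [0.90, 0.80]])
print(ugly_duckling_scores(feats).round(3))  # the last lesion scores highest
```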

Deep learning vs. dermatologists

Their CDNN still had to pass one final test: performing as well as living, breathing dermatologists at the task of identifying SPLs from images of patients' skin. Three dermatologists examined 135 wide-field photos from 68 patients, and assigned each lesion an "oddness" score that indicated how concerning it looked. The same images were analyzed and scored by the algorithm. When the assessments were compared, the researchers found that the algorithm agreed with the dermatologists' consensus 88% of the time, and with the individual dermatologists 86% of the time.

"This high level of consensus between artificial intelligence and human clinicians is an important advance in this field, because dermatologists' agreement with each other is typically very high, around 90%," said co-author Jim Collins, Ph.D., a Core Faculty member of the Wyss Institute and co-leader of its Predictive Bioanalytics Initiative who is also the Termeer Professor of Medical Engineering and Science at MIT. "Essentially, we've been able to achieve dermatologist-level accuracy in diagnosing potential skin cancer lesions from images that can be taken by anybody with a smartphone, which opens up huge potential for finding and treating melanoma earlier."

Recognizing that such a technology should be made available to as many people as possible for maximum benefit, the team has made their algorithm open-source on GitHub. They hope to partner with medical centers to launch clinical trials further demonstrating their system's efficacy, and with industry to turn it into a product that could be used by primary care providers around the world. They also recognize that in order to be universally helpful, their algorithm needs to be able to function equally well across the full spectrum of human skin tones, which they plan to incorporate into future development.

"Allowing our scientists to purse their passions and visions is key to the success of the Wyss Institute, and it's wonderful to see this advance that can impact all of us in such a meaningful way emerge from a collaboration with our newly formed Predictive Bioanalytics Initiative," said Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children's Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Credit: 
Wyss Institute for Biologically Inspired Engineering at Harvard

Modeling a better catalyst for PIBSAs

Polyisobutenyl succinic anhydrides (PIBSAs) are important for the auto industry because of their wide use in lubricant and fuel formulations. Their synthesis, however, requires high temperatures and, therefore, higher cost.

Adding a Lewis acid--a substance that can accept a pair of electrons--as a catalyst makes the PIBSA formation more efficient. But which Lewis acid? Despite the importance of PIBSAs in the industrial space, an easy way to screen these catalysts and predict their performance hasn't yet been developed.

New research led by the Computer-Aided Nano and Energy Lab (CANELa) at the University of Pittsburgh Swanson School of Engineering, in collaboration with the Lubrizol Corporation, addresses this problem by revealing the detailed mechanism of the Lewis acid-catalyzed reaction using computational modeling. The work, recently featured on the cover of the journal Industrial & Engineering Chemistry Research, builds a deeper understanding of the catalytic activity and creates a foundation for computationally screening catalysts in the future.

"PIBSAs are commonly synthesized through the reaction between maleic anhydride and polyisobutene. Adding Lewis acids makes the reaction faster and reduces the energy input required for PIBSA formation," explained Giannis Mpourmpakis, the Bicentennial Alumni Faculty Fellow and associate professor of chemical and petroleum engineering at Pitt. "But the reaction mechanism has not been well understood, and there are not many examples of this reaction in the literature. Our work helps to explain the way the reaction happens and identifies Lewis acids that will work best."

This new foundational information will aid in the discovery of Lewis acid catalysts for industrial chemical production at a faster rate and reduced cost.

"The alliance between the University of Pittsburgh and Lubrizol has been instrumental in demonstrating how Academia and the Chemical Process Industry can work together to produce commercially relevant results," said Glenn Cormack, Global Process Innovation Manager at The Lubrizol Corporation. "Combining the knowledge and expertise of the Swanson School of Engineering and The Lubrizol Corporation allows both parties access to some of the best available computational and experimental techniques when exploring new challenges."

The research is one of many collaborations between Pitt and the Lubrizol Corporation, an Ohio-based specialty chemical provider for transportation, industrial and consumer markets. The alliance with Lubrizol, now in its seventh year, provides students with hands-on opportunities to experience how the knowledge and skills they're developing are used in the chemical industry. At the same time, students gain world-ready knowledge of how Pitt's research helps improve Lubrizol's processes and products.

"Over the last few years, our partnership with Lubrizol has led to new, innovative ways for Lubrizol to make products and rethink their manufacturing processes," said Steven Little, William Kepler Whiteford Endowed Professor and chair of the Department of Chemical and Petroleum Engineering. "We learn a tremendous amount from them as well, and all of these publications are evidence of an alliance that continues to grow."

The paper, "Computational Screening of Lewis Acid Catalysts for the Ene Reaction between Maleic Anhydride and Polyisobutylene," (DOI: 10.1021/acs.iecr.0c04860 ) was published in the ACS journal I&EC Research. It was authored by Cristian Morales-Rivera and Giannis Mpourmpakis at Pitt and Nico Proust and James Burrington at the Lubrizol Corporation.

Credit: 
University of Pittsburgh

Study: Screen surgery patients for frailty

image: Paula K. Shireman, MD, MS, MBA, of the Long School of Medicine at UT Health San Antonio, coauthored a study that shows surgeons across nine specialties should screen patients for frailty prior to procedures.

Image: 
The University of Texas Health Science Center at San Antonio

SAN ANTONIO -- Patients should be assessed for frailty before having many types of surgery, even if the surgery is considered low risk, a review of two national patient databases shows.

Frailty is a clinical syndrome marked by slow walking speed, weak grip, poor balance, exhaustion and low physical activity. It is an important risk factor for death after surgery, although the association between frailty and mortality across surgical specialties is not well understood.

The study, conducted by faculty at multiple institutions including The University of Texas Health Science Center at San Antonio (UT Health San Antonio), mined patient data from the Veterans Affairs (VA) Surgical Quality Improvement Program and the American College of Surgeons (ACS) National Surgical Quality Improvement Program.

"In previous research, we introduced the concept of an Operative Stress Score to assess the impact of frailty on mortality after surgery," said Paula K. Shireman, MD, MS, MBA, professor of surgery at UT Health San Antonio. She is a vascular surgeon in the university's Joe R. and Teresa Lozano Long School of Medicine and the South Texas Veterans Health Care System.

"In that first study, which examined only VA data, we showed that even patients undergoing low-risk procedures had very high post-operative mortality," Dr. Shireman said.

The question remained: Because frailty screening takes effort and clinics are busy, are there specialties that shouldn't do it because they perform mainly low-risk procedures in patients who are at low risk? In these specialties, does the effort of screening outweigh the benefit?

The newest research, published in JAMA Surgery, assigned Operative Stress Scores (OSS) to cases performed in nine noncardiac specialties including vascular surgery, orthopaedic surgery, neurosurgery, gynecologic surgery, urologic surgery, and ear, nose and throat surgery.

"This time, instead of looking at all cases broadly, we characterized different specialties as high risk or low risk," Dr. Shireman said. "For example, most of the procedures done by urologic surgeons are relatively low risk. Vascular and thoracic surgeons, on the other hand, perform many high-risk procedures."

Operative Stress Scores categorized procedures as low (1-2), moderate (3) and high (4-5) stress. Specialties were categorized by case mix as predominantly low intensity, moderate intensity or high intensity. "Basically, did they do more OSS-1s or did they do more OSS-5s?" Dr. Shireman said.
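As a small illustration of that scheme (the thresholds follow the categories just quoted; the majority-vote case-mix rule is a simplified, hypothetical reading of the study's method):

```python
# Illustration of the Operative Stress Score (OSS) categories quoted above.
# The majority-vote case-mix rule is a simplified, hypothetical reading.
from collections import Counter

def oss_category(score: int) -> str:
    """Map an OSS value (1-5) to its stress category."""
    if score <= 2:
        return "low"
    if score == 3:
        return "moderate"
    return "high"

def specialty_intensity(case_scores: list[int]) -> str:
    """Classify a specialty by the most common category in its case mix."""
    counts = Counter(oss_category(s) for s in case_scores)
    return counts.most_common(1)[0][0]

# Toy case mixes: one specialty skews low-stress, the other high-stress.
print(specialty_intensity([1, 2, 2, 1, 3]))  # -> "low"
print(specialty_intensity([4, 5, 4, 3, 5]))  # -> "high"
```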

The study examined 30-day postoperative mortality in the VA case records and 30-day and 180-day mortality after surgery in the ACS registry. The American College of Surgeons database contains 2.3 million patient records, and the VA database 426,000. Of the patients in the ACS registry, 56% were women, while 93% of the patients in the VA registry were men. All patients were 18 or older.

The key finding: "Frailty was a consistent, independent risk factor for 30- and 180-day mortality across all specialties," the authors wrote.

"Based upon all of this, the results showed that frailty is prevalent enough that it should be screened in all different surgical specialties, regardless of the case mix," Dr. Shireman said.

Cardiac specialties were not included. The VA has a separate registry for cardiac procedures versus all other surgeries, Dr. Shireman said. The researchers used the all-other-surgeries VA registry.

Credit: 
University of Texas Health Science Center at San Antonio

Do sweat it! Wearable microfluidic sensor to measure lactate concentration in real time

image: It's important to use non-irritating materials in the design of wearable sensors that are used for quantifying lactate levels during exercise.

Image: 
Tokyo University of Science

With the seemingly unstoppable advancement in the fields of miniaturization and materials science, all sorts of electronic devices have emerged to help us lead easier and healthier lives. Wearable sensors fall into this category, and they have received much attention lately as useful tools for monitoring a person's health in real time. Many such sensors operate by quantifying biomarkers, that is, measurable indicators that reflect one's health condition. Widely used biomarkers are heart rate and body temperature, which can be monitored continuously with relative ease. By contrast, chemical biomarkers in bodily fluids, such as blood, saliva, and sweat, are more challenging to quantify with wearable sensors.

For instance, lactate, which is produced during the breakdown of glucose in the absence of oxygen in tissues, is an important biomarker present in both blood and sweat that reflects the intensity of physical exercise done as well as the oxygenation of muscles. During exercise, muscles requiring energy can rapidly run out of oxygen and fall back to a different metabolic pathway that provides energy at the 'cost' of accumulating lactate, which causes pain and fatigue. Lactate is then released into the bloodstream and part of it is eliminated through sweat. This means that a wearable chemical sensor could measure the concentration of lactate in sweat to give a real-time picture of the intensity of exercise or the condition of muscles.

Although lactate-measuring wearable sensors have already been proposed, most of them are composed of materials that can cause irritation of the skin. To address this problem, a team of scientists in Japan recently carried out a study to bring us a more comfortable and practical sensor. Their work, which was published in Electrochimica Acta, was led by Associate Professor Isao Shitanda, Mr. Masaya Mitsumoto, and Dr. Noya Loew from the Department of Pure and Applied Chemistry at the Tokyo University of Science, Japan.

The team first focused on the sensing mechanism that they would employ in the sensor. Most lactate biosensors are made by immobilizing lactate oxidase (an enzyme) and an appropriate mediator on an electrode. A chemical reaction involving lactate oxidase, the mediator, and free lactate results in the generation of a measurable current between electrodes--a current that is roughly proportional to the concentration of lactate.
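Because the measured current is roughly proportional to lactate concentration, converting a reading into a concentration comes down to a linear calibration against known standards. A small sketch of that conversion (all calibration numbers are invented for illustration):

```python
# Linear calibration sketch for an amperometric lactate sensor: current is
# roughly proportional to concentration. Calibration values are invented.
import numpy as np

# Calibration standards: lactate concentration (mM) vs. measured current (uA).
conc_mM = np.array([0.0, 5.0, 10.0, 20.0])
current_uA = np.array([0.02, 1.10, 2.20, 4.30])

# Least-squares fit of current = slope * concentration + intercept.
slope, intercept = np.polyfit(conc_mM, current_uA, deg=1)

def current_to_lactate(i_uA: float) -> float:
    """Invert the calibration line to estimate lactate concentration (mM)."""
    return (i_uA - intercept) / slope

print(f"{current_to_lactate(3.0):.1f} mM")  # estimate for a 3.0 uA reading
```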

A tricky aspect here is how to immobilize the enzyme and mediator on an electrode. To do this, the scientists employed a method called "electron beam-induced graft polymerization," by which functional molecules were bonded to a carbon-based material that can spontaneously bind to the enzyme. The researchers then turned the material into a liquid ink that can be used to print electrodes. This last part turns out to be an important aspect for the future commercialization of the sensor, as Dr. Shitanda explains, "The fabrication of our sensor is compatible with screen printing, an excellent method for fabricating lightweight, flexible electrodes that can be scaled up for mass production."

With the sensing mechanism complete, the team then designed an appropriate system for collecting sweat and delivering it to the sensor. They achieved this with a microfluidic sweat collection system made out of polydimethylsiloxane (PDMS); it comprised multiple small inlets, an outlet, and a chamber for the sensor in between. "We decided to use PDMS because it is a soft, nonirritating material suitable for our microfluidic sweat collection system, which is to be in direct contact with the skin," comments Mr. Mitsumoto.

The detection limit of the sensor and its operating range for lactate concentrations were confirmed to be suitable for investigating the "lactate threshold"--the point at which aerobic (with oxygen) metabolism turns into anaerobic (without oxygen) metabolism during exercise. Real-time monitoring of this bodily phenomenon is important for several applications, as Dr. Loew remarks, "Monitoring the lactate threshold will help optimize the training of athletes and the exercise routines of rehabilitation patients and the elderly, as well as control the exertion of high-performance workers such as firefighters."

The team is already testing the implementation of this sensor in practical scenarios. With any luck, the progress made in this study will help develop the field of wearable chemical sensors, helping us to keep better track of our bodily processes and maintain better health.

Credit: 
Tokyo University of Science

Edible holograms could someday decorate foods

image: Nanostructures (yellowish-green images; scale bar, 5 μm) were patterned onto dried corn syrup films, producing edible, rainbow-colored holograms (scale bar, 2 mm).

Image: 
Adapted from ACS Nano 2021, DOI: 10.1021/acsnano.0c02438

Holograms are everywhere, from driver's licenses to credit cards to product packaging. And now, edible holograms could someday enhance foods. Researchers reporting in ACS Nano have developed a laser-based method to print nanostructured holograms on dried corn syrup films. The edible holograms could also be used to ensure food safety, label a product or indicate sugar content, the researchers say.

Most holograms are imprinted with lasers onto metal surfaces, such as aluminum, but the materials aren't edible. For foods, holograms made with nanoparticles have been proposed, but the tiny particles can generate reactive oxygen species, which might be harmful for people to eat. In a different approach, food scientists have molded edible holograms onto chocolate, but the process only works for certain types of the confection, and a different mold is needed for each hologram design. Bader AlQattan, Haider Butt and colleagues wanted to find a safe, fast and versatile way to pattern edible holograms onto a variety of foods.

To develop their method, the researchers made a solution of corn syrup, vanilla and water and dried it into a thin film. They coated the film with a fine layer of non-toxic black dye. Then, they used a technique called direct laser interference patterning to etch off most of the dye, leaving behind raised, nanoscale lines that formed a diffraction grating. When struck by light, the nanostructure diffracted the light into a rainbow pattern, with different colors appearing at different angles of viewing. The team could control the intensity and range of colors by varying the spacing between lines in the grating or the sugar content of the corn syrup film. Before edible holograms are ready to hit store shelves, however, the researchers want to adapt the method to a food-grade dye that could replace the synthetic black dye used in these pilot experiments.
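The angle-dependent rainbow colors follow the standard diffraction-grating relation, which also explains why varying the line spacing tunes the colors. As a rough worked example (the 1.5 μm spacing is an assumed, illustrative value, not one reported in the paper):

```latex
d \sin\theta_m = m\lambda, \qquad
\theta_1 = \arcsin\!\left(\frac{530\,\mathrm{nm}}{1500\,\mathrm{nm}}\right) \approx 20.7^\circ
```

So green light (λ = 530 nm) from a grating with d = 1.5 μm line spacing emerges at about 21 degrees in first order; longer (redder) wavelengths diffract to larger angles, and shrinking the spacing spreads the colors farther apart.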

Credit: 
American Chemical Society

Researchers identify gene implicated in neuroblastoma, a childhood cancer

ROCHESTER, Minn. -- A new study by Mayo Clinic researchers has identified that a chromosome instability gene, USP24, is frequently missing in pediatric patients with neuroblastoma, an aggressive form of childhood cancer. The finding provides important insight into the development of this disease. The study is published in Cancer Research, the journal of the American Association for Cancer Research.

"Neuroblastoma is a highly aggressive cancer that nearly exclusively affects young children," says Paul Galardy, M.D., a pediatric hematologist and oncologist at Mayo Clinic. Despite the use of multiple treatment approaches, Dr. Galardy says many children die of this disease every year.

To identify new therapeutic approaches, Dr. Galardy and his colleagues examined the role of a set of enzymes known as deubiquitinating enzymes (DUB) in this disease. They chose this family of enzymes because they could be targeted using drug therapy.

"Little is known about the role of DUBs in neuroblastoma," says Dr. Galardy. "We used a computational approach to determine the effect of too much or too little of a gene on the outcome of a variety of human cancers to identify DUBs that may play a role in treating neuroblastoma."

Dr. Galardy and his team used this method to identify two genes, USP24 and USP44, with the biggest potential to affect the outcomes of young patients with neuroblastoma. "These genes were the ones most closely implicated as being important for accurate cell division," he says.

Dr. Galardy says that his team found that USP44 plays an important role in cell division and was associated with poor outcomes in lung cancer. Therefore, the team shifted its attention to USP24 to understand how it might contribute to neuroblastoma.

"Little is known about how USP24 functions," says Dr. Galardy. "We observed low levels of USP24 in children with neuroblastoma whose tumors were highly aggressive, leading to early progression or recurrence of the disease." He says low levels of USP24 occur commonly with other markers of aggressive disease, including amplification of the MYCN cancer gene and a loss of a large segment of chromosome 1.

The team also found that USP24 is not simply a marker for aggressive disease. Using genetically engineered mice that lack the USP24 gene, they found that USP24 plays an important role in protecting cells against errors in chromosome distribution that take place during cell division.

"When we compared cells with normal or deleted USP24, and examined the levels of proteins in dividing cells, we found that mice lacking even one of the two copies of USP24 were more prone to developing tumors," says Dr. Galardy. "This helped lead us to our conclusion that USP24 may play a role in ensuring accurate cell division, and that a loss of USP24 in mice leads to tumor formation and may also contribute to the development of aggressive neuroblastoma tumors in children."

Credit: 
Mayo Clinic

Mutation in SARS-CoV-2 spike protein renders virus up to eight times more infectious

A mutation in the spike protein of SARS-CoV-2--one of several genetic mutations in the concerning variants that have emerged in the United Kingdom, South Africa, and Brazil--makes the virus up to eight times more infectious in human cells than the initial virus that originated in China, according to research published in the journal eLife.

The study, led by researchers at New York University, the New York Genome Center, and Mount Sinai, corroborates findings that the D614G mutation makes SARS-CoV-2 more transmissible.

"In the months since we initially conducted this study, the importance of the D614G mutation has grown: the mutation has reached near universal prevalence and is included in all current variants of concern," said Neville Sanjana, assistant professor of biology at NYU, assistant professor of neuroscience and physiology at NYU Grossman School of Medicine, and Core Faculty Member at the New York Genome Center. "Confirming that the mutation leads to more transmissibility may help explain, in part, why the virus has spread so rapidly over the past year."

The D614G mutation in the SARS-CoV-2 spike protein--commonly referred to as the "G variant"--likely emerged in early 2020 and is now the most prevalent and dominant form of the SARS-CoV-2 virus across the United States and in many countries around the globe. With multiple mutations circulating, researchers have been working to understand the functional significance of these mutations and whether they meaningfully change how infectious or deadly the virus is.

In this study, the researchers introduced a virus with the D614G mutation into human lung, liver, and colon cells. They also introduced the "wild type" version of the coronavirus--the version of the virus without the mutation found early on in the pandemic--into these same cell types for comparison.

They found that the D614G variant increased transduction, or transmissibility, of the virus up to eight-fold as compared to the original virus. The researchers also found that the spike protein mutation made the virus more resistant to being cleaved or split by other proteins. This provides a possible mechanism for the variant's increased ability to infect cells, as the hardier variant resulted in a greater proportion of intact spike protein per virus.

"With our experimental setup we are able to quickly and specifically assess the contribution of G614 and other mutations to the increased spread of SARS-CoV-2," said Tristan Jordan, a postdoctoral scholar in the tenOever Lab at Mount Sinai and co-first author of the study.

"Going into this project we didn't really know if D614G mutation would have any functional effects, as its wide spread could be due to a founder effect, where a variant becomes dominant because a small number of individuals spread it widely by chance. However, our experimental data was pretty unambiguous--the D614G variant infects human cells much more efficiently than the wild type," said Zharko Daniloski, a postdoctoral fellow in Sanjana's lab at NYU and the New York Genome Center and the study's co-first author.

The team's findings join a growing consensus among scientists that the D614G variant is more infectious; this was also demonstrated in studies appearing in Cell by researchers at Los Alamos National Laboratory, in Nature by researchers at the University of North Carolina, and in Science by researchers at the University of Texas. However, it is still unclear whether the variant and its rapid spread have a clinical impact on COVID-19 disease progression, as several studies suggest that the D614G variant is not linked to more severe disease or hospitalization.

The researchers note that their findings on the increased transmissibility of the D614G variant may influence COVID-19 vaccine development; in particular, it may be beneficial for future booster shots to include diverse forms of the spike protein from different circulating variants. The vaccines with emergency use authorization from the FDA, as well as those under development, were created using the original spike sequence. Studies are underway to understand how well these vaccines protect against the variants that emerged in the United Kingdom, South Africa, and Brazil, all of which contain the D614G mutation. Recent work from other groups suggests that initial vaccines based on the D614 form of spike can protect against the newer G614 form, although more work is needed to understand how multiple mutations interact with one another and affect the immune response.

"The research comprising this work is essential to understanding changes in biology that a given viral variant might demonstrate," said co-senior author Benjamin tenOever, Fishberg Professor of Medicine, Icahn Scholar and Professor of Microbiology at the Icahn School of Medicine at Mount Sinai. "We are presently now moving forward with similar studies to study the variants that have arisen in the UK, Brazil, and in South Africa".

Credit: 
New York University

Most teen bullying occurs among peers climbing the social ladder

Teens who bully, harass, or otherwise victimize their peers are not always lashing out in reaction to psychological problems or unhealthy home environments, but are often using aggression strategically to climb their school's social hierarchy, a University of California, Davis, study suggests. These findings point to the reasons why most anti-bullying programs don't work and suggest possible strategies for the future.

"To the extent that this is true, we should expect them to target not vulnerable wallflowers, but their own friends, and friends-of-friends, who are more likely to be their rivals for higher rungs on the social ladder," said Robert Faris, a UC Davis researcher on bullying and author of the paper "With Friends Like These: Aggression From Amity and Equivalence." The paper was published recently in the American Journal of Sociology. Co-authors are sociologists Diane Felmlee at Pennsylvania State University and Cassie McMillan at Northeastern University.

Faris, a professor of sociology, said friends and associates with close ties to one another likely compete for positions within the same clubs, classrooms, sports, and dating subgroups, which heightens the risk of conflict and aggression. The paper is the first known study to show that those rivals are often the aggressors' own friends.

This differs from some common theories and definitions of bullying, in which the behavior stems from an imbalance of power and is mainly directed at youths in the lower social strata in school or community environments who possibly have physical, social or psychological vulnerabilities.

The study focuses, instead, on a broader definition of peer aggression -- theorizing that aggression can actually improve the social status of the aggressor.

Using a large, longitudinal social network study of more than 3,000 eighth, ninth and 10th graders in North Carolina over the course of a single school year, the authors found that teens who were friends in the fall were more than three times as likely to bully or victimize each other in the spring of that same school year. This is not merely animosity between former friends who drifted apart: Schoolmates whose friendships ended during the year were three times as likely to bully or victimize each other in the spring, while those whose friendships continued throughout the year were more than four times as likely to bully those friends, researchers said.

'Frenemy effect'

This "frenemy effect" is not explained by the amount of time friends spent together, Faris explained. Additionally, "structurally equivalent" classmates -- those who are not necessarily friends, but who share many friends in common -- are also more likely to bully or otherwise victimize each other. Compared to schoolmates with no overlapping friendships, those whose friendships are perfectly overlapping are roughly three times more likely to bully each other, and those who share the same bullies or victims are more than twice as likely to bully each other.

Finally, being victimized by friends is particularly painful, and is associated with significant increases in symptoms of depression and anxiety, and significant decreases in school attachment, researchers said.

Real-life case

The paper cites the real-life case of Megan Meier, who hanged herself in 2007 after being bullied by people she thought were her friends -- with the added twist of a mother orchestrating the social media bullying scheme. "The tragedy of Megan Meier highlights more than the limitations of the criminal justice system in addressing complex, often subtle, social problems like bullying," researchers said. The case, they argue, illustrates the need for research in this area: "contrary to the once-prevailing view of bullying as a maladjusted reaction to psychological deficiencies, emotional dysregulation, empathy deficits, or problematic home lives, [the perpetrator of the bullying] is one of millions of adolescents who has harmed a schoolmate for instrumental reasons: to exact retribution, achieve prominence, or vanquish a rival." Indeed, the research shows, "the desire for popularity motivates much aggressive behavior."

Few anti-bullying programs work

Additionally, the researchers conclude, few anti-bullying programs work. "The reason for the typically low success rates, we believe, is that aggressive behavior accrues social rewards, and to a degree that leads some to betray their closest friends. Even the most successful prevention programs are unable to alter the aggressive behavior of popular bullies, who use cruelty to gain and maintain status," the authors said. The popularity contests ubiquitous in secondary schools, the authors wrote, encourage peer bullying.

The authors suggest that efforts to support and strengthen adolescent friendships -- such as broadening extracurricular offerings and hosting camps, trainings and retreats -- could help de-emphasize popularity and reduce the "frenemy effect."

Credit: 
University of California - Davis