Tech

New algorithm for the diagnostics of dementia

A top-level international research team including researchers from the University of Eastern Finland has developed a new algorithm for the diagnostics of dementia. The algorithm is based on blood and cerebrospinal fluid biomarker measurements. These biomarkers can be used to help establish an exact diagnosis even in the early phases of dementia.

Researchers from the University of Eastern Finland and the University of Oulu, in collaboration with an international team, have created a new biomarker-based algorithm for the diagnostics of dementia. The team is led by Professor Barbara Borroni from the University of Brescia, Italy. The article was published in the journal Diagnostics.

The accurate diagnosis of different types of dementia is frequently complicated and often cannot be made in the early phases of the disease due to the lack of practical and specific diagnostic tools. In addition, the clinical symptoms of patients with various neurodegenerative diseases often overlap, and thus an accurate diagnosis is not always possible.

A precise diagnosis, however, is needed to manage the disease in the best possible way. For example, the widely used cholinesterase inhibitors are beneficial for symptomatic treatment and help maintain activities of daily living in Alzheimer's disease patients. However, they worsen the clinical and neuropsychiatric symptoms in patients with frontotemporal dementia.

In the future, disease-modifying drugs will be available for the individualized management of dementia. Patients who would benefit from these interventions should be identified as early as possible to prevent the irreversible neuronal damage and loss associated with cognitive decline and diminished capacity for independent living.

The new diagnostic algorithm will help to reliably differentiate between different types of dementia, and it will also be useful in selecting patients for clinical drug trials.

New biomarkers are more sensitive and specific, but not yet widely available

The cerebrospinal fluid-based Alzheimer's disease biomarkers developed at the end of last century had a ground-breaking impact on the diagnostics of dementia. However, it has recently been shown that the specificity of these biomarkers in differentiating dementing diseases from each other is low, which has increased the pressure to develop new, improved biomarkers for the diagnostics of dementia.

The new-generation biomarkers are largely based on blood samples instead of invasive cerebrospinal fluid samples. Based on the present algorithm, the researchers recommend using blood neurofilament light chain levels to screen for dementia. The new algorithm also allows the most common type of dementia, Alzheimer's disease, to be diagnosed from a blood sample. Eventually, cerebrospinal fluid-based analyses might only be needed for the diagnostics of rarer forms of dementia.
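
As a purely illustrative sketch of the triage logic described above (the decision flow paraphrases this paragraph; the inputs and wording are hypothetical placeholders, not the published criteria):

```python
# Hypothetical triage sketch: screen with blood neurofilament light chain (NfL),
# diagnose Alzheimer's disease from blood-based markers where possible, and
# reserve cerebrospinal fluid (CSF) analysis for rarer dementias.
# Inputs and categories are illustrative, not the algorithm's actual criteria.

def triage_dementia(nfl_elevated: bool, blood_ad_markers_positive: bool) -> str:
    if not nfl_elevated:
        return "neurodegeneration unlikely - routine follow-up"
    if blood_ad_markers_positive:
        return "consistent with Alzheimer's disease (blood-based diagnosis)"
    return "refer for CSF analysis (suspected rarer form of dementia)"

print(triage_dementia(nfl_elevated=True, blood_ad_markers_positive=False))
```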

"New biomarkers will enable ground-breaking next-generation diagnostics. Moreover, the currently time-taking diagnostic procedures will accelerate. This will diminish the humane burden of patients and their next-of-kin, when we can provide a precise diagnosis instead of prolonged uncertainty for the families," says the leading author of the article, Adjunct Professor Eino Solje from the University of Eastern Finland.

The newly published algorithm cannot yet be applied in daily clinical work, since most of the biomarkers are not yet widely available in clinical laboratories. Nevertheless, the researchers believe that the promising results will accelerate the availability of these biomarker measurements in the near future.

Credit: 
University of Eastern Finland

Treating neurological symptoms of CHARGE syndrome

image: INRS Professor Kessen Patten specializes in genetics and neurodegenerative diseases.

Image: 
Christian Fleury (INRS)

CHARGE syndrome is a rare genetic disorder affecting about 1 in 10,000 newborns. It can lead to neurological and behavioural disorders for which no treatment is currently available. Dr. Kessen Patten and his team, from the Institut national de la recherche scientifique (INRS), have just discovered a compound that could alleviate these symptoms. The results of their research were published in the journal EMBO Reports.

Understanding Neurological Disorders

First described in 1979, CHARGE syndrome is caused by mutations in the CHD7 gene and is associated with neurodevelopmental disorders such as intellectual disability, attention deficit disorder with or without hyperactivity, seizures and autism spectrum disorder. Dr. Patten's research team studied the neurological symptoms of this syndrome, which are still poorly understood.

The team developed a genetic model of zebrafish with loss of function of the CHD7 gene similar to that observed in humans. They found that the CHD7 gene regulated the type of GABAergic neurons that are essential for proper brain function.

"The loss of function of CHD7 appears to cause developmental and functional abnormalities in GABAergic neurons in the zebrafish brain that are related to the observed neurological and behavioural disorders," explained Dr. Patten, who specializes in genetics and neurodegenerative diseases. The team also identified molecular events controlled by the CHD7 gene to explain these neurological symptoms in their genetic model. Similar findings were made using cells from patients with the disease.

Finding a Drug

The research team tested hundreds of compounds already approved for clinical use by the U.S. Food and Drug Administration. Drug screening was used to identify potential candidates for treatment--ephedrine was selected as the most promising therapeutic compound. "We observed therapeutic effects on both the neurological and behavioural symptoms," said PhD student Priyanka Jamadagni, lead author of the article. "It allowed the diseased zebrafish model to partially recover its normal functions."

This research opens the door to new avenues for the treatment of other neurological disorders with similar neuronal imbalances, such as autism spectrum disorder and hyperactivity.

Credit: 
Institut national de la recherche scientifique - INRS

Scientists' discovery of blood clotting mechanism could lead to new antithrombotic drugs

Under normal, healthy circulatory conditions, the von Willebrand Factor (vWF) keeps to itself. The large and mysterious glycoprotein moves through the blood, balled up tightly, its reaction sites unexposed. But when significant bleeding occurs, it springs into action, initiating the clotting process.

When it works properly, vWF helps stop bleeding and saves lives. However, according to the Centers for Disease Control and Prevention (CDC), about 60,000 to 100,000 Americans die each year from thrombosis, a disorder characterized by too much clotting. Blood clots can trigger a stroke or heart attack.

According to X. Frank Zhang, associate professor in the Department of Bioengineering at Lehigh University, only one drug, Caplacizumab, has been FDA-approved to target vWF and treat thrombosis, or excessive blood clotting disorders. It works by binding to vWF and blocking it from binding to platelets. However, no one has understood the specific mechanism behind how it accomplishes this.

Now, for the first time, Zhang and his colleagues from Emory University School of Medicine and the University of Nottingham have identified the specific structural element of vWF that allows it to bind with platelets and initiate clotting. The team says that this unit, which they call the discontinuous autoinhibitory module, or AIM, is a prime site for new drug development. The work is described in an article published last week in Nature Communications, "Activation of von Willebrand factor via mechanical unfolding of its discontinuous autoinhibitory module." The study was co-led by Lehigh Bioengineering graduate student Wenpeng Cao.

"The AIM module allows the vWF molecule to remain non-reactive in normal circulating blood, and activates the vWF instantly upon bleeding," says Zhang. "In our research, we identified that Caplacizumab works by binding the AIM region of vWF and enhancing the force threshold to mechanically remove vWF's autoinhibitory structures, opening up a new avenue to the development of antithrombotic drugs targeting the AIM structures."

An essential feature of vWF is that it remains non-reactive towards platelets most of the time in circulation, says Zhang. However, at bleeding sites, vWF can be activated almost instantly to achieve platelet adhesion and blood clot formation. In this research, the team identified a structural element, AIM, located around the portion of vWF, called the A1 domain, that is responsible for binding platelets.

"In normal circulating blood," explains Zhang "the AIM wraps around the A1 and prevents the A1 from interacting with platelets. However, at the binding site, the blood flow pattern change leads to enough hydrodynamic force to stretch the AIM and pull it away from the A1, allowing the A1 to grab platelets to the bleeding site."

Zhang, who has been studying vWF for years, specializes in single-molecule force spectroscopy and mechanosensing, or how cells respond to mechanical stimuli. He uses a specialized tool called optical tweezers, which utilizes a focused laser beam to apply force to objects as small as a single molecule.

"Optical tweezers can grab tiny objects," Zhang explains. "We can grab the vWF and at the same time we apply force to see how the protein changes shape, to see how the proteins are activated when there's a mechanical perturbation or a mechanical force."

Zhang says that before conducting the study, the team suspected that they would find an autoinhibitory module, given previous research by co-corresponding author Renhao Li at Emory.

"However, we did not expect this inhibitory module to play such an important role in vWF," says Zhang. "It not only controls the vWF activation for platelet interaction, but plays a role in triggering some types of von Willebrand disease, a hereditary bleeding disease affecting one percent of the human population."

Credit: 
Lehigh University

Spring forest flowers likely key to bumble bee survival, Illinois study finds

image: The timing of floral resources complicates life for the rusty patched bumble bee, Bombus affinis, a new study finds. This bee is foraging on the flower of the bee balm, of the genus Monarda.

Image: 
U.S. Fish & Wildlife Service

CHAMPAIGN, Ill. -- For more than a decade, ecologists have been warning of a downward trend in bumble bee populations across North America, with habitat destruction a primary culprit in those losses. While efforts to preserve wild bees in the Midwest often focus on restoring native flowers to prairies, a new Illinois-based study finds evidence of a steady decline in the availability of springtime flowers in wooded landscapes.

The scarcity of early season flowers in forests - a primary food source for bumble bees at this time of year - likely endangers the queen bees' ability to start their nesting season and survive until other floral resources become available, researchers say. They report their findings in the Journal of Applied Ecology.

"We went through long-term vegetation data from 262 random sites across Illinois, most of them privately owned," said study co-author David Zaya, a plant ecologist in the Illinois Natural History Survey at the University of Illinois Urbana-Champaign. These data were collected through the Critical Trends Assessment Program, which began in 1997 at the INHS.

"We filtered our data to look at two subsets of plants: those used by bumble bees generally, and those thought to be preferred by the endangered rusty patched bumble bee, Bombus affinis," said study lead John Mol, a research ecologist at the U.S. Geological Survey Fort Collins Science Center. "We then looked at how the abundance and richness of these bumble bee food sources either increased, decreased or stayed the same since 1997."

The team also looked at timing - scouring records on flowering dates to map the availability of floral resources throughout the year for each habitat type: forest, grassland and wetland. They also analyzed satellite data to track trends in forest, grassland, wetland and agricultural acreage since 1997.
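
A minimal sketch, with hypothetical file and column names, of the kind of filtering-and-trend analysis described above; it is not the authors' code.

```python
# Filter vegetation records to bumble bee food plants, then compute the yearly
# richness of those plants by habitat type and its linear trend since 1997.
import numpy as np
import pandas as pd

veg = pd.read_csv("ctap_vegetation.csv")        # hypothetical columns: habitat, year, species
bee_plants = set(pd.read_csv("bee_food_plants.csv")["species"])   # hypothetical plant list

food = veg[veg["species"].isin(bee_plants)]     # keep only bee food plants
richness = (food.groupby(["habitat", "year"])["species"]
                .nunique()
                .reset_index(name="food_plant_richness"))

# Slope (change per year) of food-plant richness for each habitat type
trend = richness.groupby("habitat").apply(
    lambda g: np.polyfit(g["year"], g["food_plant_richness"], 1)[0]
)
print(trend)   # e.g. a positive slope for grassland, a negative slope for forest
```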

The research yielded some good news.

"We found that bumble bee food-plant cover has increased within Illinois grasslands by about 7% since 1997," Mola said. "This is great because it suggests that restoration or management actions by landowners is succeeding."

However, satellite data revealed that total acreage devoted to grassland in the state shrank by about 7.5% over the same time period.

"It may not matter much that the typical grassland is better if there's less of it," Mola said.

While the richness of floral resources in grasslands increased, the researchers saw an opposite trend in forested areas of the state, with food plants for bees in forests declining by 3-4% since 1997.

"The biggest finding of this study was that forest plants that bloom in spring appear to be declining, and the timing of those flowers matches up almost perfectly with when queens are out and foraging," Zaya said. "We can't say for sure, but if declining food is contributing to bumble bee declines, it is most likely related to when the queens are trying to establish nests."

Previous studies have shown that browsing by deer and invasive shrubs reduce the abundance of flowering plants in forests. Climate change also shifts the flowering times for many plants, potentially causing a mismatch between the bees' needs and the availability of food.

"The forest is a really important habitat for bees early in the season that often gets overlooked in pollinator conservation planning," Mola said. "This has me thinking very carefully about the role of forests in bumble bee conservation."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Causes of extreme weather and climate events in China during 2020/21

image: Figure 1. Schematic diagrams of atmospheric circulation patterns associated with the winter cold surges. (a) A normal, mild winter with the relatively flat jet stream. (b) A cold surge in China with the enhanced Siberian High and wavy jet stream. (c) A cold surge in the United States with the enhanced North American High and wavy jet stream.

Image: 
Chunzai Wang

During the summer of 2020, especially June and July, periods of extreme heavy rainfall occurred in China's Yangtze River Valley (YRV). These rain events caused the most severe floods the region had seen since the summer of 1998. Meanwhile, the 2020 western North Pacific (WNP) typhoon season started slowly and eventually produced 23 named tropical cyclones, below the WNP seasonal average of 27. As summer transitioned to winter, three severe cold surges swept most parts of China during late 2020 and early 2021, prompting the National Meteorological Center to issue its highest cold surge warning alert for the first time in four years. After a volatile weather year, scientists are finding answers as to why the past year featured so many extreme weather and climate events in China.

Professor Chunzai Wang and his team in the State Key Laboratory of Tropical Oceanography, South China Sea Institute of Oceanology, Chinese Academy of Sciences, were tasked with analyzing the global and regional climate influences that may have played a role in the 2020/21 extreme weather events. The researchers found many key oceanographic and meteorological connections, which have just been published in Advances in Atmospheric Sciences.

Sea surface temperature (SST) fluctuations in the tropical Pacific, Indian, and Atlantic Oceans can contribute to heavy rainfall events in China. However, observational data suggest that Atlantic and Indian Ocean influences dominate over those from the Pacific. Beginning in May 2020, positive SST anomalies, or departures from average, throughout the tropical western North Atlantic (WNA) induced positive geopotential height anomalies in June over the mid-latitude North Atlantic. Geopotential height is the altitude above sea level at which a certain pressure surface exists, typically analyzed at 500 mb. This metric is excellent for identifying the ridges and troughs that affect the rainfall anomalies in the YRV via an Atlantic-induced atmospheric "wave train" across Eurasia. Further analysis suggests that the Indian Ocean did not significantly affect June rainfall over the YRV. However, when June and July rainfall are considered together, both the Indian Ocean and WNA influences are important.

Regarding the extremely cold surges during the 2020/21 winter, Prof. Wang's team points to the Siberian High. An enhancement and northward movement of the Siberian High force the jet stream to develop a wavy pattern. This disrupts the polar vortex, allowing cold polar air to invade southward, thus inducing the cold surges in China (Fig. 1b) and North America (Fig. 1c).

The below-average 2020 typhoon season is associated with large vertical wind shear (defined as the difference in wind between the upper and lower troposphere) and low humidity in the WNP. Tropical cyclones do not form or develop in strongly sheared, dry environments. The scientists believe these conditions were responsible for the fewer typhoons in the first half of the 2020 typhoon season.
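
As a point of reference, vertical wind shear is commonly quantified as the magnitude of the vector wind difference between an upper- and a lower-tropospheric level; the 200 hPa and 850 hPa levels used below are a common convention, not necessarily the levels used in this study.

```latex
\mathrm{VWS} = \left\lvert \vec{V}_{200\,\mathrm{hPa}} - \vec{V}_{850\,\mathrm{hPa}} \right\rvert
```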

This study points out the importance of three-ocean interactions and their influences across the Northern Hemisphere. The tropical Pacific, Indian, and Atlantic Oceans can affect the anticyclone in the WNP, providing moisture transport to its northwest side and thereby increasing summer rainfall in China. The same anticyclone also modifies atmospheric circulation and thermodynamic factors in the WNP, influencing typhoon activity. Global warming can increase the occurrence of extreme weather and climate events. However, future studies are needed to quantify the influence of global warming on individual extreme weather and climate events.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Restricting internet searches causes stock market instability: study

The research by RMIT University looked at the ramifications for the stock market of Google's withdrawal from mainland China in 2010.

It found access to unbiased information about companies' performance - aided by unrestricted internet search results - led to investors making more informed decisions.

On the flip side, search results manipulated to show overly positive information led to stocks for those companies being overvalued temporarily, increasing the stock market crash risk by 19%.

The study has been published in the Journal of Financial Economics.

Lead researcher Dr Gaoping Zheng, a lecturer in finance at RMIT, said the study showed search results influenced decisions, a challenge to previous thinking that they merely justified people's existing ideas.

"Until now it's been widely thought that unrestrictive internet searches result in bias and an overvaluation of stocks but that would mean restricting search would decrease stock market crash risk. Instead, we saw a significant jump," Zheng said.

"This suggests internet searching does not exacerbate investors' biases - instead, it facilitates their ability to access and analyse information."

The research has implications for Australia following Google's recent attempt to withdraw from the country.

"While China has alternative search engines, their results are concentrated and an identical search on Google would show vastly different results," Zheng said.

"Our research emphasises the importance of access to diverse results and if Google did decide to withdraw, it could have a destabilising impact on the economy."

Comparing China during and after Google

In 2010, Google unexpectedly withdrew its search business from mainland China, reducing investors' ability to find information online.

To measure the impact, researchers divided a list of Chinese firms into two groups: firms that had a high search volume on Google prior to 2010 and firms that were not regularly searched for on Google prior to 2010.

By averaging the stock price crash risk of both groups after Google withdrew and comparing the results, the researchers found that firms which had been regularly searched for on Google were 19% more unstable.
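
A minimal sketch of the kind of group comparison described, assuming a hypothetical dataset with a pre-computed crash-risk measure per firm-year; the paper's actual crash-risk construction and econometric design are considerably more involved.

```python
# Compare average crash risk, after Google's withdrawal, between firms that were
# heavily searched on Google before 2010 and firms that were not.
import pandas as pd

# hypothetical columns: firm, year, crash_risk, high_google_search (1 or 0)
firms = pd.read_csv("firm_crash_risk.csv")
post = firms[firms["year"] > 2010]          # observations after the withdrawal

group_means = post.groupby("high_google_search")["crash_risk"].mean()
relative_gap = (group_means.loc[1] - group_means.loc[0]) / group_means.loc[0]
print(f"Relative difference in crash risk: {relative_gap:.1%}")
```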

Zheng said while Chinese investors could still look for information about stocks using other search engines, they were more likely to be shown positively-biased information from websites hosted in China.

"Google was more likely to show content from international websites such as Bloomberg, Reuters or The New York Times, which are free from political constraints to talk about what is happening," she said.

"Investors were more likely to overvalue stocks due to biased information found through Chinese-owned search engines."

Zheng said restricted searches gave firms opportunities to hide adverse news from the public, preventing potential investors from discovering accurate information online.

"If managers withhold negative news, investors are less likely to mitigate their misconceptions and biases surrounding a certain stock," she said.

"Let's say I believed that eating carrots could cure cancer and searched the internet to confirm this. An unrestricted search would correct my bias because I would find that carrots are not actually a cure for cancer."

Credit: 
RMIT University

Cognitive neuroscience could pave the way for emotionally intelligent robots

image: A parallel LSTM network takes in MMCG features with different resolutions and yields outputs that are concatenated together and then sent to a merging LSTM layer and a dense layer to yield the valence (V) and arousal (A) sequences.

Image: 
Masashi Unoki

Ishikawa, Japan - Human beings have the ability to recognize emotions in others, but the same cannot be said for robots. Although perfectly capable of communicating with humans through speech, robots and virtual agents are only good at processing logical instructions, which greatly restricts human-robot interaction (HRI). Consequently, a great deal of research in HRI is about emotion recognition from speech. But first, how do we describe emotions?

Categorical emotions such as happiness, sadness, and anger are well-understood by us but can be hard for robots to register. Researchers have focused on "dimensional emotions," which constitute a gradual emotional transition in natural speech. "Continuous dimensional emotion can help a robot capture the time dynamics of a speaker's emotional state and accordingly adjust its manner of interaction and content in real time," explains Prof. Masashi Unoki from Japan Advanced Institute of Science and Technology (JAIST), who works on speech recognition and processing.

Studies have shown that an auditory perception model simulating the working of a human ear can generate what are called "temporal modulation cues," which faithfully capture the time dynamics of dimensional emotions. Neural networks can then be employed to extract features from these cues that reflect this time dynamics. However, due to the complexity and variety of auditory perception models, the feature extraction part turns out to be pretty challenging.

In a new study published in Neural Networks, Prof. Unoki and his colleagues, including Zhichao Peng from Tianjin University, China (who led the study), Jianwu Dang from Pengcheng Laboratory, China, and Prof. Masato Akagi from JAIST, have taken inspiration from a recent finding in cognitive neuroscience suggesting that our brain forms multiple representations of natural sounds with different degrees of spectral (i.e., frequency) and temporal resolution through a combined analysis of spectral-temporal modulations. Accordingly, they have proposed a novel feature called the multi-resolution modulation-filtered cochleagram (MMCG), which combines four modulation-filtered cochleagrams (time-frequency representations of the input sound) at different resolutions to obtain temporal and contextual modulation cues. To account for the diversity of the cochleagrams, the researchers designed a parallel neural network architecture based on long short-term memory (LSTM) units, which modeled the time variations of the multi-resolution signals from the cochleagrams, and they carried out extensive experiments on two datasets of spontaneous speech.
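
A minimal PyTorch sketch of the parallel-LSTM idea described above: one LSTM branch per cochleagram resolution, concatenation of the branch outputs, a merging LSTM, and a dense layer that outputs valence and arousal sequences. Layer sizes and feature dimensions are illustrative guesses, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ParallelLSTM(nn.Module):
    def __init__(self, feat_dims, hidden=64):
        super().__init__()
        # one LSTM branch per cochleagram resolution
        self.branches = nn.ModuleList(
            [nn.LSTM(d, hidden, batch_first=True) for d in feat_dims]
        )
        self.merge = nn.LSTM(hidden * len(feat_dims), hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # valence (V) and arousal (A)

    def forward(self, xs):                        # xs: list of (batch, time, feat) tensors
        outs = [branch(x)[0] for branch, x in zip(self.branches, xs)]
        merged, _ = self.merge(torch.cat(outs, dim=-1))
        return self.head(merged)                  # (batch, time, 2) emotion sequences

model = ParallelLSTM(feat_dims=[32, 32, 32, 32])  # four resolutions, hypothetical sizes
dummy = [torch.randn(1, 100, 32) for _ in range(4)]
print(model(dummy).shape)                         # torch.Size([1, 100, 2])
```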

The results were encouraging. The researchers found that MMCG achieved significantly better emotion recognition performance than traditional acoustic-based features and other auditory-based features on both datasets. Furthermore, the parallel LSTM network demonstrated superior prediction of dimensional emotions compared with a plain LSTM-based approach.

Prof. Unoki is thrilled and plans to improve upon the MMCG feature in future research. "Our next goal is to analyze the robustness against environmental noise sources and to investigate our feature for other tasks, such as categorical emotion recognition, speech separation, and voice activity detection," he concludes.

Looks like it may not be too long before emotionally intelligent robots become a reality!

Credit: 
Japan Advanced Institute of Science and Technology

Study of marine noise highlights need to protect pristine Australian waters

Image: 
Curtin University

New Curtin research has found urgent action is needed to ensure man-made underwater noise in Australian waters does not escalate to levels which could be harmful to marine animals, such as whales, and negatively impact our pristine oceans.

Lead author Professor Christine Erbe, Director of Curtin's Centre for Marine Science and Technology, said recent studies from the northern hemisphere showed man-made noise, in particular from ships, often dominates the underwater soundscape over large areas, such as entire seas, and could interfere with marine fauna that rely on sound for communication, navigation and foraging.

"When humans go to sea, they generate underwater noise, from boat and ship traffic, dredging, port construction, offshore exploration for oil and gas, offshore drilling, seafloor surveying, fishing and naval exercises, which impacts a wide variety of animals including, whales, dolphins, fishes and crustaceans," Professor Erbe said.

"We set out to measure and model underwater sound in Australia's maritime regions and found that on average, over the course of six months, ship noise dominated only in tightly localised regions or right under the major shipping routes when these are confined to a narrow channel or strip.

"In most of our waters, naturally generated underwater sound dominated and was mostly due to consistently strong winds blowing along Australia's southern coasts and strong whale and fish choruses."

Professor Erbe said that while these findings show the vast majority of Australian maritime waters were not as polluted by man-made noise as some northern hemisphere waters, action was required to keep it that way.

"If you define 'pristine' as rich in biological sounds and their diversity, and devoid of man-made noise, then Australia has several regions, not just pockets, where the marine soundscape is undisturbed," Professor Erbe said.

"We need to set out and protect these regions by becoming more proactive in managing our marine environment.

"Usually we only become aware of an environmental problem when it's potentially too late, and find ourselves in a race to mitigate negative impacts. But in Australia, we have the opportunity to act early and protect healthy environments now."

The research was funded by the Federal Government's National Environmental Science Program, and the paper, 'It often howls more than it chugs: Wind versus ship noise under water in Australia's maritime regions', was published in the Journal of Marine Science and Engineering.

Credit: 
Curtin University

Touched by light: Photoexcited stannyl anions are great for producing organotin compounds

Image: 
Yuki Nagashima

Scientists at Tokyo Institute of Technology developed a new strategy for producing a wide range of organotin compounds, which are the building blocks of many organic synthesis methods. Their approach is based on the photoexcitation of stannyl anions, which alters their electronic state and increases their selectivity and reactivity to form useful compounds. This protocol will be helpful for the efficient synthesis of many bioactive products, novel drugs, and functional materials.

Organotin compounds, also known as stannanes, are made of tin (Sn), hydrocarbons, and sometimes other elements like nitrogen and oxygen. During the 1970s, stannanes rapidly took the spotlight as building blocks in the field of organic synthesis mainly because of their use as reagents in the Stille reaction, which remains essential for chemists to combine various organic molecules.

Equally important to the organotin reagents are the techniques and molecules that we rely on to make them. Stannyl anions have gained their place as the most widely used precursor for organotin reagents. However, their chemical properties make them prone to partake in unwanted reactions that compete with the synthesis of the target organotin reagent. This decreases yield and puts constraints on the main reaction, restricting the possible organotin reagents that can be produced in practice.

Surprisingly, in a recent collaborative study by Tokyo Institute of Technology and The University of Tokyo, Japan, scientists discovered a new type of stannyl species useful for producing organotin reagents. In their paper, published in the Journal of the American Chemical Society, they explain that this new stannyl species was first theorized to exist based on small anomalies observed in previous works. "During our studies involving stannyl anions, we occasionally detected small amounts of compounds called distannanes that were probably generated by the irradiation of stannyl anions with light. Inspired by these observations, we became interested in exploring the synthesis applications of these theoretical photoexcited stannyl anion species," explains Assistant Professor Yuki Nagashima, lead researcher from Tokyo Tech.

Through density functional theory calculations, the team determined that the trimethyltin anion (Me3Sn), a model stannyl anion, has a special affinity for blue light, which energizes the molecule to an excited 'singlet' state. From this state, the system naturally progresses to another state known as the excited 'triplet' state, in which two electrons are unpaired. This easy-to-induce progression from a stannyl anion to a stannyl radical in an excited triplet state gives the stannyl species vastly different chemical properties, including enhanced reactivity and selectivity towards certain compounds.
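
For a sense of the energy scale, the photon energy of blue light follows from E = hc/λ; the 450 nm wavelength used below is a typical value for blue light and an assumption on our part, not necessarily the exact excitation wavelength used in the study.

```python
# Photon energy of ~450 nm (blue) light via E = hc/lambda.
h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s
wavelength_m = 450e-9  # assumed blue-light wavelength
energy_eV = h * c / wavelength_m / 1.602e-19
print(f"~{energy_eV:.2f} eV per photon")   # roughly 2.8 eV
```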

The scientists explored reactions between photoexcited stannyl anions and several compounds, including alkynes, aryl fluorides, and aryl halides. They found that the photoexcited stannyl species had unprecedented selectivity for the synthesis of various useful reagents that conventional stannyl anions could not easily produce. Moreover, these photoexcited anions had a remarkable ability for the defluorostannylation and dehalostannylation of aryl molecules. In simpler terms, this means that if you have an aryl fluoride or halide (an organic molecule with a fluorine or halide group, respectively), it is easy to set up a reaction that substitutes the fluorine or halide group with a stannyl group. This allowed the researchers to create a wide variety of organotin reagents useful for Stille reactions.

Excited about the results, Prof. Nagashima remarks: "Although many stannylation methods and reagents have been established over almost two centuries, our protocol using photoexcited anion species provides a new and complementary tool for preparing a wide range of organotin compounds."

This new method will certainly be helpful for synthesizing many bioactive products, novel drugs, and functional materials, and further studies are already underway to see how far it will take us.

Credit: 
Tokyo Institute of Technology

Doctors overestimate risk leading to over-diagnosis, overtreatment, study finds

Primary care practitioners often overestimate the likelihood of a patient having a medical condition based on reported symptoms and laboratory test results. Such overestimations can lead to overdiagnosis and overtreatment, according to a recent study conducted by researchers at the University of Maryland School of Medicine (UMSOM) and published in JAMA Internal Medicine.

"A large gap exists between practitioner estimates and scientific estimates of the probability of disease," said study leader Daniel Morgan, MD, a Professor of Epidemiology & Public Health at UMSOM. "Practitioners who overestimate the probability of disease might use that overestimation when deciding whether to initiate therapy, which could lead to the overuse of risky medications and procedures."

To conduct the study, Dr. Morgan and his colleagues surveyed 553 primary health practitioners, including residents, attending physicians, nurse practitioners and physician assistants, in Maryland and seven other states. Survey respondents were asked to determine how well they could estimate the risk of four well-known health conditions based on hypothetical diagnostic scenarios. The researchers found, based on symptoms and test results, that health care providers significantly overestimated the likelihood of conditions. For example, health care providers, on average, estimated a 70 percent likelihood of cardiac ischemia in patients who had a positive finding on a stress test. In reality, based on evidence from medical studies, the real likelihood of cardiac ischemia is 2 to 11 percent.

The study also found that survey respondents estimated a 50 percent risk of breast cancer after a positive finding on a mammogram, when evidence suggests a 3 to 9 percent chance of breast cancer. They estimated an 80 percent likelihood of a urinary tract infection (UTI) from a positive urine culture, and the vast majority of survey respondents said they would treat with antibiotics in these cases. The real risk of a UTI with a positive urine culture, however, is at most 8 percent.
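
To see why the true post-test probabilities are so much lower than clinicians' intuitions, it helps to work through Bayes' rule with a low base rate. The sensitivity, specificity and prevalence below are illustrative assumptions for screening mammography, not figures from the study.

```python
# Bayes' rule: the probability of disease given a positive test depends as much
# on the base rate as on the test's accuracy. All three inputs are assumptions.
prevalence = 0.01     # assumed pre-test probability of breast cancer
sensitivity = 0.90    # assumed P(positive mammogram | cancer)
specificity = 0.91    # assumed P(negative mammogram | no cancer)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
print(f"Probability of cancer given a positive mammogram: {ppv:.0%}")   # ~9%
```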

"Solving this problem is not about asking health care providers to memorize numbers or practice math in order to improve their understanding of risks," Dr. Morgan said. "We should, however, use probability and better utilization of decision-making tools to help them make better estimates."

He developed a free tool called Testing Wisely, funded by the National Institutes of Health, that is designed to improve clinician understanding and ordering of diagnostic tests to make patient care safer. The site also includes a risk calculator to assess patients' symptoms, exposure, and local positivity rates where they live to calculate their individual risk of having COVID-19.

"Informed medical decision-making is incredibly important, and physicians should have access to tools that make their job easier and improve patient safety," said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. "This study demonstrates the need for better decision-making tools to help healthcare providers provide the best possible care to their patients."

Credit: 
University of Maryland School of Medicine

Researchers use a nanoscale synthetic antiferromagnet to toggle nonlinear spin dynamics

image: Graduate students Arezoo Etesamirad (seated) and Rodolfo Rodriguez (right) are seen here with their advisor, Igor Barsukov.

Image: 
Barsukov lab, UC Riverside.

RIVERSIDE, Calif. -- Researchers at the University of California, Riverside, have used a nanoscale synthetic antiferromagnet to control the interaction between magnons -- research that could lead to faster and more energy-efficient computers.

In ferromagnets, electron spins point in the same direction. To make future computer technologies faster and more energy-efficient, spintronics research employs spin dynamics -- fluctuations of the electron spins -- to process information. Magnons, the quantum-mechanical units of spin fluctuations, interact with each other, leading to nonlinear features of the spin dynamics. Such nonlinearities play a central role in magnetic memory, spin torque oscillators, and many other spintronic applications.

For example, in the emergent field of magnetic neuromorphic networks -- a technology that mimics the brain -- nonlinearities are essential for tuning the response of magnetic neurons. Nonlinear spin dynamics may also prove instrumental in another frontier area of research: hybrid quantum systems.

"We anticipate the concepts of quantum information and spintronics to consolidate in hybrid quantum systems," said Igor Barsukov, an assistant professor at the Department of Physics & Astronomy who led the study that appears in Applied Materials & Interfaces. "We will have to control nonlinear spin dynamics at the quantum level to achieve their functionality."

Barsukov explained that in nanomagnets, which serve as building blocks for many spintronic technologies, magnons show quantized energy levels. Interaction between the magnons follows certain symmetry rules. The research team learned to engineer the magnon interaction and identified two approaches to achieve nonlinearity: breaking the symmetry of the nanomagnet's spin configuration; and modifying the symmetry of the magnons. They chose the second approach.

"Modifying magnon symmetry is the more challenging but also more application-friendly approach," said Arezoo Etesamirad, the first author of the research paper and a graduate student in Barsukov's lab.

In their approach, the researchers subjected a nanomagnet to a magnetic field that showed nonuniformity at characteristic nanometer length scales. This nanoscale nonuniform magnetic field itself had to originate from another nanoscale object.

For a source of such a magnetic field, the researchers used a nanoscale synthetic antiferromagnet, or SAF, consisting of two ferromagnetic layers with antiparallel spin orientation. In its normal state, the SAF generates almost no stray field -- the magnetic field surrounding the SAF. Once it undergoes the so-called spin-flop transition, the spins become canted and the SAF generates a stray field with the required nanoscale nonuniformity. The researchers switched the SAF between the normal state and the spin-flop state in a controlled manner to toggle the symmetry-breaking field on and off.

"We were able to manipulate the magnon interaction coefficient by at least one order of magnitude," Etesamirad said. "This is a very promising result, which could be used to engineer coherent magnon coupling in quantum information systems, create distinct dissipative states in magnetic neuromorphic networks, and control large excitation regimes in spin-torque devices."

Credit: 
University of California - Riverside

Cloth face coverings can be as effective as surgical masks at protecting against COVID-19

Researchers from the Universities of Bristol and Surrey have found that well-fitting, three-layered cloth masks can be as effective at reducing the transmission of COVID-19 as surgical masks.

At the height of the COVID-19 pandemic, 139 countries mandated the use of face coverings in public spaces such as supermarkets and on public transport. The World Health Organization also advises the use of face coverings and offers guidance on their effective features. Face coverings suppress the onward transmission of COVID-19 through exhalation and protect the wearer on inhalation.

In a paper published in the journal Physics of Fluids, the researchers detail how liquid droplets are captured and filtered out in cloth masks, based on a review and modelling of filtration processes, including inertial impaction.

Inertial impaction does not filter the way a sieve or colander does - it works by forcing the air in your breath to twist and turn inside the mask so much that the droplets can't follow the path of the air. Instead, the droplets crash into fibres inside the mask, so they are not inhaled.

The team found that, under ideal conditions and dependent on the fit, three-layered cloth masks can perform similarly to surgical masks for filtering droplets - with both reducing exposure by around 50 to 75 per cent. For example, if an infected person and a healthy individual are both wearing masks, scientists believe this could result in up to 94 per cent less exposure.
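
A rough check of how the individual figures combine, using a simple multiplicative assumption on our part rather than a calculation taken from the paper:

```python
# If each well-fitting mask blocks ~75% of droplets (the upper end of the 50-75%
# range) and the source and receiver reductions combine multiplicatively, the
# remaining exposure is the product of what each mask lets through.
single_mask_reduction = 0.75
remaining = (1 - single_mask_reduction) ** 2     # both people masked
print(f"Combined reduction: {1 - remaining:.0%}")   # ~94%
```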

Dr Richard Sear, co-author of the study and Leader of the Soft Matter Group at the University of Surrey, said:

"While wearing a simple and relatively inexpensive cloth face mask cannot eliminate the risk of contracting COVID-19, measurements and our theoretical model suggests they are highly effective in reducing transmission. We hope that our work inspires mask designs to be optimised in the future and we hope it helps to remind people of the importance of continuing to wear masks while COVID-19 remains present in the community."

Credit: 
University of Surrey

May Day: How electricity brought power to strikes

Areas in Sweden with early access to electricity at the start of the 1900s underwent rapid change. Electrification led to more strikes, but it was not those who were threatened by the new technology who protested. Instead, it was the professional groups who had acquired a stronger negotiating position thanks to technological development, according to new research from Lund University.

Labour market conditions are affected by new technology. Currently, the impact of automation on the labour market is often discussed: whether jobs will disappear as computers take over, or whether digitalisation is driving development towards a gig economy with uncertain employment conditions. One fear is that technological development could generate social unrest and a risk of increased conflict in society.

The ongoing automation shares many similarities with historical technological changes, such as the advent of electricity in the early 1900s. In a study recently published in the renowned Journal of Economic History, researchers Jakob Molinder, Tobias Karlsson and Kerstin Enflo show that areas in Sweden with early access to electricity from the national grid changed rapidly as a result of the new technology.

However, unlike what is feared today, the researchers show that jobs did not disappear. Demand for labour was certainly affected: in agriculture, many jobs were replaced by machines, but the need for a new labour force in other sectors more than compensated for the loss.

"Industry and service sector workers with some training and experience were particularly in demand, such as lathe operators in the manufacturing industry or electricians in construction," says Tobias Karlsson, one of the researchers behind the recently published study.

The rapid structural transformation led to conflicts in the labour market. The researchers have digitised data covering over 8 000 strikes and lockouts which took place in locations around the country between 1859 and 1938.

"The opportunity to examine where the strikes occurred provides new perspectives. By linking this data to information about the extension of the electrical grid, we can investigate the effects of the new technology on the labour market," says Jakob Molinder.

The researchers also gathered information about the causes of the strikes. The data surprised them:

"Because electrification meant that many jobs could be carried out by machines instead of human power, we thought that at least some of the conflicts would be about resistance to new machinery. However, when we went through the triggers for the Swedish strikes, we noted that almost no strikes were ascribed to technological change," says Kerstin Enflo.

The strikes were mostly about wage increases, and they occurred mainly among professional groups who, thanks to the new technology, had acquired a stronger negotiating position.

"This pattern reminds us how strong professional groups in strategic negotiating positions, such as pilots and port workers, use the weapon of strikes today", concludes Tobias Karlsson.

Credit: 
Lund University

Measuring the Moon's nano dust is no small matter

image: Colorized screenshots of the exact shapes of moon dust collected during the Apollo 11 mission. NIST researchers and collaborators developed a method of measuring these nanoscale particles as a prelude to studying their light-scattering properties.

Image: 
E. Garboczi/NIST and A. Sharits/AFRL

Like a chameleon of the night sky, the Moon often changes its appearance. It might look larger, brighter or redder, for example, due to its phases, its position in the solar system or smoke in Earth's atmosphere. (It is not made of green cheese, however.)

Another factor in its appearance is the size and shape of moon dust particles, the small rock grains that cover the moon's surface. Researchers at the National Institute of Standards and Technology (NIST) are now measuring tinier moon dust particles than ever before, a step toward more precisely explaining the Moon's apparent color and brightness. This in turn might help improve tracking of weather patterns and other phenomena by satellite cameras that use the Moon as a calibration source.

NIST researchers and collaborators have developed a complex method of measuring the exact three-dimensional shape of 25 particles of moon dust collected during the Apollo 11 mission in 1969. The team includes researchers from the Air Force Research Laboratory, the Space Science Institute and the University of Missouri-Kansas City.

These researchers have been studying moon dust for several years. But as described in a new journal paper, they have now used X-ray nano computed tomography (XCT), which allowed them to examine the shapes of particles as small as 400 nanometers (billionths of a meter) in length.

The research team developed a method for both measuring and computationally analyzing how the dust particle shapes scatter light. Follow-up studies will include many more particles and will more clearly link their shape to light scattering. Researchers are especially interested in a feature called "albedo," moonspeak for how much light or radiation the Moon reflects.

The recipe for measuring the Moon's nano dust is complicated. First you need to mix it with something, as if making an omelet, and then turn it on a stick for hours like a rotisserie chicken. Straws and dressmakers' pins are involved too.

"The procedure is elaborate because it is hard to get a small particle by itself, but one needs to measure many particles for good statistics, since they are randomly distributed in size and shape," NIST Fellow Ed Garboczi said.

"Since they are so tiny and because they only come in powders, a single particle needs to be separated from all the others," Garboczi continued. "They are too small to do that by hand, at least not in any quantity, so they must be carefully dispersed in a medium. The medium must also freeze their mechanical motion, in order to be able to get good XCT images. If there is any movement of the particles during the several hours of the XCT scan, then the images will be badly blurred and generally not usable. The final form of the sample must also be compatible with getting the X-ray source and camera close to the sample while it rotates, so a narrow, straight cylinder is best."

The procedure involved stirring the Apollo 11 material into epoxy, which was then dripped over the outside of a tiny straw to get a thin layer. Small pieces of this layer were then removed from the straw and mounted on dressmakers' pins, which were inserted into the XCT instrument.

The XCT machine generated X-ray images of the samples that were reconstructed by software into slices. NIST software stacked the slices into a 3D image and then converted it into a format that classified units of volume, or voxels, as either inside or outside the particles. The 3D particle shapes were identified computationally from these segmented images. The voxels making up each particle were saved in separate files that were forwarded to software for solving electromagnetic scattering problems in the visible to the infrared frequency range.
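
A minimal sketch, with hypothetical file names and a deliberately crude threshold, of the voxel-level step described above: segment the reconstructed volume into inside/outside voxels, label connected components as individual particles, and save each particle's voxels for the scattering calculations. This is not NIST's actual pipeline.

```python
import numpy as np
from scipy import ndimage

volume = np.load("xct_reconstruction.npy")     # hypothetical 3D grayscale array
binary = volume > volume.mean()                # crude inside/outside segmentation

labels, n_particles = ndimage.label(binary)    # connected components = particles
print(f"Found {n_particles} candidate particles")

for i in range(1, n_particles + 1):
    voxels = np.argwhere(labels == i)          # voxel coordinates of one particle
    np.save(f"particle_{i:03d}.npy", voxels)   # handed on to the scattering solver
```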

The results indicated that the color of light absorbed by a moon dust particle is highly sensitive to its shape and can be significantly different from that of spherical or ellipsoidal particles of the same size. That doesn't mean too much to the researchers -- yet.

"This is our first look at the influence of actual shapes of lunar particles on light scattering and focuses on some fundamental particle properties," co-author Jay Goguen of the Space Science Institute said. "The models developed here form the basis of future calculations that could model observations of the spectrum, brightness and polarization of the moon's surface and how those observed quantities change during the moon's phases."

The authors are now studying a wider range of moon dust shapes and sizes, including particles collected during the Apollo 14 mission in 1971. The moon dust samples were loaned to NIST by NASA's Curation and Analysis Planning Team for Extraterrestrial Materials program.

Credit: 
National Institute of Standards and Technology (NIST)

How can we stop mankind from stagnating?

image: Alan Turing's theory of pattern formation may explain human population distribution across the world.

Image: 
University of Leicester

Fast growth of the global human population has long been regarded as a major challenge facing mankind. This challenge is now becoming even more serious than before, in particular because many natural resources are expected to be depleted before the end of this century.

The increasing population pressure on agriculture, ecosystems and the environment more generally is predicted to result in worldwide food and water shortages, pollution, lack of housing, poverty and social tension. The situation is exacerbated by global climate change, as considerable areas of land are predicted to be flooded and hence taken out of human use.

It is widely believed that, unless alternative scenarios of sustainable population growth and social development are identified and implemented, mankind is likely to experience stagnation or even decline.

Population growth in time is complemented by population dynamics in space. The population distribution over space is hugely heterogeneous for a variety of reasons -- climate, history and the economy, to name just a few. This spatial heterogeneity may result in significant migration flows, which in turn can feed back on local demography and population growth.

On the smaller scale of individual countries and states, understanding the factors affecting the population distribution in space is needed to ensure adequate development of infrastructure, transport and energy networks.

Poorly informed decisions are likely to result in overcrowding and social problems in urban areas and/or lower quality of life in rural neighbourhoods. Identification of scenarios of sustainable population growth and social development on various spatial and temporal scales requires good understanding of the relevant processes and mechanisms that affect both the population growth and the population distribution. Arguably, such understanding is unlikely to be achieved without a well-developed theory and the corresponding mathematical/modelling framework.

Indeed, mathematical models of human population dynamics have a long history dating back to the seventeenth century. Over the last few decades, the need for an adequate and efficient mathematical theory of human population dynamics has been reflected in a steady growth in the number of studies in which problems of demography, along with related issues of economy, are considered using mathematical models, tools and techniques.

In our recent paper, we use mathematical modelling to address the phenomenon of heterogeneous spatial population distribution. Heterogeneity of geographical features (mountains, forests, rivers, etc.) and natural resources (e.g. coal, iron and copper ore) is commonly accepted as a factor leading to demographic and economic heterogeneity.

Here we ask a question: is this natural heterogeneity the only underlying cause, or can there be another, perhaps more general, principle responsible for the emergence of heterogeneous population distribution? To answer this question, we first revisit available data on population density over a few areas in different parts of the world and show that, in all cases considered, the population distribution exhibits a clear, nearly periodic spatial pattern, in spite of the fact that the environmental conditions are relatively uniform. Inspired by this finding, we then consider a novel model of coupled economic-demographic dynamics in space and time and endeavour to use it to simulate the spatial population distribution. The model consists of two coupled partial differential equations of reaction-diffusion type.

Following a similar modelling approach that has been used successfully in ecology and biology, we then show that the emergence of spatial patterns in our model is possible as a result of Turing instability.
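
A generic one-dimensional sketch of Turing pattern formation in a two-component reaction-diffusion system. The Schnakenberg kinetics and parameter values below are textbook placeholders, not the equations of our economic-demographic model; they merely illustrate how a nearly homogeneous state can develop a periodic spatial pattern when the two components diffuse at sufficiently different rates.

```python
import numpy as np

n, L = 100, 100.0
dx, dt, steps = L / n, 0.01, 10000
Du, Dv = 1.0, 40.0                 # strongly unequal diffusion drives the instability
a, b = 0.1, 0.9                    # homogeneous steady state: u* = 1.0, v* = 0.9

rng = np.random.default_rng(0)
u = 1.0 + 0.01 * rng.standard_normal(n)   # e.g. population density
v = 0.9 + 0.01 * rng.standard_normal(n)   # e.g. wealth density

def lap(f):                        # 1D Laplacian with periodic boundaries
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(steps):
    uuv = u * u * v                # Schnakenberg reaction term (placeholder kinetics)
    du = Du * lap(u) + a - u + uuv
    dv = Dv * lap(v) + b - uuv
    u += dt * du
    v += dt * dv

print("spatial std of u:", round(float(u.std()), 3))  # clearly > 0: a pattern has formed
```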

Although it is not our goal to provide any direct comparison between the real-world demographic patterns and the model properties, we regard the qualitative agreement between the model predictions and the data on the human population density as an indication that the heterogeneous population distribution observed across different countries in different continents may, at least in some cases, have been caused by endogenous rather than exogenous factors, i.e. may have appeared due to intrinsic Turing instability of the corresponding economic-demographic dynamical system.

In many countries, the population distribution over space is distinctly heterogeneous, e.g. urbanized areas with a high population density alternate with rural areas with a low population density. Apparently, spatial variation in geographical and climatic factors can play a significant role in shaping the population distribution.

The main hypothesis of our paper is the existence of a dynamical mechanism that may lead to the formation of a heterogeneous population distribution regardless of geographical heterogeneity. In our search for real-world examples, we focus on cases where the environment may be regarded, up to a certain spatial scale, as relatively uniform. The environmental properties that we consider here as proxies for environmental heterogeneity are the elevation, the annual mean temperature and the annual mean precipitation.

Credit: 
University of Leicester