Tech

Financial networks: A new discipline to interpret crises and green transition

image: Guido Caldarelli and Stefano Battiston, professors at Ca' Foscari University of Venice and co-authors of the review.

Image: 
Ca' Foscari

Modelling the financial system as a network is a precondition to understanding and managing challenges of great relevance for society, including the containment of financial crises and the transition to a low-carbon economy.

Financial Networks is the scientific discipline that deals with these issues. An article published in the scientific journal Nature Reviews Physics carries out the first comprehensive review of this exciting interdisciplinary field. By covering over 250 studies across domains, the paper is also a call for researchers in all scientific disciplines to consider the insights from financial network models, because of their implications for citizens, public agencies and governments. Professor Guido Caldarelli from Ca' Foscari University of Venice coordinated the study, which involved Marco Bardoscia, a researcher from the Bank of England, as first author.

"Traditional economic models describe the financial system either as a macroeconomic aggregate or, in contrast, as a collection of microeconomic actors in isolation. - Stefano Battiston, co-author and professor of Finance, at the Department of Economics of Ca' Foscari University of Venice, explains. - Both approaches are not equipped to describe those phenomena that emerge at the intermediate scale, because of interconnectedness. Financial actors are connected (directly and indirectly) via contracts, markets and institutions. These phenomena include, in particular, the propagation of risk along a chain of contracts (financial contagion), as well as the collective behaviour of investors when they stamped to get rid of assets suddenly deemed as riskier than expected (fire-sales)".

The discipline of financial networks has thus filled important scientific gaps. The importance of financial networks is widely recognised today. Many central banks use network models to carry out stress-tests. The highest financial authorities both in the US and in the EU follow macroprudential policies that recognise the key role of the interconnectedness of the financial system. Indeed, network effects played a key role in the 2007-2008 financial crisis, with an impact persisting for a decade, and they played a role also in the Covid crisis.

"There is something fascinating and special about the field of financial networks. - Guido Caldarelli, a co-author of the study and professor of Physics at the Department of Molecular Sciences and Nanosystems of Ca' Foscari, adds. - Questions that pertain to finance and economics are addressed by modelling financial actors as nodes and financial contracts as links in a network. For instance, the question of how to preserve the stability of the financial system is addressed by looking at the interplay between the network structure (i.e. the topology), the characteristics of individual nodes (the balance sheets) and the dynamic process on the nodes (the propagation of financial losses)".

This approach owes a great deal to the field of statistical physics which has been historically devoted to the challenge of explaining the emergence of macroscopic behaviour of a system from the microscopic properties of the individual entities. However, in financial networks, there are additional distinct features that raise the stakes of the challenge. The entities of the system are not particles, but agents that form expectations about the future evolution of the network and even about the policy maker's attempt to regulate the system. This leads to new scientific questions in terms of the mathematical equations that can describe such reflexivity.

"In the near future, Financial Networks will address several exciting scientific challenges - Guido Caldarelli foresees - For example, modern financial systems are composed of multiple interacting networks because of agents acting on multiple markets with different instruments. Further, modelling the financial system poses big data issues as transactions generate Terabytes of information every day. Moreover, the interaction of the financial system with the real economy is a feedback loop and still not well-understood".

One avenue of research of particular interest for Ca' Foscari's researchers is the application of financial networks in the area of sustainable finance. The European Union has set the goal to become net carbon neutral by 2050. The transition to a carbon-neutral economy will avoid the most adverse impact of global warming on current and future generations.

It will also ensure the competitiveness of the EU and of Italy. Indeed, an early transition brings opportunities. In contrast, a transition that is first delayed and then occurs abruptly would bring higher risks, possibly systemic ones. In fact, financial institutions have large exposures to economic activities that are affected by climate policies. Therefore, financial network models are key to understanding how to facilitate the transition and mitigate climate-related financial risks.

In addition, the article demonstrates the relevance of financial networks not only for research but also for practitioners in financial authorities and in the industry. "Today, competencies in Financial Networks give a competitive advantage to students graduating at both Master's and PhD level," Professor Battiston adds. "Courses in this field will enrich curricula not only in physics but also in economics and finance. This could also apply to future executive education programmes."

Credit: 
Università Ca' Foscari Venezia

A promising new target for urinary tract infections and kidney stones

image: Desmopressin increased short-term urinary secretion of uromodulin and reduced kidney uromodulin abundance in vivo. In vitro studies revealed that this rapid effect depends on cAMP/PKA signaling pathway, cell polarity and probably apical proteases.

Image: 
Department of Nephrology, TMDU

Researchers from Tokyo Medical and Dental University (TMDU) find that the secretion of uromodulin protein into urine can be induced by treatments that may protect against urinary tract infections and kidney stones, among other diseases

Tokyo, Japan - The normal function of uromodulin, a protein that is made in the kidney and secreted into the urine, remains largely unknown. However, higher levels of uromodulin in the urine are related to lower rates of urinary tract infections and kidney stones, while higher levels of this protein in kidney cells are associated with higher rates of hypertension and chronic kidney disease. Researchers from Japan have now uncovered how uromodulin secretion into the urine can be increased by the hormone vasopressin--a finding that may have many practical applications.

In a study published recently in Hypertension, researchers from Tokyo Medical and Dental University (TMDU) revealed that vasopressin receptor stimulation leads to the short-term secretion of uromodulin into the urine in mice. They found that this occurs via the protein kinase A pathway, a common cell signaling pathway that is dependent on cyclic adenosine monophosphate (cAMP) levels, in kidney cells.

Most of our knowledge about the role of uromodulin in disease comes from genetic studies. Notably, uromodulin secretion into the urine seems to be particularly important for disease prevention, but little is known about how this secretion might occur. Researchers at Tokyo Medical and Dental University (TMDU) aimed to explore this process in more detail.

"The urinary secretion of uromodulin is associated with protection against many common diseases," says lead author of the study Azuma Nanamatsu. "We wanted to investigate how to increase this secretion, to develop better therapies in the future."

To do this, the researchers stimulated vasopressin receptors in mice, which has previously been reported to affect uromodulin secretion. When they saw that the treated mice had higher levels of uromodulin in the urine and lower levels in the kidney, they decided to test the effects of increased cAMP on uromodulin secretion in a kidney cell line, because vasopressin receptor stimulation leads to increased cAMP levels.

"We found that uromodulin was secreted from kidney cells when cAMP levels were increased," explains Takayasu Mori, senior author. "This secretion only happened on the apical cell surface, which normally faces the lumen or external space in the kidney."

The authors then found that this secretion of uromodulin could be decreased by treating cells with a protein kinase A inhibitor. Together, their findings suggest that vasopressin/cAMP/protein kinase A signaling is important for the secretion of uromodulin from kidney cells into the urine.

The findings of this study may be used to develop better therapies for diseases that are related to lower urinary uromodulin levels, such as urinary tract infections and kidney stones, as well as for those linked to higher kidney uromodulin levels, such as chronic kidney disease and hypertension.

Credit: 
Tokyo Medical and Dental University

UCLA and UIC researchers discover foam 'fizzics'

video: Drainage via stratification of a micellar foam film formed by an aqueous SDS solution, visualized in reflected-light microscopy. Each shade of gray corresponds to a different thickness. Stratification proceeds via nucleation of thinner, darker domains; the number of steps increases with concentration, whereas the step size decreases with concentration.

Image: 
Chrystian Ochoa and Vivek Sharma

Chemical engineers at the University of Illinois Chicago and UCLA have answered longstanding questions about the underlying processes that determine the life cycle of liquid foams. The breakthrough could help improve the commercial production and application of foams in a broad range of industries.

Findings of the research were featured this month in Proceedings of the National Academy of Sciences.

Foams are a familiar phenomenon in everyday life -- mixing soaps and detergents into water when doing dishes, blowing bubbles from soapy water toys, sipping the foam off a latte or milkshake. Liquid foams occur in a variety of natural and artificial settings. While some foams are produced naturally, as when churning bodies of water leave large foam blooms on beaches, others arise in industrial processes. In oil recovery and fermentation, for example, foams are a byproduct.

Whenever soapy water is agitated, foams are formed. They are mostly gas pockets separated by thin liquid films that often contain tiny molecular aggregates called micelles. Oily dirt, for example, is washed away by hiding in the water-phobic cores of micelles. In addition, fat digestion in our bodies relies on the role of micelles formed by bile salts.

Over time, foams dissipate as liquid within the thin films is squeezed out. Soap and detergent molecules, which are by their very nature amphiphilic (both hydrophilic and hydrophobic), aggregate in water to form spherical micelles, with their hydrophilic heads facing outward and their water-phobic tails forming the core.

"Micelles are tiny, but influential, not just in cleaning and solubilizing oil-loving molecules but also in affecting flows within foam films," said co-principal investigator Vivek Sharma, an associate professor of chemical engineering at the UIC College of Engineering. For nearly a decade, he has pursued the question of how and why the presence of micelles leads to stepwise thinning, or stratification, within ultrathin foam films and soap bubbles.

To solve the puzzle, Sharma and his collaborators developed advanced imaging methods they call IDIOM (interferometry digital imaging optical microscopy) protocols, implemented with high-speed and digital single-lens reflex (DSLR) cameras. They found that foam films have a rich, ever-changing topography, and that the thickness differences between different strata are much greater than the size of micelles.

"We used a precision technique called small-angle X-ray scattering to resolve the micelles' shape, sizes, and densities," said co-principal investigator Samanvaya Srivastava, an assistant professor of chemical and biomolecular engineering at the UCLA Samueli School of Engineering. "We found that the foam film thickness decreases in discrete jumps, with each jump corresponding to the exact distance between the micelles in the liquid film."

The team also discovered that the arrangement of micelles in foam films is governed primarily by the ionic interactions between micelles. The electrostatic attraction and repulsion between ions influence how long foams remain stable and how their structure decays. With these findings, the researchers determined that by simply measuring the foam film thickness, which can be accomplished with a DSLR camera using the IDIOM protocols, they could characterize both the nanoscale interactions of micelles in liquids and the stability of the foams.
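As a rough illustration of that last point (a hedged sketch using synthetic data, not the IDIOM analysis code itself), the snippet below picks out the discrete thinning steps in a film-thickness trace and reports their average size, the quantity the study relates to the spacing between micelles in the film.

```python
# Hedged sketch: estimate the average stratification step size from a
# film-thickness trace h(t) in nanometres. The trace here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
levels = np.repeat([150.0, 120.0, 90.0, 60.0, 30.0], 200)    # five strata, ~30 nm apart
thickness = levels + rng.normal(0.0, 1.0, size=levels.size)  # add measurement noise

# a stratification jump is a frame-to-frame drop much larger than the noise
diffs = np.diff(thickness)
jumps = diffs[diffs < -10.0]            # threshold set well above the noise level
step_size = -jumps.mean()

print(f"detected {jumps.size} steps, mean step size ~ {step_size:.1f} nm")
```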

Compared to previous techniques that are more time-consuming and require expensive, customized equipment, the new method is not only less expensive but is also more comprehensive and efficient.

"The knowledge and understanding could aid in the development of new products -- from food and personal care to pharmaceuticals," said the study's co-lead authors, graduate students Shang Gao of UCLA Samueli and Chrystian Ochoa of UIC. "It could also help engineers improve the control of foams in industrial processes."

Credit: 
University of California - Los Angeles

Long-term Himalayan glacier study

image: Nanga Parbat: Photo of the Rupal flank taken in 2010

Image: 
Marcus Nüsser

The glaciers of Nanga Parbat - one of the highest mountains in the world - have been shrinking slightly but continually since the 1930s. This loss in surface area is evidenced by a long-term study conducted by researchers from the South Asia Institute of Heidelberg University. The geographers combined historical photographs, surveys, and topographical maps with current data, which allowed them to show glacial changes for this massif in the north-western Himalaya as far back as the mid-1800s.

Detailed long-term glacier studies that extend the observation period to the time before the ubiquitous availability of satellite data are barely possible in the Himalayan region due to the dearth of historical data. As Prof. Dr Marcus Nüsser from the South Asia Institute explains, this is not the case for the Nanga Parbat Massif. The earliest documents include sketch maps and drawings made during a research expedition in 1856. Based on this historical data, the Heidelberg researchers reconstructed the glacier changes along the South Face of Nanga Parbat. Additionally, there are numerous photographs and topographical maps stemming from climbing and scientific expeditions since 1934. Some of these historical photographs were retaken in the 1990s and 2010s from identical vantage points for the purpose of comparison. Satellite images dating back to the 1960s completed the data base Prof. Nüsser and his team used to create a multi-media temporal analysis and quantify glacier changes.

The Nanga Parbat glaciers, largely fed by ice and snow avalanches, show significantly lower retreat rates than glaciers in other Himalayan regions. One exception is the mainly snow-fed Rupal Glacier, whose retreat rate is significantly higher. "Overall, more studies are needed to better understand the special influence of avalanche activity on glacier dynamics in this extreme high mountain region," states Prof. Nüsser.

The researchers are particularly interested in glacier fluctuations, changes in ice volume, and the increase of debris-covered areas on the glacier surfaces. Their analyses covered 63 glaciers already documented in 1934. "The analyses showed that the ice-covered area decreased by approximately seven percent, and three glaciers disappeared completely. At the same time we identified a significant increase in debris coverage," adds Prof. Nüsser. The geographical location of the Nanga Parbat Massif in the extreme northwest of the Himalayan arc near the Karakorum range could play a particular role in the comparatively moderate glacier retreat. In the phenomenon known as the Karakorum anomaly, no major glacier retreat has been identified as a result of climate change in this mountain range - as opposed to everywhere else in the world. "An increase in precipitation at high altitudes may be the reason, but the exact causes are still unknown," explains Prof. Nüsser. The researchers assume that the low ice losses in the Karakorum and the Nanga Parbat region may also be due to the protection offered by the massive debris-cover and a year-round avalanche flow from the steep flanks.

Credit: 
Heidelberg University

Passive rewilding can rapidly expand UK woodland at no cost

image: Woodland development at a former barley field abandoned in 1961, one of the two passive rewilding study sites next to Monks Wood, Cambridgeshire

Image: 
UKCEH

A long-term passive rewilding study has shown that natural regeneration could make a significant contribution to meeting the UK's ambitious woodland expansion targets - potentially at no cost and within relatively short timescales.

The research, led by the UK Centre for Ecology & Hydrology (UKCEH), found natural growth due to seed dispersal by birds, mammals and wind can produce biodiverse and resilient woodland.

Woodland development can be rapid, while avoiding the cost, management and plastic tubing involved in planting schemes.

The study - published in the journal PLOS ONE - found that after just 15 years, previously bare agricultural fields had become wildlife-rich shrubland. Within 40-50 years, this had progressed to closed-canopy woodland of native oak, ash and field maple, with densities of up to 390 trees per hectare.

Meeting the Government's target to plant 30,000 hectares of woodland each year in the UK by 2025 is set to come at a high cost to the taxpayer, with schemes such as the £5.7 million 'Northern Forest' planned between Liverpool and Hull.

While natural regeneration relies on proximity to existing woodland or mature trees and is not suitable for all sites, the scientists involved in the study say incorporating passive rewilding into national planting targets could result in significant cost savings.

Their research has informed the Forestry Commission's new England Woodland Creation Offer scheme (EWCO), which is offering grants to landowners for natural tree colonisation for the first time.

Dr Richard Broughton of UKCEH, who led the study, says: "Biodiversity-rich woodland that is resilient to drought and reduces disease risk can be created without any input from us. Our study provides essential evidence that passive rewilding has the potential to expand native woodland habitat at no cost and within relatively short timescales.

"Natural colonisation could play a significant role in helping to meet the UK's ambitious targets for woodland creation, as well as nature recovery and net zero greenhouse gas emissions by 2050. It is an effective option for expanding woodland in many places without the costs of planting, the disease risk of transporting nursery-grown saplings, or using plastic tree tubes that are unsightly and pollute the environment."

Dr Broughton says the research also highlights the crucial role of natural seed dispersers such as wind, mammals and birds - especially jays, which are commonly regarded as pests by landowners and are persecuted for their predation of other birds. "The huge benefits that jays provide in natural colonisation by dispersing tree seeds, especially acorns, helps to create more woodland habitat for all wildlife and far outweighs any impact of predation," he adds.

The study was carried out by scientists from UKCEH as well as Bournemouth University; the Polish Academy of Sciences; the Natural Resources Institute in Finland; Poznań University of Life Sciences in Poland; and the University of Cambridge.

The research team studied woodland development on two former agricultural fields over 24 and 59 years respectively - a two-hectare field of grassland abandoned in 1996 and a four-hectare barley field abandoned in 1961. The two sites are next to Monks Wood National Nature Reserve in Cambridgeshire, an ancient woodland that has been documented since 1279 AD.

The tree and shrub growth at Monks Wood has been monitored by researchers over several decades. This has included counting trees during field surveys as well as measuring vegetation heights and spatial cover using remote sensing data (Lidar laser scanning from aircraft).

Importantly, the study found the developing woodland was not hindered by herbivores such as deer and rabbits, so it did not require fencing. Young trees were protected by the initial growth of bramble and thorny shrubs, bearing out the old saying: 'The thorn is the mother of the oak'.

The young woodland was also resilient to periods of drought in dry summers, which will be important for future woodlands coping with climate change.

Credit: 
UK Centre for Ecology & Hydrology

Probing the dynamics of photoemission

Physicists at Ludwig-Maximilian University in Munich (LMU) and the Max Planck Institute for Quantum Optics (MPQ) have used ultrashort laser pulses to probe the dynamics of photoelectron emission in tungsten crystals.

Almost a century ago, Albert Einstein received the Nobel Prize for Physics for his explanation of the photoelectric effect. Published in 1905, Einstein's theory incorporated the idea that light is made up of particles called photons. When light impinges on matter, the electrons in the sample respond to the input of energy, and the interaction gives rise to what is known as the photoelectric effect. Light quanta (photons) are absorbed by the material and excite the bound electrons. Depending on the wavelength of the light source, this can result in the ejection of electrons. The electronic band structure of the material involved has a significant effect on the timescales of photoemission. Physicists based at Ludwig-Maximilian University (LMU) in Munich and the Max Planck Institute for Quantum Optics (MPQ) have now taken a closer look at the phenomenon of photoemission. They measured the influence of the band structure of tungsten on the dynamics of photoelectron emission, and provide theoretical interpretations of their observations.

This is now possible thanks to the development and continuing refinement of attosecond technology. An 'attosecond' corresponds to 10⁻¹⁸ of a second, i.e. a billionth of a billionth of a second. The ability to reproducibly generate trains of pulses of laser light that last for a few hundred attoseconds enables researchers to follow the course of photoemission by 'freezing the action' at regular intervals - analogously to a stroboscope, but with far better temporal resolution.

In a series of photoelectron spectroscopy experiments, the team used attosecond pulses of extreme ultraviolet light to probe the dynamics of photoemission from a tungsten crystal. Each pulse contained a few hundred X-ray photons, each energetic enough to dislodge a photoelectron. With the aid of detectors mounted in front of the crystal, the team was able to characterize the ejected electrons in terms of their times of flight and angles of emission.

The results revealed that electrons which interact with incoming photons take a little time to react to such encounters. This finding was made possible by the adoption of a new approach to the generation of attosecond pulses. Thanks to the introduction of a passive cavity resonator with an enhancement factor of 35, the new set-up can now produce attosecond pulses at a rate of 18.4 million per second, approximately 1000-fold higher than that previously common in comparable systems. Because the pulse repetition rate is so high, only very few photoelectrons per pulse are sufficient to provide a high average flux.

"Since the negatively charged photoelectrons repel one another, their kinetic energies are subject to rapid change. In order to characterize their dynamics, it's therefore important to distribute them over as many attosecond pulses as possible," as joint first author Dr. Tobias Saule explains. The increased pulse rate means the particles have little opportunity to interact with each other because they are well distributed in time and space, so that the maximal energy resolution is largely retained. In this way, the team was able to show that, in terms of the kinetics of photoemission, electrons in neighboring energy states in the valence band (i.e. the outermost orbits of the atoms in the crystal), which have different angular momenta also differ by a few tens of attoseconds in the time they take to respond to incoming photons.

Notably, the arrangement of the atoms within the crystal itself has a measurable influence on the delay between the arrival of the light pulse and the ejection of photoelectrons. "A crystal is made up of multitudes of atoms, all of whose nuclei are positively charged. Each nucleus is the source of an electrical potential, which attracts the negatively charged electrons - in the same way as a round hole acts as a potential well for marbles," says Dr. Stephan Heinrich, also joint first author of the report. "When an electron is dislodged from a crystal, what happens is a bit like the progress of a marble across a table that is pitted with depressions. These indentations represent the positions of the individual atoms in the crystal, and they are regularly organized. The trajectory of the marble is directly affected by their presence, and it differs from what would be observed on a smooth surface," he points out. "We have now demonstrated how such a periodic potential within a crystal affects the temporal behavior of photoemission - and we can theoretically account for it," Stephan Heinrich explains. The delays observed can be attributed to the complex nature of electron transport from the interior to the surface of the crystal, and to the impact of the electron scattering and correlation effects that this entails.

"The insights provided by our study open up the possibility of experimental investigations of the complex interactions that take place in multi-electron systems in condensed matter on an attosecond timescale. This in turn will enable us to understand them theoretically," says LMU-Prof. Ulf Kleineberg, who led the project.

In the longer term, the new findings could also lead to novel materials with electronic properties that enhance light-matter interactions, which would make solar cells more efficient, improve the switching rates of nano-optical components for ultrafast data processing, and promote the development of nanosystems for use in the biomedical sciences.

Credit: 
Ludwig-Maximilians-Universität München

University of Groningen scientists design superfast molecular motor

image: Upon simultaneous excitation of the two chromophores by light, they repel each other through dipolar interactions. As the chromophores are bound to each other, they start rotating about the bond keeping them together.

Image: 
Thomas Jansen, University of Groningen

Light-driven molecular motors have been around for over twenty years. These motors typically take microseconds to nanoseconds for one revolution. Thomas Jansen, associate professor of physics at the University of Groningen, and Master's student Atreya Majumdar have now designed an even faster molecular motor. The new design is driven by light only and can make a full turn in picoseconds, using the power of a single photon. Jansen: 'We have developed a new out-of-the-box design for a motor molecule that is much faster.' The design was published in The Journal of Physical Chemistry Letters on 7 June.

The new motor molecule design started with a project in which Jansen wanted to understand the energy landscape of excited chromophores. 'These chromophores can attract or repel each other. I wondered if we could use this to make them do something', explains Jansen. He gave the project to Atreya Majumdar, then a first-year student in the Top Master's degree programme in Nanoscience in Groningen. Majumdar simulated the interaction between two chromophores that were connected to form a single molecule.

Light

Majumdar, who is now a PhD student in nanoscience at the Université Paris-Saclay in France, explains what he found: 'A single photon will excite both chromophores simultaneously, creating dipoles that make them repel each other.' But as they are stuck together, connected by a triple bond axis, the two halves push each other away around the axis. 'During this movement, they start to attract each other.' Together, this results in a full rotation, generated by the light energy and the electrostatic communication between the two chromophores.

The original light-driven molecular motor was developed by Jansen's colleague Ben Feringa, Professor of Organic Chemistry at the University of Groningen and recipient of the 2016 Nobel Prize for Chemistry. This motor makes one revolution in four steps. Two steps are driven by light and two are driven by heat. 'The heat steps are rate-limiting,' explains Jansen. 'The molecule has to wait for a fluctuation in heat energy to drive it to the next step.'

Bottlenecks

By contrast, in the new design, a rotation is fully downhill from an excited state. And as - due to the laws of quantum dynamics - one photon excites both chromophores simultaneously, there are no major bottlenecks to limit the speed of rotation, which is therefore two to three orders of magnitude greater than that of the classic 'Feringa' motors.

All of this is still theoretical, based on calculations and simulations. 'Building one of these motors is not trivial', acknowledges Jansen. The chromophores are widely used but slightly fragile. Creating a triple bond axis is also not easy. Jansen expects that someone will try to build this organic molecule now that its properties have been described. And it is not one specific molecule that has these properties, adds Majumdar: 'We have created a general guide for the design of this type of molecular motor.'

Blueprint

As for applications, Jansen can think of a handful. They might be used to power drug delivery or move nanoscale objects on a surface, or they might be used in other nanotech applications. And the rotational speed is well above that of the average biophysical process, so it may be used to control biological processes. In the simulations, the motors were attached to a surface but they will also rotate in solution. Jansen: 'It will require a lot of engineering and tweaking to realize these motors but our blueprint will deliver a brand-new type of molecular motor.'

Credit: 
University of Groningen

If you ride an e-scooter, take safety precautions

image: Kathleen Yaremchuk, M.D., Chair of the Department of Otolaryngology - Head and Neck Surgery at Henry Ford Health System, and the study's senior author.

Image: 
Henry Ford Health System

As pandemic restrictions begin to loosen around the country and summer temperatures rise, more people will be moving about on public rideshare electric scooters. With that comes this warning: Ride with safety.

A Henry Ford Health System study published in The Laryngoscope shows that head and neck injuries caused by use of e-scooters have been on the rise since rideshare systems were introduced to the public in late 2017.

Kathleen Yaremchuk, M.D., Chair of the Department of Otolaryngology - Head and Neck Surgery and the study's senior author, said that a review of emergency visits over the last three years showed e-scooter injuries have increased significantly, with many involving the head and neck. "Since e-scooters became a popular form of transportation in major cities, the number of injuries jumped significantly because they've become more available to more people," said Dr. Yaremchuk.

Henry Ford researchers looked at available data from the U.S. Consumer Product Safety Commission and found that between January 2009 and December 2019 there were more than 100,000 e-scooter related injuries reported. The study found that head and neck injuries made up nearly 28% of the total e-scooter related injuries reported.

Dr. Yaremchuk hopes that local and national regulations can be developed to increase ride safety.

Researchers found that since the introduction of rideshare e-scooters, motorized vehicles that can reach speeds of up to 35 miles per hour, injuries have increased as more people gravitate to the inexpensive and convenient form of transportation used mostly in crowded urban centers and on college campuses.

"We hope our findings will help educate users of rideshare e-scooters about the potential for serious head and neck injuries and the safety precautions they should take," said Dr. Yaremchuk.

E-scooters are part of the micromobility revolution that has been called the future of urban transportation. Serious injuries, though, are mounting among riders who find themselves unprotected against cars, bicycles and fixed street fixtures like light poles and signs.

The study found common types of e-scooter related head and neck injuries included:

Internal organ injuries, including brain injuries, 32.5%

Lacerations, 24.9%

Contusions and abrasions, 15.6%

Concussions, 11.1%

Fractures, 7.8%

"As a physician, I would recommend that people who use this mode of transportation wear a helmet and apply the same approach as when driving a car," said Samantha Tam, M.D., a Henry Ford otolaryngologist and study co-author.

If you plan to ride an electric scooter, here are a few safety tips to reduce your risk for injury:

Wear a helmet, knee and elbow pads

Wear appropriate clothing that won't constrict your body while riding

Understand the specifications, features and capabilities of the specific e-scooter you are riding

Observe traffic laws, focus on the path ahead and watch for pedestrians, cars and other obstacles

Safety research has shown that e-scooter accidents involved cars and ground obstacles such as curbs, poles and even manhole covers. Other factors that led to accidents include mechanical problems such as failing brakes and wheels, and distracted riders.

Credit: 
Henry Ford Health

New in the Hastings Center Report, May-June 2021

In the Name of Racial Justice: Why Bioethics Should Care about Environmental Toxins

Keisha Ray

Facilities that emit hazardous toxins, such as toxic landfills, oil refineries, and chemical plants, are disproportionately located in predominantly Black, Latinx, and Indigenous neighborhoods. Environmental injustices like these threaten the just distribution of health itself. Facilities that emit environmental toxins wrongly make people's race, ethnicity, income, and neighborhood essential to who is allowed to breathe clean air and drink clean water, and thus, who is allowed to be healthy. This can be seen in the environmental crises in Louisiana; Mississippi; Houston, Texas; and Flint, Michigan. Since bioethics purports to concern itself with the principle of justice as applied to individuals and increasingly to populations, the field ought to concern itself more with environmental injustice. Keisha Ray is an assistant professor at McGovern Medical School at the University of Texas Health Science Center at Houston.

Credit: 
The Hastings Center

Thin, stretchable biosensors could make surgery safer

image: The new biosensors allow for simultaneous recording and imaging of tissues and organs during surgical procedures. In this photo, researchers attached the biosensor to the heart of a pig that was obtained commercially.

Image: 
Photo Bongjoong Kim, Purdue University.

LOS ALAMOS, N.M., June 17, 2021 -- A research team from Los Alamos National Laboratory and Purdue University has developed bio-inks for biosensors that could help localize critical regions in tissues and organs during surgical operations.

"The ink used in the biosensors is biocompatible and provides a user-friendly design with excellent workable time frames of more than one day," said Kwan-Soo Lee, of Los Alamos' Chemical Diagnostics and Engineering group.

The new biosensors allow for simultaneous recording and imaging of tissues and organs during surgical procedures.

"Simultaneous recording and imaging could be useful during heart surgery in localizing critical regions and guiding surgical interventions such as a procedure for restoring normal heart rhythms," said Chi Hwan Lee, the Leslie A. Geddes Assistant Professor of Biomedical Engineering and Assistant Professor of Mechanical Engineering and, by courtesy, of Materials Engineering at Purdue University.

Los Alamos was responsible for formulating and synthesizing the bio-inks, with the goal of creating an ultra-soft, thin and stretchable material for biosensors that is capable of seamlessly interfacing with the surface of organs. They did this using 3D-printing techniques.

"Silicone materials are liquid and flow like honey, which is why it is very challenging to 3D-print without sagging and flowing issues during printing," Kwan-Soo Lee said. "It is very exciting to have found a way to create printed inks that do not have any shape deformation during the curing process."

The bio-inks are softer than tissue, stretch without experiencing sensor degradation, and have reliable natural adhesion to the wet surface of organs without needing additional adhesives.

Craig Goergen, the Leslie A. Geddes Associate Professor of Biomedical Engineering at Purdue University, aided with the in vivo assessment of the patch via testing in both mice and pigs. The results showed the biosensor was able to reliably measure electrical signal while not impairing cardiac function.

The research was published today in Nature Communications. It was funded by Science Campaign 2.

Credit: 
DOE/Los Alamos National Laboratory

Study explores how the elderly use smart speaker technology

Researchers from Bentley University, in partnership with the Waltham Council on Aging in Massachusetts, and as part of a study funded by the National Science Foundation, have been exploring how the elderly use smart speakers at home. Waltham, a satellite city about eight miles west of Cambridge, has a population of about 60,000, about one in six of whom is an elderly citizen. The purpose of the study was to understand how the elderly use smart speaker technology at home. A smart speaker is a hardware device that is always on. When a wake word triggers the software contained in the device, the smart speaker listens to the command and provides a response or carries out the command (accessing resources on the internet as needed). News stories about smart speakers have often contemplated issues such as privacy but often lack actual usage data.

The researchers deployed smart speakers in seven homes with individuals or couples aging-at-home over several months to collect actual usage data. The study followed research protocols and was designed to ensure confidentiality. The observation period ranged from 178 days (about 6 months) to 410 days (about a year and a month) for different users. The usage data was analyzed to characterize use patterns. The biggest finding was heterogeneity in use patterns. In general, mornings and afternoons were more active, uses such as music and news were most prevalent, and most interactions were simple commands rather than longer conversations. Most participants did not demonstrate sustained use over a long period. As an example, one participant who used the smart speaker for more than a year had an enthusiastic start, followed by moderate use (a pattern audible in the study's 14-second sonification, in which early high notes give way to repeating low notes).
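A minimal sketch of the kind of aggregation behind such use patterns is shown below (the log entries, timestamps and categories are hypothetical; the study's own analysis pipeline is not reproduced here).

```python
# Illustrative only: summarise a smart-speaker usage log by hour of day and
# by command category, the type of summary behind the reported use patterns.
from collections import Counter
from datetime import datetime

# hypothetical log entries: (timestamp, command category)
log = [
    ("2020-03-01 08:12", "news"),
    ("2020-03-01 08:15", "weather"),
    ("2020-03-01 14:30", "music"),
    ("2020-03-02 09:05", "music"),
    ("2020-03-02 20:45", "music"),
]

by_hour = Counter(datetime.strptime(ts, "%Y-%m-%d %H:%M").hour for ts, _ in log)
by_category = Counter(category for _, category in log)

print("commands per hour of day:", dict(by_hour))
print("commands per category:", dict(by_category))
```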

"Smart speaker technology has the potential to be more than a surveillance or shopping device particularly for population segments such as the aging-at-home. Our study shows that the elderly can use this voice-based technology more easily. Future research can develop specific voice skills aimed at this population segment," says Dr. Sandeep Purao, a professor of information and process management and one of the lead researchers. "The usage data streams provide several opportunities for analysis that provide a window into how the elderly use the smart speaker technology," says Dr. Haijing Hao, a computer information systems professor who also participated in the study.

The results of this work are being published in the Big Data Research journal. The team will continue the work by developing software platforms for the elderly to manage health information and exploring the design of voice skills for the elderly.

The work is important. The US Census Bureau projects that by 2040, about one in five Americans will be age 65 or older, up from about one in eight in 2000. Our study shows that technologies such as smart speakers have the potential for sustained use by the elderly, and therefore, can be the basis for developing new solutions for this population segment.

Credit: 
Bentley University

Women in science propose changes to discriminatory measures of scientific success

image: Ana K. Spalding and 23 other women scientists from around the world, advocate for a shift in the value system in science, to emphasize a more equal, diverse and inclusive academic culture.

Image: 
Jorge Aleman

When Ana K. Spalding, a Research Associate at the Smithsonian Tropical Research Institute (STRI) and Assistant Professor of Marine and Coastal Policy at Oregon State University (OSU), talks about mentorship in academia, she describes it as a meaningful relationship. It goes beyond conversations about research and publications, and into shared experiences. This is just one approach--proposed by Spalding and 23 other women scientists from around the world in a new article published in PLOS Biology--that calls for a shift in the value system of science to emphasize a more equal, diverse and inclusive academic culture.

The authors came together after reading a paper in Nature Communications that was later retracted, which claimed that women in science fare better with male rather than female mentors. That paper used data on co-authorship among senior and junior researchers and citations as measures of mentorship and success. Yet these metrics are flawed and biased against marginalized groups. The data show that women receive more manuscript rejections and are less likely to be published in prestigious journals than men, while ethnically diverse scientific teams experience lower acceptance rates and fewer citations than less diverse teams.

Meanwhile, productivity is not always a sign of a supportive working environment, and recent studies show that graduate students are twice as likely to experience mental health challenges, compared to the general population with equivalent education. For women of color in STEMM fields, the trend is even more pronounced. They face both systemic sexism and racism, along with daily microaggressions. The situation is not much better for sexual minorities.

Spalding is Afro-Panamanian, and a minority among tenured faculty at OSU. According to a 2019 article in the Chronicle of Higher Education, less than 1 percent of tenured professors at OSU look like her. Things are not much different in the state of Oregon, where only about 2 percent of the population is Black or African American, per the U.S. Census Bureau. So, around OSU, challenges range from lack of representation in predominantly white spaces where her presence or expertise are questioned, to finding places to get her hair done (where hair, as a representation of Blackness, is often questioned or seen as unprofessional).

As a graduate student, she never felt represented. Looking back, Spalding understands how important it would have been to have the kind of support where her identity and culture were considered holistically. In that sense, the PLOS Biology article encourages individuals to explore a variety of mentoring relationships throughout their careers: relationships that may help them achieve their different goals and needs beyond academia. This is one crucial way to promote wellbeing and foster a sense of belonging for mentees with diverse backgrounds, increasing their retention in science.

"Don't think you have to be a certain way to belong," Spalding said. "Feel confident that you belong, but also look for people who accept you as you are."

Ultimately, the authors call for the scientific community, in particular those in positions of power and privilege, to take strong action towards helping ensure safe and healthy work environments for scientists from diverse backgrounds, while supporting a more inclusive value system in science that embraces the multifaceted nature of scientific impact. With these changes in place, the scientific world will not only have a greater capacity for innovation, which is essential for addressing the pressing challenges of our times, such as pandemics and climate change, but will be a better place from a purely humanistic perspective.

As an example, Katalin Kariko, a Hungarian scientist who immigrated to the United States, struggled to find a permanent position for decades, relying instead on senior scientists to take her in. Now, her groundbreaking research in mRNA has made possible the development of the Pfizer-BioNTech and Moderna vaccines against Covid-19.

"I'm particularly excited about the idea of expanding our definitions of science to be more inclusive of applied and relevant contributions to societal issues such as climate change (which may or may not get into the highest impact journals)," said Spalding. "Furthermore, I am keen on supporting a 'multidimensional mentorship model' that emphasizes mentee wellbeing in academia."

Credit: 
Smithsonian Tropical Research Institute

On the road to practical, low-cost superconductors with unexplored materials

image: Scientists systematically optimized the composition of (Gd0.33Y0.33-xEr0.33+x)-123 samples by tuning the ratio of Y and Er in the 211 precursor (specifically, x=0, 0.05, 0.1, 0.15, and 0.2). The sample corresponding to x=0.2 showed the highest trapped field.

Image: 
Muralidhar Miryala from SIT, Japan

Superconductors are something like a miracle in the modern world. Their unique property of zero resistance can revolutionize power transmission and transport (e.g., Maglev trains). However, most conventional superconductors require cooling down to extremely low temperatures that can only be achieved with liquid helium, a rather expensive coolant. Materials scientists are now investigating "high-temperature superconductors" (HTSs), which can be cooled to a superconducting state with the significantly cheaper liquid nitrogen (which boils at a much higher temperature than liquid helium).

Currently, a prospective HTS material for such an exploration is (RE)Ba2Cu3Oy, RE-123, where RE stands for "Rare Earth" elements such as yttrium (Y), gadolinium (Gd), erbium (Er), neodymium (Nd), or europium (Eu). These materials in the single-crystalline form are able to overcome physical constraints that weaken superconductivity, thereby opening doors to a variety of engineering applications.

In a recent study published in the Journal of Alloys and Compounds, a team of scientists from Shibaura Institute of Technology, Japan, led by Prof. Muralidhar Miryala, a pioneer in the area of HTS, developed single-crystalline bulk superconductors that can trap magnetic fields within them in a manner similar to how ferromagnets (iron, nickel, cobalt) retain the magnetic field. "The trapped field is one of the most relevant parameters in many practical applications of bulk RE-123 and is related to the bulk diameter," explains Prof. Miryala.

Among the several techniques available for fabricating bulk RE-123, the team opted for an infiltrated growth (IG) technique, in which solid (RE)2BaCuO5 (RE-211) reacts with a Ba-Cu-O liquid phase to form the superconducting RE-123. Prof. Miryala lays out the motivation behind their approach: "The IG technique produces RE-123 bulks free of inhomogeneities, can be performed in air, and can be scaled up to industrial levels. Moreover, it provides a fertile ground for exploring ternary RE element systems, which have not been studied until now."

Recently, the team investigated the ternary (Gd0.33Y0.33-xEr0.33+x)-123 bulk system, optimizing its composition by tuning the ratio of Y and Er in the 211 precursor (specifically, x = 0, 0.05, 0.1, 0.15, and 0.2). The team characterized the superconducting phases in the samples using X-ray diffraction and measured the trapped field and superconducting transition temperature (Tc). Finally, they carried out microstructural and chemical analysis using field-emission scanning electron microscope (FESEM) and energy-dispersive X-ray spectroscopy (EDX).
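For reference, the nominal rare-earth ratios implied by each value of x can be tabulated in a few lines (a simple bookkeeping illustration of the tuning described above, not part of the authors' experimental workflow).

```python
# Nominal (Gd0.33 Y0.33-x Er0.33+x) compositions for the x values studied.
for x in (0.0, 0.05, 0.10, 0.15, 0.20):
    gd, y, er = 0.33, 0.33 - x, 0.33 + x
    print(f"x = {x:.2f}:  Gd {gd:.2f}  Y {y:.2f}  Er {er:.2f}")
# x = 0.20 corresponds to (Gd0.33Y0.13Er0.53)-123, the sample with the highest trapped field
```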

XRD confirmed the single-crystalline nature of the RE-123 bulks, with Tc values in the range of 91.5-92 K, well above the boiling point of liquid nitrogen (77 K). The highest trapped field, 0.53 tesla, was observed in (Gd0.33Y0.13Er0.53)-123 (x=0.2). FESEM and EDX identified finely dispersed (Gd, Y, Er)-211 particles in all samples, with an Er-rich distribution of precipitates for x=0.2, the sample that also showed the best superconducting performance.

"The findings in our study provides a notion of how to implement a low-cost production of high-performance (Gd, Y, Er)BCO bulks for real-life applications such as magnetic levitation, superconducting bearing, flywheel energy storage, magnetic resonance imaging, rotary motors, drug delivery, and water purification," comments a contemplative Prof. Miryala.

It looks like a superconducting future may not be too far off!

Credit: 
Shibaura Institute of Technology

Convalescent plasma improves survival in COVID-19 patients with blood cancers

SAN ANTONIO (June 17, 2021) — Convalescent plasma therapy was associated with better survival in blood cancer patients hospitalized with COVID-19, especially in sicker patients. The findings by the COVID-19 and Cancer Consortium (CCC19) are newly published in the peer-reviewed journal JAMA Oncology.

The Mays Cancer Center, home to UT Health San Antonio MD Anderson, is part of the CCC19. The international consortium is composed of 124 medical centers and institutions in North and South America that conduct research to learn how COVID-19 affects cancer patients.

Dimpy Shah, MD, PhD, is an epidemiologist and assistant professor of population health sciences at The University of Texas Health Science Center at San Antonio, and a member of its Mays Cancer Center. She serves on the CCC19 steering committee, leads the consortium’s Epidemiology Core Committee and is a co-senior author on the study.

“Early case reports suggested that cancer patients with COVID-19 benefitted from convalescent plasma, but this is the first analysis that associated convalescent plasma with improved survival using this large, real-world data set,” Dr. Shah said.

The analysis compared the 30-day death rates of hospitalized adults with both blood cancer and COVID-19 from patient data supplied by the CCC19 consortium institutions. The analysis compared treatment data from 143 patients who received convalescent plasma and 823 who did not.

“Our study showed a 48% reduced risk of death for COVID patients who had blood cancer and had received convalescent plasma compared to similar patients who did not receive this treatment,” she said. “This survival benefit with convalescent plasma was even greater in patients who were admitted to the intensive care unit (60% reduced risk of death) and those who needed mechanical ventilation (68% reduced mortality),” she said.

Blood cancers are associated with defects in the immune system. They begin either in the bone marrow, where blood is made, or in cells of the immune system. These types of cancer include leukemia, lymphoma and multiple myeloma.

Plasma is the largest component of human blood. Red blood cells, white blood cells and platelets are other major blood components. Convalescent plasma is plasma donated by patients who have recovered from an infection, such as COVID-19, and has been used to treat other patients suffering from the disease. Convalescent plasma was used to treat patients during the 1916 poliomyelitis outbreak in New York and during the 1918 Spanish flu epidemic, as well as later for viral infections.

“While acknowledging the limitations of non-randomized observational data, this study provides an important signal regarding the benefits of convalescent plasma. We recommend that researchers conduct randomized clinical trials to prospectively evaluate the benefits of convalescent plasma in patients with blood cancer and severe COVID-19,” added Pankil Shah, MD, PhD, MSPH, assistant professor of urology at UT Health San Antonio. As the lead data scientist, Dr. Pankil Shah performed the analysis for this CCC19 study.

Ruben Mesa, MD, FACP, executive director of the Mays Cancer Center, added, “Membership in the CCC19 is just one example of our cancer center’s commitment to provide the best possible care for patients throughout South Texas. In addition to providing evidence-based treatments, we also offer our patients participation in hundreds of cancer clinical trials led nationally and at our cancer center that evaluate the newest possible treatments.”

He continued: “Throughout the pandemic we have safely cared for our patients and encourage the public to continue getting cancer screenings to avoid the possible progression of undiagnosed cancer to stages where it is more difficult to treat. We are open and ready to safely provide cancer screenings, research opportunities and treatment to the people of South Texas.”

Credit: 
University of Texas Health Science Center at San Antonio

Mountain fires burning higher at unprecedented rates

Forest fires have crept higher up mountains over the past few decades, scorching areas previously too wet to burn, according to researchers from McGill University. As wildfires advance uphill, a staggering 11% of all Western U.S. forests are now at risk.

"Climate change and drought conditions in the West are drying out high-elevation forests, making them particularly susceptible to blazes," says lead author Mohammad Reza Alizadeh, a PhD student at McGill University under the supervision of Professor Jan Adamowski. "This creates new dangers for mountain communities, with impacts on downstream water supplies and the plants and wildlife that call these forests home."

Climate warming has diminished 'flammability barrier'

In a study published in Proceedings of the National Academy of Sciences, the researchers analyzed records of fires larger than 405 hectares in the mountainous regions of the contiguous Western U.S. between 1984 and 2017. Their results show that climate warming has diminished the 'high-elevation flammability barrier' - the point where forests historically were too wet to burn regularly because of the lingering presence of snow. The researchers found that fires advanced about 252 meters uphill in the Western mountains over those three decades.

The amount of land that burned increased across all elevations during that period; however, the largest increase was at elevations above 2,500 meters (about 8,200 feet). Additionally, the area burning above that elevation more than tripled in 2001-2017 compared with 1984-2000. Over the past 34 years, rising temperatures have extended fire territory in the West by an additional 81,500 square kilometers of high-elevation forests, an area similar in size to South Carolina.
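The snippet below is a hedged sketch of the kind of period comparison this describes (the fire records are made up for illustration; the actual fire database analyzed in the study is not reproduced here).

```python
# Illustrative sketch: total burned area above 2,500 m in 1984-2000 versus
# 2001-2017, given a table of fire records (year, elevation, burned area).
fires = [
    # (year, elevation_m, burned_area_ha) -- made-up example records
    (1987, 2650, 1200.0),
    (1995, 2300, 3400.0),
    (2004, 2700, 5100.0),
    (2012, 2900, 2600.0),
    (2016, 2550, 4800.0),
]

def burned_above(records, min_elev, year_range):
    """Sum burned area for fires at or above min_elev within the year range."""
    lo, hi = year_range
    return sum(area for year, elev, area in records
               if elev >= min_elev and lo <= year <= hi)

early = burned_above(fires, 2500, (1984, 2000))
late = burned_above(fires, 2500, (2001, 2017))
print(f"burned above 2,500 m: {early:.0f} ha (1984-2000) vs {late:.0f} ha (2001-2017)")
```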

"Climate change continues to increase the risk of fire, and this trend will likely continue as the planet warms. More fire activity higher in the mountains is yet another warning of the dangers that lie ahead," says co-author Jan Adamowski, a Professor in the Department of Bioresource Engineering at McGill University.

Credit: 
McGill University