
Magnetic field of a spiral galaxy

This image shows the huge extent of a spiral galaxy's magnetic field. The galaxy NGC 4217 is a star-forming spiral galaxy, similar to our own Milky Way, located 67 million light-years from Earth in the constellation Ursa Major. The galaxy is seen edge-on in a visible-light image from the Sloan Digital Sky Survey and Kitt Peak National Observatory, and the magnetic field lines, shown in green, are revealed by the National Science Foundation's Karl G. Jansky Very Large Array (VLA) radio telescope.

The magnetic field lines extend as much as 22,500 light-years beyond the galaxy's disk. Scientists know that magnetic fields play an important role in many processes, such as star formation, within galaxies. However, it is not fully understood how such huge magnetic fields are generated and maintained. A leading explanation, called the dynamo theory, suggests that magnetic fields are generated by the motion of plasma within the galaxy's disk. Ideas about the cause of the kinds of large vertical extensions seen in this image are more speculative, and astronomers hope that further observations and more analysis will answer some of the outstanding questions.

"This image clearly shows that when we think of galaxies like the Milky Way, we should not forget that they have galaxy-wide magnetic fields," said Yelena Stein, of the Centre de Données astronomiques de Strasbourg, leader of the study.

The scientists who produced the image are reporting their results in the journal Astronomy & Astrophysics.

Credit: 
National Radio Astronomy Observatory

Tropical Storm Douglas organizing in NASA infrared imagery

image: On July 21 at 6 a.m. EDT (1000 UTC), the MODIS instrument aboard NASA's Aqua satellite gathered temperature information about Tropical Storm Douglas' cloud tops. MODIS found several areas of powerful thunderstorms (red) where temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 Celsius).

Image: 
NASA/NRL

Tropical Depression 8E developed on July 20 and quickly organized into a tropical storm. Infrared NASA satellite imagery revealed that Tropical Storm Douglas contained strong storms and showed banding of thunderstorms around its center.

Tropical Depression 8E formed about 905 miles (1,460 km) southwest of the southern tip of Baja California, Mexico, by 11 a.m. EDT on July 20. Within 12 hours, 8E had strengthened into a tropical storm and was renamed Douglas.

On July 21 at 6 a.m. EDT (1000 UTC), the Moderate Resolution Imaging Spectroradiometer or MODIS instrument aboard NASA's Aqua satellite analyzed Douglas' cloud tops in infrared light. Infrared data provides temperature information, and the strongest thunderstorms, which reach high into the atmosphere, have the coldest cloud-top temperatures.

The National Hurricane Center (NHC) noted that Douglas' center is also now embedded beneath a central dense overcast (a large central area of thunderstorms surrounding its circulation center) in infrared imagery, near an area of cold overshooting cloud tops. Aqua found several areas of strong storms around the center of Douglas' circulation and in broken bands of thunderstorms wrapping into that low-level center. In those areas, temperatures were as cold as or colder than minus 70 degrees Fahrenheit (minus 56.6 Celsius). Cloud top temperatures that cold indicate strong storms with the potential to generate heavy rainfall.

At 10 a.m. EDT (1500 UTC), the center of Tropical Storm Douglas was located near latitude 12.4 degrees north and longitude 124.2 degrees west, about 2,110 miles (3,390 km) east of Hilo, Hawaii.

Maximum sustained winds have increased to near 65 mph (100 kph) with higher gusts.

The estimated minimum central pressure is 998 millibars. Douglas is moving toward the west-southwest near 15 mph (24 kph). A turn toward the west at a similar forward speed is expected later today, followed by a turn toward the west-northwest Wednesday night (July 22).

NHC forecaster Robbie Berg said, "The low-[wind]shear, warm sea surface temperature environment within which Douglas is moving is a recipe for continued strengthening, potentially at a rapid rate, for the next 48 hours."

Additional strengthening is forecast during the next several days, and Douglas could become a hurricane later today.

Tropical cyclones/hurricanes are the most powerful weather events on Earth. NASA's expertise in space and scientific exploration contributes to essential services provided to the American people by other federal agencies, such as hurricane weather forecasting.

For updated forecasts, visit: http://www.nhc.noaa.gov

Credit: 
NASA/Goddard Space Flight Center

Study calls for review of rice and sugar in food subsidy programme

image: The research team saw no evidence of improvements when children received subsidized rice and sugar.

Image: 
Lancaster University

The nutritional benefit of rice and sugar distributed by a national food subsidy programme in India may be limited, says new research published today.

India's main food subsidy programme, the Public Distribution System (PDS), provides sugar, rice, and wheat to households at low cost to improve their nutrition intake and attain food security.

Although the programme aims to improve nutritional outcomes through its subsidies, the research team saw no evidence of improvements when children received subsidized rice and sugar.

'Subsidising rice and sugar? The Public Distribution System and Nutritional Outcomes in Andhra Pradesh India', carried out by a research team from Oxford and Lancaster Universities, the BITS Pilani in India, and Bocconi University in Italy, is published in the Journal of Social Policy.

Today, one in every nine people in the world is hungry. In India, 38% of children under 5 experience long-term malnutrition that impacts their growth, cognition and psycho-social development and perpetuates a cycle of intergenerational poverty. This already alarming situation is compounded by the Covid-19 outbreak and the measures to contain it.

Food subsidy programs are a key component of efforts to combat food insecurity and malnutrition around the globe, including in India.

Subsidy programs can offer important caloric and nutrient supplementation, and may also free up income for households to spend on other vital items. However, it is also possible that subsidizing items of limited nutritional value can promote unhealthy dietary patterns.

"Importantly, our findings suggest that nutritional outcomes and food subsidies need to be considered over time rather than as a snapshot. This is essential for understanding not just the short-term effects of subsidies, but also the association with long-term nutritional outcomes," said Dr Jasmine Fledderjohann, of Lancaster University.

"The subsidised foods available in the PDS may very well prevent severe malnutrition in the short-term by addressing caloric deficiencies, but rice and sugar subsidies appear not to improve longer-term nutritional outcomes," she explained.

"Our findings suggest the subsidies should be carefully reviewed. It is possible that other more nutrient-dense foods could offer greater benefits for improving nutrition," said Dr Sukumar Vellakkal of the BITS Pilani Goa campus.

The study also found that, particularly for wealthier households, the subsidies encouraged the consumption of less nutritious foods, with children in households receiving sugar subsidies snacking on sugary treats.

They also found that boys received more rice and sugar than girls, which is consistent with broader evidence of son preference in India.

A range of previous research has found no evidence of gender disparities in food consumption, but this body of research has generally focused on whether or not boys and girls consumed specific foods, and not on the quantity of those foods.

Findings in this report suggest that girls may receive the same food items as boys, but in more limited quantities.

Data on children's nutrition came from the longitudinal Young Lives Survey, conducted in Andhra Pradesh.

The study constitutes an important contribution to the evidence on food subsidy schemes and nutrition outcomes.

As countries around the globe struggle to feed their populations amid record unemployment, food system disruptions, and the social distancing associated with the COVID-19 pandemic, it is vital to consider carefully which items are included in food aid programmes and what the long-term consequences of specific programmes of food provision are.

Findings from this study highlight that it is important from a long-term perspective to consider carefully the nutritional value of foods on offer through state provisioning. Addressing caloric shortfalls in the short-term may save lives, but subsidized food items of limited nutritional value may not improve longer-term problems.

Credit: 
Lancaster University

New insights into anxiety

Tension while waiting for test results, the fear of not making it, the feeling of being under pressure, apprehension: these emotional states often come with physical symptoms such as backache, headache, nausea, tachycardia, tremors, difficulty breathing and dizziness. These symptoms, which vary in intensity and duration, are all associated with anxiety, which encompasses a variety of disorders. While there is no definitive cure for anxiety, neuroscientific research is making progress toward new diagnostic tools and more effective treatments.

The study conducted by researchers of the University of Trento, which has just been published in Scientific Reports, pursues these goals and helps draw a line between different aspects of anxiety in order to find the best treatment for each one. The team of researchers focused on what goes on in the brain of people with the two main forms of the condition: trait and state anxiety, respectively the stable, chronic form and the temporary form.

Nicola De Pisapia, a researcher of the Department of Psychology and Cognitive Science of the University of Trento and scientific coordinator of the study, explained the difference between the two: "If you are feeling very tense today, but you usually are calm and quiet, you have high state and low trait anxiety. Whereas if you are unusually quiet, while in general you feel nervous, you may have low state and high trait anxiety. Therefore, state anxiety is a temporary condition, while trait anxiety is usually a stable feature of a person". Clinical experience shows, among other things, that individuals with trait anxiety have difficulties managing stressful situations, are at risk of depression, have altered cognitive functions, are less socially competitive and tend to develop psychopathological disorders.

Differentiating between trait and state anxiety is helpful to choose the most appropriate treatment for patients and to prevent the condition from becoming chronic. "Our study makes it clear that it is fundamental to treat individuals with state anxiety so that they do not develop trait anxiety, which is a chronic condition. One way to treat it is to reduce anxiety as soon as it manifests itself, for example using relaxation techniques, physical activity and other means that improve personal well-being in general", commented De Pisapia.

The goal of the study was to better understand the neural bases of the two types of anxiety. "Our research group - commented De Pisapia - used MRI to study the anatomy and resting-state activity of the brain in more than 40 individuals. We then correlated our measurements with variations in the participants' state and trait anxiety, assessed with standard questionnaires that are also used in clinical practice. We found that the most stable aspects of trait anxiety are associated with specific anatomical traits, which are therefore constant and lead to the development of repetitive, self-generated negative thoughts, while the features of state anxiety correlate with the functional connectivity of the brain, a transient and dynamic activity".

In other words, trait anxiety correlates with permanent anatomical features (in the anterior cingulate cortex and medial prefrontal cortex), while state anxiety manifests as temporary changes in brain activity.

The study conducted by the University of Trento also led to findings that can be useful in clinical practice. "Based on our results - concluded Nicola De Pisapia - a strategic improvement in anxiety regulation in high trait anxiety individuals could be achieved via pharmacological and/or neurostimulation methods (for example with Transcranial Magnetic Stimulation or transcranial Direct Current Stimulation). Finally, these findings may lead to the creation of new diagnostic tools and treatments aimed at ameliorating the symptoms of anxiety disorders and treating them before they become chronic".

Credit: 
Università di Trento

Smile: Atomic imaging finds root of tooth decay

ITHACA, N.Y. - A collaboration between researchers from Cornell University, Northwestern University and University of Virginia combined complementary imaging techniques to explore the atomic structure of human enamel, exposing tiny chemical flaws in the fundamental building blocks of our teeth. The findings could help scientists prevent or possibly reverse tooth decay.

The team's paper, "Chemical Gradients in Human Enamel Crystallites," was published July 1 in Nature. Cornell's contribution was led by Lena Kourkoutis, associate professor in applied and engineering physics. Derk Joester, professor of materials science and engineering at Northwestern, directed the research.

The paper's co-lead authors are Northwestern doctoral student Karen DeRocher and postdoctoral researcher Paul Smeets.

Thanks to its high mineral count, tooth enamel is a sturdy substance that can withstand the rigors of chewing, although excessive acid in the mouth can make it vulnerable to decay. While scientists have previously peeked into the crystallites that compose enamel, nanoscale images of its structure and chemical composition have been harder to come by. In one method, scanning transmission electron microscopy, or STEM, a beam of electrons is shot through a sample. But that process has its limits.

"Enamel is mechanically a very, very strong material, but when you put it in the electron microscope, it's very sensitive to the electron beam," Kourkoutis said. "So compared to the crystalline materials that you find in electronics, for example, you can only put a fraction of the number of electrons into an enamel crystal. Normally, pushing down to the atomic scale means you have to put more electrons into the material. But if it damages the material before you get the information out, then you're lost."

In recent years, Joester's Northwestern group has imaged sensitive biological materials with atom probe tomography, a process that essentially strips atoms off a sample's surface one at a time and reconstructs the structure of the material.

At the same time, Cornell researchers at PARADIM (Platform for the Accelerated Realization, Analysis and Discovery of Interface Materials), a National Science Foundation-supported user facility, have advanced a form of low-temperature electron microscopy that can image the atomic structure of radiation-sensitive samples. The technique can also safely map a sample's chemical composition by measuring how much energy is lost when the electrons interact with the atoms.

"When you operate at low temperature, the material becomes more robust against electron beam damage," said Kourkoutis, who directs PARADIM's electron microscopy facility. "We are now working at the intersection between the developments in the physical sciences which have pushed electron microscopy to the atomic scale and the developments in the life sciences in the cryogenic field."

The two university groups linked up after Smeets, a member of Joester's group, attended PARADIM's summer school on electron microscopy in 2017. There, he learned how PARADIM's cryogenic electron microscopy capabilities could complement Northwestern's human enamel project.

Smeets worked with Kourkoutis' doctoral students Berit Goodge and Michael Zachman, Ph.D. '18, co-authors of the new paper. The group performed cryogenic electron microscopy on enamel samples that were cooled with liquid nitrogen to around 90 kelvins, or minus 298 degrees Fahrenheit.

By combining their complementary techniques, the Cornell and Northwestern researchers were able to image an enamel crystallite and its hydroxylapatite atomic lattice. But all was not crystal clear: The lattice contained dark distortions - caused by two nanometric layers with magnesium, as well as sodium, fluoride and carbonate ion impurities near the core of the crystal.

Additional modeling confirmed the irregularities are a source of strain in the crystallite. Paradoxically, these irregularities and the enamel's core-shell architecture may also play a role in reinforcing the enamel, making it more resilient.

The researchers say the findings could lead to new treatments for strengthening enamel and combating cavities.

"On the foundation of what we discovered, I believe that atom probe tomography and correlative electron microscopy will also have tremendous impact on our understanding of how enamel forms, and how diseases like molar incisor hypomineralization disrupt this process," Joester said.

And mouths aren't the only beneficiaries of cryogenic electron microscopy. Kourkoutis is also using the process to probe the chemistry in energy systems, such as batteries and fuel cells that contain a mix of soft electrolytes and hard electrode materials.

Credit: 
Cornell University

WashU-developed holograms help physicians during cardiac procedure

Bringing a little bit of science fiction into an operating room, a team of engineers and physicians at Washington University in St. Louis has shown for the first time that using a holographic display improves physician accuracy when performing a procedure to treat irregular heartbeat.

Jennifer N. Avari Silva, MD, associate professor of pediatrics at the School of Medicine, and Jonathan Silva, associate professor of biomedical engineering in the McKelvey School of Engineering, co-led a team that tested a Microsoft HoloLens headset with custom software during cardiac ablation procedures on patients at St. Louis Children's Hospital. Results of the trial were published in the Journal of the American College of Cardiology - Electrophysiology (July 2020), at the International Conference on Human Computer Interaction (July 10, 2020) and in the IEEE Journal of Translational Engineering in Health and Medicine (July 3, 2020).

In an ablation procedure using existing technology, known as an electroanatomic mapping system (EAMS), a technician controls the catheters while the physician views the images on monitors presented in two different planes. The physician then has to mentally assemble a three-dimensional image of the heart. Jennifer Silva, also director of pediatric electrophysiology at St. Louis Children's Hospital, said that the standard-of-care technology is antiquated, and the team believed it could do better.

Jon Silva and his team of engineers created software for the Microsoft HoloLens headset that converts data from the catheters fed into the patient's heart into a geometric holographic image that hovers over the patient. The headset, which weighs roughly a pound, allows the physician to take control of the procedure, using his or her gaze to guide the controls while keeping hands free and sterile. Their system, the Enhanced Electrophysiology Visualization and Interaction System (ĒLVIS), provides a 3D digital image of the patient's electroanatomic maps, a picture of the inside of the heart, which physicians can measure and manipulate during the procedure.

Jon Silva said that the technology has caught up with the application. The headset allows the user to view his or her whole environment, including the patient, unlike virtual reality, which takes a user completely out of his or her environment.

"The old headsets were slow to update and made the users sick," he said. "The development of mobile phones and mobile technology and computer has enabled these kinds of displays and headsets."

To test the device, two physicians at St. Louis Children's Hospital received a short training session on the device before using it on a total of 16 pediatric patients. During a post-procedure waiting phase, the physicians were given 60 seconds to navigate to each of five target markers within the geometry of the heart, using both the 3D ĒLVIS and the 2D EAMS technology. The physicians were significantly more accurate with the ĒLVIS technology.

"Without the use of the ?LVIS 3D display, a significant fraction of ablation lesions, 34%, would be made outside of the target area, as opposed to 6% with ?LVIS 3D display," Jon Silva said. "We expect that this will improve patient outcomes and potentially reduce the need for repeat procedures."

Jennifer Silva said the team learned a lot from taking something from the lab to nearly market-ready.

"What ended up being equally important, if not more important, was that this was the springboard for everything that is to come, not only that we can visualize it better, but that we can control it," Jennifer Silva said. "There are people working in this extended reality space who have come to conclusions that the control is the strongest value-add, particularly in medical applications."

Credit: 
Washington University in St. Louis

Starve the cancer

Fighting cancer often means employing a suite of techniques to target the tumor and prevent it from growing and spreading to other parts of the body. It's no small feat -- the American Cancer Society predicts roughly 1.8 million new cases of cancer in the country in 2020, underscoring the need to identify additional ways to outsmart the runaway cells.

Researchers at UC Santa Barbara may have added to that arsenal by helping to identify a cellular mechanism that, if inhibited, could enable interruption of the signal to proliferate, as well as starvation of the malignant cells and their eventual death. Their research is published in the journal Science Signaling.

"This particular approach is unique in the sense that it targets mitochondrial metabolism," said organic chemistry professor and paper co-author Armen Zakarian, whose lab looks to nature to guide the molecules they synthesize. Though strategies continue to diversify, cancer chemotherapy typically works by damaging the cells' genes, rendering them unable to replicate successfully. Targeting the ability of the cancer cells to access the energy and biological molecules they need to carry out their functions is a relatively new tactic, and the subject of intense research, according to Zakarian.

The molecular workhorse of this project is derived from sea sponges -- Xestospongin B (XeB), isolated from Xestospongia exigua. First isolated years ago by study co-author Jordi Molgó, Zakarian said, the molecule was subsequently found by researchers, including Cesar Cárdenas, this paper's lead author, to have inhibitory effects on inositol triphosphate (IP3) receptors found on the endoplasmic reticulum (ER). The ER is a cell organelle that performs several functions, including molecule transport and storage, and synthesis of lipids, proteins and nucleic acids. In addition to faculty appointments at the Universidad Mayor and at the Geroscience Center for Brain Health and Metabolism in Chile, Cárdenas holds an adjunct position in UCSB's Department of Chemistry and Biochemistry.

But supply of the XeB molecule had run low. Production by the marine sponge is not guaranteed, and attempts to re-isolate the molecule were coming up short.

"That's where we came in," Zakarian said of the collaboration behind this paper. As a result of the synthetic access his lab provided, the researchers determined that by using XeB to block activation of IP3 receptors, they were able to prevent the subsequent calcium ion transport from the ER to the mitochondria -- a signal that kicks off processes in the mitochondria that produce chemical energy (ATP) and metabolic intermediates necessary for cell survival.

"There's no calcium in the mitochondria and it impacts the bioenergetics and, perhaps more importantly, the cell's metabolism," Zakarian said. In particular, the mitochondria fail to proceed with a modified metabolism favored by cancer cells that results in the Warburg effect, which, in addition to generating ATP, results in highly efficient conversion of nutrients into biomass (tumor growth). Cancer cells also were found to be particularly sensitive to the toxicity caused by XeB's interruption, while healthy cells remained viable.

ER-to-mitochondria calcium ion transport is essential also to the more typical and well-studied oxidative phosphorylation type of metabolism that normal and some cancer cells use for energy. Inhibiting inositol triphosphate receptors and lowering calcium ion uptake reduces ATP in this scenario, and prolonged inhibition, according to the researchers, "generates a bioenergetic crisis that results in >70% cell death" in tumorigenic breast and prostate cells metabolizing via oxidative phosphorylation.

While the bioenergetic crisis would be experienced also by non-cancerous cells exposed to XeB or a similar inhibitor, the energy requirements of cancer cells render them more vulnerable.

"Because the cancer cells have all of these high-energy demands, as well as increased demands for metabolic intermediates to sustain replication, basically it begins the process that leads to cell death," Zakarian said. "Normal cells could survive the period of energetic stress and recover."

The finding that ER-to-mitochondria calcium flow is critical for multiple metabolic cancer pathways suggests that this mechanism could be an important target for future cancer therapies, and potentially some subtypes that are resistant to current chemotherapies. The research is still ongoing.

"We're going to continue to get data right now on the effects of xestospongin itself on cancer cells and also on neurodegeneration," said Zakarian, whose lab is studying how to scale up production of XeB. "Long-term goals could be in developing some sort of therapeutics."

Credit: 
University of California - Santa Barbara

US military has improved battlefield mortality since World War II, but there have been alarming exceptions

New analysis shows that while the survivability of battlefield wounds has steadily improved for United States service members since World War II, there were several spikes in mortality that bucked that trend during subsequent conflicts. By understanding these spikes and taking steps to improve readiness between conflicts, troops' lives could be saved in the future. These insights were published in a special supplement of the Journal of Trauma and Acute Care Surgery focusing on the military.

"This shows us the big picture of combat casualty outcomes from the beginning of World War II through the modern era, and, at the same time, it also provides significant details on the month-to-month outcomes in each individual war," said the supplement's editor and this study's first author, Jeremy Cannon, MD, the Trauma medical director and section chief of Trauma, as well as an associate professor of Surgery at Penn Medicine. "In all, this is good news because our outcomes have improved significantly over time. However, we see that there is still work to be done--specifically in identifying specific areas for improvement and in keeping our medical corps ready for the next conflict."

The researchers examined several different metrics for this study: the case fatality rate (CFR)--a measure of the total lethality of the battlefield, which is determined by dividing the total number of combat deaths by the number of combat deaths and combat wounded; the killed in action (KIA) rate-- the percentage of combatants who died before hospitalization; and the died of wounds (DOW) rate--the percentage of those wounded who died after receiving hospital-level care.
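The three metrics described above are simple ratios, and they can be sketched directly from their definitions. The counts below are hypothetical, chosen only for illustration, and the denominators follow the wording of the definitions rather than the study's exact methodology:

```python
def case_fatality_rate(combat_deaths, combat_wounded):
    # CFR: total combat deaths divided by combat deaths plus combat wounded
    return combat_deaths / (combat_deaths + combat_wounded)

def died_of_wounds_rate(died_after_care, wounded_reaching_care):
    # DOW rate: share of the wounded who died after receiving
    # hospital-level care
    return died_after_care / wounded_reaching_care

# Hypothetical counts for one conflict period (not the study's data)
deaths, wounded = 120, 880
print(f"CFR: {case_fatality_rate(deaths, wounded):.1%}")  # CFR: 12.0%
print(f"DOW: {died_of_wounds_rate(35, wounded):.1%}")     # DOW: 4.0%
```

The KIA rate follows the same pattern, with deaths before hospitalization in the numerator.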

Four different conflicts were studied, each required to be at least three years long to properly assess the data: World War II, the Korean War, and the Vietnam War, with Operations Enduring Freedom (the Afghanistan conflict) and Iraqi Freedom assessed separately as well as together.

Since the start of World War II, the researchers found significant gains across two of their measures. The case fatality rate fell from 55 to 12 percent between the start of World War II and the most recent conflicts, as did the KIA rate (52 to 5 percent). These numbers confirmed historic studies looking at the big picture.

However, as the research team dove into the month-to-month outcomes of each conflict, they found instances of major spikes in mortality amid conflicts. In the case of Vietnam, for example, extremely low rates of fatality in the middle of the conflict, approximately 19 percent, rose to 63 percent during the last stages of the war. Cannon and his co-authors have speculated that factors like poor compliance with body armor use and withdrawal of medical assets despite continued combat may have contributed, but this finding also represents an important area for further analysis.

Additionally, the start of each conflict they studied exhibited higher than expected fatality rates, given what was achieved in the previous war. This was determined by examining something called the "observed to expected mortality ratio," which takes the lowest sustained case fatality rate from the previous conflict and makes it the benchmark for the next. The reasoning for this number is the belief that progress made in a previous war should carry over to the next. In practice, this study showed that is not universally the case.
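The observed-to-expected ratio described above can be sketched as follows. This is a simplified illustration with hypothetical monthly rates: the "lowest sustained" benchmark is approximated here by a plain minimum, without the smoothing a real analysis would apply:

```python
def observed_to_expected(monthly_cfr, prior_conflict_cfrs):
    # Benchmark: the best (lowest) case fatality rate achieved in the
    # previous conflict; progress is expected to carry over to the next war.
    benchmark = min(prior_conflict_cfrs)
    return [cfr / benchmark for cfr in monthly_cfr]

# Hypothetical monthly CFRs as fractions (illustration only)
prior = [0.20, 0.15, 0.12]          # previous war; best rate 0.12
current = [0.40, 0.30, 0.14, 0.11]  # early months of the next conflict
ratios = observed_to_expected(current, prior)

# Ratios above 1.0 mark months where mortality exceeded the prior war's best
print([round(r, 2) for r in ratios])  # [3.33, 2.5, 1.17, 0.92]
```

In this toy example, the first months run well above the benchmark before outcomes recover, which is the "peacetime effect" pattern the authors describe.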

In every conflict studied, at some point during the first and even into the second year, fatality rates exceeded the previous conflict's best numbers. Rates for U.S. troops were more than triple the expected rate through significant parts of the first year of both World War II and Vietnam, and Operation Enduring Freedom at one point was twice the expected rate of the previous conflict. Although the Korean War stayed close to the expected fatality rate through most of its first two years, it began above the benchmark and closed its second year above it as well.

The authors called these unexpected increases "the peacetime effect."

"Most major conflicts are separated by a number of years and, in many ways, you're starting from scratch at the beginning of each conflict," Cannon explained. "In the time between wars, those with deployment experience move off into civilian practice, and the lessons learned fade from the military's collective memory. Then, when the next conflict does occur, many medical personnel have never deployed before and many also aren't as versed in military history and the experiences of others."

The researchers also found something unexpected: the DOW rate stayed roughly the same across all the earlier conflicts, until the recent wars in Iraq and Afghanistan, when it increased.

"Why specifically did the rate of death increase after casualties reached the hospital?" Cannon questioned. "Although this may be an artifact of being able to more rapidly transport those with worse wounds in the modern era, this finding needs to be examined more closely."

Cannon believes these phenomena warrant further study to uncover their triggers. But he has an idea that involves using civilian hospitals as training grounds for military personnel, to help prevent the rise in fatality rates that follows interwar periods.

"Busy trauma centers across the U.S. and even internationally can provide a robust surrogate experience for military teams in clinical care," Cannon explained. "At the same time, research advances that benefit both civilian and military trauma victims can also continue during peacetime in these busy civilian centers."

Credit: 
University of Pennsylvania School of Medicine

Shortening of average serial interval over time indicates isolation effectively limits COVID-19 transmission

A new analysis of SARS-CoV-2 transmission data in China shows that faster identification and isolation of infected, symptomatic individuals contributed to the shortening of the average serial interval - or the period between the onset of symptoms in successive cases - over time, as fewer opportunities occurred for viral transmission from one infector to more individuals. Importantly, changes in average serial intervals in this study reflect the efficacy of case isolation, the researchers demonstrate, rather than indicating modified virulence or incubation times. If a primary infector contacts fewer individuals over time as a result of isolation, this contributes to a shorter average serial interval, the authors show. A longer average serial interval indicates that a failure to isolate infectors leads to greater numbers of uninfected people being exposed to the virus for greater lengths of time. For a given chain of disease transmission, the average serial interval was thought to be fixed, but this study reveals that interventions like isolation can affect real-time changes in serial intervals as the transmission chain unfolds. Therefore, serial intervals may serve as useful tools to assess whether an intervention is effectively limiting transmission. More broadly, serial intervals could help researchers measure population immunity, forecast future incidence of infection, and quantify accurate reproduction numbers, or the expected number of cases arising from a single infector.

Analyzing publicly available data on 677 different pairs of infectors and infectees, Sheikh Ali and colleagues found that from January 9 to 22 of 2020, the serial interval averaged 7.8 days, whereas from January 30 to February 13, the average was 2.2 days, shortening by more than threefold over the 36-day period. Evaluating potential associations between serial intervals and each patient's age, sex, household contacts, or isolation delay (the duration from the onset of symptoms to isolation), the researchers found that the threefold decrease in average serial interval correlated with quicker isolation of symptomatic infectors. A model that simulated a reduction in isolation delay from 10 to 0 days for each infector confirmed this association, and further demonstrated that serial intervals became shorter as infectors were isolated more quickly, regardless of when the infector became infectious before the onset of illness. Based on their findings, the authors argue that fixed serial intervals cannot be generalized to other places or periods - rather, real-time estimation of serial intervals accounting for variation over time provides a more accurate picture of disease transmission in a population.
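As a rough illustration of the quantity involved (toy data below, not the study's 677 transmission pairs): the serial interval of a transmission pair is simply the gap between the infector's and the infectee's symptom-onset dates, and its average can be tracked across time windows to detect the kind of shortening the researchers report.

```python
from datetime import date

def serial_intervals(pairs):
    """pairs: list of (infector_onset, infectee_onset) date tuples.
    The serial interval is the gap in days between the two onsets."""
    return [(infectee - infector).days for infector, infectee in pairs]

def mean_serial_interval(pairs, start, end):
    """Average serial interval for pairs whose infector onset falls in [start, end]."""
    window = [p for p in pairs if start <= p[0] <= end]
    si = serial_intervals(window)
    return sum(si) / len(si) if si else None

# Toy transmission pairs: early pairs have long intervals, later pairs short ones,
# mimicking the effect of progressively faster case isolation.
pairs = [
    (date(2020, 1, 10), date(2020, 1, 18)),   # 8-day interval
    (date(2020, 1, 15), date(2020, 1, 22)),   # 7-day interval
    (date(2020, 2, 1),  date(2020, 2, 3)),    # 2-day interval
    (date(2020, 2, 5),  date(2020, 2, 7)),    # 2-day interval
]

early = mean_serial_interval(pairs, date(2020, 1, 9), date(2020, 1, 22))   # 7.5
late  = mean_serial_interval(pairs, date(2020, 1, 30), date(2020, 2, 13))  # 2.0
```

Windowing by the infector's onset date, as here, is what makes the estimate "real-time": each window reflects only the interventions in force when those infectors fell ill.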

Credit: 
American Association for the Advancement of Science (AAAS)

Anti-Asian hate crime during the COVID-19 pandemic

Under the Hate Crime Statistic Act, hate crimes are defined as “crimes that manifest evidence of prejudice based on race, gender and gender identity, religion, disability, sexual orientation, or ethnicity.” Since the outbreak of COVID-19 in Wuhan, China, the United States has seen a surge of Asian Americans reporting racially motivated hate crimes. Earlier this month, University of Colorado Denver School of Public Affairs professor Angela Gover, PhD, along with researchers from Iowa State University and RTI International, published a research paper outlining how COVID-19 has enabled the spread of racism and created national insecurity, fear of foreigners, and general xenophobia. 

Stigmatization and Pandemics in the United States  

Throughout the history of humankind, infectious disease has caused more fatalities than any other medical cause. Studies have shown that when viral outbreaks are deadly, fear often drives those at risk to place blame on external groups, such as minorities.  

This isn’t the first time this issue has arisen. In fact, particular diseases have long been associated with particular groups of people: Irish Catholics were blamed for “Irish disease” (cholera), Jewish immigrants for “consumption” (tuberculosis), Irish and German immigrants for yellow fever, and Italians for polio.  

As for Asian Americans, in 1900, when the bubonic plague broke out in San Francisco, public health officials quarantined Chinese residents in Chinatown but allowed white merchants to leave the area. Now we’re facing COVID-19, which originated in Wuhan, China. Since the virus spread to the rest of the world, especially the United States, it has been labeled by some as the “Chinese virus.” 

“Once again, we are seeing a pattern of scapegoating,” said Gover. “It is important to learn lessons from the past and not repeat history by blaming those of Asian descent for the current pandemic.” 

Anti-Asian Hate Crime Statistics in the United States 

For this study, researchers examined hate crime data from the FBI’s Uniform Crime Report (UCR) and the Bureau of Justice Statistics’ National Crime Victimization Survey (NCVS), which provide national estimates of hate crime as defined above. The statistics used were from two collections for the 16-year period from 2003 to 2018.  

UCR data revealed that between the two five-year periods 2003-2007 and 2014-2018, hate crimes against Asian Americans dropped 30.8%.  
NCVS data revealed that between the same two five-year periods, hate crimes against Asian Americans dropped 7%.  

The large discrepancy between these two estimates suggests vast underreporting of hate crime to police and underscores the hidden nature of hate crime against Asian Americans in the U.S. today. Importantly, the NCVS data also reveal that less than half of Asian hate crime victimizations, only 47.6%, are reported to police.  

“As of July 1, the Stop AAPI Hate self-reporting tool had recorded over 800 discrimination and harassment incidents against Asian Americans in California in the span of three months, including 81 assaults and 64 potential civil rights violations,” said Gover. “These occurrences are likely a small fraction of what is actually transpiring as most of these types of incidents go unreported.” 

COVID-19 and Asian-American Hate Crimes 

As the term “Chinese virus” has been widely used by elected officials and the media, anti-Asian sentiment has been on the rise. According to reports from late March, the FBI anticipated a surge in anti-Asian hate crimes during the pandemic and alerted local authorities to watch for these occurrences.  

Since as early as February, racist acts against Asian Americans directly related to the COVID-19 pandemic have been recorded and shared on social media to raise awareness of the growing problem. These acts of violence include both physical and mental abuse of Asian Americans of all ages and genders.  

“Victims of hate crimes experience significant psychological trauma, often presenting as PTSD and/or debilitating anxiety and depression,” said Gover. “This isn’t surprising, given that the cultural stigmatization and ‘othering’ of a particular group fosters an environment that normalizes instances of assault and harassment, creating a day-to-day atmosphere of fear for the safety and security of victims and their loved ones.” 

Looking Forward 

While the media and government have largely retreated from the term “Chinese virus,” the damage has been done. According to the researchers, this narrative around COVID-19 has once again reignited racist stereotypes of Asian culture. Once the 2020 hate crime data are released in 2021, we will have a better understanding of the number of hate crimes related to COVID-19. But from the personal accounts of hate crimes already reported, we now understand that anti-Asian bias is on the rise here in the United States.  

“The U.S. has seen a recurring history of socially entrenched racism towards Asian Americans, with spikes occurring during historical times of crisis, including during the coronavirus pandemic. Moreover, racist attitudes have been reinforced by institutional-level support, thus promoting a culture of ‘othering’ towards Asians in America, once again. COVID-19 is a public health crisis, not a racial matter. It does not discriminate along racial lines, and neither should we.” 

Credit: 
University of Colorado Denver

Spider monkey groups as collective computers

image: Two spider monkeys swing through the trees.

Image: 
Sandra E. Smith Aguilar

The wild spider monkeys living in a protected area near Punta Laguna, Mexico, collectively figure out good ways to divide up and conquer the forest. These monkeys live in a special type of society called a "fission-fusion" society. The group breaks up into little teams to find food -- called "foraging" in the world of ecology -- but there is no "gym teacher" or "popular kid" picking teams. Rather, the monkeys each make decisions about how long to stay on foraging teams and when to switch to another. It turns out the collective effect of these individual decisions is to produce a range of foraging team sizes. And this range works well given how many trees in the forest have tasty fruit ready to eat. The monkeys are collectively computing good team sizes given the availability of food in the forest.

The findings are published this week in the journal Frontiers in Robotics and AI. The researchers -- from the National Autonomous University of Mexico (UNAM) and the Santa Fe Institute, in Santa Fe, NM -- report that monkeys make use of the smarts of their group mates to inform their own decisions.

"By forming these subgroups -- constantly coming together and splitting -- the spider monkeys develop a more thorough knowledge of their environment," says the study's lead author, Gabriel Ramos-Fernandez at UNAM, who studies animal communication, social complexity and networks. "They seem to be pooling information about resources, so that as a group they know their environment better than any individual does on its own."

Ramos-Fernandez and his group recorded the interactions of 47 monkeys for five hours per day over two years. He says the monkeys, which are accustomed to being observed by people, typically formed subgroups of 2 to 17 animals, but those subgroups stayed together for only 1-2 hours. "We noted who was where, and with whom, at any given time," he says.

To understand how the monkeys collectively compute team sizes, Ramos-Fernandez's team collaborated with SFI Professor Jessica Flack and SFI President David Krakauer. Flack leads SFI's Collective Computation Group, and Krakauer is co-developer of the collective computation ideas with Flack.

The researchers used an approach called inductive game theory, developed by Flack and Krakauer in collaboration with another SFI researcher, Simon DeDeo, to figure out what decision rules the spider monkeys use in deciding to stay on or leave a foraging team. In traditional game theory, researchers make assumptions about the strategies in play. Inductive game theory, in contrast, asks what strategies the animals (or cells, or neurons) are actually using -- what do we see in the data? It starts by specifying in advance a space of decision rules the study subjects -- here, spider monkeys -- could be using given their cognitive and behavioral sophistication and, ideally, for which there is already some empirical support. The researchers then search the data for evidence of these strategies and ask how the strategies individuals are found to use combine to produce social structure.
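A toy sketch of this kind of rule search (hypothetical data format and candidate rule, not the study's actual records or analysis): test whether a monkey's probability of leaving a subgroup depends on whether a group mate has just left.

```python
from collections import defaultdict

def leave_rates(events):
    """events: list of (action, peer_just_left) tuples, where action is
    'stay' or 'leave' and peer_just_left is True/False.
    Returns the observed leave probability in each context."""
    counts = defaultdict(lambda: [0, 0])   # context -> [leaves, total]
    for action, peer_just_left in events:
        counts[peer_just_left][1] += 1
        if action == "leave":
            counts[peer_just_left][0] += 1
    return {ctx: leaves / total for ctx, (leaves, total) in counts.items()}

# Toy observations: leaving is more common right after a peer has left,
# which would count as evidence for a "follow the leaver" decision rule.
events = [("leave", True), ("leave", True), ("stay", True), ("stay", True),
          ("stay", False), ("stay", False), ("stay", False), ("leave", False)]

rates = leave_rates(events)   # leave rate given peer left vs. not
```

Comparing the conditional rates against each other (and against chance) is the inductive step: a real analysis would sweep a whole space of such candidate rules and then simulate how the supported rules combine to produce the observed distribution of team sizes.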

"This kind of methodology is useful for studying optimal foraging because it requires no a priori assumptions about benefits and costs," says Ramos-Fernandez. The researchers found individual monkeys' decisions to stay or leave a foraging team were influenced by the stay and leave decisions of other individuals on the team. This result suggests spider monkeys take into account the opinions of their group mates about what a good team size is and use those opinions to inform their own decision-making. The collective effects of these decisions produced a range of team sizes that worked well given the availability of fruiting trees in the monkeys' forest. But the researchers also found that the spider monkeys' "collective intelligence" had room for improvement! The team sizes the monkeys collectively computed were not a perfect match to the availability of fruiting trees.

A similar approach might help researchers understand other collective systems, including flocks of birds, groups of fish, or financial markets. Insights from this study also reinforce an idea in the collective intelligence literature that in decentralized systems when individual parts or agents have imperfect knowledge or only partial windows on the world, collective pooling of knowledge can be beneficial. Questions for future work include studying how individuals optimally combine the knowledge of group mates, depending on how diverse the group is, and how costly it is to make mistakes.

Credit: 
Santa Fe Institute

Russian scientists identified energy storage mechanism of sodium-ion battery anode

image: A proposed model during desodiation of hard carbon

Image: 
Zoia V.Bobyleva et al./Electrochimica Acta

Scientists from Skoltech and Moscow State University (MSU) identified the type of electrochemical reaction associated with charge storage in the anode material for sodium-ion batteries (SIB), a new promising class of electrochemical power sources. Their findings along with the anode manufacturing method developed by the same team will help bring closer the SIB commercialization in Russia and beyond. The research was published in the journal Electrochimica Acta.

Today, lithium-ion batteries (LIB) are the most popular electrochemical power sources, used in diverse applications running the gamut from mobile phones (several watt-hours) to buffer systems at power plants (millions of watt-hours). The demand for LIB and the average size of storage devices are constantly growing; however, this growth is encountering multiple barriers, such as the high cost of lithium salts, limited global reserves of lithium and the uneven distribution of lithium-containing deposits across countries. To overcome these hurdles, scientists worldwide, Russia included, are working on SIB, an alternative technology that may challenge both LIB and the widely used lead-acid batteries.

Sodium is the sixth most common element in the Earth's crust, and its salts are about 100 times cheaper than lithium salts. Although similar to lithium in its chemical properties, sodium differs in ways that call for new approaches in SIB design. A battery is made up of three main components: the cathode, the anode and the electrolyte. There is a broad diversity of compositions and structures that could be suitable for SIB cathodes or electrolytes, whereas the anode remains a stumbling block. Graphite, which is used successfully in LIB, does not work for SIB because the sizes of carbon hexagons and sodium cations differ too much to allow intercalation. Hard carbon seems to be the only material that can actually be used in the anode. Formed by an irregular arrangement of distorted graphite-like layers, hard carbon demonstrates sodium-ion storage properties comparable to those of graphite in LIB; however, it remains unclear why and how this happens.

"There are several hypotheses as to how sodium could be introduced into hard carbon. In our study, we validated and slightly expanded one of them. We found that hard carbon exhibits intercalation-type behavior to accumulate most of the charge, which is great news. Intercalation is exactly what the battery needs, while the surface processes associated with "pseudocapacitance" are the responsibility of supercapacitors that form a very narrow niche among chemical power sources. Funnily enough, our Japanese colleague and research supervisor for our principal investigator and MSU PhD student, Zoya Bobyleva, held a totally different view at the start. He is one of the world's top experts in SIB and hard carbon and we had a hard time convincing him that we were right, but we did it!" says Oleg Drozhzhin, project lead and senior research scientist at Skoltech's Center for Energy Science and Technology (CEST) and MSU.

Last year, Nobel Prizes in Chemistry were awarded to three scientists "for the development of lithium-ion batteries". One of the winners owes his prize to hard carbon, an anode material that gave life to the LIB technology about three decades ago and was later replaced with graphite. Now hard carbon can once again give rise to a new technology.

"This work is remarkable not only in showing how hard carbon works in the sodium-ion system but also in finding a way to produce hard carbon with a capacity of over 300 mAh/g comparable to that of graphite in LIB. Creating and optimizing a new method takes a lot of painstaking effort that typically remains behind the scenes and is hardly ever reported in scientific papers, so it is important for us to show the ultimate result: we succeeded in making good anode materials for SIB and we know how they work," comments Evgeny Antipov, a Skoltech professor and head of the Department of Electrochemistry at the MSU Faculty of Chemistry.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)

Mount Sinai researcher identifies single gene biomarker to differentiate between atopic dermatitis and psoriasis

Mount Sinai researchers have pinpointed a single gene biomarker, nitric oxide synthase 2 (NOS2), that can distinguish between atopic dermatitis (AD) and psoriasis with 100 percent accuracy using adhesive tape strips, a non-invasive alternative to skin biopsy. The research will be published online today in the Journal of Allergy and Clinical Immunology.

The study was led by Emma Guttman-Yassky, MD, PhD, Sol and Clara Kest Professor and Vice Chair of Dermatology at the Icahn School of Medicine at Mount Sinai. It evaluated tape strips obtained from 20 adults with moderate to severe AD, 20 with moderate to severe psoriasis, and 20 healthy individuals. From each subject, 20 tape strips were collected, some from lesions and the rest from clinically unaffected skin. The skin cells collected from the tape strips were subjected to global molecular profiling for identification of disease-related biomarkers.

Atopic dermatitis, also known as eczema, is an inflammatory, extremely itchy skin disorder that affects more than 31 million people in the United States, including 10 to 20 percent of children. Psoriasis is a skin disorder that causes red, itchy, scaly patches; it has no cure and affects more than 8 million people in the United States.

"In the past, skin tissue biopsies have always been considered the gold standard for distinguishing between inflammatory skin diseases, but they can cause pain, scarring, and increased risk of infection," said Dr. Guttman-Yassky, whose past revolutionary research on AD focused on the mechanism underlying the disease and promoted development of targeted therapeutics for it. "This study shows that using adhesive tape strips may provide a minimally invasive alternative to skin biopsies for monitoring biomarkers of patients with these particular skin diseases and beyond."

The researchers also captured other genes related to immune and epidermal barrier function that were dysregulated in AD and/or psoriasis, and that distinguished each disease from the other. For example, tape strips from AD patients strongly expressed cell markers related to the T-helper 2 (Th2) immune response, which is characteristic of AD, while psoriasis patients displayed much higher levels of Th1 and Th17 cytokines, which are characteristic of psoriasis.

Dr. Guttman-Yassky added that the molecular phenotypes described in the study were notably in accord with previous reports from skin biopsy studies and with the current mechanistic understanding of both diseases.

"This revolutionary study emphasizes the great need for better understanding immune and barrier alterations in both adults and children living with inflammatory skin disease," said Mark Lebwohl, MD, Waldman Professor and Chair of Dermatology at the Icahn School of Medicine at Mount Sinai. "The results of this study may help provide a useful alternative to the invasive method of skin biopsies to track cutaneous disease activity in future clinical trials."

Credit: 
The Mount Sinai Hospital / Mount Sinai School of Medicine

Front-line physicians stressed and anxious at work and home

Amid the COVID-19 chaos in many hospitals, emergency medicine physicians in seven cities around the country experienced rising levels of anxiety and emotional exhaustion, regardless of the intensity of the local surge, according to a new analysis led by UC San Francisco.

In the first known study to assess stress levels of U.S. physicians during the coronavirus pandemic, doctors reported moderate to severe levels of anxiety at both work and home, including worry about exposing relatives and friends to the virus. Among the 426 emergency physicians surveyed, most reported changes in behavior toward family and friends, especially decreased signs of affection.

"Occupational exposure has changed the vast majority of physicians' behavior at both work and home," said lead author Robert M. Rodriguez, MD, a professor of Emergency Medicine at UCSF. "At home, doctors are worried about exposing family members or roommates, possibly needing to self-quarantine, and the effects of excess social isolation because of their work on the front line."

The results, which appear July 21, 2020, in Academic Emergency Medicine, revealed slight differences between men and women, with women reporting higher stress. Among male physicians, the median reported effect of the pandemic on both work and home stress levels was 5 on a scale of 1 to 7 (1=not at all, 4=somewhat, and 7=extremely). For women, the median was 6 in both areas. Both men and women also reported that levels of emotional exhaustion or burnout increased from a pre-pandemic median of 3 to a median of 4 after the pandemic started.

Lack of PPE was associated with the highest level of concern and was also the measure most often cited that would provide greatest relief. The doctors also voiced anxiety about inadequate rapid diagnostic testing, the risk of community spread by discharged patients, and the well-being of coworkers diagnosed with COVID-19.

But the survey also showed clear-cut ways of mitigating anxiety:

Improve access to PPE;

Increase availability of rapid turnaround testing;

Clearly communicate COVID-19 protocol changes;

Assure access to self-testing and personal leave for front line providers.

The responses came from faculty (55 percent), fellows (4.5 percent), and residents (about 39 percent), with a median age of 35. Most physicians lived with a partner (72 percent), while some lived alone (nearly 15 percent) or with roommates (11 percent). Nearly 39 percent had a child under age 18.

The study involved healthcare providers at seven academic emergency departments and affiliated institutions in California, Louisiana and New Jersey. Researchers noted that the majority of study sites were in California, which at the time of the survey had not yet experienced the large surges of patients seen in other areas of the country. But the study found that median levels of anxiety in the California sites were similar to those in the New Orleans and Camden sites, which were experiencing surges at the time.

"This suggests that the impact of COVID-19 on anxiety levels is pervasive and that measures to mitigate stress should be enacted universally," Rodriguez said. "Some of our findings may be intuitive, but this research provides a critical early template for the design and implementation of interventions that will address the mental health needs of emergency physicians in the COVID-19 pandemic era."

The study is longitudinal, with this first phase focused on the early "acceleration" phase of the pandemic. Subsequent studies will address stressors that have arisen throughout the course of the pandemic, including childcare and homeschooling demands, the economic impact of fewer patients overall in the ER, and possible development of long-term post-traumatic stress.

Credit: 
University of California - San Francisco

Popular seafood species in sharp decline around the world

image: Octopus at a fish market in Indonesia.

Image: 
Photo by Deng Palomares, Sea Around Us.

Fish market favourites such as orange roughy, common octopus and pink conch are among the species of fish and invertebrates in rapid decline around the world, according to new research.

In the first study of its kind, researchers at the University of British Columbia, the GEOMAR Helmholtz Centre for Ocean Research Kiel and the University of Western Australia assessed the biomass--the weight of a given population in the water--of more than 1,300 fish and invertebrate populations. They discovered global declines, some severe, of many popularly consumed species.

Of the populations analyzed, 82 per cent were found to be below levels that can produce maximum sustainable yields, because they are being caught at rates exceeding what can be regrown. Of these, 87 populations were found to be in the "very bad" category, with biomass levels at less than 20 per cent of what is needed to maximize sustainable fishery catches. This also means that fishers are catching fewer and fewer fish and invertebrates over time, even if they fish longer and harder.

"This is the first-ever global study of long-term trends in the population biomass of exploited marine fish and invertebrates for all coastal areas on the planet," said Maria "Deng" Palomares, lead author of the study and manager of the Sea Around Us initiative in UBC's Institute for the Oceans and Fisheries. "When we looked at how the populations of major species have been doing in the past 60 years, we discovered that, at present, most of their biomasses are well below the level that can produce optimal catches."

To reach their findings, the researchers applied computer-intensive stock assessment methods known as CMSY and BSM to the comprehensive catch data by marine ecosystem reconstructed by the Sea Around Us for the 1950-2014 period.
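CMSY-type assessments build on surplus-production dynamics. The sketch below is a simplified assumption, not the study's code (the actual method Monte-Carlo-filters many candidate parameter pairs against the catch time series), but it shows the Schaefer model at the heart of such methods and the B/B_MSY status ratio the findings are expressed in.

```python
def project_biomass(b0, r, k, catches):
    """Schaefer surplus-production model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/k) - C[t]
    where r is the intrinsic growth rate, k the carrying capacity,
    and C[t] the catch taken in year t. Biomass is floored at zero."""
    biomass = [b0]
    for c in catches:
        b = biomass[-1]
        biomass.append(max(b + r * b * (1 - b / k) - c, 0.0))
    return biomass

def status_vs_msy(biomass, k):
    """B / B_MSY, where B_MSY = k/2 under the Schaefer model.
    Ratios below 1 mean the stock is below the level producing maximum
    sustainable yield; below 0.4 (i.e. B < 0.2*k) roughly corresponds to
    the study's 'very bad' category."""
    b_msy = k / 2
    return [b / b_msy for b in biomass]

# Toy example: sustained catches above the maximum surplus production
# (MSY = r*k/4 = 75 here) drive the stock below B_MSY and toward collapse.
traj = project_biomass(b0=1000.0, r=0.3, k=1000.0, catches=[120.0] * 20)
ratios = status_vs_msy(traj, k=1000.0)
```

The key design point of CMSY is that neither r nor k is known in advance: the method tries thousands of (r, k) pairs, keeps only those for which projections like the one above remain consistent with the observed catch history, and reads stock status off the surviving trajectories.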

The greatest declines in stocks were found in the southern temperate and polar Indian Ocean and the southern polar Atlantic Ocean, where populations have shrunk by well over 50 per cent since 1950.

While much of the globe showed declining trends in fish and invertebrates, the analysis found a few exceptions. One of these was the Northern Pacific Ocean, where population biomass increased by 800 per cent in its polar and subpolar zones, and by about 150 per cent in its temperate zone.

Despite these pockets of improvement, the overall picture remains a cause for concern, according to co-author Daniel Pauly, principal investigator at Sea Around Us.

"Despite the exceptions, our findings support previous suggestions of systematic and widespread overfishing of the coastal and continental shelf waters in much of the world over the last 60-plus years," said Pauly. "Thus, pathways for improvements in effective fisheries management are needed, and such measures should be driven not only by clearly set total allowable annual catch limits, but also by well-enforced and sizeable no-take marine protected areas to allow stocks to rebuild."

Credit: 
University of British Columbia