Tech

Artificial intelligence can improve how chest images are used in care of COVID-19 patients

According to a recent report by Johns Hopkins Medicine researchers, artificial intelligence (AI) should be used to expand the role of chest imaging -- particularly computed tomography, or CT -- in diagnosing and assessing coronavirus infection, so that it can be more than just a means of screening for signs of COVID-19 in a patient's lungs.

Within the study, published in the May 6 issue of Radiology: Artificial Intelligence, the researchers say that "AI's power to generate models from large volumes of information -- fusing molecular, clinical, epidemiological and imaging data -- may accelerate solutions to detect, contain and treat COVID-19."

Although CT chest imaging is not currently a routine method for diagnosing COVID-19 in patients, it has been helpful in excluding other possible causes for COVID-like symptoms, confirming a diagnosis made by another means or providing critical data for monitoring a patient's progress in severe cases of the disease. The Johns Hopkins Medicine researchers believe this isn't enough, making the case that there is "an untapped potential" for AI-enhanced imaging to improve patient care. They suggest the technology can be used for:

Risk stratification, the process of categorizing patients for the type of care they receive based on the predicted course of their COVID-19 infection.

Treatment monitoring to define the effectiveness of agents used to combat the disease.

Modeling how COVID-19 behaves, so that novel, customized therapies can be developed, tested and deployed.

For example, the researchers propose that "AI may help identify the immunological markers most associated with poor clinical course, which may yield new targets" for drugs that will direct the immune system against the SARS-CoV-2 virus that causes COVID-19.

Credit: 
Johns Hopkins Medicine

New test of dark energy and expansion from cosmic structures

image: A section of the three-dimensional map of galaxies from the Sloan Digital Sky Survey used in this analysis. The rectangle on the far left shows a patch of the sky containing nearly 120,000 galaxies (a small fraction of the total survey). The centre and right images show the 3D map created from these data: brighter regions correspond to the regions of the Universe with more galaxies, and darker regions to voids.

Image: 
Jeremy Tinker and the SDSS-III collaboration

A new paper has shown how large structures in the distribution of galaxies in the Universe provide the most precise tests of dark energy and cosmic expansion yet.

The study uses a new method based on a combination of cosmic voids - large expanding bubbles of space containing very few galaxies - and the faint imprint of sound waves in the very early Universe, known as baryon acoustic oscillations (BAO), that can be seen in the distribution of galaxies. This provides a precise ruler to measure the direct effects of dark energy driving the accelerated expansion of the Universe.
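The "standard ruler" idea can be sketched numerically: in a flat cosmological-constant universe, the BAO scale subtends an angle of roughly r_d / D_M(z), where D_M is the comoving distance to redshift z. The following is a minimal sketch of that calculation; the parameter values (H0 = 70, Omega_m = 0.3, r_d = 147 Mpc) are illustrative round numbers, not the values fitted in the study.

```python
import numpy as np

# Illustrative flat-LCDM parameters -- not the paper's fitted values.
H0 = 70.0             # Hubble constant, km/s/Mpc
OMEGA_M = 0.3         # matter density fraction
C_LIGHT = 299792.458  # speed of light, km/s
R_DRAG = 147.0        # BAO sound horizon at the drag epoch, Mpc (approximate)

def comoving_distance(z, n=10000):
    """Comoving distance in Mpc for a flat LCDM universe,
    D_C = (c/H0) * integral of dz'/E(z') from 0 to z."""
    zs = np.linspace(0.0, z, n)
    e = np.sqrt(OMEGA_M * (1 + zs) ** 3 + (1 - OMEGA_M))
    inv = 1.0 / e
    dz = zs[1] - zs[0]
    # trapezoidal rule
    integral = dz * (inv.sum() - 0.5 * (inv[0] + inv[-1]))
    return C_LIGHT / H0 * integral

z = 0.5
d_m = comoving_distance(z)  # in flat space D_M equals the comoving distance
theta_bao = R_DRAG / d_m    # angle subtended by the BAO ruler, in radians

print(f"D_M(z={z}) ~ {d_m:.0f} Mpc, BAO angle ~ {np.degrees(theta_bao):.1f} deg")
```

Comparing this predicted angle with the angle actually measured in the galaxy distribution is, in essence, how the BAO ruler constrains dark energy and curvature.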

This new method gives much more precise results than the technique based on the observation of exploding massive stars, or supernovae, which has long been the standard method for measuring the direct effects of dark energy.

The research was led by the University of Portsmouth, and is published in Physical Review Letters.

The study makes use of data from over a million galaxies and quasars gathered over more than a decade of operations by the Sloan Digital Sky Survey.

The results confirm the model of a cosmological constant dark energy and spatially flat Universe to unprecedented accuracy, and strongly disfavour recent suggestions of positive spatial curvature inferred from measurements of the cosmic microwave background (CMB) by the Planck satellite.

Lead author Dr Seshadri Nadathur, research fellow at the University's Institute of Cosmology and Gravitation (ICG), said: "This result shows the power of galaxy surveys to pin down the amount of dark energy and how it evolved over the last billion years. We're making really precise measurements now and the data is going to get even better with new surveys coming online very soon."

Dr Florian Beutler, a senior research fellow at the ICG, who was also involved in the work, said that the study also reported a new precise measurement of the Hubble constant, the value of which has recently been the subject of intense debate among astronomers.

He said: "We see tentative evidence that data from relatively nearby voids and BAO favour the high Hubble rate seen from other low-redshift methods, but including data from more distant quasar absorption lines brings it in better agreement with the value inferred from Planck CMB data."

Credit: 
University of Portsmouth

Australian researchers set record for carbon dioxide capture

Researchers from Monash University and the CSIRO have set a record for carbon dioxide capture and storage (CCS) using technology that resembles a sponge filled with tiny magnets.

Using a metal-organic framework (MOF) nanocomposite that can be regenerated with remarkable speed and low energy cost, researchers have developed sponge-like technology that can capture carbon dioxide from a number of sources, even directly from air.

The magnetic sponge removes carbon dioxide using the same induction-heating technique as induction cooktops, with one-third of the energy of any other reported method.

Associate Professor Matthew Hill (CSIRO and Department of Chemical Engineering, Monash University) and Dr Muhammad Munir Sadiq (Department of Chemical Engineering, Monash University) led this research.

In the study, published in Cell Reports Physical Science, researchers designed a unique adsorbent material called M-74 CPT@PTMSP that delivered a record low energy cost of just 1.29 MJ kg-1 CO2, 45 per cent below commercially deployed materials, and the best CCS efficiency recorded.
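The "45 per cent below" figure implies a benchmark for the commercially deployed materials, which can be recovered with a line of arithmetic. This is a sketch of that check, not a number reported in the study.

```python
# Arithmetic check of the "45 per cent below" claim: if 1.29 MJ/kg CO2
# is 45% below the commercial benchmark, the benchmark must be
# record / (1 - 0.45). Illustrative back-calculation only.
record = 1.29                      # MJ per kg CO2, reported for M-74 CPT@PTMSP
implied_benchmark = record / 0.55  # roughly 2.35 MJ per kg CO2

print(f"Implied commercial benchmark: ~{implied_benchmark:.2f} MJ/kg CO2")
```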

MOFs are a class of compounds consisting of metal ions coordinated to organic ligands, forming crystalline materials with the largest surface areas of any material known. In fact, MOFs are so porous that a teaspoon of the material can hold an internal surface area the size of a football field.

This technology makes it possible to store, separate, release or protect valuable commodities, enabling companies to develop high value products.

"Global concerns about the rising level of greenhouse gas emissions and the associated environmental impact have led to renewed calls for emissions reduction and the development of green and renewable alternative energy sources," Associate Professor Hill said.

"However, existing commercial carbon capture technologies use amines like monoethanolamine, which is highly corrosive, energy intensive and captures a limited amount of carbon from the atmosphere.

"Our research shows the lowest reported regeneration energy calculated for any solid porous adsorbent, including monoethanolamine, piperazine and other amines. This makes it a cheap method that can be paired with renewable solar energy to capture excess carbon dioxide from the atmosphere.

"Essentially, we can capture CO2 from anywhere. Our current focus is for capture directly from the air in what are known as negative emissions technologies."

For MOFs to be used in CCS applications, it is essential to have materials that can be easily fabricated with good stability and performance.

The stability of M-74 CPT@PTMSP was evaluated by estimating the amount of CO2 and H2O captured and released via the researchers' magnetic induction swing adsorption (MISA) process over 20 consecutive cycles.

The regeneration energy calculated for M-74 CPT@PTMSP is the lowest reported for any solid porous adsorbent. At magnetic fields of 14 and 15 mT, the regeneration energy calculated for M-74 CPT was 1.29 and 1.44 MJ kg-1 CO2, respectively.

Credit: 
Monash University

Negative emotions cause stronger appetite responses in emotional eaters

Turning to a tub of ice cream after a break-up may be a cliché, but there's some truth to eating in response to negative emotions. Eating serves many functions - survival, pleasure, comfort, as well as a response to stress. However, emotional overeating - eating past the point of feeling full in response to negative emotions - is a risk factor for binge eating and developing eating disorders such as bulimia.

"Even at a healthy BMI, emotional overeating can be a problem," says Rebekka Schnepper of the University of Salzburg in Austria, who co-authored a recent paper in Frontiers in Behavioral Neuroscience.

The study investigated the extent to which individual eating styles and emotional states predict appetite response to food images, by comparing emotional eaters - people who use food to regulate negative emotions - and restrictive eaters - people who control their eating through diets and calorie restriction. (While a person can be both an emotional and a restrictive eater, the two traits were not highly correlated in this study's sample.)

Schnepper and her co-authors found that emotional eaters had a stronger appetite response and found food to be more pleasant when experiencing negative emotions compared to when they felt neutral emotions. Restrictive eaters, on the other hand, appeared more attentive towards food in the negative condition although this did not influence their appetite, and there was no significant change between the negative and neutral emotion conditions.

The findings point towards potential strategies for treating eating disorders. "When trying to improve eating behavior, a focus on emotion regulation strategies that do not rely on eating as a remedy for negative emotions seems promising," says Schnepper.

The authors were compelled to investigate the subject because of a lack of consensus in the literature. "There are different and conflicting theories on which trait eating style best predicts overeating in response to negative emotions. We aimed to clarify which traits predict emotional overeating on various outcome variables," says Schnepper.

They conducted the study among 80 female students at the University of Salzburg, all of whom were of average body mass index (BMI). During the lab sessions, experimenters read scripts to the participants in order to induce either a neutral or a negative emotional response. The negative scripts related to recent events from the participant's personal life during which they experienced challenging emotions, while the neutral scripts related to subjects such as brushing one's teeth. The participants were then shown images of appetizing food and neutral objects.

Researchers recorded participants' facial expressions through electromyography, brain reactivity through EEGs (electroencephalography), as well as self-reported data. For example, emotional eaters frowned less when shown images of food after experimenters read the negative script compared to when they read the neutral script, an indication of a stronger appetite response.

The study chose to only test female participants since women are more prone to eating disorders but, given the limited subject pool as well as the controlled conditions, Schnepper says that "We cannot draw conclusions for men or for long-term eating behavior in daily life." Nevertheless, the study furthers our understanding of emotional overeating, and the findings may help in the early detection and treatment of eating disorders.

Credit: 
Frontiers

New Papua New Guinea research solves archaeological mysteries

image: Professor Glenn Summerhayes at the "Joes' Garden" site in the Ivane Valley in the New Guinea highlands.

Image: 
University of Otago

New research which "fills in the blanks" on what ancient Papua New Guineans ate, and how they processed food, has ended decades-long speculation on tool use and food staples in the highlands of New Guinea several thousand years ago.

Findings from the "Joe's Garden" site in the Ivane Valley in the New Guinea highlands end several decades of academic speculation about what a formally manufactured mortar and other tools were used for, and show that a variety of once widely eaten starchy plants were processed in the area.

Report co-author and University of Otago Archaeology Professor Glenn Summerhayes says the research means several archaeological "mysteries have finally been solved".

"Although ground stone bowls, known as mortars, have been found throughout most of New Guinea, little was known of their function or age. Most have been found from surface collections or dug up and re-used by locals while gardening. Only a couple had been excavated in archaeological contexts and their use was unknown. This paper presents the discovery of a mortar fragment excavated from the Ivane Valley of Papua New Guinea in contexts dated to four and a half thousand years ago."

Clinging to the stone tools recovered from the site are microscopic starch grains from tree nuts (Castanopsis acuminatissima) and tubers (Pueraria lobata), which were first proposed as common staples by researchers in the mid-1960s.

"Usewear and residue analysis on the fragment has shown it was used for processing starch-rich plants such as nuts and tubers. Insights into past subsistence patterns are rare, especially for 4,400 years ago!" Professor Summerhayes says.

The research adds to the findings from other studies by demonstrating the long-term survival of starchy residues in an open site in a montane setting at 2000m above sea level, and confirms the resilience of these microfossils in equatorial/tropical contexts.

Summerhayes says over the last 300 or so years, the predominance of sweet potato in subsistence gardening practices has led to a range of starchy plants falling into disuse. While previous studies in the region have mainly focused on the use of taro, banana and some yams, the researchers found several species, including Castanopsis sp. - commonly referred to as chinquapin or chinkapin - may have played an important, if until now invisible, role in highland diets over the millennia.

Similarly, the widely available C. acuminatissima - commonly known as white oak or New Guinea oak - has been recorded as eaten on hunting trips but has never been clearly identified as a common starchy staple. As with pestles from the Waim site, the Ivane mortars confirm consumption of these tree nuts was widespread.

Some areas recognise these dietary links with the past; in the highland Kaironk Valley in Madang Province at least one stand of C. acuminatissima has conservation status.

"This regional example is an exciting addition to our wider project on understanding of the trajectories of diverse plant food exploitation and ground stone technology development witnessed globally in the Holocene," Professor Summerhayes says.

Credit: 
University of Otago

Extracellular vesicles play an important role in the pathology of malaria vivax

image: Hernando del Portillo and Carmen Fernandez led the study on extracellular vesicles in malaria vivax

Image: 
del Portillo

Plasmodium vivax is the most widely distributed human malaria parasite, mostly outside sub-Saharan Africa, and responsible for millions of clinical cases yearly, including severe disease and death. The mechanisms by which P. vivax causes disease are not well understood. Recent evidence suggests that, similar to what has been observed with the more lethal P. falciparum, red blood cells infected by the parasite may accumulate in internal organs and that this could contribute to the pathology of the disease. In fact, the team led by Hernando A. del Portillo and Carmen Fernández-Becerra, recently showed that P. vivax-infected red blood cells adhere to human spleen fibroblasts thanks to the surface expression of certain parasite proteins, and that this expression is induced by the spleen itself. "These findings indicate that the spleen plays a dual role in malaria vivax," says ICREA researcher Hernando A del Portillo. "On one hand, it eliminates infected red blood cells. On the other hand, it may serve as a "hiding" place for the parasite." This could explain why P. vivax can cause severe disease in spite of low peripheral blood parasitemia.

To understand the molecular mechanisms responsible for this adhesion process, the research team turned its attention to something they have been working on for the last few years: extracellular vesicles. These small particles surrounded by a membrane are naturally released from almost any cell and play a role in communication between cells. There is increasing evidence that they could be involved in a wide range of pathologies, including parasitic diseases such as malaria. "Our new findings reveal, for what we believe is the first time, a physiological role of EVs in infectious diseases," says del Portillo, last author of the study.

The research team isolated EVs from the blood of patients with acute P. vivax infection or from healthy volunteers and showed a very efficient uptake of the former by human spleen fibroblasts. Furthermore, this uptake induced the expression of a molecule (ICAM-1) on the surface of the fibroblast which in turn serves as an "anchor" for the adherence of P. vivax-infected red blood cells.

"Our study provides insights into the role of extracellular vesicles in malaria vivax and supports the existence of parasite populations adhering to particular cells of the spleen, where they can multiply while not circulating in the blood," says Fernández-Becerra, senior co-author of the study. "Importantly, these hidden infections could represent an additional challenge to disease diagnosis and elimination efforts as they might be the source of asymptomatic infections," she adds.

Credit: 
Barcelona Institute for Global Health (ISGlobal)

When determining sex, exceptions are the rule

image: A new conceptual framework has been proposed, in which sex chromosome evolution is more cyclical than linear

Image: 
Jacelyn Shu / Genome Biology and Evolution

For nearly a century, biologists have modeled the evolution of sex chromosomes--the genetic instructions that primarily determine whether an individual will develop into a male or female (or a certain mating type)--resulting in an impressive theoretical framework. Now, thanks to the publication of genomic data from a wide variety of non-model organisms, these theories are being tested against empirical evidence from nature--often with surprising results. In a new review in Genome Biology and Evolution, Benjamin Furman, Judith Mank, and colleagues detail the surprising number of exceptions to the purported rules of sex chromosome evolution theory, revealing the potential limitations to our understanding of chromosome-based sex determination systems.

It is often taken for granted that sexual reproduction requires the participation of two individuals of different sexes--generally males and females in the case of plants and animals. While there are numerous exceptions, such as hermaphroditic species of plants, worms, and fish, separate sexes are indeed seen frequently across eukaryotes, especially among animals. An individual's sex may be determined in various ways in different species; while some systems are driven by environmental or social cues, sex is often determined by the presence or absence of a specific sex chromosome--a discrete unit of DNA carrying one or more genes that initiate the development of male- or female-specific characteristics.

Since the first discovery of sex chromosomes by Nettie Stevens in 1905, who noted that male but not female mealworms carried one chromosome smaller than the rest, a considerable body of theory predicting the forces that govern sex chromosome evolution has arisen. The generally accepted model for sex chromosome evolution goes like this: 1) A genetic variant arises on an autosome (a non-sex chromosome) and specifies either the male or female sex. 2) The acquisition of sexually antagonistic alleles, which are advantageous in one sex but disadvantageous in the other, near the new sex determining gene favors the suppression of recombination, allowing for example male-beneficial traits to be inherited in males only. 3) Genes are eventually lost from the new sex chromosome, leading to imbalances in protein expression in one sex. 4) Mechanisms evolve to correct the imbalances in gene dosage. Notably, this model assumes a successive and irreversible progression from autosome to fully differentiated, dosage-compensated sex chromosome.

In reviewing the newly emerging literature from a broad range of organisms, Furman, Mank, and co-authors were stunned at the number of findings that deviate from the above rules. According to Mank, "All of us in the field know that although much of this theory has been upheld in many organisms, there are notable exceptions. Until we compiled this review, we did not really have a sense of how common the exceptions are, or how much diversity there is in sex chromosome evolution."

These exceptions call into question how broadly applicable each of the canonical steps in sex chromosome evolution may be. For example, sex chromosomes in some insects are derived not from autosomes but from nonessential selfish chromosomes called B chromosomes or from bacteria-derived DNA that has made its way into the nuclear genome. Moreover, rather than sexual antagonism driving recombination suppression, recombination may instead be initially inhibited due to the movement of transposable elements or other epigenetic changes to young sex chromosomes. In addition, global dosage compensation of sex chromosomes (as found in humans) appears to be the exception rather than the rule, with most species showing incomplete, gene-by-gene compensation mechanisms.

Most surprisingly, studies have revealed a far greater degree of sex chromosome diversity and turnover than previously appreciated. In fact, some lineages including lizards, fish, amphibians, insects, and plants show frequent changes in the location of sex determining genes and high rates of turnover of sex chromosomes. This led Mank and her co-authors to propose a new conceptual framework in which sex chromosome evolution is more cyclical than linear.

To test this cyclical theory, additional studies from a more diverse array of organisms are needed, especially from taxonomic groups that exhibit variation in sex chromosome features at the individual and population level, as well as understudied groups like fungi and protists. According to Mank, "We still don't really understand the earliest stages of sex chromosome evolution, and I suspect this will be a major area of future focus. Additionally, our map of sex chromosomes is very sparsely populated across the tips of the tree of life, and I'm hopeful that this sampling will become much denser in the coming years, allowing for more robust comparative studies."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Boosting energy efficiency of 2D material electronics using topological semimetal

image: Topological semimetal electrical contacts can significantly reduce the contact resistance and improve the energy efficiency of 2D semiconductor transistor

Image: 
SUTD

Driven by the ever-increasing desire of the consumer market for smaller, lighter and smarter devices, consumer electronics such as smartphones, tablets and laptops have been continually shrinking over the years while becoming more powerful in terms of performance.

Making these devices smaller, however, comes at a price. Due to the dominance of bizarre quantum effects in ultracompact semiconductor chips, field-effect transistors (FET) - electrical switches that form the backbones of computer processors and memory chips - stop behaving in a controllable way. Sophisticated device architectures, such as FinFET and Gate-All-Around FET, must be employed in order to continue scaling down the size of electronic devices.

Two-dimensional (2D) semiconductors have been hailed as a new option for next-generation ultracompact computing electronics. As their ultra-thin bodies are typically only a few atoms thick, electrical switching operations can be efficiently controlled without sophisticated device architectures when they are made into FETs.

In 2016, the World Economic Forum named 2D materials as one of the top 10 emerging technologies for future electronics. Again in 2018, graphene - a 2D material with exceptional properties - was highlighted by the World Economic Forum as one of the key plasmonic materials for revolutionising sensor technology.

When making a transistor, the 2D semiconductor needs to be electrically contacted by two pieces of metal known as the source and drain. This process, however, creates an undesirably large electrical resistance, commonly known as contact resistance, at the source and drain contacts. Large contact resistance can severely degrade transistor performance and generate a substantial amount of heat in the device.

These adverse effects can severely limit the potential of 2D materials in the semiconductor industry. The search for a metal that does not produce a large contact resistance when bonded to a 2D semiconductor remains an ongoing quest.

Reporting in Physical Review Applied, a research team led by the Singapore University of Technology and Design (SUTD) has discovered a new strategy to resolve the contact resistance problem in 2D semiconductors. Using state-of-the-art density functional theory (DFT) computational simulations, the SUTD research team discovered that an ultrathin film of Na3Bi - a recently discovered topological semimetal whose conductive nature is protected by its crystal symmetry - just two atomic layers thick can be used as a metal contact for 2D semiconductors with ultralow contact resistance.

"We found that the Schottky barrier height formed between Na3Bi and 2D semiconductors is one of the lowest among many metals commonly used by the industry," said Dr Yee Sin Ang, one of the lead scientists of the SUTD research team.

Simply put, a Schottky barrier is an energy barrier that forms at the interface between a metal and a semiconductor. The height of the Schottky barrier crucially influences the contact resistance: a small Schottky barrier height is desirable for achieving low contact resistance.
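The barrier-to-resistance link can be sketched with the textbook thermionic-emission scaling, in which contact resistance grows roughly as exp(phi_B / kT). This is a generic illustration of why lowering the barrier matters so much; the barrier heights used are made-up example values, not the paper's DFT results.

```python
import math

# Textbook thermionic-emission scaling: R_c ~ exp(phi_B / kT).
# Barrier heights below are illustrative assumptions, not DFT values
# from the study.
K_B_EV = 8.617e-5  # Boltzmann constant, eV/K
T = 300.0          # room temperature, K

def relative_contact_resistance(phi_b_ev):
    """Contact resistance relative to a zero-barrier contact."""
    return math.exp(phi_b_ev / (K_B_EV * T))

r_low = relative_contact_resistance(0.1)   # a low Schottky barrier, 0.1 eV
r_high = relative_contact_resistance(0.3)  # a higher barrier, 0.3 eV

print(f"A 0.2 eV higher barrier raises R_c by ~{r_high / r_low:.0f}x")
```

Because the dependence is exponential, even a few tenths of an electron-volt of extra barrier height multiplies the contact resistance by orders of magnitude, which is why a low-barrier contact metal is so valuable.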

The discovery that the Schottky barrier formed between Na3Bi and two commonly studied 2D semiconductors, MoS2 and WS2, is substantially lower than many commonly used metals, such as gold, copper and palladium, reveals the strength of topological semimetal thin films for designing energy-efficient 2D semiconductor devices with minimal contact resistance.

"Importantly, we found that when 2D semiconductors are contacted by Na3Bi, the intrinsic electronic properties of the 2D semiconductor are retained," said Dr Liemao Cao, the DFT expert of the SUTD research team.

2D semiconductors can 'fuse' together with a contacting metal and become metalised. Metalised 2D semiconductors lose the original electrical properties that are much needed for electronics and optoelectronics applications. The research team found that a Na3Bi thin film does not metalise 2D semiconductors, so using it as a metal contact can be highly beneficial for device applications such as photodetectors, solar cells and transistors.

"Our pioneering concept that synergises 2D materials and topological materials will offer a new route towards the design of energy-efficient electronic devices, which is particularly important for reducing the energy footprint of advanced computing systems, such as internet-of-things and artificial intelligence," commented Professor Ricky L. K. Ang, the principal investigator of the research team and the Head of the Science, Math and Technology cluster at SUTD.

Credit: 
Singapore University of Technology and Design

Novel bioaccumulative compounds found in marine bivalves

image: BSAFs of representative OHCs detected in the paired mussel and sediment samples.

Image: 
Reprinted with permission from Environmental Science & Technology. © 2020 American Chemical Society

A research team at Ehime University found novel bioaccumulative compounds in mussels inhabiting Hiroshima Bay and suggested their unintentional (natural) formation in the environment. The findings were published on March 12, 2020 in Environmental Science & Technology and selected as a supplementary cover of the journal.

Persistent organic pollutants (POPs) such as polychlorinated biphenyls (PCBs), dichlorodiphenyltrichloroethanes (DDTs), and dioxins possess environmentally persistent and bioaccumulative properties and can cause adverse effects on humans and wildlife. POPs are strictly regulated and targeted for abolition (prohibition of production and usage) and for reduction of unintentional formation according to the Stockholm Convention on POPs. Since the Stockholm Convention came into effect in May 2004, several organohalogen compounds (OHCs) have additionally been registered as new POPs. However, legacy and emerging POPs drawing international attention are confined to well-known anthropogenic OHCs, and hence environmental release and biological exposure of unknown POP-like substances have been overlooked.

Another emerging issue is the natural formation of OHCs in the coastal environment. Several halogenated natural products (HNPs), which are biosynthesized by marine algal and bacterial species, are known to have POP-like physicochemical properties and toxicological effects. Although it is likely that marine biota is chronically exposed to OHCs from both anthropogenic and natural origins, there is a lack of comprehensive screening surveys, especially for low-trophic level organisms such as marine fish and shellfish. Therefore, the origin and bioaccumulation potential for POP-like substances present in the coastal environment remain unclear.

The research team at Ehime University screened known and unknown POP-like substances in mussels and sediment sampled from Hiroshima Bay by using advanced instrumental methods including comprehensive two-dimensional gas chromatography/high-resolution time-of-flight mass spectrometry (GCxGC/HRToFMS) and GC/magnetic sector HRMS. In addition to known OHCs (POPs and HNPs), unknown mixed halogenated compounds containing one chlorine and three to five bromines (UHC-Br3-5Cl) with molecular formulae of C9H6Br3ClO, C9H5Br4ClO, and C9H4Br5ClO were detected. These unknown compounds were found to be ubiquitous in the bay coast despite not being produced commercially, suggesting unintentional (natural) formation in the coastal environment. The biota (mussel)-sediment accumulation factors (BSAFs) of UHC-Br3-5Cl were an order of magnitude higher than those for POPs with similar log octanol-water partition coefficient (log Kow) values, indicating their high bioaccumulative potential. Thus, comprehensive monitoring and ecotoxicological risk assessment of anthropogenic and natural OHCs, targeting a wide range of trophic-level marine organisms, are necessary.
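The BSAF metric mentioned above is conventionally defined as the lipid-normalised tissue concentration divided by the organic-carbon-normalised sediment concentration. A minimal sketch of that definition follows; the input values are hypothetical, not data from the study.

```python
def bsaf(c_biota, lipid_frac, c_sediment, oc_frac):
    """Biota-sediment accumulation factor: lipid-normalised tissue
    concentration over organic-carbon-normalised sediment concentration.
    Concentrations must share units (e.g. ng/g dry weight)."""
    return (c_biota / lipid_frac) / (c_sediment / oc_frac)

# Hypothetical example: 50 ng/g in mussel tissue (2% lipid content),
# 10 ng/g in sediment (1% organic carbon content)
result = bsaf(50.0, 0.02, 10.0, 0.01)
print(f"BSAF = {result:.1f}")
```

A BSAF well above 1 on this normalised scale indicates a compound that concentrates in tissue relative to the sediment it came from, which is the pattern the study reports for the UHC-Br3-5Cl compounds.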

Credit: 
Ehime University

Philippine volcanic eruption could prompt El Niño warming next winter

image: El Niño events and high-latitude Eurasian warming are usually observed after large tropical eruptions. However, simulations of climate impacts by volcanoes show large biases when compared with observations, calling for substantial improvements in climate models. The graph depicts the change in zonally averaged surface temperature in observations (the shaded part) and simulations (the line) after large tropical volcanic eruptions. The equation denotes the chemical reaction in the stratosphere.

Image: 
Advances in Atmospheric Sciences

Climatologists have found that if an ongoing Philippine volcanic eruption becomes more violent, the gases released are likely to produce an El Niño event during the 2020-21 winter, a more intense polar vortex and warming across Eurasia.

When Taal volcano near Manila started erupting on January 12, ash spewed 14 kilometers into the air, coating villages in a blanket of dust and affecting nearly 460,000 people, many of whom lost access to electricity and fresh water. Despite the significant impact, the eruption so far has been tame compared to some of the biggest eruptions in history, and the Philippine Institute of Volcanology and Seismology (PHIVOLCS) has since downgraded its alert.

But these same volcanologists now warn that the Taal volcano may merely be in a lull and that the risk of a more dangerous eruption still exists.

"The Taal eruption has been terrible for local communities," said Fei Liu, a researcher with China's Sun Yat-sen University, "but it has also become a deep global concern, with potentially hazardous consequences for the Earth's climate."

Fine ash and sulfur dioxide from eruptions block incoming solar radiation, reducing the heat reaching Earth's surface while the aerosols warm the stratosphere where they reside. As a result, much of the planet can be cooler than normal for a year or so after especially violent eruptions. Alongside this cooling, however, the northern hemisphere can warm during the first post-eruption winter as atmospheric circulation shifts.

However, gaps remain between the climate impacts of volcanoes projected by computer simulations and what observations show. Liu's team is therefore using the Taal eruption as a real-time test of volcanic impacts on the climate, with the aim of improving the models.

The team's research, conducted with scientists from Nanjing University of Information Science and Technology, University of Hawaii, Zhejiang University, the Institute of Atmospheric Physics at the Chinese Academy of Sciences, Nanjing Normal University, and University of Gothenburg, is reported as a "News & Views" article in Advances in Atmospheric Sciences.

The researchers took data on the scale of volcanic eruptions worldwide over the past 1,100 years, derived from Greenland and Antarctic ice cores, and fed it into global climate models, allowing them to project the impact of the Taal eruption.

They reckon there is a high likelihood (83% probability) of an El Niño-like warming event during the 2020/21 winter if the magnitude of the Taal eruption reaches a mid-range "volcanic explosivity index." Such an eruption would also produce an enhanced polar vortex -- a large area of low pressure and cold air surrounding Earth's North and South poles -- which in turn would drive warming across the Eurasian continent.

The researchers' next step will be to use the eruption to better assess the impacts of stratospheric aerosol geoengineering (also known as "solar radiation management"), a theoretical method of reducing global warming by injecting sulfate aerosols into the stratosphere via balloons, which would work as a sort of deliberate volcanic eruption.

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Piecing together the Dead Sea Scrolls with DNA evidence

image: This image shows one of the Qumran caves where the Dead Sea Scroll fragments were found.

Image: 
Courtesy of the Israel Antiquities Authority, Photographer Shai Halevi

The collection of more than 25,000 fragments of ancient manuscripts known as the Dead Sea Scrolls includes, among other ancient texts, the oldest copies of books of the Hebrew Bible. But piecing them together in order to understand their meaning has remained an incredibly difficult puzzle, especially since most pieces weren't excavated in an orderly fashion. Now, researchers reporting in the journal Cell on June 2 have used an intriguing clue to help in this effort: DNA "fingerprints" lifted from the animal skins on which the texts were written.

"The discovery of the 2,000-year-old Dead Sea Scrolls is one of the most important archaeological discoveries ever made," says Oded Rechavi (@OdedRechavi) of Tel Aviv University in Israel. "However, it poses two major challenges: first, most of them were not found intact but rather disintegrated into thousands of fragments, which had to be sorted and pieced together, with no prior knowledge on how many pieces have been lost forever, or--in the case of non-biblical compositions--how the original text should read. Depending on the classification of each fragment, the interpretation of any given text could change dramatically."

The second challenge is that most of the scrolls were acquired not directly from eleven Qumran caves near the Dead Sea but through antiquity dealers. As a result, it's not clear where many of the fragments came from in the first place, making it that much more difficult to put them together and into their proper historical context.

Since their discovery, mainly in the late 1940s and 1950s, scholars have tried to put the fragments together like a jigsaw puzzle, relying primarily on their visible properties to learn how they relate to one another. In the new study, Rechavi and colleagues, including Noam Mizrahi of Tel Aviv University, Israel, and Mattias Jakobsson of Uppsala University, Sweden, decided to look deeper for clues. From each piece, they extracted ancient DNA of the animals used to make the parchments. Then, using a forensic-like analysis, they worked to establish the relationships between the pieces based on that DNA evidence and on scrutiny of the language within the texts under investigation.

The DNA sequences revealed that the parchments were mostly made from sheep, which had not been known before. The researchers then reasoned that pieces made from the skin of the same sheep must be related, and that scrolls from closely related sheep were more likely to fit together than those from more distantly related sheep or from other species.

The researchers also came across an interesting case in which two pieces thought to belong together were in fact made from different animals -- sheep and cow -- suggesting they don't belong together at all. The most notable example came from scrolls comprising different copies of the biblical, prophetic book of Jeremiah, which are also some of the oldest known scrolls.

"Analysis of the text found on these Jeremiah pieces suggests that they not only belong to different scrolls, they also represent different versions of the prophetic book," says Mizrahi. "The fact that the scrolls that are most divergent textually are also made of a different animal species is indicative that they originate at a different provenance."

Most likely, he explains, the cow fragments were written elsewhere because it wasn't possible to raise cows in the Judean desert. The discovery also has larger implications. The researchers write that the fact that different versions of the book circulated in parallel suggests that "the holiness of the biblical book did not extend to its precise wording." That's in contrast to the mutually exclusive texts that were adopted later by Judaism and Christianity, they note.

"This teaches us about the way this prophetic text was read at the time and also holds clues to the process of the text's evolution," Rechavi says.

Other highlights include insight into the relationship among different copies of a non-biblical, liturgical work known as the Songs of the Sabbath Sacrifice, found in both Qumran and Masada. The analysis shows that the various copies found in different Qumran Caves are closely related genetically, but the Masada copy is distinct. The finding suggests that the work had a wider currency in the period.

"What we learn from the scrolls is probably relevant also to what happened in the country at the time," Mizrahi says. "As the Songs of the Sabbath Sacrifice foreshadows revolutionary developments in poetic design and religious thinking, this conclusion has implications for the history of Western mysticism and Jewish liturgy."

The evidence also confirmed that some other fragments of uncertain origin likely came from places other than the Qumran caves. In one case, the DNA evidence suggests that a fragment from a copy of the biblical book of Isaiah -- one of the most popular books in ancient Judea -- likely came from another site, pointing to the potential existence of an additional place of discovery that still awaits identification.

Although the DNA evidence adds to understanding, it can only "reveal part of the picture and not solve all the mysteries," Rechavi says. The researchers had to extract DNA from tiny amounts of materials--what they refer to as scroll "dust" in certain cases--and say there are also many scrolls that have yet to be sampled and others that can't be, for fear it might ruin them.

Nevertheless, the researchers hope that more samples will be tested and added to the database to work toward a more complete Dead Sea Scroll "genome." They now think they can apply the same methods to any ancient artifact that contains enough intact DNA or perhaps other biological molecules.

Credit: 
Cell Press

Artificial tissue used to research uterine contractions

image: Scheme of the tissue engineered endometrial barrier model and experimental setup to induce peristaltic flow patterns on the biological model.

Image: 
David Elad

WASHINGTON, June 2, 2020 -- Advanced tissue engineering technologies allow scientists to mimic the structure of a uterus, enabling crucial research on fertility and disease.

In an APL Bioengineering article, by AIP Publishing, researchers present two mechanobiology tools for experiments on synthetic or artificial uterine tissue. They wanted to study the negative effects of hyperperistalsis, contractions of the uterine wall that occur too frequently.

Throughout an individual's lifetime, the uterus undergoes spontaneous contractions of the uterine wall, which can induce uterine peristalsis, a specific wavelike contraction pattern. These contractions are important for many reproductive processes, such as the transportation of sperm prior to impregnation, but hyperperistalsis could impede fertility and lead to diseases, such as adenomyosis or endometriosis.

"The nonpregnant spontaneous myometrial contractions induce the uterine peristalsis, which exerts physical loads on the internal endometrial barrier and, thereby, affect the biological performance," said David Elad, one of the paper's authors.

The designed tools include a well, which can be disassembled for installation of a biological in vitro model in an experimental chamber, and a flow chamber and transmission system that create the contraction patterns.

"The cells are cultured on a biological or synthetic membrane stretched over a well bottom, which is installed into a cylindrical medium holder. Once the biological model is ready, the well bottom can be disassembled and then installed in the experimental chamber," said Elad. "As far as we know, it is the first time an in vitro biological model was exposed to peristaltic wall shear stresses."

Using their experimental setup, the authors found peristaltic shear stresses caused alterations to the cytoskeleton of endometrial epithelial cells and myometrial smooth muscle cells of the synthetic uterine tissue.

Future research using this approach might include studying the effect of different contraction patterns and the role hormones play in uterine peristalsis. The researchers noted that similar models can also be applied to study intestinal walls and other organs and tissues.

"The laboratory approach of this work demonstrates future options to explore the complex processes of human reproduction, especially during early stages when accessibility and ethical limitations prohibit in vivo experiments," said Elad.

Credit: 
American Institute of Physics

Swing voters, swing stocks, swing users

In group decision-making, swing voters are crucial...or so we've heard. Whether it's a presidential election, a Supreme Court vote, or a congressional decision -- and especially in highly partisan environments, where the votes of the wings are almost guaranteed -- the votes of the few individuals who seem to be in the middle could tip the scales.

However, the notion of a swing voter is limited because people don't always fall neatly onto one side or another. In many cases, the "middle" shifts over time; others may watch the swing voter to determine their own votes; or voters may make compromises, reflecting complex and overlapping networks of influence. To account for such complexity, the authors of a new paper published in the Journal of the Royal Society Interface develop a more general approach to identifying "pivotal components," which are akin to swing voters but applicable to a wide range of systems.
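The intuition behind a pivotal component can be sketched with a toy majority-vote model. This is our own illustration, not the statistical framework of the paper: five voters, two near-deterministic partisan blocs and one centrist, and we measure how much probability mass sits on configurations where flipping a given voter would flip the collective outcome.

```python
# Toy illustration of "pivotal" voters (hypothetical probabilities,
# not the authors' model): in a 5-member majority vote with two
# partisan blocs, the centrist carries most of the pivotality.
import itertools

# Probability each voter votes "yes"; voter 2 is the centrist.
p_yes = [0.95, 0.95, 0.5, 0.05, 0.05]

def outcome(votes):
    return sum(votes) >= 3  # simple majority of 5

pivotal = [0.0] * 5
for votes in itertools.product([0, 1], repeat=5):
    # Probability of this vote configuration under independent voting.
    prob = 1.0
    for v, p in zip(votes, p_yes):
        prob *= p if v else 1 - p
    # Voter i is pivotal here if flipping their vote flips the outcome.
    for i in range(5):
        flipped = list(votes)
        flipped[i] = 1 - flipped[i]
        if outcome(flipped) != outcome(votes):
            pivotal[i] += prob

print([round(x, 3) for x in pivotal])
```

The centrist's pivotality dominates because the wings' votes are nearly guaranteed; the authors' point is that real systems need not look like this, and sometimes have no sensitive points at all.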

"We propose a generalizable approach for identifying pivotal components across a wide variety of systems," says author Edward Lee, who studies collective behavior at the Santa Fe Institute. "These systems go beyond voting, and include social media (like Twitter), biology (like the statistics of neurons), or finance (like fluctuations of the stock market)."

In the paper, Lee and his co-authors, Daniel Katz (Illinois Tech), Michael Bommarito (CodeX), and Paul Ginsparg (Cornell University), identify a statistical signature of pivotal components, which they then trace to communities on Twitter, votes in the Supreme Court and Congress, and stock indices within financial markets. They find wide diversity in how social systems depend on sensitive points, when such points exist at all.

For example, between 1994 and 2005, the US Supreme Court was generally dominated by patterns other than partisan politics, despite partisan votes like Bush v. Gore, which effectively decided the presidential election in 2000. In contrast, the New Jersey State Supreme Court from 2007-2010 was characterized by two pivotal voters. This variation reflects the role of institutional rules and norms.

"This concentration of power may correspond to weakness because focused pressure, such as intense lobbying, might be used to control outcomes, a kind of tyrannical exploitation of democracy," says Lee. This finding presents the possibility of learning next how institutional mechanisms diffuse power away from swing voters or concentrate them in the hands of a few individuals.

The authors' new framework for identifying pivotal components could also be applied to a variety of other systems to identify individuals or swing coalitions, which consist of multiple components or voters that need to be changed simultaneously, even in opposing ways.

To develop their approach, the interdisciplinary team combined ideas from statistical physics, mathematics, political science, and finance. Their work could help identify prime candidates for changing outcomes, or lead to a better understanding of how institutional and environmental factors shape the emergence of social structure.

Credit: 
Santa Fe Institute

Possible physical trace of short-term memory found

image: Without activity, only few vesicles are found at the synapse. After a short burst of activity, vesicles dock at the synapse. The pool of vesicles is stored at the synapse even minutes later. This pool may be the physical trace of memory, the 'engram'.

Image: 
© Jonas Group / IST Austria

Forming memory is essential for us to learn and acquire knowledge. In the 20th century, Richard Semon introduced the idea of an "engram," a physical substrate of a memory: as an animal learns, information is stored in an engram in the brain. Later, this information is retrieved. "Where are the engrams? This was one of the questions we asked", explains Peter Jonas. "Synaptic plasticity, the strengthening of communication between neurons, explains memory formation at the subcellular level. To find the engram, we, therefore, explored structural correlates of synaptic plasticity."

Unexpected mechanism strengthens communication

For this search, postdoc David Vandael studied single synapses in the hippocampus, the brain area required for learning and memory. Among the many synapses that connect with a pyramidal cell neuron in the hippocampus, Vandael picked out one and recorded what happens as a granule cell sends a signal to the pyramidal cell it connects with. "Recording from single identified synapses is crucial. We, therefore, set up a close-to-impossible experiment, in which we made simultaneous recordings of electrical signals from a small pre-synaptic terminal and its postsynaptic target neuron. This is the perfect way to examine the synapse", Vandael illustrates.

Vandael found that when a granule cell fires, it induces a type of synaptic plasticity called post-tetanic potentiation, which strengthens communication between granule cell and pyramidal cell for several minutes. However, the mechanism behind this plasticity was unexpected: based on what others had found for a model synapse (the calyx of Held), Vandael and Jonas had hypothesized that plasticity arises because, after a burst of activity, vesicles would be more likely to release neurotransmitters into the synapse. This release of neurotransmitter is how signals are transmitted from one neuron to the other. "Instead, we found that after a granule cell is active, more vesicles containing neurotransmitter are stored at the pre-synaptic terminal," Vandael explains. "Firing patterns induce plasticity through an increase of vesicles in this active zone, which can be stored for a few minutes."

Vesicle pools as an engram for short-term memory?

During learning, when the granule cell is active, vesicles are pushed into this pool at the active zone. When activity drops, the vesicles stay in the pool. When activity resumes, more vesicles stored in the active zone means more transmitter is released into the synapse. "Short-term memory might be activity stored as vesicles that are released later," Vandael adds.
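The dynamics described above can be caricatured in a few lines. This is a toy model with made-up numbers, not the Jonas group's quantitative analysis: the docked-vesicle pool jumps after a burst of activity and then relaxes back to baseline over minutes.

```python
# Toy model of a presynaptic vesicle pool underlying post-tetanic
# potentiation. Baseline size, burst-added vesicles, and the decay
# time constant are all hypothetical values for illustration.
import math

def pool_size(t_after_burst_s, baseline=5.0, added=20.0, tau_s=120.0):
    """Docked vesicles at time t after a burst: a baseline pool plus
    a burst-added component decaying with time constant tau."""
    return baseline + added * math.exp(-t_after_burst_s / tau_s)

# Right after the burst the pool is enlarged...
print(round(pool_size(0), 1))    # → 25.0
# ...and ten minutes later it has relaxed most of the way back.
print(round(pool_size(600), 1))  # → 5.1
```

While the pool remains enlarged, renewed activity releases more transmitter per spike, which is the sense in which stored vesicles could act as a short-term engram.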

In the end, this might be an important discovery, Jonas says. "By analyzing the biophysical and structural components of plasticity, David may have discovered the engram - if we believe that synaptic plasticity underlies learning." In further work, the group is currently trying to correlate synaptic signals in vivo with behavioral changes.

The new findings help expand our understanding of learning and memory, Vandael says. "It is fascinating to think of memories as numbers of neurotransmitter-containing quanta, and we truly believe it will be inspiring for the neuroscience research community. We hope our work will contribute to solving part of the unresolved mysteries of learning and memory." Getting to grips with how different synapses work may also help to understand how diseases affect synapses, Jonas adds.

Credit: 
Institute of Science and Technology Austria

Promising new method for producing tiny liquid capsules

image: Microfluidic technology is used to miniaturize a fish oil capsule from its normal size to the size of a printed dot in a book.

Image: 
Nam-Trung Nguyen

WASHINGTON, June 2, 2020 -- Microcapsules for the storage and delivery of substances are tiny versions of the type of capsule used for fish oil or other liquid supplements, such as vitamin D. A new method for synthesizing microcapsules, reported in AIP Advances, by AIP Publishing, creates microcapsules with a liquid core that are ideal for the storage and delivery of oil-based materials in skin care products. They also show promise in some applications as tiny bioreactors.

Current production methods for microcapsules involve the use of emulsions, but these often require surfactants to ensure stability of the interface between the inner liquid and the one used to create the outer shell. Since surfactants can adversely affect the liquids involved, other approaches have been tried, including spraying liquids in a strong electric field.

One technique for creating microcapsules that works reasonably well involves the use of tiny channels. This microfluidics approach requires the complete wetting of the tiny channels with the liquids used to make the droplets. This, again, requires surfactants, complicating the fabrication process.

In this new method, a surfactant-free microfluidics process is used. The technique can produce up to 100 microcapsules per second. The output could be even larger at higher flow rates, according to the authors.

To produce the microcapsules, the investigators created a device by etching tiny channels into hard plastic. Two different liquids, an oily one for the core and a different one for the shell, were injected into the channels.

As the liquids are pumped through, droplets form when the immiscible liquids come into contact. The droplets are kept separate from one another with a third liquid and, finally, irradiated with ultraviolet light. This final step causes the outer shell to polymerize and solidify, trapping the liquid core.

The investigators analyzed and optimized the system by trying different flow rates and other operating conditions. The final droplets were examined and allowed to dry overnight at a high temperature, but no evaporation or shrinkage was observed, showing that the microcapsules can be safely stored without rupturing. This makes them ideal for pharmaceutical or skin care applications.

"Another application for microcapsules would be the polymerase chain reaction, PCR," said co-author Nam-Trung Nguyen, of Griffith University in Australia.

Keeping the PCR samples in these tiny capsules allows for implementation of a technique known as digital PCR.

"Each microcapsule could serve as a single microreactor, eliminating the need for well plates," said Nguyen.

Credit: 
American Institute of Physics