Tech

Spintronics: Physicists discover new material for highly efficient data processing

A new material could aid in the development of extremely energy efficient IT applications. The material was discovered by an international research team in cooperation with Martin Luther University Halle-Wittenberg (MLU). The electrons at the oxide interface of the material possess special properties which drastically increase the conversion rate of spin current to charge current. This is the foundation for future spintronic applications. The new material has been found to be more efficient than any previously investigated material, the team writes in the journal Nature Materials.

Electric current flows through all technical devices, generating heat and losing energy. Spintronics explores new approaches to this problem that exploit a special property of electrons: spin. Spin is a type of intrinsic angular momentum that gives each electron a magnetic moment and is the origin of magnetism. The idea behind spintronics is that if a spin current flows through a material instead of an electric charge current, no heat is generated and significantly less energy is lost in the device. "However, this approach still requires an electric current for the device to work. Therefore, an efficient spin-to-charge conversion is necessary for this novel technology to work," explains Professor Ingrid Mertig, a physicist at MLU. Her research group is part of the international research team that discovered the new material. The work was led by the French physicist Dr Manuel Bibes, who conducts research at the renowned joint laboratory of the Centre national de la recherche scientifique (CNRS) and Thales.

The group investigated the interface between two oxides. "The two substances are actually insulators and are non-conductive. However, a kind of two-dimensional electron gas forms at their interface, which behaves like a metal, conducts current and can convert charge current into spin current with extremely high efficiency," explains Mertig. Dr Annika Johansson and Börge Göbel, two members of her research group, provided the theoretical explanation for this unusual observation. According to the researchers, the new material is significantly more efficient than any other known material. This could pave the way for the development of new, energy-saving computers.

MLU has extensive expertise in the field of oxide interfaces. Since 2008, the university has hosted Collaborative Research Centre 762 "Functionality of Oxidic Interfaces", which is funded by the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG). The CRC is part of the university's core research area "Materials Science - Nanostructured Materials".

The idea for the project arose during Manuel Bibes's guest stay in Halle last year. Bibes is a recipient of the Alexander von Humboldt Foundation's Friedrich Wilhelm Bessel Research Award. The prize is awarded to internationally renowned scientists from abroad for their research achievements. Researchers can use the prize money for research stays at German universities and research institutions.

Credit: 
Martin-Luther-Universität Halle-Wittenberg

Researchers show satellite data can reveal fire susceptibility in peatlands

When large areas of carbon-rich soil catch fire, the blaze emits massive amounts of carbon into the atmosphere and creates a thick haze some residents of Southeast Asia know all too well. In 2015, the haze from peatland fires was fatal, responsible for more than 100,000 premature deaths in Indonesia, Malaysia and Singapore.

Because of how they accumulate organic material for long periods of time, undisturbed peatlands are considered one of the most effective natural ecosystems for carbon storage. So large fires come at a huge cost to human health and sustainability.

"Although they only cover 3 percent of the world's land area, peatlands are estimated to contain 21 percent of the world's soil carbon," said Stanford University doctoral candidate Nathan Dadap, lead author on a new study correlating soil moisture with fire vulnerability in peatlands.

In order to understand fire susceptibility in Asian peatlands, where blazes have increased in scale and severity over the past 30 years due to land-use changes, scientists developed a novel approach to measuring soil moisture using a previously underestimated tool: satellite data.

Since tropical peatlands are found in swamps where the ground can be obscured by dense vegetation, it was thought impossible to use satellite data for monitoring soil moisture. By developing an alternative algorithm, Stanford scientists have shown for the first time that analyzing remote sensing data can reveal soil moisture in this region, which can in turn be used to predict fire risk. The research appeared in Environmental Research Letters Sept. 9.

"This clearly shows the potential to lead to improved fire predictions," said co-author Alexandra Konings, an assistant professor of Earth system science in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "More research is needed, but it opens the door to a new way of figuring out long-term policies for managing peatland fire risk."

Researchers analyzed data from the NASA Soil Moisture Active Passive (SMAP) mission during the 2015 El Niño and found that the replacement of tropical forests with oil palm and acacia plantations allowed for measurement of the soil moisture in this region. The analyses show that drier soil up to 30 days before a fire correlated with a larger burned area. While rainfall is currently used as an indicator for fire risk in the region, soil moisture is the most direct way of assessing that risk.
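To make that kind of comparison concrete, here is a minimal, hypothetical sketch (not the authors' actual pipeline) that averages satellite soil-moisture retrievals over the 30 days before each fire and checks whether drier pre-fire soil tracks larger burned areas. The data layout, variable names, synthetic inputs and the choice of a Spearman rank correlation are all illustrative assumptions.

```python
# Illustrative sketch only -- not the study's actual analysis pipeline.
# Assumes a daily soil-moisture time series (e.g., extracted from SMAP)
# is already available for each fire location, plus the burned area.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def prefire_moisture(daily_moisture: pd.Series, fire_date: pd.Timestamp,
                     window_days: int = 30) -> float:
    """Mean soil moisture over the window_days preceding the fire."""
    window = daily_moisture.loc[fire_date - pd.Timedelta(days=window_days):fire_date]
    return float(window.mean())

# Hypothetical inputs: burned areas (hectares) and one moisture series per fire.
fires = pd.DataFrame({
    "fire_date": pd.to_datetime(["2015-08-10", "2015-09-02", "2015-09-20"]),
    "burned_area_ha": [120.0, 2300.0, 45.0],
})
moisture_series = {
    i: pd.Series(np.random.uniform(0.1, 0.5, 60),        # volumetric soil moisture
                 index=pd.date_range("2015-07-25", periods=60))
    for i in fires.index
}

fires["prefire_moisture"] = [
    prefire_moisture(moisture_series[i], row.fire_date)
    for i, row in fires.iterrows()
]

# Drier pre-fire soils correlating with larger burns would appear here
# as a negative rank correlation.
rho, p = spearmanr(fires["prefire_moisture"], np.log10(fires["burned_area_ha"]))
print(f"Spearman rho = {rho:.2f} (p = {p:.2g})")
```

In the published analysis, real SMAP retrievals and burned-area records would take the place of the synthetic inputs used here.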

"The problem with using precipitation as an indicator is that it doesn't take into account the local conditions," Dadap said. "If one area has drainage canals and another does not, but you still have the same amount of precipitation, the one with canals still is going to have a much higher risk of fires. That's why we think that inclusion of soil moisture can be an important metric for capturing conditions on the ground."

Carbon sink or fossil fuel?

When fires start in peatlands and the soils are dry enough, blazes there can quickly get out of control, causing haze downwind in the densely populated cities of Jakarta and Singapore and ushering in long-term climate impacts that affect the whole planet.

"In the 2015 peat fires, nearly the same amount of carbon dioxide was released as India's total annual carbon emissions from fossil fuels," Dadap said.

Nearly 95 percent of the peatlands in this region of Sumatra, peninsular Malaysia and Borneo have been degraded - a factor that increases susceptibility to widespread fires - yet those land-use changes also enabled the researchers to use satellite data to measure soil moisture there. Their new approaches for interpreting the satellite data might also work in other peatlands where the land cover allows for accurate soil moisture measurement, Dadap said.

While policymakers have expressed some interest in implementing water table-based management policies in the area, measurements for creating such guidelines would need to happen on the ground - a process that would be extremely labor-intensive for such a large region and infeasible in some areas, according to Konings. The approach used in this study shows the value of using satellite data for a more detailed understanding of peatland hydrology.

"This shows that the consideration of hydrologic factors beyond just the commonly cited water table in this region - factors like soil moisture or canals that might be easier to map than a water table - could be relevant for avoiding fire outcomes," Konings said.

Laboratory links

While exploring the relationship between fire susceptibility and soil moisture in peatlands, Dadap turned to lab-based research for supporting evidence. The satellite data analysis showed that burned areas were much larger when soils were below a certain soil moisture value. A laboratory study from the 1990s similarly showed that ignition of peat samples was much more likely below the same value.

"That was probably the most shocking finding, since we were measuring soil moisture from the satellite - it was a totally different method than this laboratory ignition study," Dadap said. "It was a pleasant surprise to have an independent comparison that seems to match up really well."

Credit: 
Stanford's School of Earth, Energy & Environmental Sciences

Salk scientists develop technique to reveal epigenetic features of cells in the brain

image: From left: Jingtian Zhou, Dong-Sung Lee, Chongyuan Luo, Jesse Dixon and Joseph Ecker.

Image: 
Salk Institute

LA JOLLA--(September 9, 2019) The brain's prefrontal cortex, which gives us our ability to solve problems and plan ahead, contains billions of cells. But understanding the large diversity of cell types in this critical region, each with unique genetic and molecular properties, has been challenging.

Scientists have known that much of this diversity results from epigenetics (such as the chemical tags on DNA) as well as how epigenetic features ultimately fold up within chromosomes to affect how genes are expressed.

Now, Salk researchers have developed a method to simultaneously analyze how chromosomes, along with their epigenetic features, are compacted inside of single human brain cells. A collaborative team of scientists from the Ecker and Dixon labs combined two different analysis techniques into one method, which enabled them to identify gene regulatory elements in distinct cell types. The work, which was published in Nature Methods on September 9, 2019, paves the way toward a new understanding of how some cells become dysregulated to cause disease.

"We've taken this new and better approach to analyzing the genomes of single cells and applied it to healthy brain tissue," says Salk Professor and Howard Hughes Medical Institute Investigator Joseph Ecker, head of the Genomic Analysis Laboratory and the paper's co-corresponding author. "The next step is to compare normal and disease tissue."

How DNA is packed within structures called chromosomes in a cell's nucleus can play a critical role in cellular function. And how DNA is ultimately folded depends on which sections of DNA need to interact with each other and which need to be easily accessible to cellular machinery. The structure of chromosomes acts as a sort of cellular fingerprint: although different cell types have the same sequence of DNA, they have different chromosome structures to organize that DNA.

At the same time, chemical (epigenetic) modifications to DNA itself--such as the addition of methyl groups to a strand of DNA--also control the timing and levels of gene expression. When a methyl group is tacked onto a bit of DNA, a gene is typically blocked from being expressed.

In the past, researchers have had to use separate methods to determine chromosome structures and methylation patterns of individual cells. In July, for instance, Ecker's team reported they had developed a new tool that could differentiate cell types based solely on chromosome structure. And in 2017, they sorted mouse and human brain cells based on their methylation patterns.

When doing the experiments separately, however, researchers can't determine how chromosome structure and methylation patterns might be related. It has been unclear whether each subset of chromosome structures corresponds to a subset of methylation patterns, or whether the two datasets, when combined, reveal more nuanced subtypes of cells.

In their new method, called single-nucleus methyl-3C sequencing (sn-m3C-seq), the Salk team "double dips" from each single cell, collecting data on both chromosome structure and methylation at the same time. While doing the process manually would be slow and cumbersome, the team automated sn-m3C-seq, letting them easily study thousands of cells. The development of new approaches to handle cells, coupled with new computational methods to handle data, enabled this new technique.

The team says developing a method that examines these features in single cells allows scientists to use certain "analytical tricks" to directly study tissue samples and resolve chromosome structure and DNA methylation in all the different cell types in the tissue. "We know these features can vary a lot between cell types and there's value in having both types of information together from the same cells," says Jesse Dixon, a Helmsley-Salk Fellow and co-corresponding author. "It really opens up our ability to understand what regulatory sequences are affecting which genes across a wide variety of cell types and tissues."

Knowing what regulatory sequences regulate which genes has important implications for understanding how genetic variations may contribute to human disease. For example, much of the genetic variation that contributes to common human brain diseases like schizophrenia and depression, as well as non-brain diseases like heart disease, lies in regions of our genome that are far away from genes. The researchers say that, by studying chromosome folding in actual human tissues and resolving distinct cell types, these methods may allow them to link disease-causing genetic variants with the genes they regulate, which may tell them more about why certain variants contribute to diseases and offer insights into how to best treat them.

To test out sn-m3C-seq, Ecker, Dixon and colleagues applied the method to more than 4,200 human brain prefrontal cortex cells. While using data from chromosome structure alone allowed only the crude separation of neurons from non-neurons, combining the approaches let the researchers identify gene regulatory elements in distinct cell types and then further study the chromosome structures present in each cell type.

Moreover, the team noticed relationships between the two levels of regulation that they plan to study more in the future. Now that the method is established, they'd like to begin applying it to more types of both healthy and diseased tissues.

Instrumental to that effort will be a four-million-dollar grant that Dixon and Ecker received from the National Institutes of Health's National Human Genome Research Institute on September 6, 2019, which will greatly facilitate their studies of gene regulation in human tissues and diseases such as cancer.

Credit: 
Salk Institute

Threatened species habitat destruction shows federal laws are broken

image: Native Australian species, such as the koala, have lost one million hectares of habitat since 2000.

Image: 
The University of Queensland

Human activities have destroyed more than 7.7 million hectares of threatened species habitat, revealing critical failures with Australia's federal environmental protection laws.

A University of Queensland-led study has revealed that less than seven per cent of this destruction was referred to the Federal Government for assessment - scrutiny that is required under Australia's flagship environmental legislation, the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act).

Lead author and PhD candidate in UQ's School of Earth and Environmental Sciences, Michelle Ward, said habitat for our most imperiled species should be regulated, maintained, and fully protected.

"It's alarming for a species to lose 25 per cent of its habitat in less than two decades - it must be addressed," she said.

"Species threatened with extinction are a matter of national environmental significance and need to be protected and conserved."

The authors looked at the distributions of 1,638 terrestrial threatened species, terrestrial migratory species and threatened ecological communities, quantifying the loss of potential habitat and communities since the EPBC Act came into force.

The team found that more than 7.7 million hectares of potential habitat and communities were cleared between 2000 and 2017.

In all, 1,390 species (84 per cent) suffered some habitat loss; the Mount Cooper striped skink, Keighery's macarthuria and the southern black-throated finch lost 25 per cent, 23 per cent and 10 per cent of their potential habitat respectively.

The koala has lost approximately one million hectares of habitat since 2000.

Dr Martin Taylor from the World Wide Fund for Nature-Australia, one of the paper's senior authors, said it was a national scandal which should outrage the community.

"It's hard for any reasonable person to see how seven million hectares of unassessed, unapproved destruction of threatened species habitat can be other than unlawful.

"The government is failing to enforce a law designed to halt Australia's extinction crisis.

"It's as if the cops are asleep at the wheel, while all the shops up and down the street are looted.

"Most of the destruction is to create livestock pasture.

"Why are agricultural developers not referring their clearing for assessment?

"This cannot be allowed to continue," he said.

Credit: 
University of Queensland

Metal-organic framework nanoribbons

image: Comparison of the traditional approach for preparation of bulk MOF crystal (top) and metal hydroxide nanostructure precursor approach for synthesis of ultrathin MOF nanoribbon (bottom).

Image: 
©Science China Press

Metal-organic frameworks (MOFs) have attracted great attention over the past decades due to their many notable features, such as large surface areas, highly ordered pores, tunable structures and unique functions, which make them promising for a wide range of applications. The structural engineering of MOFs at the nanometer scale is essential to customize them for specific applications.

Among various nanostructures, ultrathin nanoribbons (NRBs) show great potential in both fundamental studies and technological applications. Features such as a high surface-to-volume ratio, highly active surfaces, and a high concentration of selectively exposed crystal facets enable them to exhibit unique electronic structures, mechanical properties, and excellent catalytic efficiency. However, the preparation of ultrathin MOF NRBs remains a great challenge due to the complicated nucleation and growth processes of MOFs.

In a new research article published in the National Science Review, scientists at Nanyang Technological University, City University of Hong Kong and Beijing University of Chemical Technology present a general method to prepare ultrathin MOF NRBs by using metal hydroxide nanostructures as precursors. They found that the metal hydroxide precursors regulate the growth of MOF crystals through the controlled release of metal ions, which plays a key role in the synthesis of MOF NRBs. Importantly, the proposed method is simple, efficient and versatile, and can be used to prepare a whole series of ultrathin MOF NRBs. As a proof-of-concept application, the as-prepared ultrathin NRBs were used in DNA detection, exhibiting excellent sensitivity and selectivity.

Credit: 
Science China Press

Cheap water treatment

image: The new nickel catalysts synthesized at the Institute of Physical Chemistry PAS allow for extremely effective water treatment in flow mode, removing harmful organochlorine compounds.

Image: 
IPC PAS, G.Krzyzewski

There's nothing new in treating water by sorption of organic solvents such as trichloroethylene (TCE). But finding a method that neutralizes these contaminants, instead of just shifting them somewhere else, is no mean feat. A team led by Anna Śrębowata, a professor at the IPC PAS, has improved a method of catalytic hydrotreatment, that is, transforming TCE into hydrocarbons that are less harmful to the environment. Thanks to scientists from the IPC PAS, not only the water in our taps, but also in our rivers, can be cleaner and safer for human health.

Clean water is a treasure, but also a resource that is becoming more and more scarce. Various contaminants are widespread, and some are extremely difficult to remove. Such pollutants include trichloroethylene (known in Poland as TRI). This organic solvent was once commonly used in organic synthesis, in dry cleaning and for the industrial degreasing of metals during processing. Due to its negative impact, its use has been officially banned since 2016. However, considering its stability, it may remain in both water and soil for many years to come, explains MSc Emil Kowalewski, a member of the team that developed the innovative method of removing this compound from water. The project is part of a global trend focused on the protection of water resources. The research may be of interest to wastewater treatment plants and become a potential starting point for the development of innovative water treatment systems. Why?

Today's wastewater treatment plants are systems consisting of many physical, chemical and biological processes, but they effectively eliminate mainly conventional pollutants. Others may remain in the water if their concentrations are high enough. "Meanwhile, trichloroethylene should not be in water at all, because it is mutagenic, carcinogenic, teratogenic...", says the scientist, "and what's more, extremely long-lasting. It accumulates and stays at the bottom of reservoirs, and since its solubility in water is very poor, it can remain harmful for many years to come."

"Today we deal with such compounds mainly by the process of sorption. However, in this way we're only transferring the threat from one place to another. An attractive solution seems to be catalytic hydrotreatment, i.e. transforming the TCE into less harmful hydrocarbons. However, in order to fully exploit the potential of this method, it was necessary to develop an efficient, stable and cheap catalyst," says Dr. Anna Śrębowata, professor at the IPC.

"Previously, we carried out research with palladium catalysts. They were effective but expensive," smiles Emil Kowalewski. The new nickel catalysts, developed at the IPC PAS, allow for a cheap and effective method of conducting the process of water treatment in flow mode, and at the same time they are easy to synthesize. "Using a catalyst in which nickel nanoparticles with a diameter of about 20 nm are deposited on the surface of activated carbon, we combine the sorption properties of carbon and the catalytic activity of nickel," explains Emil Kowalewski. In their research, the scientists from the IPC PAS also showed that nickel nanoparticles deposited on activated carbon with a partially ordered structure show higher activity and stability than an analogous catalyst based on a support with amorphous structure

The scientists are, however, proudest of the innovative element of their research - introducing flow technology to the removal of TCE from water. Thanks to this, the parameters of the process can be optimized, the amount of waste can be reduced, and catalysts which were inefficient or even ineffective in batch reactors (i.e. reactors where a specific batch of product is treated at one time) can be used. "This was the case with our nickel catalyst," says MSc Kowalewski. "Without flow technology, its capacity to convert TCE declined over time as the catalyst underwent poisoning. In the flow reactor, even after 25 hours, we did not observe any decrease in activity, although we conducted the research at concentrations about 8000 times higher than the Polish standard for its content in drinking water."

Where can the innovative method be used? Above all in water and wastewater treatment plants - wherever we want the water reaching the end user to be clean, regardless of whether that user is a person drinking tap water or a fish swimming in the river.

And what should be done with the products of the hydrotreatment of water to remove trichloroethylene? "The resulting compounds are hydrocarbons, mainly ethylene. But it's not enough for a banana-ripening plant," smiles the scientist half-jokingly. "It will simply escape..."

"This publication is part of a project which has received funding from the European Union's research and innovation programme "Horizon 2020" under contract no. 666295 for co-financing and from funds for science in 2016-2019 allocated by the Ministry of Science and Higher Education for the implementation of an international co-financed project."

Credit: 
Institute of Physical Chemistry of the Polish Academy of Sciences

Researchers unearth 'new' extinction

A team of scientists has concluded that Earth experienced a previously underestimated severe mass-extinction event, which occurred about 260 million years ago, raising the total of major mass extinctions in the geologic record to six.

"It is crucial that we know the number of severe mass extinctions and their timing in order to investigate their causes," explains Michael Rampino, a professor in New York University's Department of Biology and a co-author of the analysis, which appears in the journal Historical Biology. "Notably, all six major mass extinctions are correlated with devastating environmental upheavals--specifically, massive flood-basalt eruptions, each covering more than a million square kilometers with thick lava flows."

Scientists had previously determined that there were five major mass-extinction events, wiping out large numbers of species and defining the ends of geological periods: the end of the Ordovician (443 million years ago), the Late Devonian (372 million years ago), the Permian (252 million years ago), the Triassic (201 million years ago), and the Cretaceous (66 million years ago). And, in fact, many researchers have raised concerns about the contemporary, ongoing loss of species diversity--a development that might be labeled a "seventh extinction," since scientists have predicted that such a modern mass extinction could end up being as severe as these past events.

The Historical Biology work, which also included Nanjing University's Shu-zhong Shen, focused on the Guadalupian, or Middle Permian period, which lasted from 272 to about 260 million years ago.

Here, the researchers observe, the end-Guadalupian extinction event--which affected life on land and in the seas--occurred at the same time as the Emeishan flood-basalt eruption that produced the Emeishan Traps, an extensive rock formation, found today in southern China. The eruption's impact was akin to those causing other known severe mass extinctions, Rampino says.

"Massive eruptions such as this one release large amounts of greenhouse gases, specifically carbon dioxide and methane, that cause severe global warming, with warm, oxygen-poor oceans that are not conducive to marine life," he notes.

"In terms of both losses in the number of species and overall ecological damage, the end-Guadalupian event now ranks as a major mass extinction, similar to the other five," the authors write.

Credit: 
New York University

Bias against single people affects their cancer treatment

image: Unmarried cancer patients may receive second-best treatment.

Image: 
Jeff Chase, University of Delaware

Unmarried patients with cancer are less likely to get potentially life-saving surgery or radiotherapy than their married counterparts, raising the concern that medical providers may be relying on stereotypes that discount sources of social support other than a current spouse.

That's the conclusion reached by the University of Delaware's Joan DelFattore, a professor emerita of English who combined her personal experience as an unmarried patient with her skills as a researcher to publish a peer-reviewed article in the latest issue of The New England Journal of Medicine.

Titled "Death by Stereotype? Cancer Treatment in Unmarried Patients," the article examines 84 medical articles that draw on a massive National Cancer Institute database to show that patients are significantly less likely to receive surgery or radiotherapy if they are not currently married.

Although this disparity has been attributed in studies to such factors as patients' treatment preferences or a weaker will to live among unmarried people, DelFattore found that those speculations are not only unsupported by data but actually conflict with extensive research findings. Rather, her article suggests that cultural stereotypes inappropriately influence the treatments recommended for unmarried patients with cancer.

The central issue for physicians is the social support that patients need, especially if their treatments require numerous healthcare visits or may cause debilitating side effects. But while unmarried people often have especially strong networks of friends and community ties, medical researchers tend to equate social support with having a spouse, DelFattore found.

"The statistics definitely show a connection between marital status and the treatment patients receive," she said. "There are people getting sick and getting second-best treatment."

Some patients--married as well as unmarried--certainly lack the social support necessary to handle aggressive treatment, "but that generalization can't possibly apply to nearly half the adult population," she said. According to the U.S. Census Bureau, 45% of U.S. adults are unmarried.

DelFattore is part of that population, and when she was diagnosed in 2011 with advanced gallbladder cancer, she relied on her network of friends, colleagues, neighbors and extended family to help her. As she recounts in the journal article, her surgeon at Memorial Sloan Kettering Cancer Center accepted her description of her friend-based support network without question.

At the time, she didn't realize that his acceptance couldn't be taken for granted, but when she went for post-surgery chemotherapy, the first doctor she saw asked about her marital status and continued to focus on that subject. Even after she tried to explain the support she had available, the oncologist recommended a milder course of treatment that DelFattore knew was not the most effective.

"He wouldn't risk serious side effects [of the more aggressive treatment] with, as he put it, 'someone in your situation,'" she writes. She changed doctors and was given the harsher, more effective chemotherapy by an oncologist who accepted that she had the necessary support.

DelFattore is concerned that doctors might rely on medical researchers who, in turn, are citing sociological and psychological studies that don't say what the researchers assume they say.

"Even if medical researchers mean to recommend what's best for patients, as they presumably do, their reliance on stereotypes about unmarried adults is misleading, especially when they misinterpret sociological and psychological studies that do not, in fact, support those stereotypes," DelFattore said.

For example, she said, almost all authors in the 84 articles she reviewed equate marriage with social support, "but the psychological and sociological studies they cite to support that claim don't even mention the words 'marriage,' 'marital' or 'spouse,'" she said. Instead, those studies talk about social support as a complex web of connections that can't be reduced to a single element.

Consistent with longstanding social stereotypes, DelFattore said, doctors may use the question about marital status as a kind of shorthand way to ask about social support. Once they hear the word "unmarried," they may stop there.

DelFattore is quick to point out that she's not the person who discovered the differences in treatment between married and unmarried patients. Based on her review of articles, it's been known since at least 1987 that cancer patients with a current spouse are more likely to get surgery or radiotherapy than those who are divorced, separated, widowed or never married, she said.

"This is not shocking news," she said of the disparity. "What's shocking is that it's been buried in the fine print of academic journals and footnotes for over 30 years."

Now, she said, she hopes her journal article will raise awareness and spur additional research. She also hopes that, just as medical schools now teach about the dangers of unintentional racial and gender bias in treating patients, they'll also start discussing marital status.

"I'm not writing about this and advocating for change out of anger or outrage," she said. "It's not about blame. It's about asking people to examine their assumptions--in this case, with respect to potentially life-or-death decisions. Medicine has to evolve, not only in science and technology, but also with respect to an evolving society."

Credit: 
University of Delaware

Are there health consequences associated with not using a smartphone?

image: Cyberpsychology, Behavior, and Social Networking explores the psychological and social issues surrounding the Internet and interactive technologies.

Image: 
Mary Ann Liebert Inc., publishers

New Rochelle, NY, September 9, 2019--Many studies have examined the health effects of smartphone abuse, but a new study looks at the sociodemographic features and health indicators of people who have a smartphone but do not use it regularly. This under-studied group of individuals was significantly more likely to report feelings of loneliness, according to the article published in Cyberpsychology, Behavior, and Social Networking, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers.

Eduardo Pedrero-Pérez and colleagues from Madrid Salud (Spain) coauthored the article entitled "Smartphone Nonusers: Associated Sociodemographic and Health Variables." The researchers conducted a random sampling of people ages 15-65 living in a large city who own a smartphone and identified those who do not use their smartphone regularly. In comparing the two groups, they found the non-users more likely to be male, older, to have a lower educational level, and to belong to an underprivileged social class. In addition, the non-users showed worse mental health indicators and a lower perceived health-related quality of life.

"Population health research can help us to discover how technology use patterns may contribute to mental and physical health difficulties as well as provide protective factors for large groups of individuals," says Editor-in-Chief Brenda K. Wiederhold, PhD, MBA, BCB, BCN, Interactive Media Institute, San Diego, California and Virtual Reality Medical Institute, Brussels, Belgium.

Credit: 
Mary Ann Liebert, Inc./Genetic Engineering News

How brain rhythms organize our visual perception

image: A rhesus monkey in the primate husbandry at the German Primate Center.

Image: 
Margrit Hampe

To investigate how information about different visual features is processed in the brain, neuroscientists from the German Primate Center - Leibniz Institute for Primate Research in Göttingen, Germany, the Iran University of Science and Technology and the Institute for Research in Fundamental Sciences in Tehran, Iran, measured the activity of individual nerve cells in the brains of rhesus monkeys while the animals performed a visual perception task. The monkeys were trained to report changes in moving patterns on a computer screen. Using hair-thin microelectrodes, which are painless for the animals, the researchers measured the electrical activity of groups of nerve cells. These signals oscillate continuously over a broad frequency spectrum.

The scientists recorded the activity in the brain area highly specialized for the processing of visual motion information. Using advanced signal processing techniques, they found that the activity of those nerve cells oscillates at high frequencies (around 200 cycles per second) and that these oscillations are linked to perception. "We observed that faster responses of the animals occurred whenever the nerve cells showed a stronger oscillatory activity at high frequencies, suggesting that these oscillations influence perception and action," explains Stefan Treue, head of the Cognitive Neuroscience Laboratory at the German Primate Center and one of the senior authors of the study.

Previous studies had shown that different visual aspects, such as the color and motion direction of visual objects, are analyzed in highly specialized, anatomically separate brain areas. These areas then transmit their information to high-level brain areas, where individual features are combined to form our unified percept of visual objects. It turns out that the brain region processing color information transmits information via a lower frequency (around 70 cycles per second) than the high-frequency transmission of the brain region processing motion signals. "Our computational analysis shows that high level regions could use these different frequencies to distinguish the source of neural activity representing the different features," explains Mohammad Bagher Khamechian, scientist at the Iran University of Science and Technology in Tehran and first author of the study.
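As a toy illustration of this frequency-multiplexing idea (a hedged sketch under assumed parameters, not the study's analysis code), the snippet below builds a synthetic signal containing a ~70 Hz and a ~200 Hz component and separates them by comparing band-limited power; the sampling rate, filter settings and the signal itself are assumptions for illustration.

```python
# Illustrative sketch only -- not the published analysis pipeline.
# Shows how power in two frequency bands (~70 Hz vs ~200 Hz) could be
# separated from a single recorded trace with standard band-pass filters.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of data

# Synthetic "local field potential": a 70 Hz component, a 200 Hz component, noise.
signal = (0.8 * np.sin(2 * np.pi * 70 * t)
          + 0.5 * np.sin(2 * np.pi * 200 * t)
          + 0.3 * np.random.randn(t.size))

def band_power(x: np.ndarray, low: float, high: float, fs: float) -> float:
    """Mean power of x after band-pass filtering between low and high (Hz)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, x)
    return float(np.mean(filtered ** 2))

# A downstream reader of this signal can, in principle, tell the two
# channels apart by comparing power in the two bands.
print("power ~70 Hz :", band_power(signal, 60, 80, fs))
print("power ~200 Hz:", band_power(signal, 180, 220, fs))
```

In this simplified picture, a higher-level area that reads out power in the two bands separately could tell which upstream region a burst of activity came from, which is the gist of the computational argument described above.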

The detailed knowledge of how the brain of rhesus monkeys enables perception as well as other complex cognitive functions provides insights about the same processes in the human brain. "The oscillatory activity of neurons plays a critical role for visual perception in humans and other primates," summarizes Stefan Treue. "Understanding how exactly these activity patterns are controlled and combined, not only helps us to better understand the underlying neural correlates of conscious perception, but also may enable us to gain a better understanding of physiological deficits underlying disorders that involve perceptual errors, such as in schizophrenia and other neurological and neuropsychiatric diseases."

Credit: 
Deutsches Primatenzentrum (DPZ)/German Primate Center

Offering children a variety of vegetables increases acceptance

audio: Although food preferences are largely learned, dislike is the main reason parents stop offering or serving their children foods like vegetables. Lead author Astrid A.M. Poelman, PhD, CSIRO Agriculture & Food, Sensory, Flavour and Consumer Science, highlights a new study that demonstrated that repeatedly offering a variety of vegetables increased acceptance and consumption by children.

Image: 
Journal of Nutrition Education and Behavior

Philadelphia, September 9, 2019 - Although food preferences are largely learned, dislike is the main reason parents stop offering or serving their children foods like vegetables. A new study in the Journal of Nutrition Education and Behavior, published by Elsevier, demonstrated that repeatedly offering a variety of vegetables increased acceptance and consumption by children.

"In Australia, dietary guidelines for vegetable consumption by young children have increased although actual consumption is low," said lead author Astrid A.M. Poelman, PhD, CSIRO Agriculture & Food, Sensory, Flavour and Consumer Science, North Ryde, Australia. "This study introduces an effective strategy for parents wanting to address this deficiency."

This study recruited 32 families with children between the ages of four and six in which low consumption of vegetables was reported. Parents completed an online survey and attended an information meeting prior to participating. Three groups were created: children introduced to a single vegetable; children introduced to multiple vegetables; and a control group whose eating habits were not changed.

Study data were collected in several ways: two dinner meals served at the research facility during which children could eat as much of the broccoli, cauliflower and green beans as they wished; changes to actual vegetables consumed at home, childcare or school recorded through food diaries; and parents reporting on usual vegetable consumption.

Strategies for offering vegetables were parent led and home based. Families introducing one vegetable served broccoli, and families trying multiple vegetables served broccoli, zucchini and peas. Parents were provided with a voucher to purchase the vegetables and given instructions on portion size and cooking, along with tips on how to offer the vegetables. Children were served a small piece of vegetable three times a week for five weeks. A sticker was given as a reward to children who tried a vegetable.

There was no difference between the groups at the start of the study on any of the measures. The dinner meal, during which the children ate without parents present, did not show increased consumption, perhaps due to the unfamiliar setting. Vegetable acceptance increased for both the single and multiple vegetable groups during the intervention. Families that offered multiple vegetables recorded an increase in consumption from 0.6 to 1.2 servings, while no change in consumption was observed in families serving a single vegetable or families that did not change their eating habits. The increased acceptance of multiple vegetables was noted during the five weeks of the study and was sustained at the three-month follow-up. Following the study, parents reported that offering the vegetables was "very easy" or "quite easy," with the majority following the instructions provided.

Dr. Poelman recommended, "While the amount of vegetables eaten increased during the study, the amount did not meet dietary guidelines. Nonetheless, the study showed the strategy of offering a variety of vegetables was more successful in increasing consumption than offering a single vegetable."

Credit: 
Elsevier

New salt-based propellant proven compatible in dual-mode rocket engines

image: University of Illinois at Urbana-Champaign associate professor Joshua Rovey, left, talks about electric propulsion system testing with AE graduate students Nick Rasmont and Matt Klosterman.

Image: 
University of Illinois Department of Aerospace Engineering

For dual-mode rocket engines to be successful, a propellant must function in both combustion and electric propulsion systems. Researchers from the University of Illinois at Urbana-Champaign used a salt-based propellant that had already been proven successful in combustion engines, and demonstrated its compatibility with electrospray thrusters.

"We need a propellant that will work in both modes," said Joshua Rovey, associate professor in the Department of Aerospace Engineering in The Grainger College of Engineering at the U of I. "So, we created a propellant that is a mixture of two commercially available salts--hydroxylammonium nitrate and emim ethylsulfate. We have published other research papers showing that salt propellants work in the high-acceleration combustion mode. Now we know that this unique combination of salts will also work in the electric fuel-efficient mode."

With electrospray or colloid propulsion, the thrusters electrostatically accelerate ions and droplets from these liquids. It's a technique that started in the biology and chemistry communities; the propulsion community began looking at it about 20 years ago.

Rovey explained that liquid is fed through a very small diameter needle, or capillary tube. At the tip of the tube, a strong electric field is applied that interacts with the liquid in the tube because the liquid itself is a conductor. The liquid responds to that electric field. Small droplets and ions get pulled out of the liquid--spraying them out of the tube or needle.

In this study, in addition to showing that the propellant could be sprayed, Rovey said they were interested in learning what kinds of chemical species come out in the plume. "Because no one has ever tried this type of propellant before, we expected to see species that no one else has ever seen before and, in fact, we did."

Rovey said they also saw a new swapping of the constituents that make up the two different salts.

"We saw some of the hydroxylammonium nitrate salt bonding with the emim ethyl sulfate salt. The two are mixed together inside the propellant, and are constantly bonding with each other and then detaching.

"There's a chaotic nature to the system and it was unclear how those interactions within the liquid itself would propagate and show up in the spray. There are no chemical reactions happening. It's just that we start with A and B separately and when they come out in the spray, A and B are bonded together," he said.

Rovey said these findings shed a lot of light on what's happening in these mixtures of salts that are possible propellants for electrosprays. But it also opens doors to a lot of other questions that will lead to fundamental studies that try to understand the interactions within these propellants and how that translates into what comes out in the spray itself.

Credit: 
University of Illinois Grainger College of Engineering

Compound offers prospects for preventing acute kidney failure

image: Kidney care.

Image: 
Elena Khavina/MIPT Press Office

Russian researchers from the Moscow Institute of Physics and Technology, the Institute of Cell Biophysics, and elsewhere have shown an antioxidant enzyme known as peroxiredoxin to be effective in treating kidney injury in mice. The study in Cell and Tissue Research reports tripled survival rates in test animals treated with the enzyme prior to sustaining an ischemia-reperfusion injury. The team says peroxiredoxin also offers prospects for longer kidney transplant storage.

Ischemia-reperfusion causes kidney failure

Living tissues rely on a constant inflow of blood for survival. Restrictions in blood supply, known as ischemia, lead to a shortage of oxygen and nutrients. This may make the tissue more acidic and impair cell membrane permeability. Ultimately, tissue damage can arise, with the kidney, heart, and nerve tissues being the most vulnerable.

When normal blood inflow resumes, this so-called reperfusion does not cause the damaged tissue to regenerate. Instead, the concentration of reactive oxygen species grows, bringing about oxidative stress and further harming the cells. A pathologically high ROS count may trigger the cells to self-destruct in what is known as apoptosis.

The causes of ischemia include blood vessel constriction, changes in blood pressure or heart rate, loss of blood, and trauma. The ischemia-reperfusion syndrome remains a key factor in organ pathologies. When it affects the kidneys, acute renal failure may occur, resulting in death in half of the cases.

Antioxidants offer treatment options

Since oxidative stress is involved in the tissue damage under ischemia-reperfusion, antioxidants are a particularly promising treatment option. These are compounds that reduce oxidative stress by lowering ROS concentration.

In their recent study, the Russian researchers used antioxidant enzymes from the peroxiredoxin family. In addition to being involved in cell signaling, these enzymes reduce the level of the ROS known as peroxides. Among the six known enzymes in this family, peroxiredoxin 6 -- aka PRDX6 -- has the greatest appeal: it has the capacity to neutralize the largest number of peroxides, both organic and inorganic.

Peroxiredoxin 6 boosts mouse survival

To demonstrate the enzyme's efficacy in treating kidney ischemia-reperfusion injury, the researchers modeled this injury in mice and compared the survival rates of animals that received PRDX6 treatment and those that did not. In the latter group, 1 in 5 mice survived to day four, compared with 3 in 5 mice alive by that time in the group that received a PRDX6 infusion 15 minutes prior to ischemia.

The induced ischemia-reperfusion in untreated mice was accompanied by edema, blood-engorged vessels in the kidneys and renal tubule degeneration, as well as increased concentrations of kidney damage markers and of the transcription factors responsible for the development of inflammation. By contrast, the kidneys of the test animals receiving PRDX6 displayed far milder pathological and morphological changes.

To confirm that the benefits of PRDX6 stem from peroxide suppression, the team synthesized a mutant version of the enzyme. It has the same structure, yet it does not affect peroxide levels. Administering the mutant enzyme 15 minutes before ischemia had no positive effect on mouse survival. It is thus the compound's ability to keep peroxides in check that gives rise to its therapeutic effect.

The study's senior author Mars Sharapov of MIPT and the Institute of Cell Biophysics commented on the team's findings: "We observed the intravenously infused PRDX6 in the animals' bloodstream under kidney ischemia-reperfusion. That is, it did not enter the cells. Despite this, PRDX6 effectively neutralized the peroxides that were released by the cells into the extracellular environment. This suppressed oxidative stress and apoptotic cell death, resulting in significantly less tissue damage."

Credit: 
Moscow Institute of Physics and Technology

Are black holes made of dark energy?

image: Objects like Powehi, the recently imaged supermassive compact object at the center of galaxy M87, might actually be GEODEs. The Powehi GEODE, shown to scale, would be approximately 2/3 the radius of the dark region imaged by the Event Horizon Telescope. This is nearly the same size expected for a black hole. The region containing Dark Energy (green) is slightly larger than a black hole of the same mass. The properties of any crust (purple), if present, depend on the particular GEODE model.

Image: 
Photo: EHT collaboration; NASA/CXC/Villanova University

Two University of Hawaii at Manoa researchers have identified and corrected a subtle error that was made when applying Einstein's equations to model the growth of the universe.

Physicists usually assume that a cosmologically large system, such as the universe, is insensitive to details of the small systems contained within it. Kevin Croker, a postdoctoral research fellow in the Department of Physics and Astronomy, and Joel Weiner, a faculty member in the Department of Mathematics, have shown that this assumption can fail for the compact objects that remain after the collapse and explosion of very large stars.

"For 80 years, we've generally operated under the assumption that the universe, in broad strokes, was not affected by the particular details of any small region," said Croker. "It is now clear that general relativity can observably connect collapsed stars--regions the size of Honolulu--to the behavior of the universe as a whole, over a thousand billion billion times larger."

Croker and Weiner demonstrated that the growth rate of the universe can become sensitive to the averaged contribution of such compact objects. Likewise, the objects themselves can become linked to the growth of the universe, gaining or losing energy depending on the objects' compositions. This result is significant since it reveals unexpected connections between cosmological and compact object physics, which in turn leads to many new observational predictions.

One consequence of this study is that the growth rate of the universe provides information about what happens to stars at the end of their lives. Astronomers typically assume that large stars form black holes when they die, but this is not the only possible outcome. In 1966, Erast Gliner, a young physicist at the Ioffe Physico-Technical Institute in Leningrad, proposed an alternative hypothesis that very large stars should collapse into what could now be called Generic Objects of Dark Energy (GEODEs). These appear to be black holes when viewed from the outside but, unlike black holes, they contain Dark Energy instead of a singularity.

In 1998, two independent teams of astronomers discovered that the expansion of the Universe is accelerating, consistent with the presence of a uniform contribution of Dark Energy. It was not recognized, however, that GEODEs could contribute in this way. With the corrected formalism, Croker and Weiner showed that if a fraction of the oldest stars collapsed into GEODEs, instead of black holes, their averaged contribution today would naturally produce the required uniform Dark Energy.

The results of this study also apply to the colliding double star systems observable through gravitational waves by the LIGO-Virgo collaboration. In 2016, LIGO announced the first observation of what appeared to be a colliding double black hole system. Such systems were expected to exist, but the pair of objects was unexpectedly heavy--roughly 5 times larger than the black hole masses predicted in computer simulations. Using the corrected formalism, Croker and Weiner considered whether LIGO-Virgo is observing double GEODE collisions, instead of double black hole collisions. They found that GEODEs grow together with the universe during the time leading up to such collisions. When the collisions occur, the resulting GEODE masses become 4 to 8 times larger, in rough agreement with the LIGO-Virgo observations.
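For a rough sense of those numbers, consider a simple illustrative assumption that is not stated in the article: suppose a GEODE's mass grows in proportion to the cube of the cosmological scale factor a between formation and merger. The implied mass gain would then be

```latex
% Illustrative only: assumes m(a) \propto a^3 between formation and merger.
\[
  \frac{m_{\mathrm{merge}}}{m_{\mathrm{form}}}
    = \left(\frac{a_{\mathrm{merge}}}{a_{\mathrm{form}}}\right)^{3},
  \qquad
  4 \lesssim \frac{m_{\mathrm{merge}}}{m_{\mathrm{form}}} \lesssim 8
  \;\Longrightarrow\;
  \frac{a_{\mathrm{merge}}}{a_{\mathrm{form}}} \approx 1.6\text{--}2,
\]
```

so under this assumed scaling, a mass gain of 4 to 8 corresponds to only a modest amount of cosmic expansion between when the objects form and when they merge.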

Croker and Weiner were careful to separate their theoretical result from observational support of a GEODE scenario, emphasizing that "black holes certainly aren't dead. What we have shown is that if GEODEs do exist, then they can easily give rise to observed phenomena that presently lack convincing explanations. We anticipate numerous other observational consequences of a GEODE scenario, including many ways to exclude it. We've barely begun to scratch the surface."

Credit: 
University of Hawaii at Manoa

'Clamp' regulates message transfer between mammal neurons

MADISON - A fundamental question in nerve biology brings to mind a race car at the starting line: The engine is revving, but the brake is on. The system is ready to go, but under tight control.

And when the light flashes green, the car blasts away from the line.

A similar process occurs at junctions between two nerve cells when a different type of brake is disengaged. At that point, the machinery controlling the release of nerve signaling chemicals opens a "fusion pore" that allows the escape of these neurotransmitters. Only then is a signal -- contained within tiny packages called vesicles -- released to other neurons.

Today, in the journal Nature Communications, Edwin Chapman, a professor of neuroscience at the University of Wisconsin-Madison, has described a key component of the system -- the brake, or "clamp," that prevents the fusion pore from completing its formation and opening.

"We need the vesicles to release their contents, yes," says Chapman, a Howard Hughes Medical Institute investigator. "But release is not always appropriate. Preventing the pore from fully forming and opening is just as critical. But today, the physiology of clamping -- or inhibiting -- the pore is hotly debated. We believe we have a conclusive answer to what happens in mammal neurons, and it's different from what others have found based on studies of invertebrate neurons."

The fusion pore forms in synapses, the junctions between adjacent neurons. The pore, comprising "SNARE" proteins, cannot release a signal until a spritz of calcium ions triggers the opening. The calcium sensor is a protein called synaptotagmin which, Chapman has now found, is also key to the fusion clamp.

Transmissions across synapses underlie every function of the nervous system: signaling muscle movement, recalling a memory of childhood, feeling a hot stove or thinking through a calculus problem. Each of these builds on a complex choreography of nerve signals passed from one neuron to another by neurotransmitters contained in vesicles that are released through fusion pores.

The discovery that pore formation requires SNARE proteins won the 2013 Nobel Prize in Physiology or Medicine. But Chapman and others have raised a critical question: If SNAREs are so attuned to making pores, why are the pores usually closed?

"There has been a tremendous amount of attention on the synapse, but no conclusive answer," he says. "What keeps the SNAREs from forming open fusion pores? What is the brake, the clamp? What shuts it off until the appropriate time?"

In Nature Communications, Chapman, recent postdoctoral researcher Nicholas Courtney and colleagues reported on their studies of two candidate clamp molecules: complexin and a type of synaptotagmin called syt-1.

"There is this strong evidence, from fruit flies and worms, that complexin is the clamp," Chapman says. "But our new study rules out that role in mammals.

"There has been a lot of confusion, because the different model systems give different results," Chapman adds. "But we took a long, comprehensive look, in one model system -- mammalian cells in a dish. The question was: Is complexin or syt-1 the clamp? Or do they work together?"

In mammals, he found, complexin cannot be the fusion clamp because it plays a positive role, actually helping to open the pore. Curiously, complexin is aided in this function by syt-1.

But syt-1 by itself is able to clamp the pore.

"Syt-1 thus is the long-sought brake that prevents errant signals across a synapse," Chapman says.

The race-car analogy is apt, Chapman says, since hair-trigger speed is necessary in the nervous system.

"Nerve terminals have the tightest regulation -- at millisecond accuracy -- in our physiology," he says. "When the kidney is making urine, it does not need such accuracy. But neurons, and the neuroendocrine cells [like those that produce adrenaline] are specialized for speed. You need to have them primed, waiting for the trigger. The synapse is held in an arrested state, and boom! It gets the signal, and the signal passes. The speed of synapses determines how fast our brains work. This is important stuff."

In 2018, Chapman's lab showed that fusion pores are not simply pipes, but instead highly tuned valves with a full range of openings from off to intermittent to fully open, depending on the proteins present. In 2019, they showed how a different synaptotagmin, syt-17, helped trigger the growth of axons -- the slender nerve fibers that convey signals to synapses, and then to dendrites on a neighboring neuron.

Neurons need fusion clamps as much as cars need brakes, Chapman emphasizes. "If the rate of fusion - the transfer of signals - is dysregulated, the nervous system will not work properly," he says.

Fusion pores aren't just gates between neurons. They connect membranes between biological units inside and outside cells.

"The need to make membranes fuse and separate underlies all of cell biology for organisms with a cell nucleus," Chapman says. "When you slice a cell, it's made of myriad compartments, each separated by a membrane. For cells to work, there needs to be a constant process of fission and fusion involving fusion pores."

In neurological disease, "the release of neurotransmitters can be an important factor," Chapman adds. "If there is too little, we might be able to design a drug to affect the machinery. Or do the opposite if there is too much release. That's the ultimate point of this basic research - to find ways to make life better for people in need."

Credit: 
University of Wisconsin-Madison