Tech

Transposable elements play an important role in gene expression and evolution

image: Chromatin loops are important for gene regulation because they define a gene's regulatory neighborhood, which contains the promoter and enhancer sequences responsible for determining its expression level. Remarkably, transposable elements (TEs) are responsible for creating around 1/3 of all loop boundaries in the human and mouse genomes, and contribute up to 75% of loops unique to either species. When a TE creates a human-specific or mouse-specific loop it can change a gene's regulatory neighborhood, leading to altered gene expression. The illustration shows a hypothetical region of the human and mouse genomes in which four enhancer sequences for the same target gene fall within a conserved loop. In this example, a TE-derived loop boundary in the human genome (orange bar) shrinks the regulatory neighborhood, preventing two of four enhancers from interacting with their target gene's promoter sequence. The net result is reduced gene expression in human relative to mouse. Looping variations such as these appear to be an important underlying cause of differential gene regulation across species and between different human cell types, suggesting that TE activity may play significant roles in evolution and disease.

Image: 
Adam Diehl

Until recently, little was known about how transposable elements contribute to gene regulation. Transposable elements are short pieces of DNA that can copy themselves and spread throughout the genome. Although they make up nearly half of the human genome, they were long dismissed as "useless junk" with a minimal role, if any, in the activity of a cell. A new study by Adam Diehl, Ningxin Ouyang, and Alan Boyle of the University of Michigan Medical School, all members of the U-M Center for RNA Biomedicine, shows that transposable elements play an important role in regulating gene expression, with implications for understanding genetic evolution.

Transposable elements move around the genome, and, contrary to what was previously thought, the authors found that when they insert at new sites they sometimes change the way DNA strands interact in 3D space, and therefore the structure of the 3D genome. Roughly a third of the 3D contacts in the genome appear to originate from transposable elements, an outsized contribution to looping variation that points to a significant role for these elements in gene expression and evolution.

The main component that determines 3D structure is a protein called CTCF. This study focused on how transposable elements create new CTCF binding sites that, in turn, hijack existing genomic structure to form new 3D contacts in the genome. The authors show that these new sites often create variable loops that can influence regulatory activity and gene expression in the cell. The findings, observed in both human and mouse cells, show how transposable elements contribute to intraspecies variation and interspecies divergence, and will guide further research in gene regulation, regulatory evolution, looping divergence, and transposable element biology.

To streamline this work, the authors developed a piece of software, MapGL, to track the physical gain and loss of short genetic sequences across species. For example, a sequence that existed in the most recent common ancestor may later have been lost in one lineage or, conversely, a sequence absent in the common ancestor may have been gained in the human genome. MapGL enables predictions about the evolutionary influences of structural variations between species and makes this type of analysis much more accessible. For this paper, the input was a set of CTCF binding sites, which MapGL labeled to show that a sequence gain/loss process explains many of the differences in CTCF binding between humans and mice.
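
To make the gain/loss idea concrete, here is a minimal Python sketch of the kind of labeling logic described above. The function name and inputs are hypothetical illustrations of the concept, not MapGL's actual code or command-line interface.

def label_site(present_in_human: bool, present_in_mouse: bool,
               present_in_ancestor: bool) -> str:
    """Assign an evolutionary label to a short sequence such as a CTCF binding site."""
    if present_in_human and present_in_mouse:
        return "conserved"
    if present_in_human and not present_in_mouse:
        return "lost in mouse" if present_in_ancestor else "gained in human"
    if present_in_mouse and not present_in_human:
        return "lost in human" if present_in_ancestor else "gained in mouse"
    return "unresolved"

# A site absent from the ancestor but present in human is labeled a human gain,
# e.g., one created by a transposable element insertion.
print(label_site(present_in_human=True, present_in_mouse=False, present_in_ancestor=False))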

With a background in computer science and molecular biology, Alan Boyle explains that he has always been interested in gene regulation. "It's like a complex circuit: perturbing gene regulation through changes to the three-dimensional structure of the genome can have very different and wide-ranging outcomes."

For Adam Diehl, this research continues a line of discovery that began in the late 1800s, when scientists first examined the shapes of chromosomes under microscopes. They observed shape differences between cells and noticed that the shapes inside the nuclei remained the same between mother and daughter cells. Decades later, transposable elements were discovered at his alma mater, Cornell University: jumping genes could change the phenotypes of corn plants. In the 1970s, when it became clear that human and chimpanzee genes are far too similar to explain the differences between the species, scientific focus shifted to how genes are used. For Diehl, "It's so exciting to be able to synthesize all this knowledge, and contribute to the next step of the story of species evolution."

This research team will further study the impact of transposable elements on the 3D genome, but this time with a particular interest in a single human population sample rather than comparisons across species. The next steps will include experimental follow-up using a new sequencing method capable of identifying transposable element insertions that vary across human populations. This method was developed in collaboration with Ryan Mills's lab at the University of Michigan Medical School. The next results are expected to further the understanding of the regulatory role of transposable elements, with possible applications to neurodegenerative diseases.

Credit: 
University of Michigan

New research helps explain why the solar wind is hotter than expected

image: A mirror machine is a linear fusion reactor. It allows scientists to apply research in these machines to an understanding of solar wind phenomena.

Image: 
Courtesy of Cary Forest / UW-Madison

MADISON, Wis. -- When a fire extinguisher is opened, the compressed carbon dioxide forms ice crystals around the nozzle, providing a visual example of the physics principle that gases and plasmas cool as they expand. When our sun expels plasma in the form of solar wind, the wind also cools as it expands through space -- but not nearly as much as the laws of physics would predict.

In a study published April 14 in the Proceedings of the National Academy of Sciences, University of Wisconsin-Madison physicists provide an explanation for the discrepancy in solar wind temperature. Their findings suggest ways to study solar wind phenomena in research labs and learn about solar wind properties in other star systems.

"People have been studying the solar wind since its discovery in 1959, but there are many important properties of this plasma which are still not well understood," says Stas Boldyrev, professor of physics and lead author of the study. "Initially, researchers thought the solar wind has to cool down very rapidly as it expands from the sun, but satellite measurements show that as it reaches the Earth, its temperature is 10 times larger than expected. So, a fundamental question is: Why doesn't it cool down?"

Solar plasma is a mix of negatively charged electrons and positively charged ions. Because of this charge, solar plasma is influenced by magnetic fields that extend into space, generated underneath the solar surface. As the hot plasma escapes from the sun's outermost atmosphere, its corona, it flows through space as solar wind. The electrons in the plasma are much lighter particles than the ions, so they move about 40 times faster.

With more negatively charged electrons streaming away, the sun takes on a positive charge. This makes it harder for the electrons to escape the sun's pull. Some electrons have enough energy to keep traveling to effectively infinite distances. Those with less energy cannot escape the sun's positive charge and are pulled back toward the sun. As they return, some of those electrons are knocked ever-so-slightly off their tracks by collisions with the surrounding plasma.

"There is a fundamental dynamical phenomenon that says that particles whose velocity is not well aligned with the magnetic field lines are not able to move into a region of a strong magnetic field," Boldyrev says. "Such returning electrons are reflected so that they stream away from the sun, but again they cannot escape because of the attractive electric force of the sun. So, their destiny is to bounce back and forth, creating a large population of so-called trapped electrons."

In an effort to explain the temperature observations in the solar wind, Boldyrev and his colleagues, UW-Madison physics professors Cary Forest and Jan Egedal, looked to a related but distinct field of plasma physics for a possible explanation.

Around the time scientists discovered solar wind, plasma fusion researchers were thinking of ways to confine plasma. They developed "mirror machines," or plasma-filled magnetic field lines shaped as tubes with pinched ends, like bottles with open necks on either end.

As charged particles in the plasma travel along the field lines, they reach the bottleneck, where the magnetic field lines are pinched. The pinch acts as a mirror, reflecting particles back into the machine.

"But some particles can escape, and when they do, they stream along expanding magnetic field lines outside the bottle. Because the physicists want to keep this plasma very hot, they want to figure out how the temperature of the electrons that escape the bottle declines outside this opening," Boldyrev says. "It's very similar to what's happening in the solar wind that expands away from the sun."

Boldyrev and colleagues thought they could apply the same theory from the mirror machines to the solar wind, looking at the differences in the trapped particles and those that escape. In mirror machine studies, the physicists found that the very hot electrons escaping the bottle were able to distribute their heat energy slowly to the trapped electrons.

"In the solar wind, the hot electrons stream from the sun to very large distances, losing their energy very slowly and distributing it to the trapped population," Boldyrev says. "It turns out that our results agree very well with measurements of the temperature profile of the solar wind and they may explain why the electron temperature declines with the distance so slowly," Boldyrev says.

The accuracy with which mirror machine theory predicts solar wind temperature opens the door for using them to study solar wind in laboratory settings.

"Maybe we'll even find some interesting phenomena in those experiments that space scientists will then try to look for in the solar wind," Boldyrev says. "It's always fun when you start doing something new. You don't know what surprises you'll get."

Credit: 
University of Wisconsin-Madison

Solar power plants get help from satellites to predict cloud cover

image: Light coming from the Earth's surface and detected by the Advanced Baseline Imager (ABI) aboard the GOES-R satellite, shown as a function of wavelength.

Image: 
Carlos F.M. Coimbra

WASHINGTON, April 14, 2020 -- The output of solar energy systems is highly dependent on cloud cover. While weather forecasting can be used to predict the amount of sunlight reaching ground-based solar collectors, cloud cover is often characterized in simple terms, such as cloudy, partly cloudy or clear. This does not provide accurate information for estimating the amount of sunlight available for solar power plants.

In this week's Journal of Renewable and Sustainable Energy, from AIP Publishing, a new method is reported for estimating cloud optical properties using data from recently launched satellites. This new technique is known as Spectral Cloud Optical Property Estimation, or SCOPE.

In 2016, NASA began launching a new generation of Geostationary Operational Environmental Satellites, the GOES-R series. These satellites occupy fixed positions above the Earth's surface. Each is equipped with several sophisticated instruments, including the Advanced Baseline Imager, or ABI, which can detect radiation upwelling from the Earth at specific wavelengths.

The SCOPE method estimates three properties of clouds that determine the amount of sunlight reaching the Earth's surface. The first, cloud top height, is the altitude corresponding to the top of each cloud. The second, cloud thickness, is simply the difference in altitude between a cloud's top and bottom. The third property is the cloud optical depth, a measure of how a cloud modifies light passing through it.
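
For reference, optical depth is conventionally defined through the exponential attenuation of a direct beam (a standard definition, not something specific to SCOPE):

\[
I = I_0 \, e^{-\tau}
\]

so a cloud with \(\tau = 1\) transmits roughly 37% of the incident direct beam, while \(\tau = 10\) transmits only about 0.005%.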

Clouds are, essentially, floating masses of condensed water. The water takes multiple forms: liquid droplets or ice crystals of varying sizes. These different forms of water absorb light in different amounts, affecting a cloud's optical depth.

The amount of light absorbed also depends on the light's wavelength. Absorption is especially variable for light in the wider infrared range of the spectrum but not so much for light in the narrower visible range.

The SCOPE method simultaneously estimates cloud thickness, top height and optical depth by coupling ABI sensor data from GOES-R satellites to an atmospheric model. Two other inputs to the model come from ground-based weather stations: ambient temperature and relative humidity at the ground. These are used to adjust temperature and gas concentration vertical profiles in the model.
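
One generic way such a coupled estimation can be set up is as an inverse problem: adjust the three cloud parameters until a forward model reproduces the observed radiances. The Python sketch below illustrates that idea with a toy forward model and made-up band values; it is an assumption-laden illustration, not the SCOPE algorithm or its atmospheric model.

import numpy as np
from scipy.optimize import minimize

BANDS_UM = np.array([0.64, 1.6, 3.9, 11.2])   # illustrative band centers, micrometers

def toy_forward_model(cloud_top_km, thickness_km, optical_depth, wavelengths_um):
    """Toy stand-in for a radiative-transfer model: NOT a real atmospheric model."""
    transmittance = np.exp(-optical_depth / (1.0 + wavelengths_um))  # crude spectral dependence
    emission = 0.02 * cloud_top_km + 0.01 * thickness_km             # crude altitude/thickness effect
    return transmittance + emission

def cost(params, observed, wavelengths_um):
    simulated = toy_forward_model(*params, wavelengths_um)
    return float(np.sum((simulated - observed) ** 2))

# Pretend these values were derived from ABI pixels over one ground site.
observed = toy_forward_model(8.0, 2.0, 5.0, BANDS_UM)

result = minimize(cost, x0=[5.0, 1.0, 1.0], args=(observed, BANDS_UM),
                  bounds=[(0.1, 18.0), (0.1, 10.0), (0.01, 100.0)], method="L-BFGS-B")
top_km, thickness_km, tau = result.x
print(f"Estimated: top = {top_km:.1f} km, thickness = {thickness_km:.1f} km, tau = {tau:.1f}")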

The accuracy of the estimated cloud optical properties was evaluated against one year of measurements, from 2018, taken at seven ground-based locations in the U.S. during both night and day, in all types of weather, with wide spatial coverage at 5-minute intervals.

"SCOPE can be used during both day and night with reliable accuracy," said co-author Carlos F.M. Coimbra. "Due to its high-frequency output during daytime, SCOPE is especially suitable for providing accurate real-time estimates of cloud optical properties for solar forecasting applications."

Credit: 
American Institute of Physics

Predictability of temporal networks quantified by an entropy-rate-based framework

image: Quantifying the predictability of a temporal network.

Image: 
©Science China Press

A network, or graph, is a mathematical description of the internal structure of a complex system, such as the connections between neurons, interactions between proteins, contacts between individuals in a crowd, or interactions between users on online social platforms. The links in most real networks change over time, and such networks are often called temporal networks. The temporality of links encodes the ordering and causality of interactions between nodes and has a profound effect on neural network function, disease propagation, information aggregation and recommendation, the emergence of cooperative behavior, and network controllability. A growing body of research has focused on mining the patterns in a temporal network and predicting its future evolution using machine learning techniques, especially graph neural networks. However, how to quantify the predictability limit of a temporal network, i.e., the limit that no algorithm can surpass, is still an open question.

Recently, a research team led by Xianbin Cao of Beihang University, Beijing, and Gang Yan of Tongji University, Shanghai, published a paper entitled "Predictability of real temporal networks" in National Science Review, proposing a framework for quantifying the predictability of temporal networks based on the entropy rate of random fields.

The authors mapped any given network to a temporality-topology matrix and then extended the classic entropy rate calculation (which applies only to square matrices) to arbitrary matrices through regression operators. The advantages of this temporal-topological predictability measure were validated on two typical models of temporal networks. Applying the method to 18 real networks, the authors found that the contributions of topology and temporality to predictability vary significantly across network types. They also found that, although the theoretical baseline and difficulty of temporal-topological predictability are much higher than those of one-dimensional time series, the temporal-topological predictabilities of most real networks are still higher than those of time series.
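
For readers unfamiliar with the underlying quantities, the generic one-dimensional version of such a framework rests on two standard results, stated here only for context (the paper's extension to temporality-topology matrices is more involved). The entropy rate of a stationary sequence \(\{X_t\}\) over \(N\) possible states is

\[
H = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n),
\]

and a Fano-type inequality,

\[
H \le H_b(\Pi^{\max}) + (1 - \Pi^{\max}) \log_2 (N - 1),
\]

where \(H_b\) is the binary entropy function, can be solved for \(\Pi^{\max}\), the maximum accuracy any prediction algorithm can achieve.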

The predictability limit calculated in this research is an intrinsic property of temporal networks; that is, it is independent of any predictive algorithm, so it can also be used to measure how much room remains for improving predictive algorithms. The authors examined three widely used predictive algorithms and found that their performance falls significantly short of the predictability limits in most real networks, suggesting the need for new predictive algorithms that take into account both the temporal and topological features of networks.

Credit: 
Science China Press

Mouse study shows how advancing glioma cells scramble brain function, blood flow

video: Real-time imaging of a spontaneous glioma-induced seizure in the brain of a mouse. The left panel shows neuronal calcium activity across the surface of the brain; the center and right panels show changes in oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR), respectively. The tumor is located in the top right quadrant.

Image: 
Elizabeth Hillman/Columbia's Zuckerman Institute

NEW YORK -- The first sign of trouble for a patient with a growing brain tumor is often a seizure. Such seizures have long been considered a side effect of the tumor. But now a joint team of Columbia engineers and cancer researchers studying brain tumors has found evidence that the seizures caused by an enlarging tumor could spur its deadly progression.

These interactions, described today in Cell Reports, were revealed using a novel mouse brain imaging technology that tracks real-time changes in brain activity and blood flow as a tumor grows in the brain. The research identifies potential new targets for diagnosing and treating glioma, a rare but aggressive form of brain cancer, notable in recent years for having claimed the lives of United States Senator John McCain and Beau Biden, the son of former Vice President Joe Biden.

"As gliomas spread within the brain, they gradually infiltrate surrounding brain regions, altering blood vessels and interactions between neurons and other brain cells," said Peter Canoll, MD, PhD, professor of pathology and cell biology at Columbia's Vagelos College of Physicians and Surgeons and the paper's co-senior author. "Neuro-oncologists have generally focused on developing ways to selectively kill glioma cells, but we are also interested in understanding how infiltrating glioma cells change the way that the brain functions. We believe that this approach can lead to new treatments for this terrible disease."

Meanwhile, Elizabeth Hillman, PhD, a professor of biomedical engineering at Columbia's School of Engineering and Applied Science, had developed a novel method for real-time imaging of both neural activity and blood flow dynamics in the brains of mice. Called wide-field optical mapping, or WFOM, the system was being used by Dr. Hillman and her lab to study how neuronal activity in the brain drives local changes in blood flow, a process known as neurovascular coupling.

Drs. Hillman and Canoll realized that combining Dr. Hillman's imaging platform with Dr. Canoll's method of generating realistic tumors in the brains of mice could let them explore how brain activity was affected during tumor growth.

"We used WFOM to image the brains of mice every few days for many weeks, observing how tumors grew and invaded different areas," said Dr. Hillman, who is also a Zuckerman Institute Principal Investigator. "We studied mice whose neurons were labeled with a green fluorescent calcium indicator, which gets brighter when neurons are more active, letting us detect how tumor invasion affected the normal activity of neurons and dilations and constriction of blood vessels."

The team first found that migrating glioma cells desynchronized both neuronal activity and blood flow changes that normally fluctuate together across either side of the brain. They also found that the tumor affected neurovascular coupling -- making blood vessels less likely to dilate and provide fresh blood when neurons fired.

But another thing also caught the researchers' eye.

"We saw some flashes in our images of neuronal activity, accompanied by big changes in blood flow," said Dr. Hillman. "When we looked more closely, we found these flashes became more and more frequent as the tumors grew, and in some cases we saw massive, profound blasts of neuronal activity."

"We realized that as the tumors progressed in the brains of mice, we were seeing many seizure-like discharges, which eventually progressed into full-blown seizures that resemble the generalized seizures that glioma patients often experience," said Dr. Canoll. "We noticed that these seizures were most prominent at the edges of the tumor, where tumor cells were growing into and intermingling with surrounding healthy brain."

During these generalized seizures, WFOM also revealed that blood oxygenation levels within the tumor dropped sharply. This finding was surprising, and concerning, as tumor cells are known to thrive in low-oxygen, or hypoxic, environments.

"When brain tissue becomes hypoxic, brain cells can secrete proteins that could actually stimulate tumor growth, migration, proliferation and progression," said Dr. Canoll. "We think that the altered neurovascular coupling in the tumor is causing hypoxia during seizures, and might create a vicious cycle of tumor growth, seizure, hypoxia and further tumor growth."

The team's findings point to a number of promising targets to disrupt glioma's vicious assault on the brain.

"To break this cycle of tumor growth, we could target reducing seizures. We may be able to use WFOM to determine which types of drug are most effective for suppressing these types of seizure, while not interfering with cancer treatments," said Dr. Canoll. "We will also be looking to see whether small seizure events that might be difficult for a patient to perceive could be an early warning sign of tumor development or regrowth."

What the team learned could also help with diagnosis and surgical guidance.

"Functional magnetic resonance imaging, fMRI, is a human brain imaging method that detects active regions via local changes in brain blood flow," said Dr. Hillman, who is also a professor of radiology at Columbia's Vagelos College of Physicians and Surgeons. "Our results suggest that fMRI may be unreliable for guiding surgeons to avoid specific brain regions, if the tumor alters blood flow changes. However, the changes in synchrony and neurovascular coupling that we observed could also potentially be leveraged as biomarkers to detect tumor regions."

The team's results also highlight the power of interdisciplinary collaborations. Dr. Hillman developed the study's imaging and analysis techniques with support from the NIH BRAIN Initiative, and the team combined them with novel mouse glioma models developed in the Canoll lab to study alterations in brain function associated with neurological disease.

"We are so excited to demonstrate that these methods can give us a new view of how diseases affect how the brain works," said Dr. Hillman. "We hope this study prompts more scientists and clinical researchers to start using in-vivo imaging methods to advance our understanding of brain diseases and disorders."

Credit: 
The Zuckerman Institute at Columbia University

Particle billiards with three players

image: Selfie of Max Kircher in front of the COLTRIMS reaction microscope

Image: 
Max Kircher, Goethe University

When the American physicist Arthur Compton discovered in 1922 that light waves behave like particles and can knock electrons out of atoms in an impact experiment, it was a milestone for quantum mechanics. Five years later, Compton received the Nobel Prize for this discovery. Compton used very shortwave, high-energy light for his experiment, which enabled him to neglect the binding energy of the electron to the atomic nucleus. He simply assumed for his calculations that the electron rested freely in space.
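
Under that free-electron assumption, the wavelength shift of the scattered photon depends only on the scattering angle, a textbook result included here for context:

\[
\lambda' - \lambda = \frac{h}{m_e c} \left( 1 - \cos\theta \right),
\]

where \(h\) is Planck's constant, \(m_e\) the electron rest mass, \(c\) the speed of light and \(\theta\) the photon's scattering angle.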

In the roughly 90 years since, numerous experiments and calculations on Compton scattering have repeatedly revealed asymmetries and posed riddles. For example, in certain experiments energy seemed to be lost when the motion energy of the electrons and light particles (photons) after the collision was compared with the energy of the photons before the collision. Since energy cannot simply disappear, it was assumed that in these cases, contrary to Compton's simplified assumption, the influence of the nucleus on the photon-electron collision could not be neglected.

For the first time in an impact experiment with photons, a team of physicists led by Professor Reinhard Dörner and doctoral candidate Max Kircher at Goethe University Frankfurt has now simultaneously observed the ejected electrons and the motion of the nucleus. To do so, they irradiated helium atoms with X-rays from the X-ray source PETRA III at the Hamburg accelerator facility DESY. They detected the ejected electrons and the charged remainder of the atom (the ion) in a COLTRIMS reaction microscope, an apparatus that Dörner helped develop and which can make ultrafast reactive processes in atoms and molecules visible.

The results were surprising. First, the scientists observed that the energy in the scattering process was, as expected, conserved, with part of it transferred to the motion of the nucleus (more precisely, the ion). Moreover, they observed that an electron is sometimes knocked out of the atom even when the energy of the colliding photon is actually too low to overcome the electron's binding energy to the nucleus. Overall, only in about two thirds of the cases was the electron ejected in the direction one would expect from a billiard-like impact. In all other instances, the electron is seemingly reflected by the nucleus and sometimes even ejected in the opposite direction.

Reinhard Dörner: "This allowed us to show that the entire system of photon, ejected electron and ion oscillate according to quantum mechanical laws. Our experiments therefore provide a new approach for experimental testing of quantum mechanical theories of Compton scattering, which plays an important role, particularly in astrophysics and X-ray physics."

Credit: 
Goethe University Frankfurt

Researchers design microsystem for faster, more sustainable industrial chemistry

image: Ryan Hartman, professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering.

Image: 
Ryan Hartman, Ph.D.

BROOKLYN, New York, Tuesday, April 14, 2020 - The synthesis of plastic precursors, such as polymers, involves specialized catalysts. However, the traditional batch-based method of finding and screening the right ones for a given result consumes liters of solvent, generates large quantities of chemical waste, and is an expensive, time-consuming process involving multiple trials.

Ryan Hartman, professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering, and his laboratory developed a lab-based "intelligent microsystem" that employs machine learning to model chemical reactions and shows promise for eliminating this costly process and minimizing environmental harm.

In their research, "Combining automated microfluidic experimentation with machine learning for efficient polymerization design," published in Nature Machine Intelligence, the collaborators, including doctoral student Benjamin Rizkin, employed a custom-designed, rapidly prototyped microreactor in conjunction with automation and in situ infrared thermography to study exothermic (heat-generating) polymerization -- reactions that are notoriously difficult to control when limited experimental kinetic data are available. By pairing efficient microfluidic technology with machine learning algorithms to obtain high-fidelity datasets from minimal iterations, they were able to reduce chemical waste by two orders of magnitude and shrink catalyst discovery from weeks to hours.

Hartman explained that designing the microfluidic setup required the team to first estimate the thermodynamics of polymerization reactions, in this case involving a class of metallocene catalysts, widely used in industrial-scale polymerization of polyethylene and other thermoplastic polymers.

"We first developed an order-of-magnitude estimation of heat and mass transport," said Hartman. "Knowledge of these quantities enabled us to design a microfluidic device that can screen the activity of catalysts and offer scalable mechanisms mimicking the intrinsic kinetics needed for industrial-scale processes."

Hartman added that such a benchtop system could open the door to a range of other experimental data. "It could provide context for analyzing other properties of interest such as how stream mixing, dispersion, heat transfer, mass transfer, and the reaction kinetics influence polymer characteristics," he explained.

Using a class of zirconocene-based polymerization catalysts, the research team paired microfluidics -- proven in research on other exothermic reactions -- with an automated pump and infrared thermography to detect changes in reactivity based on exotherms (the bursts of heat the reactions give off), resulting in efficient, high-speed experimentation to map the catalyst's reaction space. Because the process was conducted in a small reactor, they were able to introduce the catalyst dissolved in liquid, eliminating the need for extreme conditions to induce catalysis.

"The fact is, most plastics are made using metallocene catalysts bound to silica particles, creating a heterogenous substrate that polymerizes monomers like propylene and ethylene," said Hartman. "Recent advances in homogenous catalyst of dissolved metallocene allow milder reaction conditions."

Hartman's group previously demonstrated that artificial neural networks (ANNs) can be used as a tool for modeling and understanding polymerization pathways. In the new research they applied ANNs to modeling the zirconocene-catalyzed exothermic polymerization. Using MATLAB and LabVIEW to control the reactions, interface with external devices, and generate advanced computational algorithms, the researchers built a series of ANNs to model and optimize catalysis based on the experimental results.
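
As a rough illustration of the surrogate-modeling idea, the Python sketch below fits a small neural network to synthetic exotherm data. The variable names, toy heat model, and scikit-learn implementation are assumptions for illustration only; they are not the authors' MATLAB/LabVIEW code or their dataset.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
catalyst_conc = rng.uniform(0.1, 2.0, 200)     # synthetic catalyst concentrations
monomer_flow = rng.uniform(1.0, 10.0, 200)     # synthetic monomer flow rates
# Toy "measured" peak exotherm: rises with both inputs, plus measurement noise.
peak_exotherm = 5.0 * catalyst_conc + 1.5 * np.sqrt(monomer_flow) + rng.normal(0.0, 0.2, 200)

X = np.column_stack([catalyst_conc, monomer_flow])
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X, peak_exotherm)

# The trained surrogate can be queried cheaply to help choose the next experiment to run.
print(surrogate.predict([[1.0, 5.0]]))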

"Chemical companies typically use 100-milliliter to 10-liter reactors to screen hundreds of catalysts that in turn could be scaled up to manufacture plastics. Here we are using less than a milliliter, and by scaling down the footprint of lab experiments you scale down the facilities needed, so the whole footprint is reduced. Our work provides a useful tool for both scientific and technoeconomic analysis of complex catalytic polymerizations," said Hartman.

Hartman and his lab's discoveries open doors to new types of research, primarily involving the concept of automated, or "robotic" chemistry, increasing throughput, data fidelity, and the safe handling of highly exothermic polymerizations.

He explained that, in principle, the method could lead to more efficient design and more environmentally benign plastics, since faster screening of catalysts and polymers makes it possible to tailor processes toward more environmentally friendly polymers more quickly.

Credit: 
NYU Tandon School of Engineering

Discovery offers new avenue for next-generation data storage

image: Researchers Liangzi Deng, left, and Paul Chu worked with colleagues reporting the discovery of a new compound capable of maintaining its skyrmion properties at room temperature through the use of high pressure. The work holds promise for next-generation data storage.

Image: 
Audrius Brazdeikis, University of Houston

The demands for data storage and processing have grown exponentially as the world becomes increasingly connected, emphasizing the need for new materials capable of more efficient data storage and data processing.

An international team of researchers, led by physicist Paul Ching-Wu Chu, founding director of the Texas Center for Superconductivity at the University of Houston, is reporting a new compound capable of maintaining its skyrmion properties at room temperature through the use of high pressure. The results also suggest the potential for using chemical pressure to maintain the properties at ambient pressure, offering promise for commercial applications.

The work is described in the Proceedings of the National Academy of Sciences.

A skyrmion is the smallest possible perturbation to a uniform magnet, a point-like region of reversed magnetization surrounded by a whirling twist of spins. These extremely small regions, along with the possibility of moving them using very little electrical current, make the materials hosting them promising candidates for high-density information storage. But the skyrmion state normally exists only over a very low and narrow temperature range. In the compound Chu and colleagues studied, for example, it normally exists only within a window of about 3.5 kelvins, between 55 K and 58.5 K (between -360.7 Fahrenheit and -354.4 Fahrenheit). That makes it impractical for most applications.
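
For reference, the Fahrenheit figures quoted above follow from the standard kelvin-to-Fahrenheit conversion:

\[
T_{\mathrm{F}} = \frac{9}{5}\,(T_{\mathrm{K}} - 273.15) + 32,
\qquad 55~\mathrm{K} \approx -360.7~^{\circ}\mathrm{F},
\qquad 58.5~\mathrm{K} \approx -354.4~^{\circ}\mathrm{F}.
\]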

Working with a copper oxyselenide compound, Chu said the researchers were able to dramatically expand the temperature range at which the skyrmion state exists, up to 300 Kelvin, or about 80 degrees Fahrenheit, near room temperature. First author Liangzi Deng said they successfully detected the state at room temperature for the first time under 8 gigapascals, or GPa, of pressure, using a special technique he and colleagues developed. Deng is a researcher with the Texas Center for Superconductivity at UH (TcSUH).

Chu, the corresponding author for the work, said researchers also found that the copper oxyselenide compound undergoes different structural-phase transitions with increasing pressure, suggesting the possibility that the skyrmion state is more ubiquitous than previously thought.

"Our results suggest the insensitivity of the skyrmions to the underlying crystal lattices. More skyrmion material may be found in other compounds, as well," Chu said.

The work suggests the pressure required to maintain the skyrmion state in the copper oxyselenide compound could be replicated chemically, allowing it to work under ambient pressure, another important requirement for potential commercial applications. That has some analogies to work Chu and his colleagues did with high-temperature superconductivity, announcing in 1987 that they had stabilized high-temperature superconductivity in YBCO (yttrium, barium, copper, and oxygen) by replacing ions in the compound with smaller isovalent ions.

Credit: 
University of Houston

Keratin scaffolds could advance regenerative medicine and tissue engineering for humans

Researchers at the Mossakowski Medical Research Center of the Polish Academy of Sciences have developed a simple method for preparing 3D keratin scaffold models which can be used to study the regeneration of tissue.

The regeneration of tissue at the site of injury or wounds caused by burns or diseases such as diabetes is a challenging task in the field of biomedical science. Regenerative medicine and tissue engineering require complementary key ingredients: biologically compatible scaffolds that can be readily accepted by the body without harm, and suitable cells, including various stem cells, that effectively replace the damaged tissue without adverse consequences. The scaffold should mimic the structure and biological function of the native extracellular matrix at the site of injury to support the regeneration of tissue.

The article "Can keratin scaffolds be used for creating three-dimensional cell cultures" has been published in the De Gruyter open access journal Open Medicine. It presents findings on the development of three-dimensional cell cultures derived from fiber keratin scaffolds from rat fur. The study suggests that both the use of appropriate digest enzymes and the fraction in length and diameter of the keratin fibers is significant. The cells demonstrated the ability to grow up and form the 3D colonies on rat F-KAP for several weeks without morphological changes of the cells and with no observed apoptosis.

"We believe that this absence of morphological changes in cells and the lack of apoptosis, in addition to the low immunogenicity and biodegradation of KAP scaffolds, indicates that they are promising candidates for tissue engineering in clinical applications," said Piotr Kosson, PhD.

Credit: 
De Gruyter

Volcanic CO2 emissions helped trigger Triassic climate change

A new study finds volcanic activity played a direct role in triggering extreme climate change at the end of the Triassic period 201 million years ago, wiping out almost half of all existing species. The amount of carbon dioxide released into the atmosphere by these volcanic eruptions is comparable to the amount of CO2 expected to be produced by all human activity in the 21st century.

The end-Triassic extinction has long been thought to have been caused by dramatic climate change and rising sea levels. While there was large-scale volcanic activity at the time, known as the Central Atlantic Magmatic Province eruptions, the role it played in directly contributing to the extinction event is debated. In a study for Nature Communications, an international team of researchers, including McGill professor Don Baker, found evidence of bubbles of carbon dioxide trapped in volcanic rocks dating to the end of the Triassic, supporting the theory that volcanic activity contributed to the devastating climate change believed to have caused the mass extinction.

The researchers suggest that the end-Triassic environmental changes driven by volcanic carbon dioxide emissions may have been similar to those predicted for the near future. By analysing tiny gas exsolution bubbles preserved within the rocks, the team estimates that the amount of carbon emissions released in a single eruption - comparable to 100,000 km3 of lava spewed over 500 years - is likely equivalent to the total produced by all human activity during the 21st century, assuming a 2C rise in global temperature above pre-industrial levels.

"Although we cannot precisely determine the total amount of carbon dioxide released into the atmosphere when these volcanoes erupted, the correlation between this natural injection of carbon dioxide and the end-Triassic extinction should be a warning to us. Even a slight possibility that the carbon dioxide we are now putting into the atmosphere could cause a major extinction event is enough to make me worried," says professor of earth and planetary sciences Don Baker.

Credit: 
McGill University

Automated 'pipeline' improves access to advanced microscopy data

ANN ARBOR--A new data-processing approach created by scientists at the University of Michigan Life Sciences Institute offers a simpler, faster path to data generated by cryo-electron microscopy instruments, removing a barrier to wider adoption of this powerful technique.

Cryo-EM enables scientists to determine the 3D shape of cellular proteins and other molecules that have been flash-frozen in a thin layer of ice. Advanced microscopes beam high-energy electrons through the ice while capturing thousands of videos. These videos are then averaged to create a 3D structure of the molecule.

By uncovering the precise structures of these molecules, researchers can answer important questions about how the molecules function in cells and how they might contribute to human health and disease. For example, researchers recently used cryo-EM to reveal how a protein spike on the COVID-19 virus enables it to gain entry into host cells.

Recent advances in cryo-EM technology have rapidly opened this field to new users and increased the rate at which data can be collected. Despite these improvements, however, researchers still face a substantial hurdle in accessing the full potential of this technique: the complex data processing landscape required to turn the microscope's terabytes of data into a 3D structure ready for analysis.

Before researchers can begin analyzing the 3D structure they want to study, they have to complete a series of preprocessing steps and subjective decisions. Currently, these steps must be supervised by humans--and because researchers use cryo-EM to analyze a huge variety of molecule types, scientists thought that it was nearly impossible to create a general set of guidelines that all researchers could follow for these steps, said Yilai Li, a Willis Life Sciences Fellow at the LSI who led the development of the new program.

"If we can create an automated pipeline for those preprocessing steps, the whole process could be much more user-friendly, especially for newcomers to the field," Li said.

Using machine learning, Li and his colleagues in the lab of LSI assistant professor Michael Cianfrocco have developed just such a pipeline. The program was published April 14 as part of a study in the journal Structure.

The new program connects several deep-learning and image-analysis tools with preexisting data-preprocessing software to narrow enormous data sets down to the information researchers need to begin their analysis.
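
The overall shape of such a pipeline can be sketched in a few lines of Python. The step names and return values below are hypothetical stand-ins for the real preprocessing stages (motion correction, CTF estimation, particle picking); this is a schematic illustration, not the published program or its API.

from typing import Dict, List, Tuple

def motion_correct(movie: str) -> str:
    """Stub: align movie frames and return the path to the averaged micrograph."""
    return movie.replace(".tiff", "_aligned.mrc")

def estimate_ctf(micrograph: str) -> Dict[str, float]:
    """Stub: return made-up contrast transfer function quality metrics."""
    return {"defocus_um": 1.8, "fit_resolution_A": 4.2}

def pick_particles(micrograph: str) -> List[Tuple[int, int]]:
    """Stub: return made-up particle coordinates."""
    return [(100, 200), (350, 410)]

def preprocess(movies: List[str], max_fit_resolution_A: float = 6.0) -> List[Tuple[int, int]]:
    """Run each movie through the pipeline, keeping particles from good micrographs only."""
    particles: List[Tuple[int, int]] = []
    for movie in movies:
        micrograph = motion_correct(movie)
        ctf = estimate_ctf(micrograph)
        if ctf["fit_resolution_A"] <= max_fit_resolution_A:   # automated quality decision
            particles.extend(pick_particles(micrograph))
    return particles

print(len(preprocess(["stack_0001.tiff", "stack_0002.tiff"])))  # -> 4 particles kept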

"This pipeline takes the knowledge that experienced users have gained and puts it into a program that improves accessibility for users from a range of backgrounds," said Cianfrocco, who is also an assistant professor of biological chemistry at the U-M Medical School. "It really streamlines the process stage so that researchers can jump in and focus on what's important: the scientific questions they want to ask and answer."

Credit: 
University of Michigan

Timing of large earthquakes follows a 'devil's staircase' pattern

At the regional level and worldwide, the occurrence of large shallow earthquakes appears to follow a mathematical pattern called the Devil's Staircase, where clusters of earthquake events are separated by long but irregular intervals of seismic quiet.

The finding published in the Bulletin of the Seismological Society of America differs from the pattern predicted by classical earthquake modeling that suggests earthquakes would occur periodically or quasi-periodically based on cycles of build-up and release of tectonic stress. In fact, say Yuxuan Chen of the University of Missouri, Columbia, and colleagues, periodic large earthquake sequences are relatively rare.

The researchers note that their results could have implications for seismic hazard assessment. For instance, they found that these large earthquake sequences (those with events of magnitude 6.0 or greater) are "burstier" than expected, meaning that the clustering of earthquakes in time results in a higher probability of repeat seismic events soon after a large earthquake. The irregular gaps between bursts also make it more difficult to predict an average recurrence time between big earthquakes.
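
One widely used way to quantify how "bursty" an event sequence is compares the mean \(\mu\) and standard deviation \(\sigma\) of the interevent times (a standard measure from the temporal-statistics literature; the statistic used in this study may differ):

\[
B = \frac{\sigma - \mu}{\sigma + \mu},
\]

which is close to 1 for strongly clustered sequences, 0 for a Poisson process, and approaches -1 for perfectly periodic events.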

Seismologists' catalogs for large earthquakes in a region might include too few earthquakes over too short a time to capture the whole staircase pattern, making it "difficult to know whether the few events in a catalog occurred within an earthquake cluster or spanned both clusters and quiescent intervals," Chen and his colleagues noted.

"For this same reason, we need to be cautious when assessing an event is 'overdue' just because the time measured from the previous event has passed some 'mean recurrence time' based an incomplete catalog," they added.

The Devil's Staircase, sometimes called a Cantor function, is a fractal demonstrated by nonlinear dynamic systems, in which a change in any part could affect the behavior of the whole system. In nature, the pattern can be found in sedimentation sequences, changes in uplift and erosion rates and reversals in Earth's magnetic field, among other examples.
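
The staircase itself is easy to compute. The following is a minimal Python sketch of the Cantor ("Devil's staircase") function referred to above, included purely for illustration; it is not the authors' analysis code.

def cantor(x: float, depth: int = 30) -> float:
    """Approximate the Cantor function on [0, 1] by descending into thirds."""
    value, scale = 0.0, 0.5
    for _ in range(depth):
        if x < 1.0 / 3.0:        # left third: keep descending, value unchanged
            x *= 3.0
        elif x > 2.0 / 3.0:      # right third: add the current step, then descend
            value += scale
            x = 3.0 * x - 2.0
        else:                    # middle third: the staircase is flat here
            return value + scale
        scale *= 0.5
    return value

print(cantor(0.5))   # 0.5: the wide flat step in the middle
print(cantor(0.25))  # ~0.333: long flat "quiescent" intervals separate the jumps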

Chen's Ph.D. advisor Mian Liu had an unusual introduction to the Devil's Staircase. "I stumbled into this topic a few years ago when I read about two UCLA researchers' study of the temporal pattern of a notorious serial killer, Andrei Chikatilo, who killed at least 52 people from 1979 to 1990 in the former Soviet Union," he explained. "The time pattern of his killings is a Devil's staircase. The researchers were trying to understand how the criminal's mind worked, how neurons stimulate each other in the brain. I was intrigued because I realized that earthquakes work in a similar way, that a fault rupture could stimulate activity on other faults by stress transfer."

"Conceptually, we also know that many large earthquakes, which involve rupture of multiple and variable fault segments in each rupture, violate the basic assumption of the periodic earthquakes model, which is based on repeated accumulation and release of energy on a given fault plane," Liu added.

The factors controlling the clustered events are complex, and could involve the stress that stimulates an earthquake, changes in frictional properties and stress transfer between faults or fault segments during a rupture, among other factors, said Gang Luo of Wuhan University. He noted that the intervals appear to be inversely related to the background tectonic strain rate for a region.

Credit: 
Seismological Society of America

Moffitt researchers identify molecular pathway that controls immunosuppression in tumors

TAMPA, Fla. - An active immune system plays an important role in stopping cancer progression by identifying tumor cells and targeting them for destruction. Any deviation in the normal activities of the immune system could lead to accelerated tumor growth. Moffitt Cancer Center researchers wanted to determine how myeloid cells, a type of immune cell, contribute to the progression of cancer. In a new article published today in the journal Immunity, the Moffitt team reveals how protein-signaling pathways associated with cellular stress processes turn myeloid cells into tumor-promoting players and suggests that targeting the PERK protein may be an effective therapeutic approach to reactivate the immune system and boost the effectiveness of immunotherapy.

Myeloid cells are involved in the anti-cancer activity of the immune system. However, in most patients with advanced malignancies, myeloid cell pathways become altered and can transform into myeloid-derived suppressor cells (MDSCs) that inhibit protective anti-tumor immunity. These MDSCs expand in cancer patients and turn on signaling pathways that deactivate the effective destruction of tumor cells by other immune populations. Studies have shown that high numbers of MDSCs correlate with poor outcomes and drug resistance in many cancer patients. MDSCs also promote metastatic spreading by forming favorable niches for sprouted tumor cells. These observations suggest that targeting MDSCs may be a viable approach to re-stimulate the immune system to target cancer cells; however, scientists do not completely understand how MDSCs function in the tumor environment and no effective strategies to block their activity currently exist.

Moffitt researchers aimed to identify central molecular mechanisms of how MDSCs contribute to tumor progression. The team performed a series of pre-clinical studies to determine how the protein kinase PERK contributes to MDSC activity. They discovered that active PERK levels were higher in the MDSCs from animals and patients with metastatic non-small cell lung carcinoma and ovarian cancer when compared to normal lung and ovarian tissue, suggesting that there may be an association between active PERK in MDSCs and cancer development.

In order to more clearly define the role of PERK in tumor progression, the Moffitt researchers created a mouse model that was lacking the PERK gene in MDSCs to determine the effect of its loss on immune cell activity. They discovered that deletion of PERK reprogrammed MDSCs into cells that reactivated the anti-tumor activity of T cells, other immune cells, which could identify and target tumor cells for destruction. The researchers also identified the molecular pathway that led from PERK to T-cell activation and demonstrated that treatment of mice with drugs that inhibit PERK reduced tumor growth, suggesting that targeting the PERK protein may be an effective approach in cancer patients.

"Our findings demonstrate the pivotal role of PERK in tumor-MDSC functionality and unveil strategies to reprogram myeloid cells in cancer patients from an immunosuppressive to an immunostimulatory cell type that boosts cancer immunotherapy" said Paulo Rodriguez, Ph.D., associate member of the Department of Immunology at Moffitt. "Despite these encouraging data, further research evaluating the activity of PERK inhibitors is warranted, as we anticipate that the dose, treatment period, route, and bioavailability could regulate their therapeutic vs. toxic effects."

Credit: 
H. Lee Moffitt Cancer Center & Research Institute

NREL six-junction solar cell sets two world records for efficiency

Scientists at the National Renewable Energy Laboratory (NREL) have fabricated a solar cell with an efficiency of nearly 50%.

The six-junction solar cell now holds the world record for the highest solar conversion efficiency at 47.1%, which was measured under concentrated illumination. A variation of the same cell also set the efficiency record under one-sun illumination at 39.2%.

"This device really demonstrates the extraordinary potential of multijunction solar cells," said John Geisz, a principal scientist in the High-Efficiency Crystalline Photovoltaics Group at NREL and lead author of a new paper on the record-setting cell.

The paper, "Six-junction III-V solar cells with 47.1% conversion efficiency under 143 suns concentration," appears in the journal Nature Energy. Geisz's co-authors are NREL scientists Ryan France, Kevin Schulte, Myles Steiner, Andrew Norman, Harvey Guthrey, Matthew Young, Tao Song, and Thomas Moriarty.

To construct the device, NREL researchers relied on III-V materials--so called because of their position on the periodic table--that have a wide range of light absorption properties. Each of the cell's six junctions (the photoactive layers) is specially designed to capture light from a specific part of the solar spectrum. The device contains about 140 total layers of various III-V materials to support the performance of these junctions, yet it is three times thinner than a human hair. Because of their highly efficient nature and the cost associated with making them, III-V solar cells are most often used to power satellites, which prize their unmatched performance.

On Earth, however, the six-junction solar cell is well-suited for use in concentrator photovoltaics, said Ryan France, co-author and a scientist in the III-V Multijunctions Group at NREL.

"One way to reduce cost is to reduce the required area," he said, "and you can do that by using a mirror to capture the light and focus the light down to a point. Then you can get away with a hundredth or even a thousandth of the material, compared to a flat-plate silicon cell. You use a lot less semiconductor material by concentrating the light. An additional advantage is that the efficiency goes up as you concentrate the light."

France described the potential for the solar cell to exceed 50% efficiency as "actually very achievable," though he noted that 100% efficiency cannot be reached because of the fundamental limits imposed by thermodynamics.

Geisz said that currently the main research hurdle to topping 50% efficiency is to reduce the resistive barriers inside the cell that impede the flow of current. Meanwhile, he notes that NREL is also heavily engaged in reducing the cost of III-V solar cells, enabling new markets for these highly efficient devices.

Credit: 
DOE/National Renewable Energy Laboratory

Big variability in blood pressure readings between anatomical sites

image: Neurocritical care nurse Kathrina Siaron, taking the blood pressure of a mock patient, helped lead a study that revealed such readings can vary when taken on different parts of the body.

Image: 
UTSW

DALLAS - April 14, 2020 - Blood pressure readings taken from neuroscience intensive care unit (NSICU) patients had marked differences between opposite sides of the body and different anatomical sites in each individual, highlighting the significant and sometimes extreme variability of this measure even in the same person depending on where it's taken, researchers from UT Southwestern Medical Center report in a new study.

The findings, published online Feb. 25, 2020, in Scientific Reports as the 100th research paper published by nurses at UTSW, could eventually change how blood pressure information - which informs a variety of medical decisions in the NSICU and beyond - is collected.

Having an accurate blood pressure reading is essential to delivering often lifesaving care, say UT Southwestern study leaders Kathrina B. Siaron, B.S.N., R.N., a neurocritical care nurse, and DaiWai M. Olson, Ph.D., R.N., a professor of Neurology and Neurotherapeutics and Neurological Surgery.

"For our patients in the NSICU, blood pressure often needs to be maintained in a very narrow range," Siaron says. "Moving it one way or another could potentially harm the patient."

It's also a parameter that's been measured much the same way for over a century, she explains: Patients wear a cuff around the upper arm, wrist, or thigh for noninvasive assessments, or a thin plastic catheter is inserted into their arteries for invasive measures, with arterial pressure long considered the gold standard and thought to be within 10 points of blood pressure in the upper arm. Although it's well known that blood pressure can vary dramatically between patients or in the same patient from moment to moment, medical care providers have long assumed that there was little variation between these measures at different sites on the same patient.

To test this idea, Siaron, Olson, and their colleagues worked with 80 patients admitted to UT Southwestern's NSICU between April and July 2019. These patients - split almost equally between men and women with a mean age of about 53 - were admitted to the unit for a variety of common serious neurological problems, including stroke, subarachnoid hemorrhage, and brain tumors.

The researchers had these patients sit upright in their hospital beds, wearing blood pressure cuffs on both upper arms connected to different, precisely calibrated machines. Simultaneously, clinicians activated the machines to take readings, then recorded them. They then had the patients wear wrist cuffs and performed the same exercise. For the 29 patients who also had arterial blood pressure sensors, values from those devices were recorded while patients were receiving noninvasive blood pressure readings.

As expected, there were often large blood pressure differences from patient to patient. However, there were also significant differences in individual patients from site to site. There was a mean difference of about 8 points in systolic pressure (the top number in blood pressure values) between upper arms, and a mean difference of up to 13 points between upper arm and wrist systolic values.

Diastolic measures (the bottom number in blood pressure values) varied by a mean of about 6 points between arms and about 5 points between upper arms and wrists. Arterial pressure often varied significantly from each of the values - sometimes as much as 15 points higher or lower.

Although the differences between sites were just a few points on average, they reached as much as 40 points in some patients, says Olson - a dramatic difference that could radically affect the type of care a patient receives.
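
As a simple illustration of the kind of paired comparison involved, the Python sketch below computes per-patient differences between simultaneous left- and right-arm systolic readings. The numbers are invented for illustration; they are not the study's data.

left_arm_systolic = [128, 141, 110, 152, 119]    # mmHg, hypothetical patients
right_arm_systolic = [122, 149, 131, 150, 101]

diffs = [abs(l - r) for l, r in zip(left_arm_systolic, right_arm_systolic)]
mean_diff = sum(diffs) / len(diffs)
print(f"Mean inter-arm difference: {mean_diff:.1f} mmHg; largest: {max(diffs)} mmHg")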

"If we take pressure in one arm, a patient seems fine, but in the other arm, they're in a crisis," he says. "The values we collected were really all over the place. There was no consistency between the same arm or wrist between different patients."

It's unclear why these differences exist between sites, adds Siaron - blood pressure numbers could be affected by an assortment of factors, such as a patient's posture, anatomical differences, or medical conditions that affect blood flow. The team plans to continue to study blood pressure among different anatomical sites in varying populations, such as patients in the general ICU or healthy volunteers. Eventually, they say, blood pressure might be collected using a totally different protocol, such as averaging the values between two sides of the body or accepting the highest number.

Credit: 
UT Southwestern Medical Center