Tech

How cancer cells stiff-arm normal environmental cues to consume energy

image: Proposed model of mechanically regulated glycolysis. At left, the cytoskeleton architecture in normal cells responds to mechanical cues to modulate glycolysis. At right, maintenance of the cytoskeleton architecture in cancer cells, decoupled from mechanical cues, results in preserved glycolysis.

Image: 
UTSW

DALLAS - Feb. 24, 2020 - Using human lung cancer cells, UT Southwestern researchers have uncovered how cells in general modulate their energy consumption based on their surroundings and, furthermore, how cancer cells override those cues to maximize energy use. The findings, published this week in Nature, extend a report from last year in which the same group discovered that the cell's skeleton can promote cancer cell growth in metastasis or when under chemotherapy assault.

"Cancer cells experience variable mechanical conditions during tumor growth and spread, so we wondered whether the mechanical conditions also affect glycolysis - the cell's energy use. Enhanced glycolysis is a hallmark of cancer," says Gaudenz Danuser, Ph.D., a professor of cell biology and chair of the Lyda Hill Department of Bioinformatics.

The mechanics of the microenvironment of the cell impact cell functions like growth, survival, death, and changes in cell shape, says Danuser. All of these behaviors require energy, but Danuser says no one had studied how cells might change their energy use based on their microenvironment.

Cells can sense the stiffness of the tissues and materials around them through the proteins that make up their skeleton, he says. Lung cells in particular stretch and contract with every breath; however, lung tissues stiffen when certain conditions develop, like pulmonary fibrosis or cancer. Danuser's team set out to study the interplay between energy use and tissue stiffness. They first grew normal lung cells separately on stiff and soft surfaces - glass coated with a stiff collagen or glass coated with a softer, more gelatinous collagen. By measuring breakdown metabolites, they found that cells grown on soft surfaces routinely decrease their energy use: stiff surface - more metabolites; soft surface - fewer metabolites.

The team then investigated how this was controlled and in particular asked whether the cells were making the same amount of one critical enzyme, phosphofructokinase (PFK), needed for glucose metabolism. By tagging the PFK so that it glowed green and growing cells containing that tagged enzyme on soft and hard surfaces, the team found the same amount of the enzyme was initially made, but much less was found in the cells grown on soft surfaces. They concluded that something must be sensing the soft surface, then clearing out the PFK after it was made.

Next, the team wondered whether lung cancer cells behaved similarly. So they grew cancerous lung cells on the same stiff and soft surfaces. They found plenty of energy use on the stiff surface, just like the healthy cells. But they also found high energy use on the soft surfaces, again based on measuring breakdown metabolites.

To confirm that this indeed happens in intact lungs, the team measured the same metabolites in tissue - healthy and tumorous - removed from patients. Using a computer-based measurement tool, they were able to approximate the levels of PFK in single cells growing in the softer center of the lung, near stiff airway branches, and in tumors. Cells from tumors and those growing near airway branches routinely had higher levels of PFK than those growing in the center of the lung, as predicted from the experiments with lab-cultured cells.

"These experiments allowed us to conclude that there is indeed some sort of mechanical regulation of cellular metabolism and that cancer cells can override that regulation," says Danuser, holder of the Patrick E. Haggerty Distinguished Chair in Basic Biomedical Science.

The researchers next reasoned that stiffness sensing must somehow be related to the cell's skeleton, which itself can change dynamically from rigid to more pliable, depending on what the cell is doing. Using microscopes and computer software they wrote themselves, they examined the skeletons in cells grown on both stiff and soft surfaces. On stiff surfaces, they found the cell skeleton contained longer, thicker protein cables; on soft surfaces, the skeleton material was shorter, less bundled, and more curved.

They knew from their experiments that PFK, the limiting enzyme for glycolysis, is somehow removed after it's made if cells are grown on softer tissues. That finding caused the researchers to look for a link between the cell's skeleton and its system for so-called protein degradation, or removal of unneeded enzymes. The team systematically depleted from cells each of 18 proteins they identified in a bioinformatics analysis as candidates targeting PFK for degradation. They asked which of those candidates, when removed, would increase the cell's energy use, mimicking the override observed in lung tumor cells.

They found the TRIM21 protein.

The researchers discovered that the thicker cell skeleton bundles sequester the TRIM21 and prevent it from targeting other proteins for destruction. When the cell grows on a softer surface, the bundles are thin, releasing TRIM21 and allowing it to interact with the PFK, so that the enzyme gets degraded and thus glucose metabolism is reduced, says Danuser. Cancer cells regulate their skeletons differently: The TRIM21 stays stuck to the skeleton, prevented from targeting metabolic enzymes for degradation and keeping metabolism high.

They also found that a genetic mutation of TRIM21, which has been clinically reported in cancer, leads to clumping of the TRIM21 protein. Instead of sticking to the cell skeleton, the TRIM21 sticks to itself, rendering it inactive.

"This study establishes a mechanism for mechanically regulated glycolysis via TRIM21 degradation of PFK in a pathway that involves the cell's skeleton and ultimately affects energy availability in healthy and cancerous cells," Danuser says. "In cancer cells, suppressing TRIM21 function through sequestration may contribute to the malignancy's metabolic hallmark: the ability to keep the energy coming despite changes in its environment."

UTSW co-authors include Tadamoto Isogai, Boning Gao, John Minna, Robert Bachoo, and Ralph DeBerardinis. Others involved in the study include Rossana Lazcano and Luisa Solis of UT MD Anderson Cancer Center and Linqing Li and Christopher Chen, both of Harvard University and Boston University.

Credit: 
UT Southwestern Medical Center

Swarming robots avoid collisions, traffic jams

video: One hundred small robots swarm together to self-assemble to form "NU." A new algorithm prevents the robots from colliding and getting stuck in traffic jams. Someday, this algorithm could make fleets of autonomous vehicles more reliable, safe and efficient.

Image: 
Northwestern University

EVANSTON, Ill. -- For self-driving vehicles to become an everyday reality, they need to safely and flawlessly navigate around one another without crashing or causing unnecessary traffic jams.

To help make this possible, Northwestern University researchers have developed the first decentralized algorithm with a collision-free, deadlock-free guarantee.

The researchers tested the algorithm in a simulation of 1,024 robots and on a swarm of 100 real robots in the laboratory. The robots reliably, safely and efficiently converged to form a pre-determined shape in less than a minute.

"If you have many autonomous vehicles on the road, you don't want them to collide with one another or get stuck in a deadlock," said Northwestern's Michael Rubenstein, who led the study. "By understanding how to control our swarm robots to form shapes, we can understand how to control fleets of autonomous vehicles as they interact with each other."

The paper will be published later this month in the journal IEEE Transactions on Robotics. Rubenstein is the Lisa Wissner-Slivka and Benjamin Slivka Professor in Computer Science in Northwestern's McCormick School of Engineering.

The advantage of a swarm of small robots -- versus one large robot or a swarm with one lead robot -- is the lack of centralized control, which can quickly become a central point of failure. Rubenstein's decentralized algorithm acts as a fail-safe.

"If the system is centralized and a robot stops working, then the entire system fails," Rubenstein said. "In a decentralized system, there is no leader telling all the other robots what to do. Each robot makes its own decisions. If one robot fails in a swarm, the swarm can still accomplish the task."

Still, the robots need to coordinate in order to avoid collisions and deadlock. To do this, the algorithm views the ground beneath the robots as a grid. By using technology similar to GPS, each robot is aware of where it sits on the grid.

Before making a decision about where to move, each robot uses sensors to communicate with its neighbors, determining whether or not nearby spaces within the grid are vacant or occupied.

"The robots refuse to move to a spot until that spot is free and until they know that no other robots are moving to that same spot," Rubenstein said. "They are careful and reserve a space ahead of time."

Even with all this careful coordination, the robots are still able to communicate and move swiftly to form a shape. Rubenstein accomplishes this by keeping the robots near-sighted.

"Each robot can only sense three or four of its closest neighbors," Rubenstein explained. "They can't see across the whole swarm, which makes it easier to scale the system. The robots interact locally to make decisions without global information."

In Rubenstein's swarm, for example, 100 robots can coordinate to form a shape within a minute. In some previous approaches, it could take a full hour. Rubenstein imagines that his algorithm could be used in fleets of driverless cars and in automated warehouses.

"Large companies have warehouses with hundreds of robots doing tasks similar to what our robots do in the lab," he said. "They need to make sure their robots don't collide but do move as quickly as possible to reach the spot where they eventually give an object to a human."

Credit: 
Northwestern University

Living cell imaging technique sheds light on molecular view of obesity

image: USU researchers developed a sensing optical imaging nanoprobe that uses scattered light to provide a structural fingerprint for molecules.

Image: 
USU

A collaborative team of researchers at Utah State University and the University of Central Florida developed a tool to track cellular events that may lead to obesity-related conditions in people.

The research findings were published Feb. 3 in the Proceedings of the National Academy of Sciences.

The team, led by Anhong Zhou, a professor in USU's Department of Biological Engineering, developed a sensing optical imaging nanoprobe that uses scattered light to provide a structural fingerprint for molecules. The probes can be used to more easily identify and illustrate cell surface receptors that can either prompt or stop cellular responses to certain external stimuli. The probes make it possible to monitor multiple surface receptors on an individual cell and provide researchers an unprecedented view of cellular surface activity. Zhou and his team, including biological engineering doctoral student Wei Zhang, applied these novel nanoprobes to successfully detect the cell receptors that recognize fatty acids at the single living cell level.

The technique represents a major step toward an improved understanding of certain cellular events and could have widespread impact on the study of fat intake and the development of obesity. The new method could also be used as a simple screening technique for testing external stimuli that trigger the cell surface receptors that bind fatty acids. This would make for an efficient test to ensure that new drugs accurately prompt the correct cellular activities associated with obesity and other obesity-related conditions. Zhou and his team's work is increasingly relevant as the prevalence of obesity grows as a public health concern in the United States.

Zhou says the research represents an exciting collaboration between researchers and aligns well with his belief that biological engineering is an important frontier in the scientific community. "This is an excellent example that fulfills our long-term goal of applying engineering tools to solve biology-driven problems," he said. "In the past several years, we have been thrilled to develop new cell-based assay technologies that potentially benefit human health problems like obesity. We are currently extending this technology for developing a new method for early cancer diagnosis."

Credit: 
Utah State University

A simple retrofit transforms electron microscopes into high-speed atom-scale cameras

image: NIST researcher June Lau with a transmission electron microscope (TEM) that she and her colleagues retrofitted in order to make high-quality atom-scale movies.

Image: 
N. Hanacek/NIST

Researchers at the National Institute of Standards and Technology (NIST) and their collaborators have developed a way to retrofit the transmission electron microscope -- a long-standing scientific workhorse for making crisp microscopic images -- so that it can also create high-quality movies of super-fast processes at the atomic and molecular scale. Compatible with electron microscopes old and new, the retrofit promises to enable fresh insights into everything from microscopic machines to next-generation computer chips and biological tissue by making this moviemaking capability more widely available to laboratories everywhere.

"We want to be able to look at things in materials science that happen really quickly," said NIST scientist June Lau. She reports the first proof-of-concept operation of this retrofitted design with her colleagues in the journal Review of Scientific Instruments. The team designed the retrofit to be a cost-effective add-on to existing instruments. "It's expected to be a fraction of the cost of a new electron microscope," she said.

A nearly 100-year-old invention, the electron microscope remains an essential tool in many scientific laboratories. A popular version is known as the transmission electron microscope (TEM), which fires electrons through a target sample to produce an image. Modern versions of the microscope can magnify objects by as much as 50 million times. Electron microscopes have helped to determine the structure of viruses, test the operation of computer circuits, and reveal the effectiveness of new drugs.

"Electron microscopes can look at very tiny things on the atomic scale," Lau said. "They are great. But historically, they look at things that are fixed in time. They're not good at viewing moving targets," she said.

In the last 15 years, laser-assisted electron microscopes made videos possible, but such systems have been complex and expensive. While these setups can capture events that last from nanoseconds (billionths of a second) to femtoseconds (quadrillionths of a second), a laboratory must often buy a newer microscope to accommodate this capability as well as a specialized laser, with a total investment that can run into the millions of dollars. A lab also needs in-house laser-physics expertise to help set up and operate such a system.

"Frankly, not everyone has that capacity," Lau said.

In contrast, the retrofit enables TEMs of any age to make high-quality movies on the scale of picoseconds (trillionths of a second) by using a relatively simple "beam chopper." In principle, the beam chopper can be used in any manufacturer's TEM. To install it, NIST researchers open the microscope column directly under the electron source, insert the beam chopper and close up the microscope again. Lau and her colleagues have successfully retrofitted three TEMs of different capabilities and vintage.

Like a stroboscope, this beam chopper releases precisely timed pulses of electrons that can capture frames of important repeating or cyclic processes.

"Imagine a Ferris wheel, which moves in a cyclical and repeatable way," Lau said. "If we're recording it with a pinhole camera, it will look blurry. But we want to see individual cars. I can put a shutter in front of the pinhole camera so that the shutter speed matches the movement of the wheel. We can time the shutter to open whenever a designated car goes to the top. In this way I can make a stack of images that shows each car at the top of the Ferris wheel," she said.

Like the light shutter, the beam chopper interrupts a continuous electron beam. But unlike the shutter, which has an aperture that opens and closes, this beam aperture stays open all the time, eliminating the need for a complex mechanical part.

Instead, the beam chopper generates a radio frequency (RF) electromagnetic wave in the direction of the electron beam. The wave causes the traveling electrons to behave "like corks bobbing up and down on the surface of a water wave," Lau said.

Riding this wave, the electrons follow an undulating path as they approach the aperture. Most electrons are blocked except for the ones that are perfectly aligned with the aperture. The frequency of the RF wave is tunable, so that electrons hit the sample anywhere from 40 million to 12 billion times per second. As a result, researchers can capture important processes in the sample at time intervals from about a nanosecond to 10 picoseconds.
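As a rough sanity check on those figures, the spacing between successive electron pulses is simply the reciprocal of the repetition rate. Note this is order-of-magnitude arithmetic only; the achievable time resolution also depends on the pulse duration, which the article does not quote.

```python
# Interval between electron pulses at the quoted repetition rates.
# The actual time resolution also depends on pulse duration; this is
# just the reciprocal-rate arithmetic for the figures cited above.
def pulse_interval_s(rate_hz):
    """Seconds between successive pulses at a given repetition rate."""
    return 1.0 / rate_hz

slow = pulse_interval_s(40e6)   # 40 million pulses/s -> 25 ns apart
fast = pulse_interval_s(12e9)   # 12 billion pulses/s -> ~83 ps apart
```

These intervals bracket the roughly nanosecond-to-10-picosecond window quoted in the article.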

In this way, the NIST-retrofitted microscope can capture atom-scale details of the back-and-forth movements in tiny machines such as microelectromechanical systems (MEMS) and nanoelectromechanical systems (NEMS). It can potentially study the regularly repeating signals in antennas used for high-speed communications and probe the movement of electric currents in next-generation computer processors.

In one demo, the researchers wanted to prove that a retrofitted microscope functioned as it did before the retrofit. They imaged gold nanoparticles in both the traditional "continuous" mode and the pulsed beam mode. The images in the pulsed mode had comparable clarity and resolution to the still images.

"We designed it so it should be the same," Lau said.

The beam chopper can also do double duty, pumping RF energy into the material sample and then taking pictures of the results. The researchers demonstrated this ability by injecting microwaves (a form of radio wave) into a metallic, comb-shaped MEMS device. The microwaves create electric fields within the MEMS device and cause the incoming pulses of electrons to deflect. These electron deflections enable researchers to build movies of the microwaves propagating through the MEMS comb.

Lau and her colleagues hope their invention will soon enable new scientific discoveries. For example, it could investigate the behavior of quickly changing magnetic fields in molecular-scale memory devices that promise to store more information than before.

The researchers spent six years inventing and developing their beam chopper and have received several patents and an R&D 100 Award for their work. Collaborators included researchers at Brookhaven National Laboratory in Upton, New York, and Euclid Techlabs in Bolingbrook, Illinois.

One of the things that makes Lau most proud is that their design can breathe new life into any TEM, including the 25-year-old unit that performed the latest demonstration. The NIST design gives labs everywhere the potential to use their microscopes to capture important fast-moving processes in tomorrow's materials.

"Democratizing science was the whole motivation," Lau said.

Credit: 
National Institute of Standards and Technology (NIST)

Research identifies how new cancer treatments can activate tuberculosis infection

image: Lung damage caused by TB infection

Image: 
University of Southampton

Researchers at the University of Southampton have identified how new checkpoint inhibitor treatments for cancer can activate tuberculosis in some patients.

Immune therapies for cancer are transforming treatment by activating the body's immune cells to fight off cancer. Immune checkpoints are part of the human body's immune system that prevent damaging inflammation, and checkpoint inhibitors are drugs used in immunotherapy to permit the body's immune system to attack cancer cells.

Surprisingly, immune activation with checkpoint inhibitors can sometimes lead to rapidly progressive tuberculosis, an infection that used to kill one in three people in the UK. Researchers in the Faculty of Medicine at the University of Southampton described one of the earliest cases of immunotherapy-associated tuberculosis in December 2018 in the American Journal of Respiratory and Critical Care Medicine, and reports of similar cases have progressively accumulated. However, the true incidence is unknown as progression of cancer and the development of tuberculosis can be similar.

To understand mechanisms underlying this emerging phenomenon, Dr Liku Tezera, a senior research fellow at the University who led the project, used a 3-dimensional cell culture model to measure the effect of checkpoint inhibitors on the immune system's ability to control the bacterium that causes tuberculosis disease. The team's findings, reported in the latest edition of eLife, demonstrate that the addition of an immune checkpoint inhibitor, anti-PD1, led to an excessive immune response, which actually increased growth of the bacteria. The involvement of PD-1 in the natural immune response to TB infection in patients was demonstrated with long-term collaborators based at the Africa Health Research Institute (AHRI), in Durban, South Africa.

"This is an important emerging clinical phenomenon, and by understanding the process that leads to increased tuberculosis growth, we can identify existing treatments that could be used reduce severity of infection and permit continuation of the cancer treatment", says Dr Tereza. "This may improve outcomes when this surprising side-effect of emerging cancer immunotherapies occurs".

The group are currently aiming to establish a national register to capture the true incidence of this phenomenon, and are developing the laboratory system to predict which other new cancer therapies may have a similar effect. The Southampton-Durban team was funded by an MRC Global Challenges Research Fund foundation award to establish sustainable research collaborations to address globally important diseases.

Credit: 
University of Southampton

Technology in higher education: learning with it instead of from it

image: Isa Jahnke is an associate professor in the MU College of Education's School of Information Science and Learning Technologies.

Image: 
MU News Bureau

COLUMBIA, Mo. - Technology has shifted the way that professors teach students in higher education. For example, when professors upload recorded lectures online, students can reference a digital copy of the topics discussed in class. However, lecture-based teaching traditionally leaves students solely as consumers of information, with little room for student creativity or interaction.

Now, researchers at the University of Missouri have found that activity-based learning, rather than lecture-based, enhances student creativity and learning by allowing students to use technology to develop their own original ideas.

Isa Jahnke, associate professor in the MU College of Education's School of Information Science and Learning Technologies, collaborated with former doctoral student Julia Liebscher to study how higher education professors in Europe use mobile technology in their classes. She found that student creativity was most enhanced by professors who allowed their students to use technology in a team setting to come up with a novel product or idea.

For example, one group of students in a history class developed an app that virtually teaches users about the history surrounding the Berlin Wall. Rather than simply lecturing the material to the students, Jahnke found that allowing them to use technology in a collaborative way enhanced the students' creativity and understanding of the content.

"This research is useful for professors to rethink how they design their existing courses," Jahnke said. "We need to shift away from purely lecture-based learning where students are just consumers of information toward a more meaningful learning approach with technology where students are able to come up with creative and novel solutions in a team setting."

Jahnke added that there are resources at MU, such as the Teaching For Learning Center, to help professors rethink their course designs amidst the ever-changing educational landscape.

"If we have universities that are producing more creative-thinking students, then we have more people who can help come up with solutions for all of society's grand challenges," Jahnke said. "Creativity will lead to better innovators, entrepreneurs and business owners, but first we need to ask ourselves as educators if we are using technology to put our students in positions to be creative in the first place."

Credit: 
University of Missouri-Columbia

Soft robot fingers gently grasp deep-sea jellyfish

video: Video of ultra-gentle robot clutching jellyfish.

Image: 
Jason Jaacks

Marine biologists have adopted "soft robotic linguine fingers" as tools to conduct their undersea research. In a study appearing February 24 in the journal Current Biology, scientists found that jellyfish held by ultra-soft robotic fingers expressed significantly fewer stress-related genes than when braced by traditional submersible grippers. Shaped like the famous noodles, this new robotic technology allows for the collection of ecological data in a gentler, less invasive manner.

"Using genomics, we confirm that newly developed soft robots are a kinder way to handle some of the slipperiest organisms -- jellyfish," says first author Michael Tessler, a post-doctoral fellow at American Museum of Natural History. "With new technologies we can often make massive advances on techniques, like deep-sea animal handling."

Unlike a dog or cat, jellyfish can't hiss or whine about their discomfort. Instead, analysis of which genes they express can give insights into how they are reacting to their environment. Using gene sequencing, the researchers measured differences in jellyfish gene expression when they were swimming freely, held by the soft robot fingers, or gripped by the more standard rigid claw.

"Imagine you're sitting very happily at your desk and I take a measurement of what genes are active, and then I poke you with a claw hand. I'd then look at how differently your genes reacted compared to when you were sitting unbothered; the strength of that difference can act as an indicator of your level of stress," says senior author David Gruber, Professor of Biology, City University of New York, Baruch College & CUNY Graduate Center, PhD Program in Biology.

The gently held jellies displayed gene expression patterns most like the undisturbed individuals, demonstrating their relatively calm response to capture. What's more, jellies caught by the claw expressed "repair" genes, suggesting they were priming themselves for physical harm. "I think what was interesting is that when you start harassing them with standard grippers, they immediately go into self-repair/stress because -- being such a fragile organism -- being stressed out is quite common for them," says Gruber. The expression of these self-repair genes was at higher levels compared to the free-swimming or gently held jellyfish.
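The comparison logic is easy to sketch: score each handling condition by how far its expression profile departs from the free-swimming baseline. The gene names and counts below are invented for illustration; the study used genome-wide sequencing and formal differential-expression statistics, not three hand-picked genes.

```python
# Toy illustration of the comparison described above: larger absolute
# log2 fold changes from the free-swimming baseline indicate a
# stronger stress signature. All counts and gene names are invented.
import math

def log2_fold_changes(treated, control):
    """Per-gene log2 ratio of treated vs. control expression counts."""
    return {g: math.log2(treated[g] / control[g]) for g in control}

free   = {"repair_1": 10, "repair_2": 8,  "housekeeping": 100}
clawed = {"repair_1": 80, "repair_2": 40, "housekeeping": 105}
soft   = {"repair_1": 12, "repair_2": 9,  "housekeeping": 102}

# Sum of absolute log2 fold changes as a crude "stress score".
claw_score = sum(abs(v) for v in log2_fold_changes(clawed, free).values())
soft_score = sum(abs(v) for v in log2_fold_changes(soft, free).values())
```

In this toy version the claw-handled profile scores far higher than the softly held one, mirroring the pattern the study reports for the repair genes.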

But the impacts of this study extend far beyond just jellyfish. "We just used them as our sample organisms," says Gruber. "Now that we've shown this method can cause less stress to something as fragile as a jellyfish, it really proves our hypothesis that soft robots in the deep sea can be effective tools for all manner of delicate interactions."

These sentiments are echoed by co-author Nina Sinatra, who was a graduate student at Harvard University's Wyss Institute for Biologically Inspired Engineering when working on the study and designed the gentle robot fingers. "Selecting materials that are flexible, tough, and lightweight allows soft robots to operate robustly in the deep-sea environment while being delicate enough to safely interact with some of the most fragile marine organisms," says Sinatra. "By expanding our toolbox of materials, engineers can unlock exciting and clever solutions to challenges that would not be tractable for conventional robots."

Further, these soft robotic tools can be brought to the surface for applications of direct benefit to humans. "They could be used to harvest fruits from trees without bruising them, rehabilitate the muscles of stroke patients, and many other things that rigid-bodied robots are just too clunky and overpowered to accomplish today," says co-author Rob Wood, a Wyss Core Faculty member and professor of Engineering and Applied Sciences at Harvard's John A. Paulson School of Engineering and Applied Sciences (SEAS).

Historically, ocean exploration has been a rough exercise; collecting data has required ripping material from the sea floor or killing specimens to then study at the surface. With the inclusion of soft robots, it's becoming possible to take swabs of DNA and even conduct medical checkups of deep-sea organisms in real time, with little physical impact.

"By integrating soft robots into how we conduct research of the deep sea, we are reshaping our vision of the future for marine biologists," says Gruber. "It's our philosophy that we should be as gentle and careful as possible as we study and approach these new frontiers."

Credit: 
Cell Press

Why monkeys choose to drink alone

Why do some people almost always drop $10 in the Salvation Army bucket and others routinely walk by? One answer may be found in an intricate and rhythmic neuronal dance between two specific brain regions, finds a new Yale University study published Feb. 24 in the journal Nature Neuroscience.

The biological roots of generosity and selfishness have long fascinated neuroscientists. As social animals, primates depend upon cooperation; yet in times of scarcity or in the quest for status, selfishness often wins out.

Global imaging studies in humans have shown many brain regions seem to be involved in decisions about sharing. Yale's Steve Chang and colleagues decided to focus on neuronal activity between two specific brain regions of monkeys faced with a decision about whether or not to share fruit juice with another monkey.

In one scenario, the monkey could decide to give a drink to a companion or throw it out. In an alternative scenario, the monkey could drink fruit juice alone or simultaneously share a drink with another monkey.

It turns out monkeys like to drink alone. But, if the alternative is to see the drink dumped in a bin, they prefer to give the other monkey a juice break.

In both scenarios, researchers found distinct patterns of interaction in neuronal activity between the amygdala, a relatively primitive area of the brain, and the medial frontal cortex, an area where more deliberate thoughts originate. When monkeys were generous or pro-social, the interactions between these brain regions were highly synchronized, occurring at the same rate. When they were being anti-social, this synchronicity was markedly suppressed.

The researchers found that they could use the differences in synchronicity of the interactions to predict what decision the monkey had made: They merely had to look at the neuronal data.

"We found a unique signature of neural synchrony that reflects whether a pro-social or an anti-social decision was made," said Chang, senior author of the Nature Neuroscience paper and an assistant professor of psychology and neuroscience.

He and his team also found other key differentiating characteristics in the brain during decision making. For instance, when animals were being pro-social, neuronal interactions were transmitted at one frequency, and when anti-social, at another frequency. The frequency was determined by the brain region in which neurons fired.

"We all know there are individual differences in levels of generosity," Chang said. "Maybe Scrooge did not have high levels of synchrony after all."

Credit: 
Yale University

Study puts spin into quantum technologies

image: By combining laser and microwave excitation the researchers were able to change the spin states, for example "up" to "down", of atom-like impurities hosted in the material and show the dependence of their energy on an external magnetic field.

Image: 
Dr M. Kianinia

An international team of scientists investigating how to control the spin of atom-like impurities in 2D materials has observed the dependence of the impurities' energy on an external magnetic field for the first time.

The results of the study, published in Nature Materials, will be of interest to both academic and industry research groups working on the development of future quantum applications, the researchers say.

Researchers led by Prof Vladimir Dyakonov at the University of Würzburg in collaboration with scientists from the University of Technology Sydney (UTS), the Kazan Federal University and the Universidade Federal de Minas Gerais, demonstrated the ability to control the spin of atom-like impurities in 2D material hexagonal boron-nitride. By combining laser and microwave excitation the researchers were able to change the spin states, for example "up" to "down", of atom-like impurities hosted in the material and show the dependence of their energy on an external magnetic field.
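The dependence of the spin-state energies on an external magnetic field is, in a simple spin-1 picture, the Zeeman effect. The sketch below computes how the spin transition frequencies shift with field strength; the zero-field splitting value and g-factor are assumed illustrative numbers, not figures from the study.

```python
# Zeeman shift of a spin-1 defect's levels in a magnetic field:
#   E(m_s) = D * m_s^2 + g * mu_B * B * m_s
MU_B = 9.274e-24      # Bohr magneton, J/T
H = 6.626e-34         # Planck constant, J*s
G = 2.0               # electron g-factor (approximate)
D_HZ = 3.5e9          # zero-field splitting, Hz (assumed, illustrative)

def transition_freqs_hz(b_tesla):
    """Microwave transition frequencies m_s = 0 -> m_s = -1 and 0 -> +1."""
    zeeman_hz = G * MU_B * b_tesla / H    # linear shift, ~28 GHz per tesla
    return D_HZ - zeeman_hz, D_HZ + zeeman_hz

lo, hi = transition_freqs_hz(0.01)        # 10 mT external field
# the splitting (hi - lo) grows linearly with B, which is the signature
# of the energy-versus-field dependence described in the study
```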

This is the first time that the phenomenon has been observed in a material made of a single sheet of atoms, like graphene. The researchers say that these newly demonstrated quantum spin-optical properties, combined with the ease of integration with other 2D materials and devices, establish hexagonal boron-nitride as an intriguing candidate for advanced quantum technology hardware.

"2D atomic crystals are currently some of the most studied materials in condensed matter physics and materials science," says UTS physicist Dr Mehran Kianinia, a co-author of the study.

"Their physics is intriguing from a fundamental point of view, but beyond that, we can think of stacking different 2D crystals to create completely new materials, heterostructures and devices with specific designer properties," he says.

UTS researcher Dr Carlo Bradac, a senior co-author of the study, says that in addition to adding another unique property to an already impressive range for a 2D material, the discovery has enormous potential for the field of quantum sensing.

"What really excites me is the potential [in the context of quantum sensing]. These spins are sensitive to their immediate surroundings. Unlike 3D solids, where the atom-like system can be as far as a few nanometres from the object to sense, here the controllable spin is right at the surface. Our hope is to use these individual spins as tiny sensors and map, with unprecedented spatial resolution, variations in temperature, as well as magnetic and electric fields onto variations in spin," Dr Bradac says.

"Imagine, for instance, being able to measure minuscule magnetic fields with sensors as small as single atoms. The possibilities are far reaching and range from nuclear magnetic resonance spectroscopy for nanoscale medical diagnostics and materials chemistry to GPS-free navigation using the Earth's magnetic field," he says.

However, quantum-based nanoscale magnetometry is "just one area where controlling single spins in solids is useful," says senior study author UTS Professor Igor Aharonovich.

"Beyond quantum sensing, many quantum computing and quantum communication applications rely on our ability to control the spin-state--zero, one and anything in between--of single atom-like systems in solid host materials. This allows us to encode, store and transfer information in the form of quantum bits or qubits," he says.

This research is one of many efforts showing how quickly scientists are mastering the craft of manipulating objects in the quantum regime. Achievements like Lockheed Martin's Black Ice project and Google's quantum supremacy demonstration are proof that the field is striding away from mere proof-of-concept experiments towards real-world, quantum-enabled solutions to practical problems.

Credit: 
University of Technology Sydney

Watching magnetic nano 'tornadoes' in 3D

video: Reconstruction of 3D magnetic structure

Image: 
Claire Donnelly

Scientists have developed a three-dimensional imaging technique to observe complex behaviours in magnets, including fast-moving waves and 'tornadoes' thousands of times thinner than a human hair.

The team, from the Universities of Cambridge and Glasgow in the UK and ETH Zurich and the Paul Scherrer Institute in Switzerland, used the technique to observe, for the first time, how magnetisation behaves in three dimensions. The technique, called time-resolved magnetic laminography, could be used to understand and control the behaviour of new types of magnets for next-generation data storage and processing. The results are reported in the journal Nature Nanotechnology.

Magnets are widely used in applications from data storage to energy production and sensors. In order to understand why magnets behave the way they do, it is important to understand the structure of their magnetisation, and how that structure reacts to changing currents or magnetic fields.

"Until now, it hasn't been possible to actually measure how magnets respond to changing magnetic fields in three dimensions," said Dr Claire Donnelly from Cambridge's Cavendish Laboratory, and the study's first author. "We've only really been able to observe these behaviours in thin films, which are essentially two dimensional, and which therefore don't give us a complete picture."

Moving from two dimensions to three is highly complex, however. Modelling and visualising magnetic behaviour is relatively straightforward in two dimensions, but in three dimensions, the magnetisation can point in any direction and form patterns, which is what makes magnets so powerful.

"Not only is it important to know what patterns and structures this magnetisation forms, but it's essential to understand how it reacts to external stimuli," said Donnelly. "These responses are interesting from a fundamental point of view, but they are crucial when it comes to magnetic devices used in technology and applications."

One of the main challenges in investigating these responses is tied to the very reason magnetic materials are so relevant for so many applications: changes in the magnetisation typically are extremely small, and happen extremely fast. Magnetic configurations - so-called domain structures - exhibit features on the order of tens to hundreds of nanometres, thousands of times smaller than the width of a human hair, and typically react to magnetic fields and currents in billionths of a second.

Now, Donnelly and her collaborators from the Paul Scherrer Institute, the University of Glasgow and ETH Zurich have developed a technique to look inside a magnet, visualise its nanostructure, and how it responds to a changing magnetic field in three dimensions, and at the size and timescales required.

The technique they developed, time-resolved magnetic laminography, uses powerful X-rays called synchrotron X-rays to probe the magnetic state from different directions at the nanoscale, and how it changes in response to a quickly alternating magnetic field. The resulting seven-dimensional dataset (three dimensions for the position, three for the direction and one for the time) is then obtained using a specially developed reconstruction algorithm, providing a map of the magnetisation dynamics with 70 picosecond temporal resolution, and 50 nanometre spatial resolution.
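A minimal sketch of how such a "seven-dimensional" dataset might be laid out in memory, assuming three spatial axes, a three-component magnetisation vector per voxel, and a time axis. All array sizes and the toy dynamics below are made up for illustration:

```python
import numpy as np

# Time-resolved 3D magnetisation data: m[t, z, y, x] = (mx, my, mz).
nt, nz, ny, nx = 8, 16, 16, 16          # time steps and voxel grid (assumed)
m = np.zeros((nt, nz, ny, nx, 3))

# Toy dynamics: a uniform magnetisation along +z that tilts slightly
# towards +x over time, as it might under an alternating field.
for t in range(nt):
    theta = 0.1 * t                      # tilt angle in radians
    m[t, ..., 0] = np.sin(theta)         # mx component
    m[t, ..., 2] = np.cos(theta)         # mz component

# Physical sanity check: magnetisation magnitude stays fixed,
# only its direction changes from frame to frame.
norms = np.linalg.norm(m, axis=-1)
assert np.allclose(norms, 1.0)
```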

What the researchers saw with their technique was like a nanoscale storm: patterns of waves and tornadoes moving side to side as the magnetic field changed. The movement of these tornadoes, or vortices, had previously only been observed in two dimensions.

The researchers tested their technique using conventional magnets, but they say it could also be useful in the development of new types of magnets which exhibit new types of magnetism. These new magnets, such as 3D-printed nanomagnets, could be useful for new types of high-density, high-efficiency data storage and processing.

"We can now investigate the dynamics of new types of systems that could open up new applications we haven't even thought of," said Donnelly. "This new tool will help us to understand, and control, their behaviour."

Credit: 
University of Cambridge

Going super small to get super strong metals

image: A simulation of 3-nm-grain-sized nickel under strain. Colored lines indicate partial or full grain dislocation.

Image: 
Zhou et al

You can't see them, but most of the metals around you--coins, silverware, even the steel beams holding up buildings and overpasses--are made up of tiny metal grains. Under a powerful enough microscope, you can see interlocking crystals that look like a granite countertop.

It's long been known by materials scientists that metals get stronger as the size of the grains making up the metal gets smaller - up to a point. If the grains are smaller than 10 nanometers in diameter, the materials are weaker because, it was thought, they slide past each other like sand sliding down a dune. The strength of metals, it seemed, had a limit.

But experiments led by former University of Utah postdoctoral scholar Xiaoling Zhou, now at Princeton University, associate professor of geology Lowell Miyagi, and Bin Chen at the Center for High Pressure Science and Technology Advanced Research in Shanghai, China, show that that's not always the case - in samples of nickel with grain diameters as small as 3 nanometers, and under high pressures, the strength of the samples continued to increase with smaller grain sizes.

The result, Zhou and Miyagi say, is a new understanding of how individual atoms of metal grains interact with each other, as well as a way to use those physics to achieve super-strong metals. Their study, carried out with colleagues at the University of California, Berkeley and at universities in China, is published in Nature.

"Our results suggest a possible strategy for making ultrastrong metals," Zhou says. "In the past, researchers believed the strongest grain size was around 10-15 nanometers. But now we found that we could make stronger metals at below 10 nanometers."

Pushing past Hall-Petch

For most metallic objects, Miyagi says, the sizes of the metal grains are on the order of a few to a few hundred micrometers - about the diameter of a human hair. "High end cutlery often will have a finer, and more homogeneous, grain structure which can allow you to get a better edge," he says.

The previously understood relationship between metal strength and grain size is called the Hall-Petch relationship. Metal strength increases as grain size decreases, according to Hall-Petch, down to a limit of 10-15 nanometers. That's a diameter of only about four to six strands of DNA. Grain sizes below that limit just weren't as strong. So to maximize strength, metallurgists would aim for the smallest effective grain sizes.
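The Hall-Petch relationship is conventionally written as sigma_y = sigma_0 + k / sqrt(d), where d is the grain diameter. A minimal numerical sketch, using illustrative constants rather than fitted values for nickel:

```python
import numpy as np

# Hall-Petch relation: yield strength rises as grain size d shrinks,
#   sigma_y = sigma_0 + k / sqrt(d)
SIGMA_0 = 0.2   # GPa, lattice friction stress (assumed, illustrative)
K = 0.15        # GPa * um^0.5, Hall-Petch coefficient (assumed)

def yield_strength_gpa(d_um):
    """Predicted yield strength for grain diameter d in micrometres."""
    return SIGMA_0 + K / np.sqrt(d_um)

grain_sizes = np.array([100, 10, 1, 0.1, 0.015])   # um; 0.015 um = 15 nm
strengths = yield_strength_gpa(grain_sizes)

# Strength increases monotonically as grains shrink; the classic picture
# held that this trend breaks down below ~10-15 nm, which is exactly the
# regime the study probes under pressure.
assert np.all(np.diff(strengths) > 0)
```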

"Grain size refinement is a good approach to improve strength," Zhou says. "So it was quite frustrating, in the past, to find this grain size refinement approach no longer works below a critical grain size."

The explanation for the weakening below 10 nanometers had to do with the way grain surfaces interacted. The surfaces of grains have a different atomic structure than the interiors, Miyagi says. As long as the grains are held together by the power of friction, the metal would retain strength. But at small grain sizes, it was thought, the grains would simply slide past each other under strain, leading to a weak metal.

Technical limitations previously prevented direct experiments on nanograins, though, limiting understanding of how nanoscale grains behaved and whether there may yet be untapped strength below the Hall-Petch limit. "So we designed our study to measure the strength of nanometals," Zhou says.

Under pressure

The researchers tested samples of nickel, a material that's available in a wide range of nanograin sizes, down to three nanometers. Their experiments involved placing samples of various grain sizes under intense pressures in a diamond anvil cell and using x-ray diffraction to watch what was happening at the nanoscale in each sample.

"If you've ever played around with a spring, you've probably pulled on it hard enough to ruin it so that it doesn't do what it's supposed to do," Miyagi says. "That's basically what we're measuring here; how hard we can push on this nickel until we would deform it past the point of it being able to recover."

Strength continued to increase all the way down to the smallest grain size available. The 3 nm sample withstood a pressure of 4.2 gigapascals (roughly equivalent to ten 10,000-pound elephants balanced on a single high heel) before deforming irreversibly. That's ten times stronger than nickel with a commercial-grade grain size.
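The elephant analogy can be sanity-checked with back-of-envelope arithmetic; the heel area of roughly one square centimetre is an assumption:

```python
# Pressure = force / area: ten 10,000-lb elephants on a stiletto heel.
LB_TO_KG = 0.4536
G = 9.81                       # gravitational acceleration, m/s^2
heel_area_m2 = 1e-4            # assumed ~1 cm^2 heel contact area

mass_kg = 10 * 10_000 * LB_TO_KG        # total elephant mass
force_n = mass_kg * G                   # weight in newtons
pressure_gpa = force_n / heel_area_m2 / 1e9
# comes out near 4.4 GPa, close to the 4.2 GPa the sample withstood
assert 4.0 < pressure_gpa < 5.0
```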

It's not that the Hall-Petch relationship broke down, Miyagi says, but that the way the grains interacted was different under the experimental conditions. The high pressure likely overcame the grain sliding effects.

"If you push two grains together really hard," he says, "it's hard for them to slide past each other because the friction between grains becomes large, and you can suppress these grain boundary sliding mechanisms that turn out to be responsible for this weakening."

When grain boundary sliding was suppressed at grain sizes below 20 nm, the researchers observed a new atomic-scale deformation mechanism that resulted in extreme strengthening in the finest-grained samples.

Ultrastrong possibilities

Zhou says that one of the advances of this study is in their method to measure the strength of materials at the nanoscale in a way that hasn't been done before.

Miyagi says another advance is a new way to think about strengthening metals--by engineering their grain surfaces to suppress grain sliding.

"We don't have many applications, industrially, of things where the pressures are as high as in these experiments, but by showing pressure is one way of suppressing grain boundary deformation we can think about other strategies to suppress it, maybe using complicated microstructures where you have grain shapes that inhibit sliding of grains past each other."

Credit: 
University of Utah

Magnetic field at Martian surface ten times stronger than expected

image: Sources of magnetization on Mars detected by the magnetic sensor aboard NASA's InSight spacecraft.

Image: 
NASA/JPL-Caltech and UBC Media Relations

New data gleaned from the magnetic sensor aboard NASA's InSight spacecraft is offering an unprecedented close-up of magnetic fields on Mars.

In a study published today in Nature Geoscience, scientists reveal that the magnetic field at the InSight landing site is ten times stronger than anticipated, and fluctuates over time-scales of seconds to days.

"One of the big unknowns from previous satellite missions was what the magnetization looked like over small areas," said lead author Catherine Johnson, a professor at the University of British Columbia and senior scientist at the Planetary Science Institute. "By placing the first magnetic sensor at the surface, we have gained valuable new clues about the interior structure and upper atmosphere of Mars that will help us understand how it - and other planets like it - formed."

Zooming in on magnetic fields

Before the InSight mission, the best estimates of Martian magnetic fields came from satellites orbiting high above the planet, and were averaged over large distances of more than 150 kilometres.

"The ground-level data give us a much more sensitive picture of magnetization over smaller areas, and where it's coming from," said Johnson. "In addition to showing that the magnetic field at the landing site was ten times stronger than the satellites anticipated, the data implied it was coming from nearby sources."

Scientists have long known that billions of years ago Mars had a global magnetic field that magnetized rocks on the planet before mysteriously switching off. Because most rocks at the surface are too young to have been magnetized by this ancient field, the team thinks the signal must be coming from deeper underground.

"We think it's coming from much older rocks that are buried anywhere from a couple hundred feet to ten kilometres below ground," said Johnson. "We wouldn't have been able to deduce this without the magnetic data and the geology and seismic information InSight has provided."

The team hopes that by combining these InSight results with satellite magnetic data and future studies of Martian rocks, they can identify exactly which rocks carry the magnetization and how old they are.

Day-night fluctuations and things that pulse in the dark

The magnetic sensor has also provided new clues about phenomena that occur high in the upper atmosphere and the space environment around Mars.

Just like Earth, Mars is exposed to solar wind, which is a stream of charged particles from the Sun that carries an interplanetary magnetic field (IMF) with it, and can cause disturbances like solar storms. But because Mars lacks a global magnetic field, it is less protected from solar weather.

"Because all of our previous observations of Mars have been from the top of its atmosphere or even higher altitudes, we didn't know whether disturbances in solar wind would propagate to the surface," said Johnson. "That's an important thing to understand for future astronaut missions to Mars."

The sensor captured fluctuations in the magnetic field between day and night and short, mysterious pulsations around midnight, confirming that events in and above the upper atmosphere can be detected at the surface.

The team believe that the day-night fluctuations arise from a combination of how the solar wind and IMF drape around the planet, and solar radiation charging the upper atmosphere and producing electrical currents, which in turn generate magnetic fields.

"What we're getting is an indirect picture of the atmospheric properties of Mars - how charged it becomes and what currents are in the upper atmosphere," said co-author Anna Mittelholz, a postdoctoral fellow at the University of British Columbia.

And the mysterious pulsations that mostly appear at midnight and last only a few minutes?

"We think these pulses are also related to the solar wind interaction with Mars, but we don't yet know exactly what causes them," said Johnson. "Whenever you get to make measurements for the first time, you find surprises and this is one of our 'magnetic' surprises."

In the future, the InSight team wants to observe the surface magnetic field at the same time as the MAVEN orbiter passes over InSight, allowing them to compare data.

"The main function of the magnetic sensor was to weed out magnetic 'noise,' both from the environment and the lander itself, for our seismic experiments, so this is all bonus information that directly supports the overarching goals of the mission," said InSight principal investigator Bruce Banerdt of NASA's Jet Propulsion Laboratory in Pasadena, California. "The time-varying fields, for example, will be very useful for future studies of the deep conductivity structure of Mars, which is related to its internal temperature."

Credit: 
University of British Columbia

ETRI develops optical communications technology to double data transfer speed

image: The 200Gbps QSFP-DD transceiver developed by ETRI researchers.

Image: 
Electronics and Telecommunications Research Institute (ETRI)

Researchers in South Korea have developed a new optical communications technology that transfers data at lightning speed, sending and receiving twice as much data as conventional methods. It is expected to help relieve data traffic congestion in 5G networks.

The Electronics and Telecommunications Research Institute (ETRI) in South Korea has succeeded in developing a compact 200Gbps optical transceiver in the QSFP-DD (Quad Small Form-factor Pluggable Double Density) form factor. At that speed, it would take about four seconds to transfer an ultra-high-definition 4K film of about 100 gigabytes.
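The quoted four-second figure follows directly from the arithmetic, since 100 gigabytes is 800 gigabits:

```python
# Transfer time for a ~100 GB file over a 200 Gbps link.
FILE_GB = 100                  # file size, gigabytes
LINK_GBPS = 200                # link rate, gigabits per second

file_gigabits = FILE_GB * 8    # 1 byte = 8 bits
seconds = file_gigabits / LINK_GBPS
assert seconds == 4.0          # matches the "about four seconds" figure
```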

The new technology significantly improves data transfer speed by adopting a four-level modulation method, PAM-4 modulation with direct detection. While the previous two-level modulation sends one bit at a time, the new technology sends two bits per symbol. Moreover, it allows efficient data transfer between telecom nodes and other local networks as far as 80 kilometers away.
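The two-bits-per-symbol idea behind PAM-4 can be sketched as a simple bit-to-level mapping. The Gray-coded map below (adjacent levels differ by a single bit) is the conventional choice, shown as an illustration rather than the ETRI implementation:

```python
# PAM-4: each symbol carries two bits mapped to one of four amplitude
# levels, halving the symbol rate compared with two-level (NRZ) signalling.
GRAY_MAP = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}
INV_MAP = {v: k for k, v in GRAY_MAP.items()}

def pam4_encode(bits):
    """Pair up bits and map each pair to a PAM-4 level (0..3)."""
    assert len(bits) % 2 == 0
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(levels):
    """Map each level back to its bit pair and flatten."""
    return [b for lvl in levels for b in INV_MAP[lvl]]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
levels = pam4_encode(bits)              # 4 symbols carry 8 bits
assert len(levels) == len(bits) // 2    # half the symbol rate
assert pam4_decode(levels) == bits      # round-trips losslessly
```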

The new 200Gbps QSFP-DD transceiver provides a cost-effective alternative to coherent modulation for metro-access networks (such as inter-data-center networks or mobile back-haul networks).

The merits of the new technology are its low sensitivity to changes in wavelength and temperature and its simple manufacturing process. As a result, power consumption is 1.5 times lower and density is 4 times higher, reducing the investment cost of communications equipment.

ETRI designed and developed a unique PAM-4 DSP algorithm, and the new technology combined with this algorithm was proven in a real-time demonstration, a world-best record. The research outcome was published in Optics Express, a renowned journal in the optical communications research sector.

Credit: 
National Research Council of Science & Technology

CNIO and Cabimer researchers show that DNA topological problems may cause lymphoma

image: Model to explain aberrant TOP II activity as a driver of ATM-deficient thymic malignancies.

Image: 
Álvarez-Quilón et al. 2020, <em>Nature Communications</em>

Researchers from the Spanish National Cancer Research Centre (CNIO), Madrid, and the Andalusian Molecular Biology and Regenerative Medicine Centre (Cabimer), Seville, today published a paper in Nature Communications showing that DNA topological problems may cause endogenous DNA breaks that have a causal relationship with cancer.

The starting point of the study, entitled 'Endogenous topoisomerase II-mediated DNA breaks drive thymic cancer predisposition linked to ATM deficiency', is that, during lymphocyte maturation, a number of genomic regions must find each other and rearrange to generate the DNA sequence variability required for the immune response. 'This study demonstrates that movements and changes in the 3D genome structure create knots and tangles in DNA that, during their resolution, are a source of chromosomal breakage,' says Felipe Cortés Ledesma, lead author and Head of the Topology and DNA Breaks Group at CNIO. 'In conditions in which the response to these breaks is impaired, chromosomal translocations that may be associated with lymphoma development can occur. Actually, this is true upon mutations in the ATM gene, which are very common in haematological malignancies and the cause of the lymphoma-prone syndrome ataxia telangiectasia,' he adds.

Ataxia telangiectasia (A-T), also referred to as Louis-Barr syndrome, is a genetic disease caused by ATM gene mutation and inherited in an autosomal recessive pattern. The gene is located on the long arm of chromosome 11. The Spanish Ataxia Telangiectasia Association (AEFAT) funded part of this study, published today and co-authored by Felipe Cortés Ledesma, Alejandro Álvarez-Quilón and José Terrón Bautista, from Cabimer.

As described in the abstract of the paper, the ATM kinase is a master regulator of the DNA damage response to double-strand breaks and a well-established tumour suppressor whose loss is the cause of the neurodegenerative and cancer-prone syndrome A-T. A-T patients are particularly predisposed to develop lymphoid cancers. 'Our results show a strong causal relationship between topological problems and cancer development (...), confirming these lesions as a major driver of ATM-deficient lymphoid malignancies, and potentially other conditions and cancer types,' the authors say in the paper.

The research paper published in Nature Communications opens new ways to understand the causes of cancer. One of them, according to Cortés Ledesma, is the possibility of 'getting a deeper understanding of the molecular mechanisms involved in the appearance and repair of this type of DNA break,' as well as testing whether there is a relationship between these findings and other cancer types. Along this line, the research team points to the possibility of carrying out clinical studies in A-T patients.

Credit: 
Centro Nacional de Investigaciones Oncológicas (CNIO)

Scientists develop a composite membrane for long-life zinc-based flow batteries

image: Schematic illustration of the synergistic effect (thermal conductivity and mechanical strength) of BNNSs flake layer on zinc deposition.

Image: 
HU Jing

Researchers led by Prof. LI Xianfeng from the Dalian Institute of Chemical Physics (DICP) of the Chinese Academy of Sciences recently developed a composite membrane for long-life zinc-based flow batteries. Their study was published in Angewandte Chemie International Edition.

The zinc-based flow battery (ZFB) has captured much attention for stationary energy storage applications due to its low cost, intrinsic safety and environmental friendliness. However, its development is limited by poor cycle life and poor charge-discharge performance, mainly due to zinc dendrite/accumulation issues.

Ion-conducting membranes play an important role in regulating the morphology of zinc deposition and inhibiting the growth of dendrites, thereby improving the cycling stability of the battery.

In the early stage of their research, LI's group adjusted the direction and morphology of zinc deposition by modulating the negative charge properties of the porous ion-conducting membrane, thus improving the area capacity and cycling stability of zinc-based flow batteries (Nat. Commun., 2018). 

Based on their previous work, the researchers then developed a composite membrane by coating boron nitride nanosheets (BNNSs) exhibiting high thermal conductivity and mechanical strength onto a porous membrane substrate.

The BNNSs flake layer facing the negative electrode serves as the heat-porter, thus improving the surface temperature distribution of the electrode and further adjusting the zinc morphology. Moreover, its high level of mechanical strength prevents the metallic zinc from damaging the membrane.

The synergistic effect of these two factors can improve the cycle life of ZFBs. Alkaline zinc-iron flow batteries assembled with this membrane can run stably for 500 charge-discharge cycles (~800 h) at a current density of 80 mA/cm² without significant capacity decay.

Most importantly, an energy efficiency above 80% can be obtained even at 200 mA/cm². These results may serve as a reference for the regulation of zinc anodes in zinc-based batteries.
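For context, a flow battery's round-trip energy efficiency is simply discharge energy divided by charge energy. A toy calculation with assumed example voltages and capacity (not data from the study):

```python
# Round-trip energy efficiency of a flow battery: EE = E_out / E_in.
charge_voltage_v = 1.9        # average charging voltage (assumed)
discharge_voltage_v = 1.6     # average discharging voltage (assumed)
capacity_ah = 2.0             # charge passed in each direction (assumed)

energy_in = charge_voltage_v * capacity_ah       # Wh charged
energy_out = discharge_voltage_v * capacity_ah   # Wh recovered
energy_efficiency = energy_out / energy_in
# a smaller charge/discharge voltage gap means higher efficiency
assert energy_efficiency > 0.8   # clears the 80% threshold cited
```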

Credit: 
Chinese Academy of Sciences Headquarters