Tech

Harnessing tomato jumping genes could help speed-breed drought-resistant crops

Researchers from the University of Cambridge's Sainsbury Laboratory (SLCU) and Department of Plant Sciences have discovered that drought stress triggers the activity of a family of jumping genes (Rider retrotransposons) previously known to contribute to fruit shape and colour in tomatoes. Their characterisation of Rider, published today in the journal PLOS Genetics, revealed that the Rider family is also present and potentially active in other crops, highlighting its potential as a source of new trait variations that could help plants better cope with more extreme conditions driven by our changing climate.

"Transposons carry huge potential for crop improvement. They are powerful drivers of trait diversity, and while we have been harnessing these traits to improve our crops for generations, we are now starting to understand the molecular mechanisms involved," said Dr Matthias Benoit, the paper's first author, formerly at SLCU.

Transposons, more commonly called jumping genes, are mobile snippets of DNA code that can copy themselves into new positions within the genome - the genetic code of an organism. They can change, disrupt or amplify genes, or have no effect at all. Transposons were discovered in corn kernels by Nobel prize-winning scientist Barbara McClintock in the 1940s, but only now are scientists realising that they are not junk DNA after all and actually play an important role in the evolutionary process, and in altering gene expression and the physical characteristics of plants.

Using the jumping genes already present in plants to generate new characteristics would be a significant leap forward from conventional breeding techniques, making it possible to rapidly generate new traits in crops that have long been bred to produce uniform shapes, colours and sizes to make harvesting more efficient and maximise yield. Transposons would enable production of an enormous diversity of new traits, which could then be refined and optimised by gene targeting technologies.

"In a large population size, such as a tomato field, in which transposons are activated in each individual we would expect to see an enormous diversity of new traits. By controlling this 'random mutation' process within the plant we can accelerate this process to generate new phenotypes that we could not even imagine," said Dr Hajk Drost at SLCU, a co-author of the paper.

Today's gene targeting technologies are very powerful, but they often require some functional understanding of the underlying gene to yield useful results and usually target only one or a few genes. Transposon activity is a native tool already present within the plant, which can be harnessed to generate new phenotypes or resistances and complement gene targeting efforts. Using transposons offers a transgene-free method of breeding that complies with current EU legislation on Genetically Modified Organisms.

The work also revealed that Rider is present in several plant species, including economically important crops such as rapeseed, beetroot and quinoa. This wide abundance encourages further investigations into how it can be activated in a controlled way, or reactivated or re-introduced into plants whose Rider elements are currently silenced, so that their potential can be regained. Such an approach has the potential to significantly reduce breeding time compared to traditional methods.

"Identifying that Rider activity is triggered by drought suggests that it can create new gene regulatory networks that would help a plant respond to drought," said Benoit. "This means we could harness Rider to breed crops that are better adapted to drought stress by providing drought responsiveness to genes already present in crops. This is particularly significant in times of global warming, where there is an urgent need to breed more resilient crops."

Credit: 
University of Cambridge

New imaging technology could 'revolutionize' cancer surgery

image: University of Waterloo researcher Parsin Haji Reza works in his lab.

Image: 
University of Waterloo

Cancer treatment could be dramatically improved by an invention at the University of Waterloo that precisely locates the edges of tumors during surgery to remove them.

The new imaging technology uses the way light from lasers interacts with cancerous and healthy tissues to distinguish between them in real time and with no physical contact, an advancement with the potential to eliminate the need for secondary surgeries to remove missed malignant tissue.

"This is the future, a huge step towards our ultimate goal of revolutionizing surgical oncology," said Parsin Haji Reza, a systems design engineering professor who leads the project. "Intraoperatively, during surgery, the surgeon will be able to see exactly what to cut and how much to cut."

A paper on the work, "All-optical Reflection-mode Microscopic Histology of Unstained Human Tissues," was published Sept. 16 in the journal Scientific Reports.

Doctors now rely primarily on pre-operative MRI and CT scans, experience and visual inspection to determine the margins of tumors during operations.

Tissue samples are then sent to labs for testing, with waits of up to two weeks for results to show if the tumor was completely removed or not.

In about 10 per cent of cases - the rates for different kinds of cancer involving tumors vary widely - some cancerous tissue has been missed and a second operation is required to remove it.

The photoacoustic technology developed at Waterloo works by sending laser light pulses into targeted tissue, which absorbs them, heats up, expands and produces soundwaves. A second laser reads those soundwaves, which are then processed to determine if the tissue is cancerous or non-cancerous.
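
The contrast mechanism can be summarized with a standard relation from photoacoustics (a textbook formula quoted for orientation, not taken from the Waterloo paper): the initial pressure generated by an absorbed light pulse is

p_0 = \Gamma \mu_a F,

where \Gamma is the Grüneisen parameter (how efficiently absorbed heat converts to pressure), \mu_a is the optical absorption coefficient of the tissue and F is the local laser fluence. Because cancerous and healthy tissue absorb light differently at suitable wavelengths, they produce soundwaves of measurably different strength.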

The system has already been used to make accurate images of even relatively thick, untreated human tissue samples for the first time ever, a key breakthrough in the development process.

Next steps include imaging fresh tissue samples taken during surgeries, integrating the technology into a surgical microscope and, finally, using the system directly on patients during operations.

"This will have a tremendous impact on the economics of health-care, be amazing for patients and give clinicians a great new tool," said Haji Reza, director of the PhotoMedicine Labs at Waterloo. "It will save a great deal of time and money and anxiety."

Researchers hope to develop a fully functioning system within about two years, a process that includes clearing ethical hurdles and securing regulatory approvals.

Credit: 
University of Waterloo

Just add water

Semiconductors -- and our mastery of them -- have enabled us to develop the technology that underpins our modern society. These devices are responsible for a wide range of electronics, including circuit boards, computer chips and sensors.

The electrical conductivity of semiconductors falls between that of insulators, like rubber, and conductors, like copper. By doping the materials with different impurities, scientists can control a semiconductor's electrical properties. This is what makes them so useful in electronics.

Scientists and engineers have been exploring new types of semiconductors with attractive properties that could result in revolutionary innovations. One class of these new materials is organic semiconductors (OSCs), which are based on carbon rather than silicon. OSCs are lighter and more flexible than their conventional counterparts, properties that lend themselves to all sorts of potential applications, such as flexible electronics.

In 2014, UC Santa Barbara's Professor Thuc-Quyen Nguyen and her lab first reported on doping OSCs with Lewis acids to increase the conductivity of some semiconducting polymers; however, no one knew why this increase happened - until now.

Through a collaborative effort, Nguyen and her colleagues have parsed this mechanism, and their unexpected discovery promises to grant us greater control over these materials. The work was supported by the Department of Energy and the findings appear in the journal Nature Materials.

Researchers at UC Santa Barbara collaborated with an international team from the University of Kentucky, Humboldt University of Berlin and Donghua University in Shanghai. "The doping mechanism using Lewis acids is unique and complex; therefore, it requires a team effort," Nguyen explained.

"That's what this paper is all about," said lead author Brett Yurash, a doctoral candidate in Nguyen's lab, "figuring out why adding this chemical to the organic semiconductor increases its conductivity."

"People thought it was just the Lewis acid acting on the organic semiconductor," he explained. "But it turns out you don't get that effect unless water is present."

Apparently, water mediates a key part of this process. The Lewis acid grabs a hydrogen atom from the water and passes it over to the OSC. The extra positive charge makes the OSC molecule unstable, so an electron from a neighboring molecule migrates over to cancel out the charge. This leaves a positively charged "hole" that then contributes to the material's conductivity.
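
Written as a schematic sequence (a simplified summary of the mechanism as described here, with "LA" standing for a generic Lewis acid):

LA + H2O -> LA·OH2 (the water adduct acts as a strong Brønsted acid)
LA·OH2 + OSC -> [OSC-H]+ + [LA-OH]- (the adduct protonates the semiconductor)
[OSC-H]+ + OSC -> OSC-H· + OSC·+ (an electron hops over from a neighbouring molecule, leaving behind the mobile "hole")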

"The fact that water was having any role at all was really unexpected," said Yurash, the paper's lead author.

Most of these reactions are performed in controlled environments. For instance, the experiments at UC Santa Barbara were conducted in dry conditions under a nitrogen atmosphere. There wasn't supposed to be any humidity in the chamber at all. However, clearly some moisture had made it into the box with the other materials. "Just a tiny amount of water is all it took to have this doping effect," Yurash said.

Scientists, engineers and technicians need to be able to controllably dope a semiconductor in order for it to be practical. "We've totally mastered silicon," he said. "We can dope it the exact amount we want and it's very stable." In contrast, controllably doping OSCs has been a huge challenge.

Lewis acids are actually pretty stable dopants, and the team's findings apply fairly broadly, beyond simply the few OSCs and acids they tested. Most OSC doping work has used molecular dopants, which don't dissolve readily in many solvents. "Lewis acids, on the other hand, are soluble in common organic solvents, cheap, and available in various structures," Nguyen explained.

Understanding the mechanism at work should enable researchers to purposefully design even better dopants. "This is hopefully going to be the springboard from which more ideas launch," Yurash said. Ultimately, the team hopes these insights help push organic semiconductors toward broader commercial realization.

Credit: 
University of California - Santa Barbara

Look out, invasive species: The robots are coming

video: A biomimetic robot that mimics the appearance and locomotion of the predator largemouth bass depleted the energy reserves of the invasive mosquitofish after only 15 minutes of exposure per week. The research study led by the NYU Tandon School of Engineering raises the possibility of future applications in the wild.

Image: 
Maurizio Porfiri and Giovanni Polverino

BROOKLYN, New York, Monday, September 16, 2019 - Invasive species control is notoriously challenging, especially in lakes and rivers where native fish and other wildlife have limited options for escape. In his laboratory's latest foray into using biomimetic robots to understand and modify animal behavior, NYU Tandon School of Engineering Professor Maurizio Porfiri led an interdisciplinary team of researchers from NYU Tandon and the University of Western Australia in demonstrating how robotic fish can be a valuable tool in the fight against one of the world's most problematic invasive species, the mosquitofish.

Found in freshwater lakes and rivers worldwide, soaring mosquitofish populations have decimated native fish and amphibian populations, and attempts to control the species through toxicants or trapping often fail or cause harm to local wildlife.

Porfiri and a team of collaborators have published the first experiments to gauge the ability of a biologically inspired robotic fish to induce fear-related changes in mosquitofish. Their findings indicate that even brief exposure to a robotic replica of the mosquitofish's primary predator -- the largemouth bass -- can provoke meaningful stress responses in mosquitofish, triggering avoidance behaviors and physiological changes associated with the loss of energy reserves, potentially translating into lower rates of reproduction.

The paper, "Behavioural and Life-History Responses of Mosquitofish to Biologically Inspired and Interactive Robotic Predators," appears in the current issue of the Journal of the Royal Society Interface.

"To the best of our knowledge, this is the first study using robots to evoke fear responses in this invasive species," Porfiri said. "The results show that a robotic fish that closely replicates the swimming patterns and visual appearance of the largemouth bass has a powerful, lasting impact on mosquitofish in the lab setting."

The team exposed groups of mosquitofish to a robotic largemouth bass for one 15-minute session per week for six consecutive weeks. The robot's behavior varied between trials, spanning several degrees of biomimicry. Notably, in some trials, the robot was programmed to incorporate real-time feedback based on interactions with live mosquitofish and to exhibit "attacks" typical of predatory behavior -- a rapid increase in swimming speed. Interactions between the live fish and the replica were tracked in real time and analyzed to reveal correlations between the degree of biomimicry in the robot and the level of stress response exhibited by the live fish. Fear-related behaviors in mosquitofish include freezing (not swimming), hesitancy in exploring open spaces that are unfamiliar and potentially dangerous, and erratic swimming patterns.

The researchers also measured physiologic parameters of the stress response, anesthetizing the fish weekly to measure their weight and length. Decreases in weight indicate a stronger anti-predator response and result in lower energy reserves. Fish with lower reserves are less likely to survive for long or to devote energy toward future reproduction - factors with strong implications for population management in the wild.

Fish exposed to robotic predators that most closely mimicked the aggressive, attack-oriented swimming patterns of real-life predators displayed the highest levels of behavioral and physiological stress responses.

"Further studies are needed to determine if these effects translate to wild populations, but this is a concrete demonstration of the potential of a robotics to solve the mosquitofish problem," said Giovanni Polverino, Forrest Fellow in the Department of Biological Sciences at the University of Western Australia and the lead author of the paper. "We have a lot more work going on between our schools to establish new, effective tools to combat the spread of invasive species."

Porfiri's Dynamical Systems Laboratory is known for previous work using biomimetic robots alongside live fish to tease out the mechanisms of many collective animal behaviors, including leadership, mating preferences, and even the impact of alcohol on social behaviors. In addition to developing robots that offer fully controllable stimuli for studying animal behavior, the biomimetic robots minimize use of experimental animals.

Credit: 
NYU Tandon School of Engineering

Tomorrow's coolants of choice

image: When certain materials are brought into a magnetic field, their temperature changes significantly. Scientists want to use this effect to build eco-friendly cooling devices.

Image: 
HZDR / Juniks

Later this century, around 2060, a paradigm shift in global energy consumption is expected: we will use more energy for cooling than for heating. Meanwhile, the increasing penetration of cooling applications into our daily lives causes a rapidly growing ecological footprint. New refrigeration processes such as magnetic cooling could limit the resulting impact on the climate and the environment. Researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the Technische Universität Darmstadt have taken a closer look at today's most promising materials. The result of their work is the first systematic magnetocaloric material library with all relevant property data, which they have now published in the journal Advanced Energy Materials (DOI: 10.1002/aenm.201970130).

Artificial cooling using conventional gas compression has been around in commercial household applications for about one hundred years. However, the technology has barely changed during this time. Experts estimate that around one billion refrigerators based on this technology are in use worldwide today, in ever-growing numbers. "Cooling technology is now regarded as the largest power consumer in our homes. The potential for environmental pollution caused by typical coolants is just as problematic," Dr. Tino Gottschall from the Dresden High Magnetic Field Laboratory of the HZDR describes the motivation for his research.

The "magnetocaloric effect", which could become the heart of future cooling technologies, is a process where certain elements and alloys suddenly change their temperature when exposed to a magnetic field. There is a whole series of such magnetocaloric substances already known from research. "But whether they are suitable for household and industrial applications on a large scale - this is a whole different question," adds Prof. Oliver Gutfleisch from the Institute of Materials Science at the Technische Universität Darmstadt.

Substance database for cooling materials

The scientists set out to collect data on material properties to clarify these issues. However, they quickly ran into difficulties. "We were particularly surprised that only a few results from direct measurements can be found in the specialist literature," reports Gottschall. "In most cases, these parameters were derived indirectly from the observed magnetization data. We found that neither the measurement conditions, such as the strength and profile of the applied magnetic field, nor the measuring regimes are comparable. Consequently, the results do not match."

To dispel the inconsistencies in the previously published material parameters, the scientists devised an elaborate measurement program covering the entire spectrum of the currently most promising magnetocaloric materials and their relevant material properties. By coupling high-precision measurements with thermodynamic considerations, the researchers from Darmstadt and Dresden were able to generate consistent material data sets. The scientists are now presenting this solid database, which can facilitate the selection of suitable materials for various magnetic cooling applications.

Which materials can take on gadolinium?

The suitability of a material for magnetic cooling purposes is ultimately determined by various parameters. It requires the proper combination of material properties to compete with well-established cooling technologies. To describe the most important properties for tomorrow's cooling materials, Gottschall states: "The temperature change achieved at room temperature should be large, and as much heat as possible should be dissipated at the same time."

To enter future mass applications, these substances must not possess characteristics that are harmful to the environment or to health. "In addition, they should not consist of raw materials that are classified as critical due to supply risks and that are difficult to replace in technological applications," explains Gutfleisch. "In the overall assessment of technological processes, this aspect is often neglected. A mere focus on physical properties is no longer sufficient today. In this respect, magnetic cooling is also a prime example of the fundamental challenges accompanying the current energy transition, which will not be possible without sustainable access to suitable materials."

At ambient temperature, the benchmark magnetocaloric material is still gadolinium. If the rare-earth element is brought into a magnetic field of 1 tesla, the scientists measure a temperature change of almost 3 degrees Celsius. Keeping the economic viability of future magnetic cooling devices in mind, the generation of such field strengths will most likely rely on commercial permanent magnets.

Suitable materials: A look into the future

Despite its outstanding properties, the prospects of using gadolinium in household cooling devices are rather unrealistic. The element is one of those rare-earth metals that are classified as critical when it comes to a secure, long-term supply. For an equal design, heat exchangers made of iron-rhodium alloys could dissipate even larger quantities of heat per cooling cycle. Nevertheless, the platinum-group metal rhodium is likewise on the list of raw materials singled out by the European Commission due to its high criticality.

The researchers have, however, found candidate materials that could be readily available in the near future and that promise good performance. Intermetallic compounds consisting of the elements lanthanum, iron, manganese and silicon, for example, in which hydrogen is stored in the crystal lattice, can even outperform gadolinium in terms of the heat that can be transferred out of the refrigerator compartment.

Others could follow suit: Researchers at the HZDR and TU Darmstadt are working hard on expanding the range of magnetic cooling materials. In close cooperation, scientists of both institutions are preparing a new series of experiments investigating the properties of magnetocaloric substances. At the Dresden High Magnetic Field Laboratory for example, they are set to study how these substances behave in pulsed high magnetic fields. The wider focus of future research lies on a given material's response to the simultaneous impact of different stimuli like magnetic fields, strain and temperature, as well as the construction of efficient demonstrators.

Credit: 
Helmholtz-Zentrum Dresden-Rossendorf

One step closer to future quantum computers

Physicists at Uppsala University in Sweden have identified how to distinguish between true and 'fake' Majorana states in one of the most commonly used experimental setups, by means of supercurrent measurements. This theoretical study is a crucial step for advancing the field of topological superconductors and applications of Majorana states for robust quantum computers. New experiments testing this approach are expected next.

Majorana states exist as zero-energy states at the ends of topological superconductors (a special type of superconductors, materials that conduct with zero resistance when cooled close to absolute zero temperature), where low-energy states are robust against defects. Majorana states have exotic properties that make them promising candidates as qubits for fault-tolerant quantum computers. However, in experiments trivial zero-energy states mimicking Majorana states can also appear. The difficulty in telling apart the true and these 'fake' Majoranas is a problem that has hampered the experimental progress in this field of research and has been a thorn in the side of experts.

A solution to this problem has been proposed in a recent study by Annica Black-Schaffer's group. The authors simulated the entire system of one of the most common experimental setups used in engineering topological superconductors as accurately as possible and captured the main effects of all the components. By investigating the supercurrent (the current in superconductors) between two engineered superconductors, they found that trivial 'fake' Majorana states produce a sign reversal in the supercurrent when a magnetic field is applied, whereas true Majorana states do not. They concluded that supercurrents offer a powerful tool for the unambiguous distinction between trivial states and topological Majorana states.
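
One way to picture this signature (an illustrative textbook relation, not the authors' full microscopic simulation): the supercurrent across a junction between two superconductors follows the Josephson relation

I(\varphi) = I_c \sin\varphi,

where \varphi is the superconducting phase difference and I_c the critical current. A sign reversal of the supercurrent corresponds to I_c changing sign - the junction switching from 0-junction to \pi-junction behaviour as the field grows. In the simulations, trivial zero-energy states produce this switch, while true Majorana states leave the sign unchanged.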

"This study helps and motivates experimentalists towards the proper identification of topological Majoranas by using supercurrent measurements. Our study shows that we need to carry out more exact modelling," says Jorge Cayao, postdoctoral researcher at Uppsala University.

"It is crucial that we are certain that we have actually engineered Majorana states and not some trivial states. This study presents a way to accomplish that through supercurrent measurements," says Oladunjoye Awoga, PhD student at Uppsala University.

Credit: 
Uppsala University

The sleep neuron in threadworms is also a stop neuron

image: The RIS neuron (green) in the throat of the threadworm C. elegans.

Image: 
Wagner Steuer Costa

Wagner Steuer Costa, in the team of Alexander Gottschalk, Professor for Molecular Cell Biology and Neurobiochemistry, discovered the sleep neuron RIS a few years ago by coincidence - simultaneously with other groups. To understand the function of individual neurons in the worm's nerve net, the researchers use genetic engineering to make the neurons produce light-sensitive proteins. With these "photo-switches", the neurons can be activated or turned off in the transparent worm using light of a certain wavelength. "When we saw that the worm froze when this neuron was stimulated by light, we were quite amazed. It was the beginning of a study that took several years," Gottschalk recalls.

The RIS neuron puts C. elegans to sleep when it is active for minutes or hours - for example after the shedding of the cuticle (a secreted form of skin), a process in the animal's development. The worm also sleeps in order to recover after experiencing cellular stress. On the other hand, the neuron serves to stop the worm during locomotion - for example, if it wants to change direction or avoid danger. The neuron then slows the animal's motion so that it has time to decide whether it wants to continue crawling. In this case the neuron is only active for a few seconds. "Such stop neurons have only recently been discovered. This is the first one of its kind in a worm," explains Gottschalk.

Interestingly, the axon of the RIS neuron is apparently branched, with the two branches having different functions, so that RIS not only slows motion but can also initiate backwards motion. This is reported by Gottschalk and his collaboration partners - Professor Ernst Stelzer from Goethe University, Professor Sabine Fischer from the University of Würzburg, and researchers from Vanderbilt University in Nashville and KU Leuven - in the current issue of Nature Communications.

"We think that neurons with a double function exist in numerous simple life forms such as the worm. In the course of evolution they were then assigned to two different systems in the brain and further developed," says Gottschalk. It's a theme that will certainly be found to recur once other nerve cells of the worm are better understood. "The nervous system of C. elegans can be viewed as a kind of evolutionary test bed. If it works there, it will be used again and further refined in more complex animals."

The discovery of the double function of RIS is also an example of how a permanently wired neuronal network can additionally be operated by a "wireless" network of neuropeptides and neuromodulators. This enables several functional networks to be realised on a single anatomical network, enormously increasing the functionality of the worm's brain while being very economical. "It should no longer be said that worm neurons are simple. Often, they can do more than mammalian neurons," says Gottschalk.

Credit: 
Goethe University Frankfurt

Eating cheese may offset blood vessel damage from salt

UNIVERSITY PARK, Pa. -- Cheese lovers, rejoice. Antioxidants naturally found in cheese may help protect blood vessels from damage from high levels of salt in the diet, according to a new Penn State study.

In a randomized, crossover design study, the researchers found that when adults consumed a high sodium diet, they also experienced blood vessel dysfunction. But, when the same adults consumed four servings of cheese a day alongside the same high sodium diet, they did not experience this effect.

Billie Alba, who led the study while finishing her PhD at Penn State, said the findings may help people balance food that tastes good with minimizing the risks that come with eating too much salt.

"While there's a big push to reduce dietary sodium, for a lot of people it's difficult," Alba said. "Possibly being able to incorporate more dairy products, like cheese, could be an alternative strategy to reduce cardiovascular risk and improve vessel health without necessarily reducing total sodium."

While sodium is a mineral that is vital to the human body in small doses, the researchers said too much dietary sodium is associated with cardiovascular risk factors like high blood pressure. The American Heart Association recommends no more than 2,300 milligrams (mg) of sodium a day, with the ideal amount being closer to 1,500 mg for most adults.

According to Lacy Alexander, professor of kinesiology at Penn State and another researcher on the study, previous research has shown a connection between dairy products -- even cheeses high in sodium -- and improved heart health measures.

"Studies have shown that people who consume the recommended number of dairy servings each day typically have lower blood pressure and better cardiovascular health in general," Alexander said. "We wanted to look at those connections more closely as well as explore some of the precise mechanisms by which cheese, a dairy product, may affect heart health."

The researchers recruited 11 adults without salt-sensitive blood pressure for the study. They each followed four separate diets for eight days at a time: a low-sodium, no-dairy diet; a low-sodium, high-cheese diet; a high-sodium, no-dairy diet; and a high-sodium, high-cheese diet.

The low-sodium diets had participants consume 1,500 mg of sodium a day, while the high-sodium diets included 5,500 mg of sodium per day. The cheese diets included 170 grams, or about four servings, of several different types of cheese a day.

At the end of each week-long diet, the participants returned to the lab for testing. The researchers inserted tiny fibers under the participants' skin and applied a small amount of the drug acetylcholine, a compound that signals blood vessels to relax. By examining how each participant's blood vessels reacted to the drug, the researchers were able to measure blood vessel function.

The participants also underwent blood pressure monitoring and provided a urine sample to ensure they had been consuming the correct amount of salt throughout the week.

The researchers found that after a week on the high-sodium, no-cheese diet, the participants' blood vessels did not respond as well to the acetylcholine - a response specific to the specialized cells lining the blood vessels - and had a more difficult time relaxing. But this was not seen after the high-sodium, high-cheese diet.

"While the participants were on the high-sodium diet without any cheese, we saw their blood vessel function dip to what you would typically see in someone with pretty advanced cardiovascular risk factors," Alexander said. "But when they consumed the same amount of salt, and ate cheese as a source of that salt, those effects were completely avoided."

Alba said that while the researchers cannot be sure that the effects are caused by any one specific nutrient in cheese, the data suggests that antioxidants in cheese may be a contributing factor.

"Consuming high amounts of sodium causes an increase in molecules that are harmful to blood vessel health and overall heart health," Alba said. "There is scientific evidence that dairy-based nutrients, specifically peptides generated during the digestion of dairy proteins, have beneficial antioxidant properties, meaning that they have the ability to scavenge these oxidant molecules and thereby protect against their damaging physiological effects."

Alba said that in the future, it will be important to study these effects in larger studies, as well as further research possible mechanisms by which dairy foods may preserve vascular health.

Credit: 
Penn State

New microscopes unravel the mysteries of brain organization

video: This large-scale dataset reveals the developing nervous system of a seven-day-old chicken embryo captured with a mesoSPIM microscope.

Image: 
mesospim.org

Wyss Center, Geneva, Switzerland - The secret of capturing exquisite brain images with a new generation of custom-built microscopes is revealed today in Nature Methods. The new microscopes, known as mesoSPIMs, can image the minute detail of brain tissue down to individual neurons that are five times thinner than a human hair, and can uncover the 3D anatomy of entire small organs, faster than ever before. MesoSPIMs provide new insights into brain and spinal cord organization for researchers working to restore movement after paralysis or to investigate neuronal networks involved in cognition, pleasure, or drug addiction.

Because mesoSPIMs create high-resolution images of large samples faster than existing microscopes, they are beneficial for rapidly screening many samples. A new open-source initiative, comprising top European researchers in neuroscience, is driving dissemination of mesoSPIMs globally by sharing their expertise and excitement as well as stunning images and videos.

MesoSPIMs, short for 'mesoscale selective plane-illumination microscopes', are light-sheet microscopes. Unlike traditional microscopy in which specimens are cut in slices with a blade before being viewed on a slide under a microscope, light-sheet microscopes optically slice samples with a sheet of light. This optical sectioning captures slivers of image without damaging the sample. The imaged slices are then combined to reconstruct a detailed three-dimensional image of a whole organ or specimen. The data sets produced by standard light-sheet microscopes are very large and analysing them is time consuming. MesoSPIMs get around this problem with innovative optical technologies that allow fast scanning as well as direct visualization and quantification of the captured data.

The mesoSPIM Initiative, started by Dr. Fabian Voigt in the group of Prof. Fritjof Helmchen at the Brain Research Institute, University of Zurich, enables the integration of cutting-edge technologies into research labs worldwide. "We created the open-source mesoSPIM Initiative to share the latest developments in microscope instrumentation and software with the imaging community. Anyone seeking high-quality anatomical data from large samples now has the information they need to build and operate their own mesoSPIM." said Voigt.

The power of the initiative lies in the insights brought from differing disciplines, such as physics, developmental biology and neuroscience, which allows microscope development and brain research to flourish.

There are currently seven mesoSPIMs in operation across Europe and several more instruments under construction. One of the new mesoSPIMs is hosted by the Advanced Lightsheet Imaging Center (ALICe) at the Wyss Center. Open to external users, ALICe offers a complete pipeline from sample preparation to image analysis, under the scientific guidance of experts from the University of Geneva and the École Polytechnique Fédérale de Lausanne (EPFL).

Dr Stéphane Pagès, ALICe Scientific Coordinator said: "We are very proud to have one of only seven mesoSPIM microscopes in the world at the Wyss Center. MesoSPIM microscopes solve the longstanding problem of how to achieve exceptional image quality in large samples over very short time-scales. We are delighted to be part of the initiative that brings this technology to the world."

The mesoSPIM Initiative is aimed at research groups and imaging facilities with experience in building and supporting custom microscopes. A mesoSPIM can be installed in a few days and typically requires a budget of around $200K.

'The mesoSPIM initiative - open-source light-sheet microscopes for imaging cleared tissue' is published in Nature Methods. https://www.nature.com/articles/s41592-019-0554-0 DOI: 10.1038/s41592-019-0554-0

Credit: 
Wyss Center for Bio and Neuroengineering

New route to carbon-neutral fuels from carbon dioxide discovered by Stanford-DTU team

image: Artistic representation of a nickel-based electrode as a broken-down fuel pump and of a cerium-based electrode as a new, productive pump.

Image: 
Cube3D

If the idea of flying on battery-powered commercial jets makes you nervous, you can relax a little. Researchers have discovered a practical starting point for converting carbon dioxide into sustainable liquid fuels, including fuels for heavier modes of transportation that may prove very difficult to electrify, like airplanes, ships and freight trains.

Carbon-neutral re-use of CO2 has emerged as an alternative to burying the greenhouse gas underground. In a new study published today in Nature Energy, researchers from Stanford University and the Technical University of Denmark (DTU) show how electricity and an Earth-abundant catalyst can convert CO2 into energy-rich carbon monoxide (CO) better than conventional methods. The catalyst - cerium oxide - is much more resistant to breaking down. Stripping oxygen from CO2 to make CO gas is the first step in turning CO2 into nearly any liquid fuel and other products, like synthetic gas and plastics. The addition of hydrogen to CO can produce fuels like synthetic diesel and the equivalent of jet fuel. The team envisions using renewable power to make the CO and for subsequent conversions, which would result in carbon-neutral products.
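
In terms of overall chemistry (standard reactions summarized for orientation, not drawn verbatim from the paper), the two stages are:

CO2 -> CO + 1/2 O2 (high-temperature electrolysis, driven by electricity)
n CO + (2n+1) H2 -> CnH2n+2 + n H2O (Fischer-Tropsch synthesis of liquid hydrocarbons)

If the electricity and the hydrogen come from renewable sources, burning the resulting fuel returns only the CO2 that was captured to make it.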

"We showed we can use electricity to reduce CO2 into CO with 100 percent selectivity and without producing the undesired byproduct of solid carbon," said William Chueh, an associate professor of materials science and engineering at Stanford, one of three senior authors of the paper.

Chueh, aware of DTU's research in this area, invited Christopher Graves, associate professor in DTU's Energy Conversion & Storage Department, and Theis Skafte, a DTU doctoral candidate at the time, to come to Stanford and work on the technology together.

"We had been working on high-temperature CO2 electrolysis for years, but the collaboration with Stanford was the key to this breakthrough," said Skafte, lead author of the study, who is now a postdoctoral researcher at DTU. "We achieved something we couldn't have separately - both fundamental understanding and practical demonstration of a more robust material."

Barriers to conversion

One advantage sustainable liquid fuels could have over the electrification of transportation is that they could use the existing gasoline and diesel infrastructure, like engines, pipelines and gas stations. Additionally, the barriers to electrifying airplanes and ships - long distance travel and the high weight of batteries - would not be problems for energy-dense, carbon-neutral fuels.

Although plants reduce CO2 to carbon-rich sugars naturally, an artificial electrochemical route to CO has yet to be widely commercialized. Among the problems: Devices use too much electricity, convert a low percentage of CO2 molecules, or produce pure carbon that destroys the device. Researchers in the new study first examined how different devices succeeded and failed in CO2 electrolysis.

With insights gained, the researchers built two cells for CO2 conversion testing: one with cerium oxide and the other with conventional nickel-based catalysts. The ceria electrode remained stable, while carbon deposits damaged the nickel electrode, significantly shortening the catalyst's lifetime.

"This remarkable capability of ceria has major implications for the practical lifetime of CO2 electrolyzer devices," said DTU's Graves, a senior author of the study and visiting scholar at Stanford at the time. "Replacing the current nickel electrode with our new ceria electrode in the next generation electrolyzer would improve device lifetime."

Road to commercialization

Eliminating early cell death could significantly lower the cost of commercial CO production. The suppression of carbon buildup also allows the new type of device to convert more of the CO2 to CO; conversion is limited to well below 50 percent CO product concentration in today's cells. This could also reduce production costs.

"The carbon-suppression mechanism on ceria is based on trapping the carbon in stable oxidized form. We were able to explain this behavior with computational models of CO2 reduction at elevated temperature, which was then confirmed with X-ray photoelectron spectroscopy of the cell in operation," said Michal Bajdich, a senior author of the paper and an associate staff scientist at the SUNCAT Center for Interface Science & Catalysis, a partnership between the SLAC National Accelerator Laboratory and Stanford's School of Engineering.

The high cost of capturing CO2 has been a barrier to sequestering it underground on a large scale, and that high cost could be a barrier to using CO2 to make more sustainable fuels and chemicals. However, the market value of those products combined with payments for avoiding the carbon emissions could help technologies that use CO2 overcome the cost hurdle more quickly.

The researchers hope that their initial work on revealing the mechanisms in CO2 electrolysis devices by spectroscopy and modeling will help others in tuning the surface properties of ceria and other oxides to further improve CO2 electrolysis.

Credit: 
Stanford University

Flavoring ingredient exceeds safety levels in e-cigarettes and smokeless tobacco

DURHAM, N.C. -- A potential carcinogen that has been banned as a food additive is present in concerningly high levels in electronic cigarette liquids and smokeless tobacco products, according to a new study from Duke Health.

The chemical -- called pulegone (pronounced pju-leh-goan) -- is contained in menthol- and mint-flavored e-cigarettes and smokeless tobacco products. Because of its carcinogenic properties, the U.S. Food and Drug Administration banned pulegone as a food additive last year in response to petitions from consumer groups.

Yet the agency does not regulate the chemical's presence in e-cigarettes and smokeless tobacco, which are promoted as safer alternatives to regular cigarettes.

"Our findings suggest that the FDA should implement measures to mitigate pulegone-related health risks before suggesting mint- and menthol-flavored e-cigarettes and smokeless tobacco products as alternatives for people who use combustible tobacco products," said Sven-Eric Jordt, Ph.D., a professor of the Department of Anesthesiology at Duke and lead author of a study publishing online Sept. 16 in JAMA Internal Medicine.

Jordt and research partner Sairam V. Jabba became interested in the topic because the U.S. Centers for Disease Control and Prevention published studies showing that mint- and menthol-flavored e-cigarette liquids and smokeless tobacco products marketed in the U.S. contain substantial amounts of pulegone.

The two researchers analyzed whether several top brands of regular menthol cigarettes, three e-cigarette brands, and one smokeless tobacco brand contain enough pulegone to be a cause for concern. They compared the CDC-reported amounts of pulegone with the FDA's exposure risk data -- the levels at which exposure-related tumors were reported in animal studies.

Their analysis found that the levels in the e-cigarettes and smokeless tobacco exceeded the thresholds of concern. Regular menthol cigarettes contained levels below the thresholds.

"Our analysis suggests that users of mint- and menthol-flavored e-cigarettes and smokeless tobacco are exposed to pulegone levels higher than the FDA considers acceptable for intake in food, and higher than in smokers of combustible menthol cigarettes," Jordt said.

"The tobacco industry has long known about the dangers of pulegone and has continuously tried to minimize its levels in menthol cigarette flavorings, so the levels are much lower in menthol cigarettes than in electronic cigarettes," Jordt said. E-cigarette manufacturers may be less familiar with the dangers and use cheaper ingredients to lower costs.

One limitation of the study is that the FDA's exposure risk calculations are based on oral exposure in animal studies. These risks may apply to the oral exposure through smokeless tobacco but may differ from inhalation exposure through e-cigarette vapor. There is no toxicity data available on exposure via inhalation. This is concerning because toxicologists consider the lung to be more sensitive to toxic chemicals than the digestive tract.

Credit: 
Duke University Medical Center

Is headache from anesthesia after childbirth associated with risk of bleeding around brain?

Bottom Line: This study examined whether postpartum women who developed headache after neuraxial anesthesia (such as an epidural) during childbirth had an increased risk of being diagnosed with bleeding around the brain (intracranial subdural hematoma).

Authors: Albert R. Moore, M.D., of the Royal Victoria Hospital, McGill University, in Montreal, Canada, is the corresponding author.

(doi:10.1001/jamaneurol.2019.2995)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

The stellar nurseries of distant galaxies

image: Molecular clouds detected at the unprecedented resolution of 90 light-years in the Cosmic Snake, located more than 8 billion light-years away, a typical progenitor of our galaxy (left). Observed at resolutions 50,000 times better, each of these clouds resembles the very turbulent gas of the Carina nebula, located only 7,500 light-years away, a veritable nursery of emerging stars (right).

Image: 
© UNIGE, Dessauges and NASA, ESA

Star clusters are formed by the condensation of molecular clouds, masses of cold, dense gas that are found in every galaxy. The physical properties of these clouds in our own galaxy and nearby galaxies have been known for a long time. But are they identical in distant galaxies that are more than 8 billion light-years away? For the first time, an international team led by the University of Geneva (UNIGE) has been able to detect molecular clouds in a Milky Way progenitor, thanks to the unprecedented spatial resolution achieved in such a distant galaxy. These observations, published in Nature Astronomy, show that the distant clouds have a higher mass, density and internal turbulence than the clouds hosted in nearby galaxies and that they produce far more stars. The astronomers attribute these differences to the ambient interstellar conditions in distant galaxies, which are too extreme for the molecular clouds typical of nearby galaxies to survive.

Molecular clouds consist of dense, cold molecular hydrogen gas that swirls around at supersonic velocities, generating density fluctuations that condense and form stars. In nearby galaxies, such as the Milky Way, a molecular cloud produces between 10³ and 10⁶ stars. In far-off galaxies, however, located more than 8 billion light-years away, astronomers have observed gigantic star clusters containing up to 100 times more stars. Why is there such a difference?

Exceptional observation made possible using a cosmic magnifying glass

To answer this question, the astronomers were able to make use of a natural telescope - the gravitational lens phenomenon - in combination with ALMA (Atacama Large Millimetre/Submillimetre Array), an interferometer made up of 50 radio antennas operating at millimetre wavelengths that together reconstruct the entire image of a galaxy at once. "Gravitational lenses are a natural telescope that produces a magnifying-glass effect when a massive object is aligned between the observer and the distant object," explains Miroslava Dessauges, a researcher in the Department of Astronomy in UNIGE's Faculty of Science and first author of the study. "With this effect, some parts of distant galaxies are stretched on the sky and can be studied at an unrivalled resolution of 90 light-years!" ALMA, meanwhile, can be employed to measure the level of carbon monoxide, which acts as a tracer of the molecular hydrogen gas that constitutes the cold clouds.

This resolution made it possible to characterise the molecular clouds individually in a distant galaxy, nicknamed the "Cosmic Snake", 8 billion light-years away. "It's the first time we've been able to distinguish molecular clouds from one another," says Daniel Schaerer, professor in UNIGE's Department of Astronomy. The astronomers were therefore able to compare the mass, size, density and internal turbulence of molecular clouds in nearby and distant galaxies. "It was thought that the clouds had the same properties in all galaxies at all times," continues the Geneva-based researcher, "but our observations have demonstrated the opposite!"

Molecular clouds resistant to extreme environments

These new observations revealed that the molecular clouds in distant galaxies had a mass, density and turbulence 10 to 100 times higher than those in nearby galaxies. "Such values had only been measured in clouds hosted in nearby interacting galaxies, which have interstellar medium conditions resembling those of distant galaxies," adds Miroslava Dessauges. The researchers could link the differences in the physical properties of the clouds with the galactic environments, which are more extreme and hostile in far-off galaxies than in closer galaxies. "A molecular cloud typically found in a nearby galaxy would instantly collapse and be destroyed in the interstellar medium of distant galaxies, hence its enhanced density and turbulence guarantee its survival and equilibrium," explains Miroslava Dessauges. "The characteristic mass of the molecular clouds in the Cosmic Snake appears to be in perfect agreement with the predictions of our scenario of fragmentation of turbulent galactic disks. As a result, this scenario can be put forward as the mechanism of formation of massive molecular clouds in distant galaxies," adds Lucio Mayer, a professor at the Centre for Physical and Cosmological Theory at the University of Zurich.

The international team also discovered that the efficiency of star formation in the Cosmic Snake galaxy is particularly high, likely triggered by the highly supersonic internal turbulence of the clouds. "In nearby galaxies, a molecular cloud forms about 5% of its mass in stars. In distant galaxies, this number climbs to 30%," observes Daniel Schaerer.

The astronomers will now study other distant galaxies in order to confirm their observational results obtained for the Cosmic Snake. Miroslava Dessauges says in conclusion: "We'll also push the resolution even further by taking advantage of the unique performance of the ALMA interferometer. In parallel, we need to understand in more detail the ability of molecular clouds in distant galaxies to form stars so efficiently."

Credit: 
Université de Genève

WVU astronomers help detect the most massive neutron star ever measured

image: Neutron stars are the compressed remains of massive stars gone supernova. WVU astronomers were part of a research team that detected the most massive neutron star to date.

Image: 
B. Saxton (NRAO/AUI/NSF)

West Virginia University researchers have helped discover the most massive neutron star to date, a breakthrough uncovered through the Green Bank Telescope in Pocahontas County.

The neutron star, called J0740+6620, is a rapidly spinning pulsar that packs 2.17 times the mass of the sun (which is 333,000 times the mass of the Earth) into a sphere only 20-30 kilometers, or about 15 miles, across. This measurement approaches the limits of how massive and compact a single object can become without crushing itself down into a black hole.

The star was detected approximately 4,600 light-years from Earth. One light-year is about six trillion miles.

These findings, from the National Science Foundation-funded NANOGrav Physics Frontiers Center, were published today (Sept. 16) in Nature Astronomy.

Authors on the paper include Duncan Lorimer, astronomy professor and Eberly College of Arts and Sciences associate dean for research; Eberly Distinguished Professor of Physics and Astronomy Maura McLaughlin; Nate Garver-Daniels, system administrator in the Department of Physics and Astronomy; and postdocs and former students Harsha Blumer, Paul Brook, Pete Gentile, Megan Jones and Michael Lam.

The discovery is one of many serendipitous results, McLaughlin said, that have emerged during routine observations taken as part of a search for gravitational waves.

"At Green Bank, we're trying to detect gravitational waves from pulsars," she said. "In order to do that, we need to observe lots of millisecond pulsars, which are rapidly rotating neutron stars. This (the discovery) is not a gravitational wave detection paper but one of many important results which have arisen from our observations."

The mass of the pulsar was measured through a phenomenon known as "Shapiro delay." In essence, gravity from a white dwarf companion star warps the space surrounding it, in accordance with Einstein's general theory of relativity. This makes the pulses from the pulsar travel just a little bit farther as they pass through the distorted spacetime around the white dwarf. The delay tells astronomers the mass of the white dwarf, which in turn provides a mass measurement of the neutron star.
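
In pulsar timing, the Shapiro delay is commonly written (a standard formula, given for context rather than quoted from the paper) as

\Delta t = -\frac{2 G m_c}{c^3} \ln(1 - \sin i \, \sin\phi),

where m_c is the companion (white dwarf) mass, i the inclination of the orbit and \phi the orbital phase. Fitting this characteristic delay pattern as the pulsar swings around its companion yields m_c, and the orbital dynamics then fix the pulsar's own mass.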

Neutron stars are the compressed remains of massive stars gone supernova. They're created when giant stars die in supernovas and their cores collapse, with the protons and electrons melting into each other to form neutrons.

To visualize the mass of the neutron star discovered, a single sugar cube's worth of neutron-star material would weigh 100 million tons here on Earth, or about the same as the entire human population.
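
That figure survives a back-of-the-envelope check (using the numbers above, with an assumed radius of about 12.5 kilometers): the mean density is

\rho \approx \frac{2.17 \times 2.0 \times 10^{30}\,\mathrm{kg}}{\tfrac{4}{3}\pi\,(1.25 \times 10^{4}\,\mathrm{m})^{3}} \approx 5 \times 10^{17}\,\mathrm{kg/m^3},

so one cubic centimeter of such material has a mass of roughly 5 × 10^11 kg - a few hundred million metric tons, the same order of magnitude as the quoted figure.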

While astronomers and physicists have studied these objects for decades, many mysteries remain about the nature of their interiors: Do crushed neutrons become "superfluid" and flow freely? Do they break down into a soup of subatomic quarks or other exotic particles? What is the tipping point when gravity wins out over matter and forms a black hole?

"These stars are very exotic," McLaughlin said. "We don't know what they're made of and one really important question is, 'How massive can you make one of these stars?' It has implications for very exotic material that we simply can't create in a laboratory on Earth."

Pulsars get their name because of the twin beams of radio waves they emit from their magnetic poles. These beams sweep across space in a lighthouse-like fashion. Some rotate hundreds of times each second.

Since pulsars spin with such phenomenal speed and regularity, astronomers can use them as the cosmic equivalent of atomic clocks. Such precise timekeeping helps astronomers study the nature of spacetime, measure the masses of stellar objects and improve their understanding of general relativity.

Credit: 
West Virginia University

Study shows importance of tailoring treatments to clearly defined weed control objectives

WESTMINSTER, Colorado - SEPTEMBER 16, 2019 - A new study in the journal Invasive Plant Science and Management shows that working smarter, not harder, can lead to better control of invasive weeds. And the first step is to clearly define your weed control objectives.

Do you want a quick, short-term reduction in a weed population or long-term control? Is your weed problem limited to a specific area, or are you also concerned about adjacent fields?

"Answering such questions can help you select the most appropriate management options and eliminate wasted effort," says Katriona Shea, a researcher at Pennsylvania State University.

To illustrate the importance of upfront decisions, researchers conducted a two-year study involving invasive thistle, a weed often found in pasturelands and rangelands. Mathematical models were used to determine which of 14 mowing strategies would best support each of three different management objectives: reducing the density of an existing thistle infestation, decreasing long-term population growth and limiting the weed's spread.
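
To illustrate how such models work in principle, here is a minimal, hypothetical sketch (in Python) of projecting a weed population under different mowing schedules. The parameter values and structure are invented for illustration and are not those of the published study, which used far richer models.

```python
# Minimal, hypothetical sketch of comparing mowing strategies with a
# simple population projection. All parameter values are invented for
# illustration and are NOT those of the published thistle study.

def project(years, mow_schedule, n0=100.0):
    """Project a weed population for `years` seasons.

    mow_schedule: set of within-season mowing times
    (0 = before flowering, 1 = peak flowering, 2 = late season).
    """
    seeds_per_plant = 50.0   # seed production without mowing (assumed)
    establishment = 0.02     # fraction of seeds that become plants (assumed)
    survival = 0.1           # fraction of plants surviving to flower (assumed)
    # Assumed fraction of seed production removed by a mowing event,
    # depending on its timing: well-timed cuts remove far more.
    mow_effect = {0: 0.4, 1: 0.8, 2: 0.1}

    n = n0
    for _ in range(years):
        fecundity = seeds_per_plant
        for t in mow_schedule:
            fecundity *= 1.0 - mow_effect[t]
        n *= survival * fecundity * establishment  # next season's population
    return n

# Compare a single well-timed cut with cuts spread over the season.
print(project(5, {1}))        # one cut at peak flowering
print(project(5, {0, 1, 2}))  # three cuts across the season
```

Note that a sketch this simple always rewards more mowing; capturing the study's finding that fewer, well-timed events can win requires modelling how plants respond to each cut (regrowth, shifts in flowering timing), detail that goes beyond this illustration.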

Contrary to conventional wisdom, researchers found that fewer, well-timed mowing events were more effective than mowing as often as possible - making it possible to produce a better outcome with less effort.

Intense mowing both before flowering and during the peak flowering period, for example, produced the best long-term control of invasive thistle and reduced both its abundance and its spatial spread. A single, intense mowing during the peak flowering period was the most effective approach for short-term management, which is good news for land managers with limited time and resources.

Credit: 
Cambridge University Press