Tech

Transporting energy through a single molecular nanowire

image: Richard Hildner, associate professor of physics at the Zernike Institute for Advanced Materials, University of Groningen

Image: 
Sylvia Germes

Photosynthetic systems in nature transport energy very efficiently towards a reaction centre, where it is converted into a useful form for the organism. Scientists have been using this as inspiration to learn how to transport energy efficiently in, for example, molecular electronics. Physicist Richard Hildner from the University of Groningen and his colleagues have investigated energy transport in an artificial system made from nanofibres. The results were published in the Journal of the American Chemical Society.

'Natural photosynthetic systems have been optimized by billions of years of evolution. We have found this very difficult to copy in artificial systems,' explains Hildner, associate professor at the University of Groningen. In the light-harvesting complexes of bacteria or plants, light is converted into energy, which is then transported to the reaction centre with minimal losses.

Bundles

Five years ago, Hildner and his colleagues developed a system in which disc-shaped molecules were stacked into nanofibres with lengths exceeding 4 micrometres and a diameter of just 0.005 micrometres. By comparison, the diameter of a human hair is 50-100 micrometres. This system can transport energy like the antennas in photosynthetic systems. 'But we sometimes saw that energy transport became stuck in the middle of our four-micrometre-long fibres. Something in the system appeared to be unstable,' he recalls.

To improve the energy transport efficiency, Hildner and his colleagues created bundles of nanofibres. 'This is the same idea as in normal electronics: very thin copper wires are bundled together to create a more robust cable.' However, the bundled nanofibres turned out to be worse at transporting energy than single fibres.

Coherence

The reason for this lies in something called coherence. When energy is put into the molecules that make up the fibres, it creates an excited state or exciton. However, this excited state is not a packet of energy that is associated with a single molecule. Hildner: 'The energy is delocalized over several molecules and it can, therefore, move fast and efficiently across the fibre.' This delocalization means that the energy moves like a wave from one molecule to the next. By contrast, without coherence, the energy is limited to a single molecule and must hop from one molecule to the next. Such hopping is a much slower way to transport energy.
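To get a feel for the difference, here is a toy comparison, not taken from the study itself (the molecular spacing and step counts are made-up numbers): coherent, wave-like transport covers a distance that grows linearly with time, whereas incoherent hopping is a random walk whose typical reach grows only with the square root of time.

```python
import numpy as np

rng = np.random.default_rng(1)
steps = 1000
site_spacing_nm = 0.35  # hypothetical distance between stacked molecules

# Coherent, wave-like transport: the excitation moves ballistically,
# so the distance covered grows linearly with the number of steps.
ballistic_nm = steps * site_spacing_nm

# Incoherent hopping: a random walk from molecule to molecule, so the
# typical distance covered grows only with the square root of the steps.
walks = rng.choice([-1, 1], size=(2000, steps)).cumsum(axis=1)
diffusive_nm = np.abs(walks[:, -1]).mean() * site_spacing_nm

print(f"after {steps} steps: coherent ~{ballistic_nm:.0f} nm, "
      f"hopping ~{diffusive_nm:.0f} nm")
```

With these toy numbers the coherent excitation spans the full length of a micrometre-scale fibre while the hopping one barely leaves its starting neighbourhood, which is why losing coherence matters so much.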

'In the bundles, coherence is lost,' explains Hildner. This is caused by the strain that the bundle imposes on each fibre within it. 'The fibres are compressed and this causes side groups of the molecules to crash into each other.' This changes the energy landscape. In a single fibre, the excited states of several neighbouring molecules sit at the same energy level. In a bundle, the local environments of the molecules differ, leading to a difference in energy levels.

Bike tour

'Imagine that you are on a bike tour. The height profile of the tour represents the energy levels in the molecules that make up the fibres,' says Hildner. 'If you are cycling in the Netherlands, you will arrive at your destination quickly because the terrain is flat. In contrast, in the Alps, you must cycle uphill quite often, which is tough and slows you down.' Thus, when the molecules' energy levels in the fibres are different, transport becomes more difficult.

This discovery means that the team's original idea, to increase energy transport efficiency using bundles of nanofibres, turned out to be a failure. However, they have learned valuable lessons from this, which can now be used by theoretical physicists to calculate how to optimize transport in molecular fibres. 'My colleagues at the University of Groningen are currently doing just that. But we already know one thing: if you want good energy transport in nanofibres, do not use bundles!'

Simple Science Summary

Plants and photosynthetic bacteria catch sunlight via molecular antennas, which then transfer the energy to a reaction centre with minimal losses. Scientists would like to make molecular wires that can transfer energy just as efficiently. Scientists at the University of Groningen created tiny fibres by stacking certain molecules together. Single fibres transport energy, although they sometimes malfunction. Creating bundles of fibres (as is done with copper wiring) was thought to be the solution but this turned out not to be the case. Energy moves fast when spread out across several molecules. In single fibres, this works well but in bundled fibres, this spreading out is hampered as the molecules experience strain. These results can be used to better understand energy transport along molecular wires, which will help in the design of better wires.

Credit: 
University of Groningen

Immune system discovery paves way to lengthen organ transplant survival

image: Medical advances have helped dramatically lower the rates of acute rejection (within the first year) after transplant but chronic rejection continues to reduce long-term organ survival.

Image: 
Fadi Lakkis, University of Pittsburgh, data adapted from United States Renal Data System

PITTSBURGH, May 8, 2020 - Chronic rejection of transplanted organs is the leading cause of transplant failure, and one that the field of organ transplantation has not overcome in the almost six decades since immunosuppressive drugs enabled the field to flourish.

Now, a new discovery led by researchers at the University of Pittsburgh School of Medicine and Houston Methodist Hospital -- that the innate immune system can specifically remember foreign cells -- could pave the way to drugs that lengthen the long-term survival of transplanted organs. The findings, based on results in a mouse model, are published this week in the journal Science.

"The rate of acute rejection within one year after a transplant has decreased significantly, but many people who get an organ transplant are likely to need a second one in their lifetime due to chronic rejection," said Fadi Lakkis, M.D., who holds the Frank & Athena Sarris Chair in Transplantation Biology and is scientific director of Pitt's Thomas E. Starzl Transplantation Institute. "The missing link in the field of organ transplantation is a specific way to prevent rejection, and this finding moves us one step closer to that goal."

The immune system is composed of innate and adaptive branches. The innate immune cells are the first to detect foreign organisms in the body and are required to activate the adaptive immune system. Immunological "memory" -- which allows our bodies to remember foreign invaders so they can fight them off quicker in the future -- was thought to be unique to the adaptive immune system. Vaccines, for example, take advantage of this feature to provide long-term protection against bacteria or viruses. Unfortunately, this very critical function of the immune system is also why transplanted organs are eventually rejected, even in the presence of immune-suppressing drugs.

In the new study, Lakkis, along with co-senior authors Martin Oberbarnscheidt, M.D., Ph.D., assistant professor of surgery at Pitt, and Xian Li, M.D., Ph.D., director of the Immunobiology & Transplant Science Center at Houston Methodist Hospital, used a genetically modified mouse organ transplant model to show that the innate immune cells, once exposed to a foreign tissue, could remember and initiate an immune response if exposed to that foreign tissue in the future.

"Innate immune cells, such as monocytes and macrophages, have never been thought to have memory," said Oberbarnscheidt. "We found that their capacity to remember foreign tissues is as specific as adaptive immune cells, such as T- cells, which is incredible."

The researchers then used molecular and genetic analyses to show that a molecule called paired Ig-like receptor-A (PIR-A) was required for this recognition and memory feature of the innate immune cells in the hosts. When PIR-A was either blocked with a synthetically engineered protein or genetically removed from the host animal, the memory response was eliminated, allowing transplanted tissues to survive for much longer.

"Knowing exactly how the innate immune system plays a role opens the door to developing very specific drugs, which allows us to move away from broadly immunosuppressive drugs that have significant side effects," said Lakkis.

The finding has implications beyond transplantation, according to Oberbarnscheidt. "A broad range of diseases, including cancer and autoimmune conditions, could benefit from this insight. It changes the way we think about the innate immune system."

Credit: 
University of Pittsburgh

To err is human, to learn, divine

image: A graphical model of the research conducted by Lynn et al.

Image: 
Blevmore Labs

The human brain is a highly advanced information processor composed of more than 86 billion neurons. Humans are adept at recognizing patterns from complex networks, such as languages, without any formal instruction. Previously, cognitive scientists tried to explain this ability by depicting the brain as a highly optimized computer, but there is now discussion among neuroscientists that this model might not accurately reflect how the brain works.

Now, Penn researchers have developed a different model for how the brain interprets patterns from complex networks. Published in Nature Communications, this new model shows that the ability to detect patterns stems in part from the brain's goal to represent things in the simplest way possible. Their model depicts the brain as constantly balancing accuracy with simplicity when making decisions. The work was conducted by physics Ph.D. student Christopher Lynn, neuroscience Ph.D. student Ari Kahn, and professor Danielle Bassett.

This new model is built upon the idea that people make mistakes while trying to make sense of patterns, and these errors are essential to get a glimpse of the bigger picture. "If you look at a pointillist painting up close, you can correctly identify every dot. If you step back 20 feet, the details get fuzzy, but you'll gain a better sense of the overall structure," says Lynn.

To test their hypothesis, the researchers ran a set of experiments similar to a previous study by Kahn. That study found that when participants were shown repeating elements in a sequence, such as A-B-C-B, etc., they were automatically sensitive to certain patterns without being explicitly aware that the patterns existed. "If you experience a sequence of information, such as listening to speech, you can pick up on certain statistics between elements without being aware of what those statistics are," says Kahn.

To understand how the brain automatically understands such complex associations within sequences, 360 study participants were shown a computer screen with five gray squares corresponding to five keys on a keyboard. As two of the five squares changed from gray to red, the participants had to strike the computer keys that corresponded to the changing squares. For the participants, the pattern of color-changing squares was random, but the sequences were actually generated using two kinds of networks.

The researchers found that the structure of the network impacted how quickly the participants could respond to the stimuli, an indication of their expectations of the underlying patterns. Responses were quicker when participants were shown sequences that were generated using a modular network compared to sequences coming from a lattice network.

While these two types of networks look different to the human eye at a large scale, they are actually statistically identical to one another at small scales: each node has the same number of connections (edges), even though the overall shape of the network is different. "A computer would not care about this difference in large-scale structure, but it's being picked up by the brain. Subjects could better understand the modular network's underlying structure and anticipate the upcoming image," says Lynn.
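To make that comparison concrete, here is a small sketch of the general setup; the layout below (fifteen nodes, three five-node clusters versus a ring lattice, each node with exactly four neighbours) is an illustrative assumption rather than the study's exact graphs. A random walk over either graph produces the kind of stimulus sequence described above, and only the large-scale shape distinguishes the two.

```python
import random
import itertools

def modular_graph():
    """Three clusters of five nodes; every node ends up with exactly 4 neighbours."""
    edges = set()
    for c in range(3):
        nodes = list(range(5 * c, 5 * c + 5))
        for u, v in itertools.combinations(nodes, 2):
            edges.add((u, v))
        edges.discard((nodes[0], nodes[-1]))          # open the cluster at its two boundary nodes
        edges.add((nodes[-1], (nodes[-1] + 1) % 15))  # bridge to the next cluster
    return edges

def ring_lattice():
    """Fifteen nodes on a ring, each linked to its two neighbours on either side."""
    return {(i, (i + k) % 15) for i in range(15) for k in (1, 2)}

def walk_sequence(edges, length=50, seed=0):
    """Generate a stimulus sequence by randomly walking over the graph."""
    rng = random.Random(seed)
    neigh = {n: [] for n in range(15)}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    node = rng.choice(range(15))
    seq = [node]
    for _ in range(length - 1):
        node = rng.choice(neigh[node])
        seq.append(node)
    return seq

print(walk_sequence(modular_graph())[:12])
print(walk_sequence(ring_lattice())[:12])
```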

Using tools from information theory and reinforcement learning, the researchers were able to use this data to implement a metric of complexity called entropy. "Being very random is the least complex thing you could do, whereas if you were learning the sequence very precisely, that's the most complex thing you can do. The balance between errors and complexity, or negative entropy, gives rise to the predictions that the model gives," says Lynn.

Their resulting model of how the brain processes information depicts the brain as balancing two opposing pressures: complexity versus accuracy. "You can be very complex and learn well, but then you are working really hard to learn patterns," says Lynn. "Or, you have a lower complexity process, which is easier, but you are not going to learn the patterns as well."

With their new model, the researchers were also able to quantify this balance using a parameter beta. If beta is zero, the brain makes a lot of errors but minimizes complexity. If beta is high, then the brain is taking precautions to avoid making errors. "All beta does is tune between which is dominating," says Lynn. In this study, 20% of the participants had a small beta, 10% had high beta values, and the remaining 70% were somewhere in between. "You do see this wide spread of beta values across people," he says.
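The following minimal sketch shows how a single parameter beta can tune between the two regimes. It is a simplification for illustration, and the exact functional form in the published model may differ: here the learner blurs the true transition structure of a sequence across memory, down-weighting transitions that are dt steps stale by exp(-beta * dt).

```python
import numpy as np

def fuzzy_transition_estimate(A, beta):
    """Learner's internal estimate of a sequence's transition matrix A.

    Stale transitions (dt steps back in memory) are mixed in with weight
    exp(-beta * dt), i.e.  A_hat = (1 - e^-beta) * A @ inv(I - e^-beta * A).
    Large beta: A_hat is essentially A (few errors, high complexity).
    Small beta: A_hat is blurred towards uniformity (many errors, low complexity).
    """
    decay = np.exp(-beta)
    n = A.shape[0]
    return (1.0 - decay) * A @ np.linalg.inv(np.eye(n) - decay * A)

# Toy example: a 4-element cycle A -> B -> C -> D -> A.
A = np.roll(np.eye(4), 1, axis=1)
print(np.round(fuzzy_transition_estimate(A, beta=5.0), 3))   # close to the true cycle
print(np.round(fuzzy_transition_estimate(A, beta=0.05), 3))  # nearly uniform rows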

Kahn says that this idea of balancing forces wasn't surprising, given the huge amount of information the brain has to process under a limited amount of resources and without spending too much time on simple decisions. "The brain is already using up a huge amount of metabolic costs, so you really want to maximize what you are getting out," he says. "If you think about something as basic as attention, there is an inherent trade-off in maximizing accuracy versus everything else you are ignoring."

And what about the role of making mistakes? Their model provides support for the idea that the human brain isn't an optimal learning machine but rather that making mistakes, and learning from them, plays a huge role in behavior and cognition. It seems that being able to look at complex systems more broadly, like stepping away from a pointillist painting, gives the brain a better idea of overall relationships.

"Understanding structure, or how these elements relate to one another, can emerge from an imperfect encoding of the information. If someone were perfectly able to encode all of the incoming information, they wouldn't necessarily understand the same kind of grouping of experiences that they do if there's a little bit of fuzziness to it," says Kahn.

"The coolest thing is that errors in how people are learning and perceiving the world are influencing our ability to learn structures. So we are very much divorced from how a computer would act," says Lynn.

The researchers are now interested in what makes the modular network easier for the brain to interpret and are also conducting functional MRI studies to understand where in the brain these network associations are being formed. They are also curious as to whether people's balance of complexity and accuracy is fluid, whether people can change on their own or if they are "set," and also hope to do experiments using language inputs sometime in the future.

"After better understanding how healthy adult humans build these network models of our world, we are excited to turn to the study of psychiatric conditions like schizophrenia in which patients build inaccurate or otherwise altered models of their worlds," says Bassett. "Our initial work paves the way for new efforts in the emerging field of computational psychiatry."

Credit: 
University of Pennsylvania

To climb like a gecko, robots need toes

image: The spotted belly of a Tokay gecko used by UC Berkeley biologists to understand how the animal's five sticky toes help it climb on many types of surface.

Image: 
Yi Song

Robots with toes? Experiments suggest that climbing robots could benefit from having flexible, hairy toes, like those of geckos, that can adjust quickly to accommodate shifting weight and slippery surfaces.

Biologists from the University of California, Berkeley, and Nanjing University of Aeronautics and Astronautics observed geckos running horizontally along walls to learn how they use their five toes to compensate for different types of surfaces without slowing down.

"The research helped answer a fundamental question: Why have many toes?" said Robert Full, UC Berkeley professor of integrative biology.

As his previous research showed, geckos' toes can stick to the smoothest surfaces through the use of intermolecular forces, and uncurl and peel in milliseconds. Their toes have up to 15,000 hairs per foot, and each hair has "an awful case of split ends, with as many as a thousand nano-sized tips that allow close surface contact," he said.

These discoveries have spawned research on new types of adhesives that use intermolecular forces, or van der Waals forces, to stick almost anywhere, even underwater.

One puzzle, he said, is that gecko toes only stick in one direction. They grab when pulled in one direction, but release when peeled in the opposite direction. Yet, geckos move agilely in any orientation.

To determine how geckos have learned to deal with shifting forces as they move on different surfaces, Yi Song, a UC Berkeley visiting student from Nanjing, China, ran geckos sideways along a vertical wall while making high-speed video recordings to show the orientation of their toes. The sideways movement allowed him to distinguish downward gravity from forward running forces to best test the idea of toe compensation.

Using a technique called frustrated total internal reflection, Song also measured the area of contact of each toe. The technique made the toes light up when they touched a surface.
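As a rough illustration of how such footage can be quantified (the function name, threshold rule and pixel size below are assumptions, not the study's actual processing pipeline): because toes in contact glow, the contact area in a frame is essentially the number of bright pixels times the area each pixel covers on the wall.

```python
import numpy as np

def contact_area_mm2(frame, pixel_size_mm, threshold=None):
    """Estimate toe contact area from one FTIR video frame.

    frame: 2D array of pixel intensities; regions in contact glow.
    pixel_size_mm: edge length of one pixel on the wall, in mm.
    threshold: intensity above which a pixel counts as 'in contact'
               (default: a simple heuristic of mean + 3 standard deviations).
    """
    if threshold is None:
        threshold = frame.mean() + 3 * frame.std()
    in_contact = frame > threshold
    return in_contact.sum() * pixel_size_mm ** 2

# Toy example: a dark frame with one small bright patch "in contact".
frame = np.zeros((480, 640))
frame[200:220, 300:330] = 255.0
print(contact_area_mm2(frame, pixel_size_mm=0.05))  # 600 pixels * 0.0025 mm^2 = 1.5
```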

To the researcher's surprise, geckos ran sideways just as fast as they climbed upward, easily and quickly realigning their toes against gravity. The toes of the front and hind top feet during sideways wall-running shifted upward and acted just like toes of the front feet during climbing.

To further explore the value of adjustable toes, researchers added slippery patches and strips, as well as irregular surfaces. To deal with these hazards, geckos took advantage of having multiple, soft toes. The redundancy allowed toes that still had contact with the surface to reorient and distribute the load, while the softness let them conform to rough surfaces.

"Toes allowed agile locomotion by distributing control among multiple, compliant, redundant structures that mitigate the risks of moving on challenging terrain," Full said. "Distributed control shows how biological adhesion can be deployed more effectively and offers design ideas for new robot feet, novel grippers and unique manipulators."

Credit: 
University of California - Berkeley

New study shines light on mysterious giant viruses

In recent years, giant viruses have been unearthed in several of the world's most mysterious locations, from the thawing permafrost of Siberia to locations unknown beneath the Antarctic ice. But don't worry, "The Thing" is still a work of science fiction. For now.

In a new study, a team of Michigan State University scientists shed light on these enigmatic, yet captivating giant microbes and key aspects of the process by which they infect cells. With the help of cutting-edge imaging technologies, this study developed a reliable model for studying giant viruses and is the first to identify and characterize several key proteins responsible for orchestrating infection.

Giant viruses are bigger than 300 nanometers in size and can survive for many millennia. For comparison, the rhinovirus -- responsible for the common cold -- is roughly 30 nanometers.

"Giant viruses are gargantuan in size and complexity," said principal investigator Kristin Parent, associate professor of Biochemistry and Molecular Biology at MSU. "The giant viruses recently discovered in Siberia retained the ability to infect after 30,000 years in permafrost."

The outer shells -- or capsids -- are rugged and able to withstand harsh environments, protecting the viral genome inside. The capsids of the species analyzed in this study -- mimivirus, Antarctica virus, Samba virus and the newly discovered Tupanviruses -- are icosahedral, or shaped like a twenty-sided die.

These species have a unique mechanism for releasing their viral genome. A starfish-shaped seal sits atop one of the outer shell vertices. This unique vertex is known as the 'stargate.' During infection, the 'starfish' and 'stargate' open to release the viral genome.

During the study, several roadblocks needed to be addressed. "Giant viruses are difficult to image due to their size and previous studies relied on finding the 'one-in-a-million' virus in the correct state of infection," Parent said.

To solve this issue, Parent's graduate student Jason Schrad developed a novel method for mimicking infection stages. Using the university's new cryo-electron microscope and its scanning electron microscope, Parent's group subjected various species to an array of harsh chemical and environmental treatments designed to simulate conditions a virus might experience during the infection process. "Cryo-EM allows us to study viruses and protein structures at the atomic level and to capture them in action," Parent said. "Access to this technology is very important and the new microscope at MSU is opening new doors for research on campus."

The results revealed three environmental conditions that successfully induced stargate opening: low pH, high temperature and high salt. Moreover, each condition induced a different stage of infection.

With this new data, Parent's group designed a model to effectively and reliably mimic stages of infection for study. "This new model now allows scientists to mimic the stages reliably and with high frequency, opening the door for future study and dramatically simplifying any studies aimed at the virus," Parent said.

The results yielded several novel findings. "We discovered that the starfish seal above the stargate portal slowly unzips while remaining attached to the capsid rather than simply releasing all at once," Parent said. "Our description of a new giant virus genome release strategy signifies another paradigm shift in our understanding of virology."

With the ability to consistently recreate various stages of infection, the researchers studied the proteins released by the virus during the first stage. Proteins act as workers, orchestrating the many biological processes required for a virus to infect and hijack a cell's reproductive capabilities to make copies of itself.

"The results of this study help to assign putative -- or assumed -- roles to many proteins with previously unknown functions, highlighting the power of this new model," Parent said. "We identified key proteins released during the initial stages of infection responsible for helping mediate the process and complete the viral takeover."

As for future study? "The exact functions of many of these proteins and how they orchestrate giant virus infection are prime candidates for future study," Parent said. "Many of the proteins we identified matched proteins that one would expect to be released during the initial stages of viral infections. This greatly supports our hypothesis that the in vitro stages generated in this study are reflective of those that occur in vivo."

That many of the different giant virus types studied responded similarly in vitro leads the researchers to believe they all share common characteristics and likely similar proteins.

Whether giant viruses are capable of infecting humans -- unlike the coronavirus -- is an evolving topic of discussion amongst virologists.

Credit: 
Michigan State University

Individualized mosaics of microbial strains transfer from the maternal to the infant gut

image: Casey Morrow

Image: 
UAB

BIRMINGHAM, Ala. - Microbial communities in the intestine -- also known as the gut microbiome -- are vital for human digestion, metabolism and resistance to colonization by pathogens. The gut microbiome composition in infants and toddlers changes extensively in the first three years of life. But where do those microbes come from in the first place?

Scientists have long been able to analyze the gut microbiome at the level of the 500 to 1,000 different bacterial species that mainly have a beneficial influence; only more recently have they been able to identify individual strains within a single species using powerful genomic tools and supercomputers that analyze massive amounts of genetic data.

Researchers at the University of Alabama at Birmingham now have used their microbiome "fingerprint" method to report that an individualized mosaic of microbial strains is transmitted to the infant gut microbiome from a mother giving birth through vaginal delivery. They detailed this transmission by analyzing existing metagenomic databases of fecal samples from mother-infant pairs, as well as analyzing mouse dam and pup transmission in a germ-free, or gnotobiotic, mouse model at UAB, where the dams were inoculated with human fecal microbes.

"The results of our analysis demonstrate that multiple strains of maternal microbes -- some that are not abundant in the maternal fecal community -- can be transmitted during birth to establish a diverse infant gut microbial community," said Casey Morrow, Ph.D., professor emeritus in UAB's Department of Cell, Developmental and Integrative Biology. "Our analysis provides new insights into the origin of microbial strains in the complex infant microbial community."

The study used a strain-tracking bioinformatics tool previously developed at UAB, called Window-based Single-nucleotide-variant Similarity, or WSS. Hyunmin Koo, Ph.D., UAB Department of Genetics and Genomics Core, led the informatics analysis. The gnotobiotic mouse model studies were led by Braden McFarland, Ph.D., assistant professor in the UAB Department of Cell, Developmental and Integrative Biology.

Morrow and colleagues have used this microbe fingerprint tool in several previous strain-tracking studies. In 2017, they found that fecal donor microbes -- used to treat patients with recurrent Clostridium difficile infections -- remained in recipients for months or years after fecal transplants. In 2018, they showed that changes in the upper gastrointestinal tract through obesity surgery led to the emergence of new strains of microbes. In 2019, they analyzed the stability of new strains in individuals after antibiotic treatments, and earlier this year, they found that adult twins, ages 36 to 80 years old, shared a certain strain or strains between each pair for periods of years, and even decades, after they began living apart from each other.

In the current study, several individual-specific patterns of microbial strain-sharing were found between mothers and infants. Three mother-infant pairs showed only related strains, while a dozen other infants of mother-infant pairs contained a mosaic of maternal-related and unrelated microbes. It could be that the unrelated strains came from the mother, but they had not been the dominant strain of that species in the mother, and so had not been detected.

Indeed, a second study, using a dataset from nine women sampled at different times in their pregnancies, showed that strain variations in individual species occurred in seven of the women.

To further define the source of the unrelated strains, a mouse model was used to look at transmission from dam to pup in the absence of environmental microbes. Five different females were given transplants of different human fecal matter to create five unique humanized-microbiome mice, which were bred with gnotobiotic males. The researchers then analyzed the strains found in the human donors, the mouse dams and their mouse pups. They found four different patterns: 1) The pup's strain of a particular species was related to the dam's strain; 2) The pup's strain was related to both the dam's strain and the human donor's strain; 3) The pup's strain was related to the human donor's strain, but not to the dam's strain; and, importantly, 4) No related strains for a particular species were found between the pup, the dam and the human donor. Since these animals were bred and raised in germ-free conditions, the unrelated strains in the pups came from minor, undetected strains in the dams.

"The results of our studies support a reconsideration of the contribution of different maternal microbes to the infant enteric microbial community," Morrow said. "The constellation of microbial strains that we detected in the infants inherited from the mother was different in each mother-infant pair. Given the recognized role of the microbiome in metabolic diseases such as obesity and type 2 diabetes, the results of our study could help to further explain the susceptibility of the infant to metabolic disease found in the mother."

Credit: 
University of Alabama at Birmingham

The role of European policy for improving power plant fuel efficiency

A new study published in the Journal of the Association of Environmental and Resource Economists investigates the impact of the European Union Emissions Trading Scheme (EU ETS), the largest international cap-and-trade system for greenhouse gas emissions in the world, on power plant fuel efficiency.

In "The European Union Emissions Trading Scheme and Fuel Efficiency of Fossil Fuel Power Plants in Germany" author Robert Germeshausen studies German power plants and finds that a reduction in fuel use by fossil fuel power plants due to the introduction of the EU ETS translates into reductions in total annual carbon emissions of about 1.5 to 2 percent within the German power sector.

To put this improvement into context, this decrease in fuel input is on average equivalent to a reduction of around four to six million tons in annual carbon emissions. The results point to the role of actual investment in generation technology in improving fuel efficiency, as Germeshausen finds positive effects of the EU ETS on large investments in machinery.

The power sector is central to climate protection strategies, including those in Germany, where it accounts for around 40 percent of total annual carbon emissions. The Intergovernmental Panel on Climate Change states that reducing the carbon intensity of electricity generation (also known as decarbonizing) is a key component of cost-effective mitigation strategies. "Hence, understanding the effects of existing climate policies on the power sector is crucial for the further development of policies to achieve mitigation targets efficiently," writes Germeshausen.

The EU ETS puts a price on greenhouse gas emissions from regulated installations to achieve emission reductions and to provide incentives for investments in low-carbon technologies. Germeshausen utilizes administrative annual plant-level data covering around 85 percent of fossil fuel electricity generation in Germany from 2003 to 2012. Germany's electricity generation fleet consists of a variety of hard coal, lignite, nuclear, and natural gas power plants as well as renewable energy installations.

Germeshausen draws conclusions on the effect of carbon pricing on the optimal input combination in electricity generation and also on fuel efficiency improvements as a measure to reduce carbon emissions in the power sector. He additionally analyzes potential effects on labor efficiency, investments in machinery, and utilization of power plants.

Previous studies on productivity and efficiency effects from policies and regulation in the electricity generation sector focus mainly on the effects of deregulation; this study differs with respect to the nature of the policy influence it examines. "Understanding the impacts on regulated entities is crucial for the assessment and the further development of mitigation policies such as emission trading schemes," Germeshausen writes. Given the high variable cost share of fuel in power generation, the introduction of a carbon price may provide carbon intensive power plants with an incentive to improve fuel efficiency.

Germeshausen finds that the ETS negatively impacts the capacity factor, i.e., carbon intensive plants produce less output in relation to their potential output compared to less carbon intensive plants. "Thus, the effect should be interpreted as a positive net effect on fuel efficiency, exceeding a potential negative fuel efficiency effect from decreased utilization of carbon intensive power plants."
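For readers unfamiliar with the term, a capacity factor is simply actual output divided by the output the plant could have produced running at full capacity over the same period; the numbers below are invented purely to show the arithmetic.

```python
def capacity_factor(generation_mwh: float, capacity_mw: float, hours: float = 8760) -> float:
    """Actual output divided by the output at full nameplate capacity over the same period."""
    return generation_mwh / (capacity_mw * hours)

# Hypothetical plant: 800 MW nameplate capacity, 4.9 TWh generated in one year.
print(round(capacity_factor(4_900_000, 800), 2))  # -> 0.7
```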

Credit: 
University of Chicago Press Journals

Hayabusa2's touchdown on Ryugu reveals its surface in stunning detail

High-resolution images and video were taken by the Japanese space agency's Hayabusa2 spacecraft as it briefly landed to collect samples from Ryugu - a nearby asteroid that orbits mostly between Earth and Mars - allowing researchers to get an up-close look at its rocky surface, according to a new report. During the touchdown Hayabusa2 obtained a sample of the asteroid, which it will bring back to Earth in December 2020. The detailed new observations of Ryugu's surface during the touchdown operations help scientists understand the age and geologic history of the asteroid, suggesting that its surface color variations are likely due to rapid solar heating during a previous temporary orbital excursion near the Sun.

On February 21, 2019, after months of orbital observations to select the target location, the Hayabusa2 spacecraft descended to the surface of Ryugu to conduct its first sample collection, picking up surface material from the carbon-rich asteroid. Previous Hayabusa2 observations have shown that Ryugu's surface is composed of two different types of material, one slightly redder and the other slightly bluer. The cause of this color variation, however, remained unknown.

During Hayabusa2's touchdown, onboard cameras captured high-resolution observations of the surface surrounding the landing site in exceptional detail - including the disturbances caused by the sampling operation. Tomokatsu Morota and colleagues used these images to investigate the geology and evolution of Ryugu's surface. Unexpectedly, Morota et al. observed that Hayabusa2's thrusters disturbed a coating of dark, fine-grained material that appeared to correspond with the surface's redder materials. By relating these findings with the stratigraphy of the asteroid's craters, the authors conclude that surface reddening was caused by a short period of intense solar heating, which could be explained if Ryugu's orbit took a temporary turn towards the Sun.

Credit: 
American Association for the Advancement of Science (AAAS)

A new high-resolution, 3D map of the whole mouse brain

video: This video depicts a fusion of data in the CCF framework

The background grayscale image represents the average anatomy of 1675 individual specimens forming the basis for the common coordinate system.

The colored curved lines represent sampled streamlines. The mouse cortex is a 3D sheet organized into layers, where connections between the layers are typically perpendicular to the surface, suggesting a hypothetical columnar organization. The curvature of the cortex makes it difficult to visualize along this theoretical dimension. These streamlines are an estimate of these "verticals" based on the curved geometry.

To see whether the streamlines reflect the true curvature, we compare them with real data. The hot-metal colored image is a composite of multiple datasets visualizing the shape of thick-tufted dendrites of L5 pyramidal neurons that were selectively labeled by Cre-dependent viral tracer injection into the Sim1-Cre_KJ18 or A930038C07Rik-Tg1-Cre driver lines. Each dataset was registered to the CCF to allow overlaying data from ~100 specimens.

Image: 
Allen Institute for Brain Science

After three years of intensive data-gathering and careful drawing, the mapmakers' work was complete.

The complex terrain they charted, with all its peaks, valleys and borders, is only about half an inch long and weighs less than a jellybean: the brain of the laboratory mouse.

In a paper published today in the journal Cell, the Allen Institute mapmakers describe this cartographical feat -- the third iteration of the Allen Mouse Brain Common Coordinate Framework, or CCFv3, a complete, high-resolution 3D atlas of the mouse brain.

The framework is meant to be a reference point for the neuroscience community, its creators said. Mice are widely used in biomedical research. Their brains contain approximately 100 million cells each across hundreds of different regions. As neuroscience datasets grow larger and more complex, a common spatial map of the brain becomes more critical, as does the ability to precisely co-register many different kinds of data into a common 3D space to compare and correlate.

Think of it as the neuroscience equivalent of your phone's GPS. Instead of manually searching for your location on a paper map based on what you see around you, the GPS (and the new brain atlas) tells you where you are. With datasets in the thousands or millions of different pieces of information, that common set of coordinates -- and pinpointing the corresponding brain landmarks for those coordinates -- is crucial.

"In the old days, people would define different regions of the brain by eye. As we get more and more data, that manual curation doesn't scale anymore," said Lydia Ng, Ph.D., Senior Director of Technology at the Allen Institute for Brain Science, a division of the Allen Institute, and one of the senior authors on the atlas paper along with Julie Harris, Ph.D., Associate Director of Neuroanatomy at the Allen Institute for Brain Science. "Just as we have a reference genome sequence, you need a reference anatomy."

Enabling whole-brain studies

The whole-brain CCFv3 builds on a partial version released in 2016 that mapped the entire mouse cortex, the outermost shell of the brain. Previous versions of the atlas were lower resolution 3D maps, while CCFv3's resolution is fine enough that it can pinpoint individual cells' locations. The latest full-brain atlas has been openly available for the community since late 2017, and several different neuroscience teams have already put it to use.

Nick Steinmetz, Ph.D., an Assistant Professor at the University of Washington and an Allen Institute for Brain Science Next Generation Leader, used the atlas in a recent study that looked at neuron activity as mice choose between different images they see in a laboratory test. The study used Neuropixels, tiny electrical probes that can capture the activity of hundreds of neurons at once across several different brain regions.

As they were analyzing their data, it became clear that more parts of the brain were involved in this visual choice than they previously realized, Steinmetz said. They would have to take a big-picture view, and the CCFv3 helped them look at all their results together.

"The atlas was a really necessary resource that enabled the very idea of doing studies at the brain-wide level," Steinmetz said. "When you're recording from hundreds of sites across the brain, that introduces a new scale of investigation. You have to have a bigger view of where all the recording sites are, and the CCF is what made that possible."

An evolving atlas

To make the atlas, the researchers broke up the brain into tiny virtual 3D blocks, known as voxels, and assigned each block a unique coordinate. The data that fed into that 3D construction came from the average brain anatomy of nearly 1,700 different animals. The team then assigned each of those voxels to one of hundreds of different known regions of the mouse brain, drawing careful borders between distinct areas. The datasets that fed into these two aspects of the atlas came from several different kinds of experiments conducted at the Allen Institute over the past several years -- the atlas's backbone of different types of data makes it unique among reference brain atlases, the researchers said.
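The voxel bookkeeping itself is simple, as the toy sketch below illustrates; the array size, the 50-micrometre voxel size chosen here and the region ID are placeholders rather than the atlas's real values. Every physical coordinate maps to a voxel index, and the voxel stores the ID of the brain region it was assigned to.

```python
import numpy as np

VOXEL_UM = 50  # assumed voxel edge length for this toy volume
# Toy stand-in for the annotation volume: each voxel stores a region ID.
annotation = np.zeros((264, 160, 228), dtype=np.uint16)
annotation[100:120, 50:70, 90:110] = 385  # hypothetical region ID

def region_at(x_um, y_um, z_um, volume=annotation, voxel_um=VOXEL_UM):
    """Map a physical coordinate (in micrometres) to its atlas region ID."""
    i, j, k = (int(c // voxel_um) for c in (x_um, y_um, z_um))
    return int(volume[i, j, k])

print(region_at(5500, 3000, 5000))  # -> 385 inside the toy 'region'
```

Registering an experiment to the atlas then amounts to warping its images into this shared coordinate space, after which any recorded cell or probe site can be looked up the same way.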

Historically, brain atlases were drawn in 2D, taking sheet-like views of the brain at different depths and lining them up. For some types of data, this form of brain mapping works well. But for modern neuroscience studies looking at neuron activity or cell characteristics across the entire brain, a 3D atlas gives better context.

The researchers said future iterations of the atlas will likely rely on machine learning or other forms of automation, rather than the laborious manual curation that went into the current version.

"As we know now, atlases should be evolving and living resources, because as we learn more about how the brain is organized, we will need to make updates," Harris said. "Building atlases in an automatic, unbiased way is where the field is likely moving."

Credit: 
Allen Institute

Virgin birth has scientists buzzing

image: Cape honey bee workers laying parasitic eggs on a queen cell.

Image: 
Professor Benjamin Oldroyd/University of Sydney

In a study published today in Current Biology, researchers from the University of Sydney have identified the single gene that determines how Cape honey bees reproduce without ever having sex. One gene, GB45239 on chromosome 11, is responsible for virgin births.

"It is extremely exciting," said Professor Benjamin Oldroyd in the School of Life and Environmental Sciences. "Scientists have been looking for this gene for the last 30 years. Now that we know it's on chromosome 11, we have solved a mystery."

Behavioural geneticist Professor Oldroyd said: "Sex is a weird way to reproduce and yet it is the most common form of reproduction for animals and plants on the planet. It's a major biological mystery why there is so much sex going on and it doesn't make evolutionary sense. Asexuality is a much more efficient way to reproduce, and every now and then we see a species revert to it."

In the Cape honey bee, found in South Africa, the gene has allowed worker bees to lay eggs that produce only females, instead of the males that workers of other honey bees produce. "Males are mostly useless," Professor Oldroyd said. "But Cape workers can become genetically reincarnated as a female queen and that prospect changes everything."

But it also causes problems. "Instead of being a cooperative society, Cape honey bee colonies are riven with conflict because any worker can be genetically reincarnated as the next queen. When a colony loses its queen the workers fight and compete to be the mother of the next queen," Professor Oldroyd said.

The ability to produce daughters asexually, known as "thelytokous parthenogenesis", is restricted to a single subspecies inhabiting the Cape region of South Africa, the Cape honey bee or Apis mellifera capensis.

Several other traits distinguish the Cape honey bee from other honey bee subspecies. In particular, the ovaries of worker bees are larger and more readily activated and they are able to produce queen pheromones, allowing them to assert reproductive dominance in a colony.

These traits also lead to a propensity for social parasitism, a behaviour where Cape bee workers invade foreign colonies, reproduce and persuade the host colony workers to feed their larvae. Every year in South Africa, 10,000 colonies of commercial beehives die because of the social parasite behaviour in Cape honey bees.

"This is a bee we must keep out of Australia," Professor Oldroyd said.

The existence of Cape bees with these characters has been known for over a hundred years, but it is only recently, using modern genomic tools, that we have been able to understand the actual gene that gives rise to virgin birth.

"Further study of Cape bees could give us insight into two major evolutionary transitions: the origin of sex and the origin of animal societies," Professor Oldroyd said.

Perhaps the most exciting prospect arising from this study is the possibility of understanding how the gene actually works functionally. "If we could control a switch that allows animals to reproduce asexually, that would have important applications in agriculture, biotechnology and many other fields," Professor Oldroyd said. "For instance, many pest ant species like fire ants are thelytokous, though unfortunately it seems to be a different gene to the one found in capensis."

Credit: 
University of Sydney

Plasma electrons can be used to produce metallic films

image: A view into the vacuum chamber showing the plasma above the surface on which the metallic film is created by researchers at Linköping University, Sweden.

Image: 
Magnus Johansson/Linköping University

Computers, mobile phones and all other electronic devices contain thousands of transistors, linked together by thin films of metal. Scientists at Linköping University, Sweden, have developed a method that can use the electrons in a plasma to produce these films.

The processors used in today's computers and phones consist of billions of tiny transistors connected by thin metallic films. Scientists at Linköping University, LiU, have now shown that it is possible to create thin films of metals by allowing the free electrons in a plasma to take an active role. A plasma forms when enough energy is supplied to tear electrons away from the atoms and molecules in a gas, producing an ionised gas. In our everyday life, plasmas are used in fluorescent lamps and in plasma displays. The method developed by the LiU researchers, which uses plasma electrons to produce metallic films, is described in an article in the Journal of Vacuum Science & Technology.

"We can see several exciting areas of application, such as the manufacture of processors and similar components. With our method it is no longer necessary to move the substrate on which the transistors are created backwards and forwards between the vacuum chamber and a water bath, which happens around 15 times per processor", says Henrik Pedersen, professor of inorganic chemistry in the Department of Physics, Chemistry and Biology at Linköping University.

A common method of creating thin films is to introduce molecular vapours containing the atoms that are required for the film into a vacuum chamber. There they react with each other and with the surface on which the thin film is to be formed. This well-established method is known as chemical vapour deposition (CVD). In order to produce films of pure metal by CVD, a volatile precursor molecule is required that contains the metal of interest. Once the precursor molecules have adsorbed onto the surface, surface chemical reactions involving another molecule are required to create a metal film. These reactions require molecules that readily donate electrons to the metal ions in the precursor molecules, such that they are reduced to metal atoms, in what is known as a "reduction reaction". The LiU scientists instead turned their attention to plasmas.

"We reasoned that what the surface chemistry reactions needed was free electrons, and these are available in a plasma. We started to experiment with allowing the precursor molecules and the metal ions to land on a surface and then attract electrons from a plasma to the surface", says Henrik Pedersen.

Researchers in inorganic chemistry and in plasma physics at IFM have collaborated to demonstrate that it is possible to create thin metallic films on a surface using the free electrons in an argon plasma discharge for the reduction reactions. In order to attract the negatively charged electrons to the surface, they applied a positive electric potential to it.

The study describes work with non-noble metals such as iron, cobalt and nickel, which are difficult to reduce to metal. Traditional CVD has been compelled to use powerful molecular reducing agents in these cases. Such reducing agents are difficult to manufacture, manage and control, since their tendency to donate electrons to other molecules makes them very reactive and unstable. At the same time, the molecules must be sufficiently stable to be vaporised and introduced in gaseous form into the vacuum chamber in which the metallic films are being deposited.

"What may make the method using plasma electrons better is that it removes the need to develop and manage unstable reducing agents. The development of CVD of non-noble metals is hampered due to a lack of suitable molecular reducing agents that function sufficiently well", says Henrik Pedersen.

The scientists are now continuing with measurements that will help them understand and be able to demonstrate how the chemical reactions take place on the surface where the metallic film forms. They are also investigating the optimal properties of the plasma. They would also like to test different precursor molecules to find ways of making the metallic films purer.

The research has obtained financial support from the Swedish Research Council, and has been carried out in collaboration with Daniel Lundin, guest professor at IFM.

Credit: 
Linköping University

Global trade in soy has major implications for the climate

image: The quantity of greenhouse gases released through the production, processing and export of soybean and derivatives varies greatly from municipality to municipality and from year to year.

Image: 
© Neus Escobar et al., Global Environmental Change; DOI: 10.1016/j.gloenvcha.2020.102067

The extent to which Brazilian soy production and trade contribute to climate change depends largely on the location where soybeans are grown. This is shown by a recent study conducted by the University of Bonn together with partners from Spain, Belgium and Sweden. In some municipalities, CO2 emissions resulting from the export of soybean and derivatives are more than 200 times higher than in others. Between 2010 and 2015, the EU imported soy primarily from locations where large forest and savannah areas had previously been converted into agricultural land. The analysis is published in the journal Global Environmental Change.

Global soy trade is a major source of greenhouse gas emissions for multiple reasons. The conversion of natural vegetation into arable land is probably the most important cause, since the latter generally binds considerably less CO2 than the original ecosystems. Greenhouse gases are also released during the harvesting of soybeans and processing into derived products, the subsequent transport to ports of export and shipment.

To estimate the carbon footprint embodied in Brazil's soy exports, researchers used the Life Cycle Assessment (LCA) methodology. This allows quantifying the environmental footprint of a product, from its production until it is delivered to the importer. The researchers from the Institute for Food and Resource Economics (ILR) of the University of Bonn have performed this analysis for almost 90,000 supply chains that were identified in total soy exports from Brazil in the period 2010-2015. "Each of these 90,000 individual trade flows represents a specific combination of the producing municipality in Brazil, the location in which the soy was stored and pre-processed, the respective export and import ports, and, where applicable, the country where further processing takes place," explains the ILR researcher Dr Neus Escobar. "Put more simply, we have calculated the quantity of carbon dioxide released per tonne of soy exported through each of these supply chains."
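To make the bookkeeping concrete, here is a schematic sketch of a per-supply-chain footprint calculation; all emission factors, distances and class fields below are invented for illustration and are not the study's values. Per tonne exported, the embodied emissions are a land-use-change component and a cultivation/processing component plus one term for each transport leg.

```python
from dataclasses import dataclass

@dataclass
class SupplyChain:
    """One producing-municipality -> port -> importer route (toy numbers)."""
    land_use_change_t_per_t: float  # CO2 from land conversion allocated per tonne of soy
    cultivation_t_per_t: float      # CO2 from farming, drying and processing per tonne
    road_km: float                  # truck leg to the export port
    sea_km: float                   # shipping leg to the importer

# Hypothetical emission factors (tonnes CO2 per tonne of soy per km).
TRUCK_FACTOR = 1.1e-4
SHIP_FACTOR = 1.0e-5

def footprint_per_tonne(chain: SupplyChain) -> float:
    """Tonnes of CO2 embodied in one tonne of soy moving along this chain."""
    transport = chain.road_km * TRUCK_FACTOR + chain.sea_km * SHIP_FACTOR
    return chain.land_use_change_t_per_t + chain.cultivation_t_per_t + transport

# A route with a long truck haul and recent land conversion.
chain = SupplyChain(land_use_change_t_per_t=0.8, cultivation_t_per_t=0.3,
                    road_km=1500, sea_km=9500)
print(round(footprint_per_tonne(chain), 3))  # -> 1.36 t CO2 per tonne of soy
```

Repeating such a calculation for every combination of municipality, storage site, export port and importer is what turns one national trade flow into tens of thousands of individually resolved footprints.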

Around 90,000 soy trade flows analysed

For this purpose, the researchers used a database developed at the Stockholm Environment Institute. It traces the trade routes of agricultural commodity exports from the production region to the importer in detail. "The database also contains spatially-explicit information on the deforestation associated with the soy cultivation in the production region," says Escobar. "We supplemented it with additional data, for instance, on means of transport involved in the corresponding export route, as well as their CO2 emission intensity. This enabled us to make a very detailed assessment of the impact of soy cultivation in Brazil and subsequent transport on global greenhouse gas emissions." Interestingly, the results show large differences: "The resulting greenhouse gas emissions vary considerably from municipality to municipality, depending on underlying deforestation, cultivation practices and freight logistics," emphasizes Escobar. "The carbon footprint of some municipalities is more than 200 times larger than others. The variability is therefore much higher than so far reported in scientific literature."

The greatest CO2 emissions arise from the so-called MATOPIBA region in the northeast of the country. This region still has large areas covered with natural vegetation, particularly forests and savannahs, which have however been increasingly lost to agriculture in recent years. Furthermore, soy exports from municipalities in this region usually entail long transport distances to the ports of export, which are mostly covered by trucks due to the relatively poor infrastructure. Thus, greenhouse gas emissions from transport can be substantial and even surpass the effects of deforestation.

The researchers also investigated which countries generate particularly large quantities of greenhouse gas emissions by importing soy. The world's largest importer is, first and foremost, China; however, the European Union does not fall far behind. "Although European countries imported considerably smaller amounts of soy, between 2010 and 2015, this came primarily from areas where sizeable deforestation took place," notes Escobar.

"Regional factors can have a significant influence on the environmental impacts embodied in global agricultural trade," explains the researcher. "Our study helps to shed light on such relationships." Policymakers urgently need such information: It can help to design low-carbon supply chains, for instance with improvements in the transport infrastructure or more effective forest conservation policies. Furthermore, it can also inform consumers about the environmental implications of high meat consumption, such as in many EU countries: A large proportion of the soy imported by Europe is used as animal feed.

Credit: 
University of Bonn

A closer look at superconductors

image: Deciphering previously invisible dynamics in superconductors -- Higgs spectroscopy could make this possible: Using cuprates, a high-temperature superconductor, as an example, an international team of researchers has been able to demonstrate the potential of the new measurement method. By applying a strong terahertz pulse (frequency ω), they stimulated and continuously maintained Higgs oscillations in the material (2ω). Driving the system resonantly at the eigenfrequency of the Higgs oscillations in turn leads to the generation of characteristic terahertz light with tripled frequency (3ω).

Image: 
HZDR / Juniks

From sustainable energy to quantum computers: high-temperature superconductors have the potential to revolutionize today's technologies. Despite intensive research, however, we still lack the necessary basic understanding to develop these complex materials for widespread application. "Higgs spectroscopy" could bring about a watershed as it reveals the dynamics of paired electrons in superconductors. An international research consortium centered around the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the Max Planck Institute for Solid State Research (MPI-FKF) is now presenting the new measuring method in the journal Nature Communications (DOI: 10.1038/s41467-020-15613-1). Remarkably, the dynamics also reveal typical precursors of superconductivity even above the critical temperature at which the materials investigated attain superconductivity.

Superconductors transport electric current without a loss of energy. Utilizing them could dramatically reduce our energy requirements - if it weren't for the fact that superconductivity requires temperatures of -140 degrees Celsius and below. Materials only 'turn on' their superconductivity below this point. All known superconductors require elaborate cooling methods, which makes them impractical for everyday purposes. There is promise of progress in high temperature superconductors such as cuprates - innovative materials based on copper oxide. The problem is that despite many years of research efforts, their exact mode of operation remains unclear. Higgs spectroscopy might change that.

Higgs spectroscopy allows new insights into high-temperature superconductivity

"Higgs spectroscopy offers us a whole new 'magnifying glass' to examine the physical processes," Dr. Jan-Christoph Deinert reports. The researcher at the HZDR Institute of Radiation Physics is working on the new method alongside colleagues from the MPI-FKF, the Universities of Stuttgart and Tokyo, and other international research institutions. What the scientists are most keen to find out is how electrons form pairs in high-temperature superconductors.

In superconductivity, electrons combine to create "Cooper pairs", which enables them to move through the material in pairs without any interaction with their environment. But what makes two electrons pair up when their charge actually makes them repel each other? For conventional superconductors, there is a physical explanation: "The electrons pair up because of crystal lattice vibrations," explains Prof. Stefan Kaiser, one of the main authors of the study, who is researching the dynamics in superconductors at MPI-FKF and the University of Stuttgart. One electron distorts the crystal lattice, which then attracts the second electron. For cuprates, however, it has so far been unclear which mechanism acts in the place of lattice vibrations. "One hypothesis is that the pairing is due to fluctuating spins, i.e. magnetic interaction," Kaiser explains. "But the key question is: Can their influence on superconductivity and in particular on the properties of the Cooper pairs be measured directly?"

At this point "Higgs oscillations" enter the stage: In high-energy physics, they explain why elementary particles have mass. But they also occur in superconductors, where they can be excited by strong laser pulses. They represent the oscillations of the order parameter - the measure of a material's superconductive state, in other words, the density of the Cooper pairs. So much for the theory. A first experimental proof succeeded a few years ago when researchers at the University of Tokyo used an ultrashort light pulse to excite Higgs oscillations in conventional superconductors - like setting a pendulum in motion. For high-temperature superconductors, however, such a one-off pulse is not enough, as the system is damped too much by interactions between the superconducting and non-superconducting electrons and the complicated symmetry of the ordering parameter.

Terahertz light source keeps the system oscillating

Thanks to Higgs spectroscopy, the research consortium around MPI-FKF and HZDR has now achieved the experimental breakthrough for high-temperature superconductors. Their trick was to use a multi-cycle, extremely strong terahertz pulse that is optimally tuned to the Higgs oscillation and can maintain it despite the damping factors - continuously prodding the metaphorical pendulum. With the high-performance terahertz light source TELBE at HZDR, the researchers are able to send 100,000 such pulses through the samples per second. "Our source is unique in the world due to its high intensity in the terahertz range combined with a very high repetition rate," Deinert explains. "We can now selectively drive Higgs oscillations and measure them very precisely."
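The need for a multi-cycle drive can be caricatured in the same toy-oscillator picture: a single short kick to a strongly damped oscillator dies out almost immediately, whereas a long resonant drive keeps a steady oscillation going, much like repeatedly prodding the pendulum. The minimal Python sketch below uses invented numbers purely for illustration.

```python
import numpy as np

# Single kick versus continuous resonant drive for a strongly damped oscillator
# (a stand-in for the heavily damped Higgs mode of a cuprate).  Illustrative only.
w0, gamma = 1.0, 0.5                 # damping strong enough to kill free oscillations quickly
t = np.linspace(0.0, 60.0, 6000)
dt = t[1] - t[0]

def respond(drive):
    """Integrate x'' + gamma*x' + w0^2*x = drive(t) with semi-implicit Euler."""
    x, v = 0.0, 0.0
    out = np.empty_like(t)
    for i, ti in enumerate(t):
        v += (-w0**2 * x - gamma * v + drive(ti)) * dt
        x += v * dt
        out[i] = x
    return out

single_kick = respond(lambda ti: 1.0 if ti < 1.0 else 0.0)   # one short pulse
multi_cycle = respond(lambda ti: np.cos(w0 * ti))            # resonant, continuous drive

late = t > 40.0                                              # well after the initial transient
print("late-time amplitude, single kick :", float(np.abs(single_kick[late]).max()))
print("late-time amplitude, multi-cycle :", float(np.abs(multi_cycle[late]).max()))
```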

This success is owed to close cooperation between theoretical and experimental scientists. The idea was hatched at MPI-FKF; the experiment was conducted by the TELBE team, led by Dr. Jan-Christoph Deinert and Dr. Sergey Kovalev at HZDR under the then group leader Prof. Michael Gensch, who is now researching at the German Aerospace Center and TU Berlin: "The experiments are of particular importance for the scientific application of large-scale research facilities in general. They demonstrate that a high-power terahertz source such as TELBE can handle a complex investigation using nonlinear terahertz spectroscopy on a complicated series of samples, such as cuprates."

That is why the research team expects to see high demand in the future: "Higgs spectroscopy as a methodological approach opens up entirely new potentials," explains Dr. Hao Chu, primary author of the study and postdoc at the Max Planck-UBC-UTokyo Center for Quantum Materials. "It is the starting point for a series of experiments that will provide new insights into these complex materials. We can now take a very systematic approach."

Just above the critical temperature: Where does superconductivity start?

Conducting several series of measurements, the researchers first proved that their method works for typical cuprates. Below the critical temperature, the research team was not only able to excite Higgs oscillations, but also proved that a new, previously unobserved excitation interacts with the Cooper pairs' Higgs oscillations. Further experiments will have to reveal whether these interactions are magnetic interactions, as is fiercely debated in expert circles. Furthermore, the researchers saw indications that Cooper pairs can also form above the critical temperature, albeit without oscillating together. Other measuring methods have previously suggested the possibility of such early pair formation. Higgs spectroscopy could support this hypothesis and clarify when and how the pairs form and what causes them to oscillate together in the superconductor.

Credit: 
Helmholtz-Zentrum Dresden-Rossendorf

New simple method for measuring the state of lithium-ion batteries

image: Yinan Hu, a member of Professor Dmitry Budker's research group at JGU, holding a battery cell, alongside a device which measures the state of charge

Image: 
photo/©: Arne Wickenbrock

Rechargeable batteries are at the heart of many new technologies involving, for example, the increased use of renewable energies. More specifically, they are employed to power electric vehicles, cell phones, and laptops. Scientists at Johannes Gutenberg University Mainz (JGU) and the Helmholtz Institute Mainz (HIM) in Germany have now presented a non-contact method for detecting the state of charge and any defects in lithium-ion batteries. For this purpose, atomic magnetometers are used to measure the magnetic field around battery cells. Professor Dmitry Budker and his team usually use atomic magnetometry to explore fundamental questions of physics, such as the search for new particles. Magnetometry is the term used to describe the measurement of magnetic fields. One simple example of its application is the compass, which the Earth's magnetic field causes to point north.

Non-contact quality assurance of batteries using atomic magnetometers

The demand for high-capacity rechargeable batteries is growing, and so is the need for sensitive, accurate diagnostic technology for determining the state of a battery cell. The success of many new developments will depend on whether batteries can be produced that deliver sufficient capacity and a long effective life span. "Undertaking the quality assurance of rechargeable batteries is a significant challenge. Non-contact methods can potentially provide fresh stimulus for improvement in batteries," said Dr. Arne Wickenbrock, a member of Professor Dmitry Budker's work group at the JGU Institute of Physics and the Helmholtz Institute Mainz. The group has achieved a breakthrough by using atomic magnetometers to take measurements. The idea came about during a teleconference between Budker and his colleague Professor Alexej Jerschow of New York University. They developed a concept and, in close cooperation between the two groups, carried out the related experiments in Mainz.

"Our technique works in essentially the same way as magnetic resonance imaging, but it is much simpler because we use atomic magnetometers," said Wickenbrock, who is part of the team conducting the investigations. Atomic magnetometers are optically pumped magnetometers that use atoms in gaseous form as probes for a magnetic field. They are commercially available and are used in industrial applications as well as fundamental research. Budker's group at JGU and HIM, which also develops advanced magnetic sensors of their own, uses these atomic magnetometers for fundamental research in physics, such as in the search for dark matter and in attempts to solve the riddle as to why matter and antimatter did not immediately annihilate each other after the Big Bang.

Simple method enables fast, high-throughput measurements

In the case of battery measurements, the batteries are placed in a background magnetic field. The batteries alter this background field, and the change is measured using atomic magnetometers. "The change gives us information about the state of charge of the battery, about how much charge is left in the battery, and about possible damage," added Wickenbrock. "The process is fast and, in our opinion, can be easily integrated into production processes." Recurring reports of serious injuries resulting from exploding e-cigarettes and the restrictions on taking certain types of cell phones on airplanes show that there is a clear need for methods of detecting defects in battery cells.
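To make the measurement idea concrete, here is a minimal, purely hypothetical Python sketch: a cell perturbs an applied background field, and the measured perturbation is matched against calibration maps recorded at known states of charge. The grid, field values, calibration data, and the least-squares matching are all invented for illustration and are not the group's actual analysis.

```python
import numpy as np

def field_perturbation(measured_map, background_map):
    """Field change caused by the cell: measured map minus the background map."""
    return measured_map - background_map

def estimate_state_of_charge(perturbation, calibration):
    """Return the calibration state of charge whose map matches best (least squares)."""
    socs, maps = zip(*calibration)
    errors = [np.sum((perturbation - m) ** 2) for m in maps]
    return socs[int(np.argmin(errors))]

rng = np.random.default_rng(0)
grid = (8, 8)                                    # 8 x 8 grid of magnetometer readings
background = np.full(grid, 50.0)                 # uniform background field, arbitrary units

# Invented calibration maps at 0 %, 50 % and 100 % state of charge.
calibration = [(soc, 0.01 * soc * rng.standard_normal(grid)) for soc in (0, 50, 100)]

# "Measured" map: background plus the 50 % pattern plus a little sensor noise.
measured = background + calibration[1][1] + 0.05 * rng.standard_normal(grid)

delta = field_perturbation(measured, background)
print("estimated state of charge:", estimate_state_of_charge(delta, calibration), "%")
```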

"The diagnostic power of this technique is promising for the assessment of cells in research, for quality control, or during operation," the authors stated in their recent PNAS paper. Last summer, the same work group organized two events on applied atomic and nuclear physics with high-level international participation. About 200 researchers from all over the world addressed current questions of atomic magnetometry and other forms of quantum measurement techniques.

Credit: 
Johannes Gutenberg Universitaet Mainz

Beer was here! A new microstructural marker for malting in the archaeological record

image: The bowl-shaped charred cereal product ("Brei mit napfförmiger Oberfläche", i.e. mash with a bowl-shaped surface) from Hornstaad-Hörnle IA.
Find no. Ho 45/43-28. Top: light micrograph (red square: location of the SEM subsample); bottom: SEM images. Left: patch of regularly arranged aleurone cells (A) with a conspicuous intercellular space (*) in between; L: longitudinal cells. Right: fracture through the outer caryopsis layers; the multiple aleurone layers (A1-A3) identify the material as cultivated barley (Hordeum vulgare), as do the thin-walled transverse cells (T). SE: starchy endosperm (fused remains); N?: probably nucellus tissue; L?: probably longitudinal cells; E: epidermis (abraded). Images: ÖAW-ÖAI / N. Gail (light micrograph), A. G. Heiss (SEM)

Image: 
Heiss et al, 2020 (PLOS ONE, CC BY)

A new method for reliably identifying the presence of beer or other malted foodstuffs in archaeological finds is described in a study published May 6, 2020 in the open-access journal PLOS ONE by Andreas G. Heiss from the Austrian Academy of Sciences (OeAW), Austria and colleagues.

A beverage with prehistoric roots, beer played ritual, social, and dietary roles across ancient societies. However, it's not easy to positively identify archaeological evidence of cereal-based alcoholic beverages like beer, since most clear markers for beer's presence lack durability or reliability.

To explore potential microstructural alterations in brewed cereal grains, Heiss and colleagues simulated archaeological preservation of commercially available malted barley via charring (malting is the first step in the beer-brewing process). They compared these experimental grains with ancient grains from five archaeological sites dating to the 4th millennium BCE: two known beer-brewing sites in Predynastic Egypt and three central European lakeshore settlements where cereal-based foods were found in containers but the presence of beer was not confirmed.

Using electron microscopy, the authors found that their experimental barley grains had unusually thin aleurone cell walls (the aleurone layer, which is specific to grains of the grass family Poaceae, is the tissue forming the outermost layer of the endosperm). The archaeological grain samples from all five prehistoric sites showed the same aleurone cell wall thinning.

Although there are other potential causes of this type of cell wall thinning (such as fungal decay, enzymatic activity, or degradation during heating), all of which can be ruled out with careful analysis, these results suggest that this cell wall breakdown in the grain's aleurone layer can serve as a general marker for the malting process.
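As a purely illustrative sketch of how such a marker could be applied in practice, the Python snippet below classifies a set of aleurone cell-wall thickness measurements by comparing them with reference values from malted and unmalted charred grain. Every number, including the references, is invented, and the nearest-mean comparison is an assumption rather than the authors' actual quantitative procedure.

```python
from statistics import mean

# Invented reference wall thicknesses (micrometres) from SEM images of charred
# unmalted and malted barley; real references would come from experimental
# material such as that used in the study.
reference_unmalted_um = [1.4, 1.3, 1.5, 1.2, 1.4]
reference_malted_um = [0.40, 0.50, 0.30, 0.45, 0.50]

def looks_malted(sample_um, unmalted=reference_unmalted_um, malted=reference_malted_um):
    """Classify a sample by whichever reference mean its mean thickness is closer to."""
    m = mean(sample_um)
    return abs(m - mean(malted)) < abs(m - mean(unmalted))

# Hypothetical measurements from a charred crust on a potsherd.
archaeological_sample_um = [0.50, 0.35, 0.60, 0.40]
print("aleurone walls consistent with malting:", looks_malted(archaeological_sample_um))
```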

This new diagnostic feature for confirming the presence of beer (or other malted beverages/foodstuffs) in artifacts works even if no intact grains are present. A novel tool for identifying the possible presence of beer in archaeological sites where no further evidence of beer-making or -drinking is preserved, this method promises to broaden our knowledge of prehistoric malting and brewing.

The authors note: "Structural changes in the germinating grain, described decades ago by plant physiologists and brewing scientists alike, have now successfully been turned into a diagnostic feature for archaeological malt, even if the grains concerned are only preserved as pulverized and burnt crusts on pottery. A "small side effect" is the confirmation of the production of malt-based drinks (and beer?) in central Europe as early as the 4th millennium BC." Dr Heiss adds, "For over a year, we kept checking our new feature until we (and the reviewers) were happy. However, it took us quite a while to realize that en passant we had also provided the oldest evidence for malt-based food in Neolithic central Europe."

Credit: 
PLOS