2D materials offer unique stretching properties

Like most materials, an elastic band gets thinner when it is stretched. But some materials behave in the opposite way -- they grow thicker when stretched and thinner when compressed. These counterintuitive substances, known as auxetic materials, tend to have a high resistance to shear or fracture and are used in applications such as medical implants and sensors. But typically, this auxetic effect is only seen when the material is distorted in one particular direction.

Now, Minglei Sun and Udo Schwingenschlögl have predicted that a group of carbon-based materials, formed into atom-thin sheets, should show this auxetic effect in every direction. This phenomenon has never been observed before in any 2D anisotropic material, a growing family of flat materials that includes several potentially auxetic candidates.

The KAUST researchers calculated several key characteristics of three 2D materials called carbon sulfide, carbon selenide and carbon telluride, which unite carbon with elements collectively known as chalcogens. The calculations rely on density functional theory, a commonly used approach based on quantum mechanics, and they describe characteristics of the materials such as their structural stability, mechanical behavior and electronic properties.

All auxetic materials have a negative Poisson's ratio, a number that describes how a material deforms when it is stretched or compressed. But the researchers found that the three materials are uniquely auxetic because they have an omnidirectional negative Poisson's ratio. "We were surprised that we found a series of 2D anisotropic materials with negative Poisson's ratio in all directions," says Sun.
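The sign convention is simple enough to show in a few lines. This sketch uses hypothetical strain values, not the values computed in the study:

```python
# Minimal illustration (made-up strain values): Poisson's ratio is the
# negative ratio of transverse strain to axial strain. A positive ratio
# means a material thins when stretched; a negative ratio -- the
# auxetic case -- means it thickens.

def poisson_ratio(axial_strain: float, transverse_strain: float) -> float:
    """nu = -(transverse strain) / (axial strain)."""
    return -transverse_strain / axial_strain

# Conventional rubber band: stretch 1% along its length, thin 0.3% across.
conventional = poisson_ratio(0.01, -0.003)   # nu is positive (+0.3)

# Auxetic sheet: stretch 1% along one axis, EXPAND 0.2% across.
auxetic = poisson_ratio(0.01, 0.002)         # nu is negative (-0.2)

print(conventional, auxetic)
```

An omnidirectional auxetic material has a negative ratio no matter which in-plane direction the axial strain is applied along.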

Sun and Schwingenschlögl's calculations predict that all three materials should be stable at room temperature, suggesting that it may be possible to synthesize and isolate them. They also explain the materials' omnidirectional auxetic effect in terms of their crystal structures and chemical bonding. Carbon telluride shows the strongest auxetic effect, which is larger in all directions than the highest values seen in most other 2D auxetic materials. It also has the highest fracture strain of the three materials investigated by the KAUST researchers.

According to the team, the materials should be semiconductors that are able to absorb near-infrared or visible light. The three carbon chalcogenides "turn out to be direct or quasi-direct bandgap semiconductors with impressive absorption of solar radiation," says Sun. This implies that the materials might be useful in photovoltaic devices or as light-powered catalysts. "Our next step is to predict more 2D auxetic materials with negative Poisson's ratio in all directions," says Schwingenschlögl.

Credit: 
King Abdullah University of Science & Technology (KAUST)

Open source tool can help identify gerrymandering in voting maps

image: Visualization of sampled county-preserving Virginia Congressional voting districts, created with the ReCom method in GerryChain.

Image: 
Daryl DeFord, Washington State University

PULLMAN, Wash. -- With state legislatures nationwide preparing for the once-a-decade redrawing of voting districts, a research team has developed a better computational method to help identify improper gerrymandering designed to favor specific candidates or political parties.

In an article in the Harvard Data Science Review, the researchers describe the improved mathematical methodology of an open source tool called GerryChain. The tool can help observers detect gerrymandering in a voting district plan by creating a pool, or ensemble, of alternate maps that also meet legal voting criteria. This map ensemble can show whether the proposed plan is an extreme outlier--one that deviates sharply from the norm of plans generated without bias, and is therefore likely to have been drawn with partisan goals in mind.
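The outlier logic can be sketched in a few lines of plain Python. This is only an illustration of the ensemble idea with invented numbers, not GerryChain's actual API:

```python
import random

# Sketch of the ensemble-outlier idea (hypothetical numbers, not the
# GerryChain library's API): compare a proposed plan's partisan
# statistic against the same statistic computed over thousands of
# neutrally generated alternative plans.

random.seed(0)

def seats_won(plan):
    """Districts won by party A, given district-level vote shares."""
    return sum(1 for share in plan if share > 0.5)

# Stand-in ensemble: 10,000 neutral 10-district plans, with district
# vote shares drawn around a 50/50 statewide split.
ensemble = [[random.gauss(0.5, 0.05) for _ in range(10)]
            for _ in range(10_000)]
ensemble_seats = sorted(seats_won(p) for p in ensemble)

# A proposed plan engineered to win 9 of 10 districts for party A.
proposed = [0.55] * 9 + [0.20]
rank = sum(1 for s in ensemble_seats if s < seats_won(proposed))
percentile = rank / len(ensemble_seats)

# A plan sitting in the extreme tail of the ensemble is an outlier
# relative to neutrally drawn alternatives.
print(f"proposed plan beats {percentile:.1%} of the ensemble")
```

In a real analysis the ensemble would be generated by a Markov chain over legally valid plans rather than drawn from a toy distribution, but the comparison step is the same.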

An earlier version of GerryChain was used to analyze maps proposed to remedy the Virginia House of Delegates districts that a federal court ruled in 2018 were unconstitutional racial gerrymanders. The updated tool will likely play a role in the upcoming redistricting using new census data.

"We wanted to build an open-source software tool and make that available to people interested in reform, especially in states where there are skewed baselines," said Daryl DeFord, assistant mathematics professor at Washington State University and a co-lead author on the paper. "It can be an impactful way for people to get involved in this process, particularly going into this year's redistricting cycle where there are going to be a lot of opportunities for pointing out less than optimal behavior."

The GerryChain tool, first created by a team led by DeFord as a part of the 2018 Voting Rights Data Institute, has already been downloaded 20,000 times. The new paper, authored by DeFord along with Moon Duchin of Tufts University and Justin Solomon of the Massachusetts Institute of Technology, focuses on how the mathematical and computational models implemented in GerryChain can be used to put proposed voting districting plans into context by creating large samples of alternative valid plans for comparison. These alternate plans are often used when a voting plan is challenged in court as being unfair, as well as to analyze potential impacts of redistricting reform.

For instance, the enacted 2010 House of Delegates plan in Virginia had 12 voting districts with a Black voting age population at or above 55%. By comparing that plan against an ensemble of alternate plans that all fit the legal criteria, advocates showed that the map was an extreme outlier of what was possible. In other words, it was likely drawn intentionally to "pack" Black voters into some districts in order to "crack" other districts, breaking the influence of those voters.

One of the biggest challenges to creating voting maps is the sheer number of possibilities, DeFord said. Many states, like Virginia, have hundreds of thousands of census blocks. They also have many rules and goals for structuring voting districts, such as keeping them geographically contiguous and compact, with units like counties and cities intact. Many states also want to protect "communities of interest," an often undefined term, and the federal Voting Rights Act explicitly aims to protect minority voters, since historically, gerrymanders have sought to weaken the effect of their vote. In addition, multiple states require that voting maps be drawn with an attempt at political neutrality.

Even with all these rules, voting maps can still be drawn in a myriad of different ways.

"There are more feasible plans in a lot of states than there are molecules in the universe," DeFord said. "That's why you want this kind of mathematical tool."

Since the advent of computers, models have made it possible to generate an array of alternative maps. Before the current version of GerryChain, many models used a method called a "flip walk" to create alternatives, which involves changing just one assignment at a time, such as a precinct or census block. Every change has a ripple effect on other districts, resulting in a different map.

The tool developed by DeFord and his colleagues uses a method called spanning-tree recombination, or "ReCom" for short. To create an alternative voting map, the method takes two districts and merges them together before splitting them apart again in a different way. This creates a greater change, with multiple voting units changing at a time.
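The contrast between the two kinds of steps can be sketched on a toy example. The grid, districts, and balance tolerance below are invented for illustration; real implementations also enforce contiguity, population balance, and other legal criteria:

```python
import random

# Toy contrast between a "flip" step and a "ReCom" step on a
# hypothetical 4x4 grid of 16 units split into two districts of 8.
# A sketch of the two ideas, not GerryChain's implementation.

random.seed(1)
SIZE = 4
cells = list(range(SIZE * SIZE))

def neighbors(c):
    r, col = divmod(c, SIZE)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, col + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield nr * SIZE + nc

# Initial plan: left half is district 0, right half is district 1.
assignment = {c: (0 if c % SIZE < SIZE // 2 else 1) for c in cells}

def flip_step(assign):
    """Flip walk: reassign ONE boundary unit to the other district."""
    new = dict(assign)
    boundary = [c for c in cells
                if any(assign[n] != assign[c] for n in neighbors(c))]
    new[random.choice(boundary)] ^= 1
    return new

def recom_step(assign):
    """ReCom: merge the two districts, draw a random spanning tree over
    the merged region, and cut one tree edge to re-split it."""
    while True:
        root = random.choice(cells)
        parent, seen, frontier = {}, {root}, [root]
        while frontier:  # random-order search -> random spanning tree
            c = frontier.pop(random.randrange(len(frontier)))
            for n in neighbors(c):
                if n not in seen:
                    seen.add(n)
                    parent[n] = c
                    frontier.append(n)
        for node in random.sample(list(parent), len(parent)):
            below = {node}          # subtree hanging below the cut edge
            changed = True
            while changed:
                changed = False
                for k, p in parent.items():
                    if p in below and k not in below:
                        below.add(k)
                        changed = True
            if 5 <= len(below) <= 11:  # crude "population" tolerance
                return {c: (0 if c in below else 1) for c in cells}

after_flip = flip_step(assignment)
after_recom = recom_step(assignment)
flip_changes = sum(assignment[c] != after_flip[c] for c in cells)
recom_changes = sum(assignment[c] != after_recom[c] for c in cells)
print(flip_changes, "unit changed by the flip step")
print(recom_changes, "units changed by the ReCom step")
```

A flip step always moves exactly one unit, so the chain explores the space of plans slowly; a ReCom step can redraw a large portion of two districts at once.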

The computational tool can create many alternative voting plans within a matter of hours or days, and it is freely available for use by voting reform groups or anyone who has knowledge of Python, the programming language it is built on.

The authors emphasize, however, that computers alone shouldn't create the voting plan that is ultimately adopted for use. Rather the ensemble method provides a tool for analyzing baselines and evaluating potential alternatives.

"This is not some sort of magic black box where you push the button, and you get a collection of perfect plans," said DeFord. "It really requires serious engagement with social scientists and legal scholars. Because the rules are written and implemented by people, this is a fundamentally human process."

Credit: 
Washington State University

Physicists find a novel way to switch antiferromagnetism on and off

When you save an image to your smartphone, those data are written onto tiny transistors that are electrically switched on or off in a pattern of "bits" to represent and encode that image. Most transistors today are made from silicon, an element that scientists have managed to switch at ever-smaller scales, enabling billions of bits, and therefore large libraries of images and other files, to be packed onto a single memory chip.

But growing demand for data, and the means to store them, is driving scientists to search beyond silicon for materials that can push memory devices to higher densities, speeds, and security.

Now MIT physicists have shown preliminary evidence that data might be stored as faster, denser, and more secure bits made from antiferromagnets.

Antiferromagnetic, or AFM, materials are the lesser-known cousins of ferromagnets, or conventional magnetic materials. Where the electrons in ferromagnets spin in synchrony -- a property that allows a compass needle to point north, collectively following the Earth's magnetic field -- electrons in an antiferromagnet prefer the opposite spin to their neighbors', an "antialignment" that effectively quenches magnetization even at the smallest scales.

The absence of net magnetization in an antiferromagnet makes it impervious to any external magnetic field. If they were made into memory devices, antiferromagnetic bits could protect any encoded data from being magnetically erased. They could also be made into smaller transistors and packed in greater numbers per chip than traditional silicon.

Now the MIT team has found that by doping extra electrons into an antiferromagnetic material, they can turn its collective antialigned arrangement on and off in a controllable way. They found this magnetic transition is reversible and sufficiently sharp, similar to switching a transistor's state from 0 to 1. The results, published today in Physical Review Letters, demonstrate a potential new pathway to using antiferromagnets as digital switches.

"An AFM memory could enable scaling up the data storage capacity of current devices -- same volume, but more data," says the study's senior author Riccardo Comin, assistant professor of physics at MIT.

Comin's MIT co-authors include lead author and graduate student Jiarui Li, along with Zhihai Zhu, Grace Zhang, and Da Zhou; as well as Robert Green of the University of Saskatchewan; Zhen Zhang, Yifei Sun, and Shriram Ramanathan of Purdue University; Ronny Sutarto and Feizhou He of the Canadian Light Source; and Jerzy Sadowski at Brookhaven National Laboratory.

Magnetic memory

To improve data storage, some researchers are looking to MRAM, or magnetoresistive RAM, a type of memory system that stores data as bits made from conventional magnetic materials. In principle, an MRAM device would be patterned with billions of magnetic bits. To encode data, the direction of a local magnetic domain within the device is flipped, similar to switching a transistor from 0 to 1.

MRAM systems could potentially read and write data faster than silicon-based devices and could run with less power. But they could also be vulnerable to external magnetic fields.

"The system as a whole follows a magnetic field like a sunflower follows the sun, which is why, if you take a magnetic data storage device and put it in a moderate magnetic field, information is completely erased," Comin says.

Antiferromagnets, in contrast, are unaffected by external fields and could therefore be a more secure alternative to MRAM designs. An essential step toward encodable AFM bits is the ability to switch antiferromagnetism on and off. Researchers have found various ways to accomplish this, mostly by using electric current to switch a material from its orderly antialignment, to a random disorder of spins.

"With these approaches, switching is very fast," says Li. "But the downside is, every time you need a current to read or write, that requires a lot of energy per operation. When things get very small, the energy and heat generated by running currents are significant."

Doped disorder

Comin and his colleagues wondered whether they could achieve antiferromagnetic switching in a more efficient manner. In their new study, they work with neodymium nickelate, an antiferromagnetic oxide grown in the Ramanathan lab. This material exhibits nanodomains of nickel atoms whose spins are antialigned with those of their neighbors, held together by oxygen and neodymium atoms. The researchers had previously mapped the material's fractal properties.

Since then, the researchers have looked to see if they could manipulate the material's antiferromagnetism via doping -- a process that intentionally introduces impurities in a material to alter its electronic properties. In their case, the researchers doped neodymium nickel oxide by stripping the material of its oxygen atoms.

When an oxygen atom is removed, it leaves behind two electrons, which are redistributed among the other nickel and oxygen atoms. The researchers wondered whether stripping away many oxygen atoms would result in a domino effect of disorder that would switch off the material's orderly antialignment.

To test their theory, they grew 100-nanometer-thin films of neodymium nickel oxide and placed them in an oxygen-starved chamber, then heated the samples to temperatures of 400 degrees Celsius to encourage oxygen to escape from the films and into the chamber's atmosphere.

As they removed progressively more oxygen, they studied the films using advanced magnetic X-ray crystallography techniques to determine whether the material's magnetic structure was intact, implying that its atomic spins remained in their orderly antialignment and therefore retained antiferromagnetism. If their data showed a lack of ordered magnetic structure, it would be evidence that the material's antiferromagnetism had switched off due to sufficient doping.

Through their experiments, the researchers were able to switch off the material's antiferromagnetism at a certain critical doping threshold. They could also restore antiferromagnetism by adding oxygen back into the material.

Now that the team has shown doping effectively switches AFM on and off, scientists might use more practical ways to dope similar materials. For instance, silicon-based transistors are switched using voltage-activated "gates," where a small voltage is applied to a bit to alter its electrical conductivity. Comin says that antiferromagnetic bits could also be switched using suitable voltage gates, which would require less energy than other antiferromagnetic switching techniques.

"This could present an opportunity to develop a magnetic memory storage device that works similarly to silicon-based chips, with the added benefit that you can store information in AFM domains that are very robust and can be packed at high densities," Comin says. "That's key to addressing the challenges of a data-driven world."

Credit: 
Massachusetts Institute of Technology

Researchers speed identification of DNA regions that regulate gene expression

image: Corresponding author Yong Cheng, Ph.D., of the St. Jude Departments of Hematology and Computational Biology, helped develop a highly efficient method for identifying the genetic switches that regulate gene expression.

Image: 
St. Jude Children's Research Hospital

St. Jude Children's Research Hospital scientists have developed an integrated, high-throughput system to better understand and possibly manipulate gene expression for treatment of disorders such as sickle cell disease and beta thalassemia. The research appears today in the journal Nature Genetics.

Researchers used the system to identify dozens of DNA regulatory elements that act together to orchestrate the switch from fetal to adult hemoglobin expression. The method can also be used to study other diseases that involve gene regulation.

Regulatory elements, also called genetic switches, are scattered throughout non-coding regions of DNA. These regions do not encode genes and make up about 98% of the genome. The elements have a variety of names--enhancer, repressor, insulator and more--but the specific genes they regulate, how the regulatory elements act together, and answers to other questions have been unclear.

"Without the high-throughput system, identifying key regulatory elements is often extremely slow," said corresponding author Yong Cheng, Ph.D., of the St. Jude Departments of Hematology and Computational Biology. Mitchell Weiss, M.D., Ph.D., Hematology chair, is co-corresponding author.

"For example, despite decades of research, fewer than half of regulatory elements and the associated genetic variants that account for fetal hemoglobin levels have been identified," Cheng said.

Precision editing provides key details about regulation of gene expression

The new system combines bioinformatic prediction algorithms and an adenine base-editing tool with tests that measure how base editing affects gene expression. Base editing works more precisely than conventional gene-editing tools such as CRISPR/Cas9 by changing a single letter in the four-letter DNA alphabet at high efficiency, without creating larger insertions or deletions.

Researchers used the base editor ABEmax to make 10,156 specific edits in 307 regulatory elements that were predicted to affect fetal hemoglobin expression. Fetal hemoglobin expression can modify the severity of hemoglobin disorders such as sickle cell disease. The edits changed the DNA bases adenine and thymine to guanine and cytosine. The study focused on regulatory elements in the genes BCL11A, MYB-HBS1L, KLF1 and beta-like globin genes.
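At the sequence level, an adenine base edit can be pictured with a short sketch. The snippet and editing window below are invented for illustration, not sequences from the study:

```python
# Illustrative sketch of what an adenine base editor (ABE) does at the
# sequence level: it converts A-T base pairs to G-C base pairs -- the A
# on the targeted strand becomes G, and the paired T on the other
# strand reads as C -- without cutting both DNA strands.

def abe_edit(seq: str, window: range) -> str:
    """Replace A with G inside the editing window of the targeted strand."""
    return "".join(
        "G" if (i in window and base == "A") else base
        for i, base in enumerate(seq)
    )

def complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))

site = "TTACGATAGC"                   # hypothetical snippet of a regulatory element
edited = abe_edit(site, range(2, 7))  # ABEs act within a narrow window
print(site, "->", edited)             # TTACGATAGC -> TTGCGGTAGC

# The complementary strand's T (paired with each edited A) reads as C.
print(complement(site), "->", complement(edited))
```

Because only the targeted single letters change, the surrounding regulatory element stays intact, which is what lets the screen attribute any change in gene expression to that one base.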

Using this approach, the scientists validated the few known regulatory elements of fetal hemoglobin expression and identified many new ones.

Credit: 
St. Jude Children's Research Hospital

PCB contamination in Icelandic orcas: a matter of diet

image: These killer whales may appear healthy, but a new study has found extremely high levels of PCB contamination in some of the whales. There was a 300-fold difference between the levels of PCBs among the most contaminated orcas compared to the least contaminated ones. The variation was mainly due to their eating habits.

Image: 
Filipa Samarra - Icelandic Orca Project

A new study from McGill University suggests that some Icelandic killer whales have very high concentrations of PCBs (polychlorinated biphenyls) in their blubber. But it seems that other orcas from the same population have levels of PCBs that are much lower. It mainly depends on what they eat.

PCBs are industrial chemicals that were banned decades ago, after they were found to affect the health of both humans and wildlife. But because they degrade very slowly after being released into the environment, they still accumulate in the bodies of marine mammals.

After collecting skin and blubber biopsies from 50 orcas in Iceland, the researchers found considerable variation in contaminant concentrations and profiles across the population. The killer whales that ate a mixed diet of both marine mammals (such as seals and porpoises) and fish (mainly herring) had concentrations of PCBs in their blubber that were, on average, up to 9 times higher than those of the killer whales that eat mainly fish. This finding unexpectedly contradicts earlier research that had found relatively low levels of PCBs in Icelandic orcas. The researchers argue that future assessments of the state of killer whale populations should take into account a factor that has previously been overlooked: individual variation in food sources, which may lead to elevated health risks from PCB exposure for some individuals within populations of the world's ultimate marine predator.

Exceeding known toxicity thresholds

"Killer whales are the ultimate marine predators and because they are at the top of the food web, they are among the most contaminated animals on the planet," explains Melissa McKinney, an Assistant Professor in McGill's Department of Natural Resource Sciences and the Canada Research Chair in Ecological Change and Environmental Stressors. She is the senior author on the study, which was published recently in Environmental Science and Technology.

"The concentrations of PCBs that we found in the whales that ate a mixed diet exceeded all known toxicity thresholds and are likely to affect both their immune and reproductive systems, putting their health at risk."

"The next step for us is to assess the proportion of marine mammals in the diets of these Icelandic and other North Atlantic orcas," adds Anaïs Remili, the first author on the study and a PhD candidate in McGill's Department of Natural Resource Sciences. "We also plan to put together a large dataset of contaminants in orcas across the Atlantic Ocean to contribute to their conservation efforts by quantifying potential health risks."

Credit: 
McGill University

Feeling younger buffers older adults from stress, protects against health decline

WASHINGTON -- People who feel younger have a greater sense of well-being, better cognitive functioning, less inflammation, lower risk of hospitalization and even live longer than their older-feeling peers. A study published by the American Psychological Association suggests one potential reason for the link between subjective age and health: Feeling younger could help buffer middle-aged and older adults against the damaging effects of stress.

In the study, published in Psychology and Aging, researchers from the German Centre of Gerontology analyzed three years of data from 5,039 participants in the German Ageing Survey, a longitudinal survey of residents of Germany age 40 and older. The survey included questions about the amount of perceived stress in people's lives and their functional health - how much they were limited in daily activities such as walking, dressing and bathing. Participants also indicated their subjective age by answering the question, "How old do you feel?"

The researchers found, on average, participants who reported more stress in their lives experienced a steeper decline in functional health over three years, and that link between stress and functional health decline was stronger for chronologically older participants.

However, subjective age seemed to provide a protective buffer. Among people who felt younger than their chronological age, the link between stress and declines in functional health was weaker. That protective effect was strongest among the oldest participants.

"Generally, we know that functional health declines with advancing age, but we also know that these age-related functional health trajectories are remarkably varied. As a result, some individuals enter old age and very old age with quite good and intact health resources, whereas others experience a pronounced decline in functional health, which might even result in need for long-term care," said study lead author Markus Wettstein, PhD, who is now at University of Heidelberg. "Our findings support the role of stress as a risk factor for functional health decline, particularly among older individuals, as well as the health-supporting and stress-buffering role of a younger subjective age."

The results suggest that interventions that aim to help people feel younger could reduce the harm caused by stress and improve health among older adults, according to the researchers - though further study is needed to help determine what kind of interventions would work best. For example, Wettstein said, messaging campaigns to counteract ageism and negative age stereotypes and to promote positive views on aging could help people feel younger. In addition, more general stress-reduction interventions and stress management training could prevent functional health loss among older adults, according to Wettstein.

Finally, more research is needed to figure out the ideal gap between subjective and chronological age, according to Wettstein, as previous research has suggested that it's helpful to feel younger up to a point but that benefits decrease as the gap between subjective and chronological age increases. "Feeling younger to some extent might be adaptive for functional health outcomes, whereas 'feeling too young' might be less adaptive or even maladaptive," he said.

Credit: 
American Psychological Association

Worth 1000 words: How the world saw Australia's black summer

image: Het Parool front page featuring photo by Photojournalist Matthew Abbott

Image: 
QUT

Australia's 'black summer' of bushfires was depicted on the front pages of the world's media with images of wildlife and habitat destruction, caused by climate change, while in Australia the toll on ordinary people remained the visual front-page focus.

QUT visual communication researcher Dr TJ Thomson compared the front-page bushfire imagery of the Sydney Morning Herald over three months, from November 10, 2019 to January 31, 2020, with 119 front pages from international media from the start of January, when the world sat up and took notice, to January 31.

"The international sample of front pages representing Australia's 'black summer' was dominated by the Americas and Europe (about 90 per cent). Asia represented around 7 per cent of the international sample and Oceania, excluding Australia, represented 3.3 per cent of the sample."

"Over the 83 days of the sample, 33 of the Sydney Morning Herald's front pages displayed 58 photos that were bushfire-related," Dr Thomson said.

"The domestic media's imagery portrayed the bushfires as a humanitarian crisis while overseas it was seen as an environmental crisis.

"Visual news values include impact, aesthetic appeal, proximity and personalisation, which includes events with personal angles or human presence.

"More than 80 per cent of the Herald's coverage depicted people, relying on the personalisation news value.

"The Herald focussed most heavily on firefighters in nature (36.2 per cent of all front-page images) and followed this closely with images of ordinary citizens and the effect of the fires on them (32.7 per cent in all).

"Noticeably absent were images of the affected animals and the environment which were rather sparsely represented."

Dr Thomson said that while media could not control how people interpreted situations, the media could limit the range of interpretation by controlling the information it presented and the way in which it was represented.

"By focussing on people, particularly firefighters, the Herald depicted the disaster not as a faceless calamity but as a crisis whose solution was in human hands," he said.

"The low prevalence of politicians, officials and celebrities (13.7 per cent) in the domestic sample reflects the Australian news media's power to shape the discourse and portray the issue as one that affected ordinary Australians the most.

"It was less of a political issue, despite Prime Minister Scott Morrison being criticised for going on holiday in the midst of the crisis, the government's pro-coal policies, and the ignored warnings, as far back as April 2019, of a lack of preparedness for a major bushfire season."

Dr Thomson said the environment alone was featured in only nine images on the Herald's front pages and animals ("a solitary koala") made a single image appearance.

In contrast, Australia's bushfires hit the international media in earnest after the evacuation by the Navy of 1000 fire-stranded people from the beach in Victoria in early January and continued to January 31 with 110 front pages containing 142 bushfire-related images.

An Australian photographer interviewed for the study said the international media hadn't taken any interest in the bushfires until people were having to be rescued from the beach - "that was the day it went from a big national story to a massive, international story".

"Our near neighbours, New Zealand, featured pics of their orange and smoky skies."

Dr Thomson found the most resonant photo internationally was the aerial image of a massive smoke tower rising from East Gippsland, which featured on 17 front pages.

"International media's images focused on the fires' impacts on the country's iconic flora and fauna, as 52.1 per cent of all coverage was devoid of humans and depicted only bushfire-affected landscapes or animals.

"They used high-intensity, large-in-scope images of Australia's woes as a warning to their populations to slow or halt climate change's deadly effects.

"By not focussing on the attributes that divide us (skin colour, ability, class, gender), images of the destruction of the natural environment and of Australian animal icons were prime targets for symbolic appropriation for a diverse and heterogeneous audience, because of their universality.

"From kangaroos and koalas to cattle and alpacas, international outlets featured animals more than 10 times as much as the Australian front pages.

"While kangaroos and koalas are iconic animals and symbols of Australia, they were over-represented in coverage despite not being the most affected animals: mammals accounted for only an estimated 143 million (4.7 per cent) of the three billion animals lost in the fires.

"This image won a World Press Photo Award and represents Australia's black summer if not climate change itself.

"It also lacks people but the letterbox and burning home make the human presence unmistakable. It was republished across media over time and used extensively in social media, including being posted by teenage climate activist Greta Thunberg to her millions of followers."

Dr Thomson said about 10 per cent of international coverage was critical of Australia's government for its role in the factors that contributed to the mega fires or in its handling of them.

"About 6.7 per cent of those criticised the Prime Minister by name or by title. The remaining 3.3 per cent criticised the country's government or its political leaders for their role in the disaster and its management," he said.

"For example, the Tampa Bay Times' front page on 3 January 2020 wrote: 'As record flames and devastation batter Australia, residents turn their anger on the prime minister and his policies. At least 17 people have died.'"

Credit: 
Queensland University of Technology

First nanoscale look at a reaction that limits the efficiency of generating hydrogen fuel

video: This animation combines images of a tiny, plate-like catalyst particle as it carries out a reaction that splits water and generates oxygen gas - part of a clean, sustainable process for producing hydrogen fuel. Made with an atomic force microscope in a Stanford lab, the images reveal how the catalyst changes shape and size as it operates - part of an in-depth study that showed the chemistry of the process is much different than previously assumed.

Image: 
Tyler Mefford and Andrew Akbashev/Stanford University

Transitioning from fossil fuels to a clean hydrogen economy will require cheaper and more efficient ways to use renewable sources of electricity to break water into hydrogen and oxygen.

But a key step in that process, known as the oxygen evolution reaction or OER, has proven to be a bottleneck. Today it's only about 75% efficient, and the precious metal catalysts used to accelerate the reaction, like platinum and iridium, are rare and expensive.

Now an international team led by scientists at Stanford University and the Department of Energy's SLAC National Accelerator Laboratory has developed a suite of advanced tools to break through this bottleneck and improve other energy-related processes, such as finding ways to make lithium-ion batteries charge faster. The research team described their work in Nature today.

Working at Stanford, SLAC, DOE's Lawrence Berkeley National Laboratory (Berkeley Lab) and Warwick University in the UK, they were able to zoom in on individual catalyst nanoparticles - shaped like tiny plates and about 200 times smaller than a red blood cell - and watch them accelerate the generation of oxygen inside custom-made electrochemical cells, including one that fits inside a drop of water.

They discovered that most of the catalytic activity took place on the edges of particles, and they were able to observe the chemical interactions between the particle and the surrounding electrolyte at a scale of billionths of a meter as they turned up the voltage to drive the reaction.

By combining their observations with prior computational work performed in collaboration with the SUNCAT Institute for Interface Science and Catalysis at SLAC and Stanford, they were able to identify a single step in the reaction that limits how fast it can proceed.

"This suite of methods can tell us the where, what and why of how these electrocatalytic materials work under realistic operating conditions," said Tyler Mefford, a staff scientist with Stanford and the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC who led the research. "Now that we have outlined how to use this platform, the applications are extremely broad."

Scaling up to a hydrogen economy

The idea of using electricity to break water down into oxygen and hydrogen dates back to 1800, when two British researchers discovered that they could use electric current generated by Alessandro Volta's newly invented pile battery to power the reaction.

This process, called electrolysis, works much like a battery in reverse: Rather than generating electricity, it uses electrical current to split water into hydrogen and oxygen. The reactions that generate hydrogen and oxygen gas take place on different electrodes using different precious metal catalysts.

Hydrogen gas is an important chemical feedstock for producing ammonia and refining steel, and is increasingly being targeted as a clean fuel for heavy duty transportation and long-term energy storage. But more than 95% of the hydrogen produced today comes from natural gas via reactions that emit carbon dioxide as a byproduct. Generating hydrogen through water electrolysis driven by electricity from solar, wind, and other sustainable sources would significantly reduce carbon emissions in a number of important industries.

But to produce hydrogen fuel from water on a big enough scale to power a green economy, scientists will have to make the other half of the water-splitting reaction - the one that generates oxygen - much more efficient, and find ways to make it work with catalysts based on much cheaper and more abundant metals than the ones used today.

"There aren't enough precious metals in the world to power this reaction at the scale we need," Mefford said, "and their cost is so high that the hydrogen they generate could never compete with hydrogen derived from fossil fuels."

Improving the process will require a much better understanding of how water-splitting catalysts operate, in enough detail that scientists can predict what can be done to improve them. Until now, many of the best techniques for making these observations did not work in the liquid environment of an electrocatalytic reactor.

In this study, scientists found several ways to get around those limitations and get a sharper picture than ever before.

New ways to spy on catalysts

The catalyst they chose to investigate was cobalt oxyhydroxide, which came in the form of flat, six-sided crystals called nanoplatelets. The edges were sharp and extremely thin, so it would be easy to distinguish whether a reaction was taking place on the edges or on the flat surface.

About a decade ago, Patrick Unwin's research group at the University of Warwick had invented a novel technique for putting a miniature electrochemical cell inside a nanoscale droplet that protrudes from the tip of a pipette tube. When the droplet is brought into contact with a surface, the device images the topography of the surface and electronic and ionic currents with very high resolution.

For this study, Unwin's team adapted this tiny device to work in the chemical environment of the oxygen evolution reaction. Postdoctoral researchers Minkyung Kang and Cameron Bentley moved it from place to place across the surface of a single catalyst particle as the reaction took place.

"Our technique allows us to zoom in to study extremely small regions of reactivity," said Kang, who led out the experiments there. "We are looking at oxygen generation at a scale more than one hundred million times smaller than typical techniques."

They discovered that, as is often the case for catalytic materials, only the edges were actively promoting the reaction, suggesting that future catalysts should maximize this sort of sharp, thin feature.

Meanwhile, Stanford and SIMES researcher Andrew Akbashev used electrochemical atomic force microscopy to determine and visualize exactly how the catalyst changed shape and size during operation, and discovered that the reactions that initially changed the catalyst to its active state were much different than had been previously assumed. Rather than protons leaving the catalyst to kick off the activation, hydroxide ions inserted themselves into the catalyst first, forming water inside the particle that made it swell up. As the activation process went on, this water and residual protons were driven back out.

In a third set of experiments, the team worked with David Shapiro and Young-Sang Yu at Berkeley Lab's Advanced Light Source and with a Washington company, Hummingbird Scientific, to develop an electrochemical flow cell that could be integrated into a scanning transmission X-ray microscope. This allowed them to map out the oxidation state of the working catalyst - a chemical state that's associated with catalytic activity - in areas as small as about 50 nanometers in diameter.

"We can now start applying the techniques we developed in this work toward other electrochemical materials and processes," Mefford said. "We would also like to study other energy-related reactions, like fast charging in battery electrodes, carbon dioxide reduction for carbon capture, and oxygen reduction, which allows us to use hydrogen in fuel cells."

Credit: 
DOE/SLAC National Accelerator Laboratory

Strange isotopes: Scientists explain a methane isotope paradox of the seafloor

image: The Guaymas Basin hydrothermal vents - the "home" of the studied methane-oxidizing microorganisms. The heat loving microorganisms thrive under the orange microbial mat in the background. The high temperatures of the rising waters blur parts of the image.

Image: 
Woods Hole Oceanographic Institution

Methane, a chemical compound with the molecular formula CH4, is not only a powerful greenhouse gas, but also an important energy source. It heats our homes, and even seafloor microbes make a living from it. The microbes use a process called anaerobic oxidation of methane (AOM), which happens commonly in the seafloor in so-called sulfate-methane transition zones - layers in the seafloor where sulfate from the seawater meets methane from the deeper sediment. Here, specialized microorganisms, the ANaerobically MEthane-oxidizing (ANME) archaea, consume the methane. They live in close association with bacteria, which use electrons released during methane oxidation for sulfate reduction. For this purpose, these organisms form characteristic consortia.

This process takes place globally in the seafloor and hence is an important part of the carbon cycle. However, studying the AOM process is challenging because the reaction is very slow. For its investigation, researchers often use a chemical trick: the stable isotope ratios in methane. But unfortunately, these isotopes do not always behave as expected, which has led to serious confusion about the role and function of the microbes involved. Now researchers from the Max Planck Institute for Marine Microbiology and the MARUM - Center for Marine Environmental Sciences in Germany, together with colleagues from the Weizmann Institute of Science in Israel, have solved this isotope enigma and published their results in the journal Science Advances. This paves the way for a better understanding of the important process of anaerobic methane oxidation.

Isotopes reveal reaction pathways

The puzzle and its solution in detail: Isotopes are different "versions" of an element with different masses. The isotopes of an element have the same number of protons (positively charged particles) in the nucleus and therefore the same position in the periodic table (iso topos = Greek, same place). However, they differ in the number of neutrons (neutral particles) in the nucleus. For example, carbon has two stable isotopes, the lighter 12C and the heavier 13C. Additionally, there is the familiar radioactive isotope 14C, a very rare carbon species that is used to determine the age of carbon-bearing materials. Although the chemical properties of the two stable isotopes are identical, the difference in mass results in different reaction rates. When chemical compounds react, the ones with the lighter isotopes are usually converted faster, leaving the heavier variant in the initial reactant. This change in isotopic composition is known as isotopic fractionation, and has been used for decades to track chemical reactions. In the case of methane oxidation, this means that 12C-methane is primarily consumed, leading to an enrichment of 13C in the remaining methane. Conversely, a microbial production of methane (methanogenesis) would result in particularly light methane. "Reality, however, is surprisingly different", Gunter Wegener reports. "Contrary to the logic described above, we often find very light methane in sulfate-methane transition zones."
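The textbook behaviour described above is often modelled as Rayleigh fractionation: as the reaction consumes the pool, the residual methane becomes progressively enriched in 13C. The sketch below illustrates that expectation; the fractionation factor and starting delta-13C value are illustrative assumptions, not measurements from this study.

```python
# Rayleigh fractionation sketch: the residual methane pool gets
# isotopically heavier (less negative delta-13C) as more of it is
# consumed. alpha and delta0 are assumed, illustrative values.
alpha = 0.980    # kinetic fractionation factor (<1: 12C reacts faster)
delta0 = -60.0   # per mil, assumed initial delta-13C of the methane

def delta_residual(f, delta0=delta0, alpha=alpha):
    """delta-13C of the remaining methane when a fraction f is unreacted."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

for f in (1.0, 0.5, 0.1):
    print(f"fraction remaining {f:.1f}: delta-13C = {delta_residual(f):6.1f} per mil")
```

Running this shows the residual methane climbing from -60 per mil toward heavier values as the fraction remaining shrinks, which is exactly the "textbook" enrichment that the low-sulfate observations contradict.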

Nature doesn't follow the textbook: Light methane in sulfate-methane transition zones

This paradox raises questions, such as: Is methane not consumed there, but rather produced? And who, if not the numerous ANME archaea, should be responsible for this? "In my lab, we have the world's largest collection of ANME cultures. There we could try to find out if and how the methane oxidizers themselves could be responsible for the formation of light methane," Wegener continues. "The first results were deflating: At the high sulfate concentrations we normally find in seawater, the cultured microorganisms behaved according to the textbook. The remaining methane was enriched in the heavier isotopes." However, when the same experiments were carried out with little sulfate, the methane became enriched in 12C - it became lighter. And this happened even though methane continued to be consumed at the same time - an effect that at first glance had little logic.

The availability of sulfate governs the isotope effects in AOM

So how could they explain the unusual behavior of the methane isotopes? Jonathan Gropp and his mentor Itay Halevy from the Weizmann Institute of Science in Israel have spent years studying the isotope effects of microbial metabolisms, including methanogenesis - a reaction that is catalyzed by the same enzymes as the anaerobic oxidation of methane (AOM). Thus, they were the ideal partners for the team located in Bremen. "Both processes are based on a very similar cascade of seven reactions," says Gropp. "Previous studies have shown that all of these reactions are potentially reversible, meaning that they can take place in both directions. Each reaction also has its own isotope effects." With the help of a model, Gropp was able to show that, depending on how much sulfate is available, the partial reactions can be reversed to varying degrees. This could then lead to the situation that heavy isotopes are not as usual left behind but are stuck in the reaction chain, while light isotopes are channeled back to methane. "The microbes want to perform the reaction but are limited to do so because of the low sulfate concentrations," explains Gropp, adding that "Our designed model fits the isotope experiments very nicely."

The long hours in the laboratory and in front of the computer paid off for the researchers. With their study, Wegener, Gropp and their colleagues could show how AOM results in 13C-depleted methane. The experiments with little sulfate in particular nicely reflect the conditions in the natural habitat of the microorganisms, the sulfate-methane transition zones in the seafloor. There, the microorganisms often thrive on only little sulfate, as in the low-sulfate experiments. "Now we know that methane oxidizers can be responsible for the build-up of light isotopes in methane at sulfate-methane transition zones. Methanogenesis is not required for that. As we suspected, the ANME are methane oxidizers," concludes Marcus Elvert, last author of the current study. Now the researchers are ready for the next step and want to find out whether other reactions show similar isotope effects.

Credit: 
Max Planck Institute for Marine Microbiology

How accurate were early expert predictions on COVID-19, and how did they compare to the public?

Who made more accurate predictions about the course of the COVID-19 pandemic - experts or the public? A study from the University of Cambridge has found that experts such as epidemiologists and statisticians made far more accurate predictions than the public, but both groups substantially underestimated the true extent of the pandemic.

Researchers from the Winton Centre for Risk and Evidence Communication surveyed 140 UK experts and 2,086 UK laypersons in April 2020 and asked them to make four quantitative predictions about the impact of COVID-19 by the end of 2020. Participants were also asked to indicate confidence in their predictions by providing upper and lower bounds of where they were 75% sure that the true answer would fall - for example, a participant would say they were 75% sure that the total number of infections would be between 300,000 and 800,000.

The results, published in the journal PLOS ONE, demonstrate the difficulty in predicting the course of the pandemic, especially in its early days. While only 44% of predictions from the expert group fell within their own 75% confidence ranges, the non-expert group fared far worse, with only 12% of predictions falling within their ranges. Even when the non-expert group was restricted to those with high numeracy scores, only 16% of predictions fell within the ranges of values that they were 75% sure would contain the true outcomes.
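The calibration check behind these percentages is simple to state: count how often each respondent's own 75% interval actually contained the true outcome. A minimal sketch, using made-up example intervals rather than the study's data:

```python
# Calibration check sketch: what share of predictions fell inside the
# respondent's own 75% confidence interval? The tuples below are
# invented examples, not data from the Cambridge study.
predictions = [
    # (lower bound, upper bound, true outcome)
    (300_000, 800_000, 2_500_000),  # interval misses badly
    (20_000, 90_000, 75_346),       # interval captures the truth
    (5_000, 40_000, 75_346),        # underestimate, interval misses
    (50_000, 120_000, 75_346),      # interval captures the truth
]

hits = sum(lo <= truth <= hi for lo, hi, truth in predictions)
coverage = hits / len(predictions)
print(f"{coverage:.0%} of predictions fell within the stated 75% ranges")
```

For a well-calibrated group, this coverage should come out near 75%; the study's figures of 44% (experts) and 12% (non-experts) indicate substantial overconfidence in both groups.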

"Experts perhaps didn't predict as accurately as we hoped they might, but the fact that they were far more accurate than the non-expert group reminds us that they have expertise that's worth listening to," said Dr Gabriel Recchia from the Winton Centre for Risk and Evidence Communication, the paper's lead author. "Predicting the course of a brand-new disease like COVID-19 just a few months after it had first been identified is incredibly difficult, but the important thing is for experts to be able to acknowledge uncertainty and adapt their predictions as more data become available."

Throughout the COVID-19 pandemic, social and traditional media have disseminated predictions from experts and nonexperts about its expected magnitude.

Expert opinion is undoubtedly important in informing and advising those making individual and policy-level decisions. However, as the quality of expert intuition can vary drastically depending on the field of expertise and the type of judgment required, it is important to conduct domain-specific research to establish how good expert predictions really are, particularly in cases where they have the potential to shape public opinion or government policy.

"People mean different things by 'expert': these are not necessarily people working on COVID-19 or developing the models to inform the response," said Recchia. "Many of the people approached to provide comment or make predictions have relevant expertise, but not necessarily the most relevant." Recchia noted that in the early COVID-19 pandemic, clinicians, epidemiologists, statisticians, and other individuals seen as experts by the media and the general public, were frequently asked to give off-the-cuff answers to questions about how bad the pandemic might get. "We wanted to test how accurate some of these predictions from people with this kind of expertise were, and importantly, see how they compared to the public."

For the survey, participants were asked to predict how many people living in their country would have died and would have been infected by the end of 2020; they were also asked to predict infection fatality rates both for their country and worldwide.

Both the expert group and the non-expert group underestimated the total number of deaths and infections in the UK. The official UK death toll at 31 December was 75,346. The median prediction of the expert group was 30,000, while the median prediction for the non-expert group was 25,000.

For infection fatality rates, the median expert prediction was that 10 out of every 1,000 people with the virus worldwide would die from it, and 9.5 out of 1,000 people with the virus in the UK would die from it. The median non-expert response to the same questions was 50 out of 1,000 and 40 out of 1,000. The real infection fatality rate at the end of 2020--as best the researchers could determine, given the fact that the true number of infections remains difficult to estimate--was closer to 4.55 out of 1,000 worldwide and 11.8 out of 1,000 in the UK.
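The figures above can be compared directly as ratios of predicted to actual rates, all per 1,000 infections. A short sketch using only the numbers quoted in the text:

```python
# Ratio of median predicted infection fatality rates to the researchers'
# best estimates, per 1,000 infections (all figures from the text above).
actual = {"worldwide": 4.55, "UK": 11.8}
expert = {"worldwide": 10.0, "UK": 9.5}
lay    = {"worldwide": 50.0, "UK": 40.0}

for region in actual:
    print(f"{region}: expert/actual ratio {expert[region] / actual[region]:.1f}, "
          f"non-expert/actual ratio {lay[region] / actual[region]:.1f}")
```

This shows experts overestimating the worldwide rate by roughly a factor of two (and slightly underestimating the UK rate), while non-experts overestimated by factors of about eleven and three respectively.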

"There's a temptation to look at any results that says experts are less accurate than we might hope and say we shouldn't listen to them, but the fact that non-experts did so much worse shows that it remains important to listen to experts, as long as we keep in mind that what happens in the real world can surprise you," said Recchia.

The researchers caution that it is important to differentiate between research evaluating the forecasts of 'experts'--individuals holding occupations or roles in subject-relevant fields, such as epidemiologists and statisticians--and research evaluating specific epidemiological models, although expert forecasts may well be informed by epidemiological models. Many COVID-19 models have been found to be reasonably accurate over the short term, but get less accurate as they try to predict outcomes further into the future.

Credit: 
University of Cambridge

Cryptic sense of orientation of bats localised: the sixth sense of mammals lies in the eye

image: A captured Nathusius bat (Pipistrellus nathusii) during the experiments.

Image: 
Photo by Oliver Lindecke

Mammals see with their eyes, hear with their ears and smell with their nose. But which sense or organ allows them to orient themselves on their migrations, which sometimes go far beyond their local foraging areas and therefore require an extended ability to navigate? Scientific experiments led by the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW), conducted together with Prof. Richard A. Holland (Bangor University, UK) and Dr. Gunārs Pētersons (Latvia University of Life Sciences and Technologies), now show that the cornea of the eyes is the location of such an important sense in migrating bats. If the cornea is anaesthetised, the otherwise reliable sense of orientation is disturbed while light detection remains unimpaired. The experiment suggests the localisation of a magnetic sense in mammals. The paper is published in the scientific journal Communications Biology.

A research team led by Dr Oliver Lindecke and PD Dr Christian Voigt from Leibniz-IZW demonstrated for the first time that environmental signals that are important for navigating over long distances are picked up via the cornea of the eyes. They conducted experiments with Nathusius' bats (Pipistrellus nathusii) during the late summer migration period. In bats of one test group, the scientists locally anaesthetised the cornea with a drop of oxybuprocaine. This surface anaesthetic is widely used in ophthalmology to temporarily desensitise the cornea when the eyes of humans or animals become overly irritated. Effects on orientation, however, had not been previously recorded. In another test group of bats, the research team anaesthetised the cornea of only one eye. The individuals in the control group were not anaesthetised, but instead received an isotonic saline solution as eye drops. All animals in this scientific experiment were captured within a migration corridor at the coastline of the Baltic Sea and released singly in the open field 11 kilometres inland from the capture site immediately after treatment. The scientists first used bat detectors to make sure that there were no other bats above the field at the time of release that the test animals could have followed. The person observing the direction of movement of released bats was unaware of how the bats had been treated experimentally. "The control group and the group with unilateral corneal anaesthesia oriented themselves clearly in the expected southerly directions, whereas the bats with bilateral anaesthetised corneas flew off in random directions," explains Dr Oliver Lindecke, first author of the paper. "This evident difference in behaviour suggests that corneal anaesthesia disrupted a sense of direction, yet orientation apparently still works well with one eye." As corneal treatment wears off after a short time, the bats were able to resume their journeys south after the experiment.
"We observed here for the first time in an experiment how a migrating mammal was literally blown off course - a milestone in behavioural and sensory biology that allows us to study the biological navigation system in a more targeted way."

In order to rule out the possibility that the anaesthetisation of the cornea also affects the sense of sight and that the scientists would thus come to the wrong conclusions, they carried out a complementary test. Once again divided into experimental and control groups, they tested whether the response of bats to light changed after anaesthesia of the corneas on one or both sides. "We know from previous research that bats prefer an illuminated exit when leaving a simple Y-shaped labyrinth," explains PD Dr Christian Voigt, head of the Leibniz-IZW Department of Evolutionary Ecology. "In our experiment, the animals with one-sided or two-sided anaesthesia also showed this preference; we therefore can rule out that the ability to see light was altered after corneal treatment. The ability to see light would of course also influence long-distance navigation."

Many vertebrates such as bats, dolphins, whales, fish and turtles, for example, are able to safely navigate in darkness, whether it is under the open night sky, when it is cloudy at night or in caves and tunnels as well as in the depths of the oceans. For many decades, scientists have been searching for the sense or a sensory organ that enables animals to perform orientation and navigation tasks that seemed difficult to imagine for people. A magnetic sense, so far only demonstrated in a few mammals but poorly understood, is an obvious candidate. Experiments suggest that iron oxide particles within cells may act as "microscopic compass needles", as is the case in some species of bacteria.

Recent laboratory experiments on Ansell's mole-rat, relatives of the well-known naked mole rats that spend their lives in elaborate underground tunnel systems, suggest that the magnetic sense is located in the eye. Such a (magnetic) sense of orientation has not been checked in migratory mammals nor has it been possible to identify the specific organ or tissue which could provide the morphological basis for the required sensory receptors. The experiments of the team around Lindecke and Voigt now provide, for the first time, reliable data for the localisation of a sense of orientation in free-ranging, migratory mammals. Exactly what the sense in the cornea of the bats looks like, how it works and whether it is the long sought-after magnetic sense must be shown in future scientific investigations.

Credit: 
Leibniz Institute for Zoo and Wildlife Research (IZW)

A surprising discovery: Bats know the speed of sound from birth

image: Prof. Yossi Yovel

Image: 
Tel Aviv University

A new Tel Aviv University study has revealed, for the first time, that bats know the speed of sound from birth. In order to prove this, the researchers raised bats from the time of their birth in a helium-enriched environment in which the speed of sound is higher than normal. They found that unlike humans, who map the world in units of distance, bats map the world in units of time. What this means is that the bat perceives an insect as being at a distance of nine milliseconds, and not one and a half meters, as was thought until now.

The study was published in PNAS.

In order to determine where things are in a space, bats use sonar - they produce sound waves that hit objects and are reflected back to the bat. Bats can estimate the position of the object based on the time that elapses between the moment the sound wave is produced and the moment it is returned to the bat. This calculation depends on the speed of sound, which can vary in different environmental conditions, such as air composition or temperature. For example, there could be a difference of almost 10% between the speed of sound at the height of summer, when the air is hot and the sound waves spread faster, and in winter. Since the discovery of sonar in bats 80 years ago, researchers have been trying to figure out whether bats acquire the ability to measure the speed of sound over the course of their lifetime or are born with this innate, constant sense.
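The arithmetic behind the bat's task is simple: an echo's round-trip delay encodes distance only if the speed of sound is known. The sketch below uses illustrative values (c = 343 m/s for ordinary air; the faster value for the helium-enriched medium is a hypothetical round number, not the study's actual air mixture):

```python
# Sonar ranging sketch. c = 343 m/s is the speed of sound in air at
# roughly 20 degrees C; the 500 m/s "fast" medium below is a made-up
# stand-in for helium-enriched air.
def echo_delay_ms(distance_m, c=343.0):
    """Round-trip time, in milliseconds, for a sonar pulse to an object."""
    return 2.0 * distance_m / c * 1000.0

def perceived_distance_m(delay_ms, c=343.0):
    """Distance a bat would infer from a delay, given its assumed c."""
    return delay_ms / 1000.0 * c / 2.0

print(f"{echo_delay_ms(1.5):.1f} ms")  # ~9 ms for an insect 1.5 m away
# In a faster medium the same target returns its echo sooner, so a bat
# still assuming normal air underestimates the range:
fast_delay = echo_delay_ms(1.5, c=500.0)
print(f"{perceived_distance_m(fast_delay):.2f} m perceived vs 1.5 m actual")
```

This is exactly the mismatch the experiment exploited: bats raised in helium-enriched air kept landing short of the target, as if it were closer than it really was.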

Now, researchers led by Prof. Yossi Yovel, head of the Sagol School of Neuroscience and a faculty member of the School of Zoology in the Faculty of Life Sciences and his former doctoral student Dr. Eran Amichai (currently studying at Dartmouth College) have succeeded in answering this question. The researchers conducted an experiment in which they were able to manipulate the speed of sound. They enriched the air composition with helium to increase the speed of sound, and under these conditions raised bat pups from the time of their birth, as well as adult bats. Neither the adult bats nor the bat pups were able to adjust to the new speed of sound and consistently landed in front of the target, indicating that they perceived the target as being closer - that is, they did not adjust their behavior to the higher speed of sound.

Because this occurred both in the adult bats that had learned to fly in normal environmental conditions and in the pups that learned to fly in an environment with a higher-than-normal speed of sound, the researchers concluded that the rate of the speed of sound in bats is innate - they have a constant sense of it. "Because bats need to learn to fly within a short time of their birth," explains Prof. Yovel, "we hypothesize that an evolutionary 'choice' was made to be born with this knowledge in order to save time during the sensitive development period."

Another interesting conclusion of the study is that bats do not actually calculate the distance to the target according to the speed of sound. Because they do not adjust the speed of sound encoded in their brains, it seems that they also do not translate the time it takes for the sound waves to return into units of distance. Therefore, their spatial perception is actually based on measurements of time and not distance.

Prof. Yossi Yovel: "What most excited me about this study is that we were able to answer a very basic question - we found that in fact bats do not measure distance, but rather time, to orient themselves in space. This may sound like a semantic difference, but I think that it means that their spatial perception is fundamentally different than that of humans and other visual creatures, at least when they rely on sonar. It's fascinating to see how diverse evolution is in the brain-computing strategies it produces."

Credit: 
Tel-Aviv University

Ancient DNA reveals origin of first Bronze Age civilizations in Europe

image: Skeleton of one of the two individuals who lived in the middle of the Bronze Age and whose complete genome was reconstructed and sequenced by the Lausanne team. It comes from the archaeological site of Elati-Logkas, in northern Greece.

Image: 
Ephorate of Antiquities of Kozani, Hellenic Ministry of Culture, Greece. Courtesy of Dr Georgia Karamitrou-Mentessidi.

The first civilisations to build monumental palaces and urban centres in Europe are more genetically homogenous than expected, according to the first study to sequence whole genomes gathered from ancient archaeological sites around the Aegean Sea. The study has been published in the journal Cell.

Despite marked differences in burial customs, architecture, and art, the Minoan civilization in Crete, the Helladic civilization in mainland Greece and the Cycladic civilization in the Cycladic islands in the middle of the Aegean Sea were genetically similar during the Early Bronze Age (5,000 years ago).

The findings are important because they suggest that critical innovations such as the development of urban centres, metal use and intensive trade made during the transition from the Neolithic to the Bronze Age were not just due to mass immigration from east of the Aegean, as previously thought, but also to the cultural continuity of local Neolithic groups.

The study also finds that by the Middle Bronze Age (4,000-4,600 years ago), individuals from the northern Aegean were considerably different compared to those in the Early Bronze Age. These individuals shared half their ancestry with people from the Pontic-Caspian steppe, a large geographic region stretching between the Danube and the Ural rivers and north of the Black Sea, and were highly similar to present-day Greeks.

The findings suggest that migration waves of herders from the Pontic-Caspian steppe, or of populations north of the Aegean that bear Pontic-Caspian-steppe-like ancestry, shaped present-day Greece. These potential migration waves all predate the appearance of the earliest documented form of Greek, supporting theories explaining the emergence of Proto-Greek and the evolution of Indo-European languages in either Anatolia or the Pontic-Caspian steppe region.

The team took samples from well-preserved skeletal remains at archaeological sites. They sequenced six whole genomes, four from all three cultures during the Early Bronze Age and two from a Helladic culture during the Middle Bronze Age.

The researchers also sequenced the mitochondrial genomes from eleven other individuals from the Early Bronze Age. Sequencing whole genomes provided the researchers with enough data to perform demographic and statistical analyses on population histories.

Sequencing ancient genomes is a huge challenge, particularly due to the degradation of the biological material and human contamination. A research team at the CNAG-CRG played an important role in overcoming this challenge by using machine learning.

According to Oscar Lao, Head of the Population Genomics Group at the CNAG-CRG, "Taking advantage of the fact that the number of samples and the DNA quality we found are huge for this type of study, we have developed sophisticated machine learning tools to overcome challenges such as low depth of coverage, damage, and modern human contamination, opening the door for the application of artificial intelligence to palaeogenomics data."

"Implementation of deep learning in demographic inference based on ancient samples allowed us to reconstruct ancestral relationships between ancient populations and reliably infer the amount and timing of massive migration events that marked the cultural transition from Neolithic to Bronze Age in Aegean," says Olga Dolgova, postdoctoral researcher in the Population Genomics Group at the CNAG-CRG.

The Bronze Age in Eurasia was marked by pivotal changes on the social, political, and economic levels, visible in the appearance of the first large urban centres and monumental palaces. The increasing economic and cultural exchange that developed during this time laid the groundwork for modern economic systems--including capitalism, long-distance political treaties, and a world trade economy.

Despite their importance for understanding the rise of European civilisations and the spread of Indo-European languages, the genetic origins of the peoples behind the Neolithic to Bronze Age transition and their contribution to the present-day Greek population remain controversial.

Future studies could investigate whole genomes dating to between the Mesolithic and the Bronze Age in Armenia and the Caucasus to help further pinpoint the origins of migration into the Aegean, and to better integrate the genomic data with the existing archaeological and linguistic evidence.

Credit: 
Center for Genomic Regulation

Large bumblebees start work earlier

Video: Bumblebees in the study (Image credit: Katie Hall)

Larger bumblebees are more likely to go out foraging in the low light of dawn, new research shows.

University of Exeter scientists used RFID tags - a technology similar to contactless card payments - to monitor when bumblebees of different sizes left and returned to their nest.

The biggest bees, and some of the most experienced foragers (measured by number of trips out), were the most likely to leave in low light.

Bumblebee vision is poor in low light, so flying at dawn or dusk raises the risk of getting lost or being eaten by a predator.

However, the bees benefit from extra foraging time and fewer competitors for pollen in the early morning.

"Larger bumblebees have bigger eyes than their smaller-sized nest mates and many other bees, and can therefore see better in dim light," said lead author Katie Hall, of the University of Exeter.

"We might expect all bumblebee foragers to leave the colony to forage as soon as there is enough light to allow them to fly.

"In fact, colonies seem to regulate the start of foraging.

"There is a balance of risks and rewards in low light - and most bees wait for higher light levels when they can see better and fly faster, with less risk from predators or getting lost and running out of energy.

"Our finding that more experienced bees are more likely to fly in lower light suggests that knowledge of food locations helps them navigate safely."

The study tracked the bees' behaviour over five days during warm periods of the flowering season.

Only a small proportion of foragers left the colony at dawn when light levels were below 10 lux.
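An analysis like the one reported above boils down to tallying tagged departures against a light threshold, split by bee size. This is a minimal sketch of that tallying step; the records, size cutoff, and field layout are invented for illustration and are not the study's data.

```python
# Hypothetical mini-analysis of RFID departure logs like those in the study.
# Each record: (tag_id, thorax_size_mm, light_lux at departure).
# The data and the 5.5 mm "large bee" cutoff are invented.

THRESHOLD_LUX = 10  # dawn-level light, per the cutoff reported in the article

departures = [
    ("b01", 4.2, 3.5), ("b02", 5.8, 8.0), ("b03", 6.1, 6.2),
    ("b04", 4.0, 55.0), ("b05", 5.9, 120.0), ("b06", 4.1, 40.0),
]

def share_at_dawn(records, min_size=None):
    """Proportion of departures below THRESHOLD_LUX, optionally restricted
    to bees at or above a given body size."""
    rows = [r for r in records if min_size is None or r[1] >= min_size]
    if not rows:
        return 0.0
    return sum(1 for r in rows if r[2] < THRESHOLD_LUX) / len(rows)

overall = share_at_dawn(departures)       # all tagged bees
large = share_at_dawn(departures, 5.5)    # larger foragers only
```

In this toy data the large-bee share of dawn departures exceeds the overall share, mirroring the study's finding that bigger bees are more likely to leave in low light.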

Credit: 
University of Exeter

Many patients with cancer are experiencing loneliness and related symptoms during the COVID-19 pandemic

Loneliness and social isolation, which can have negative effects on health and longevity, are being exacerbated by the COVID-19 pandemic. More than half of surveyed adults with cancer have been experiencing loneliness in recent months, according to a study published early online in CANCER, a peer-reviewed journal of the American Cancer Society.

Studies conducted before the pandemic reported that 32 percent to 47 percent of patients with cancer are lonely. In this latest survey, which was administered in late May 2020, 53 percent of 606 patients with a cancer diagnosis were categorized as experiencing loneliness. Patients in the lonely group reported higher levels of social isolation, as well as more severe symptoms of anxiety, depression, fatigue, sleep disturbance, cognitive dysfunction, and pain. They were also less likely to be married or partnered, more likely to live alone, and more likely to have a lower annual household income.
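The comparison between the pandemic survey and the earlier 32-47 percent range can be checked with back-of-envelope arithmetic: 53 percent of 606 patients, plus a simple normal-approximation confidence interval for that proportion. This is illustrative arithmetic only, not the study's own statistical method.

```python
# Back-of-envelope check on the survey figures quoted above: 53% of 606
# patients categorised as lonely, with a normal-approximation 95% CI.
# Illustrative only; the study's actual analysis is not reproduced here.
import math

n = 606
p = 0.53
lonely = round(n * p)  # roughly 321 patients in the lonely group

se = math.sqrt(p * (1 - p) / n)          # standard error of the proportion
ci = (p - 1.96 * se, p + 1.96 * se)      # about (0.49, 0.57)
```

Even the lower bound of this interval sits above the 47 percent top of the pre-pandemic range, consistent with the article's point that loneliness among patients with cancer rose during the pandemic.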

The researchers note that while previous pre- and during COVID-19 studies found links between loneliness and the symptoms of anxiety, depression, fatigue, sleep disturbance, cognitive dysfunction, and pain, this study is the first to evaluate all of these symptoms in the same group of patients.

"Patients with cancer, as well as survivors, need to realize that feelings of loneliness and social isolation are very common during the COVID-19 pandemic. In addition to this sense of loneliness, they may be having feelings of anxiety, sadness, and fatigue, as well as problems sleeping and high rates of unrelieved pain--all at the same time," said lead author Christine Miaskowski, RN, PhD, FAAN, of the University of California, San Francisco.

Importantly, the study included individuals who were primarily white and well-educated and who had high annual household incomes. "Given the racial/ethnic disparities associated with the COVID-19 pandemic, we hypothesize that the high symptom burden reported by the patients in our study will be higher in patients who are socioeconomically disadvantaged," said Dr. Miaskowski.

The investigators stressed that clinicians should ask patients about feelings of loneliness and assess for multiple co-occurring symptoms, and patients and survivors should not hesitate to report such symptoms to their primary care providers or oncologists. "Patients may warrant referrals to psychological services to assist with symptom management," said Dr. Miaskowski. "In addition, to decrease these feelings, patients and survivors can develop a schedule of social interactions; develop a structure to their daily activities; engage in regular exercise particularly in the outdoors; use stress reduction exercises; and eat a healthy diet."

May is Mental Health Awareness Month.

Credit: 
Wiley