Tech

Swimming at the mesoscale

A team of researchers from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), the University of Liège and the Helmholtz Institute Erlangen-Nürnberg for Renewable Energy have developed a microswimmer that appears to defy the laws of fluid dynamics: their model, consisting of two beads that are connected by a linear spring, is propelled by completely symmetrical oscillations. The Scallop theorem states that this cannot be achieved in fluid microsystems. The findings have now been published in the academic journal Physical Review Letters.

Scallops can swim in water by quickly clapping their shells together. They are large enough that inertia carries them forward while the shell opens for the next stroke. Whether the Scallop theorem applies depends on the density and viscosity of the fluid relative to the swimmer's size: when viscous forces dominate, a swimmer that makes symmetrical, reciprocal forward and backward motions, like the opening and closing of a scallop shell, will not move an inch. 'Swimming through water is as tough for microscopic organisms as swimming through tar would be for humans,' says Dr. Maxime Hubert. 'This is why single-cell organisms have comparatively complex means of propulsion, such as vibrating hairs or rotating flagella.'

Dr. Hubert is a postdoctoral researcher in Prof. Dr. Ana-Suncana Smith's group at the Institute of Theoretical Physics at FAU. Together with researchers at the University of Liège and the Helmholtz Institute Erlangen-Nürnberg for Renewable Energy, the FAU team has developed a swimmer which does not seem to be limited by the Scallop theorem: The simple model consists of a linear spring that connects two beads of different sizes. Although the spring expands and contracts symmetrically under time reversal, the microswimmer is still able to move through the fluid.

'We originally tested this principle using computer simulations,' says Maxime Hubert. 'We then built a functioning model.' In the practical experiment, the scientists placed two steel beads measuring just a few hundred micrometres in diameter on the surface of water contained in a Petri dish. The surface tension of the water played the role of the contracting spring, while expansion in the opposite direction was achieved with a magnetic field that caused the microbeads to periodically repel each other.

Vision: Swimming robots for transporting drugs

The swimmer is able to propel itself because the beads are of different sizes. Maxime Hubert says, 'The smaller bead reacts much faster to the spring force than the larger bead. This causes asymmetrical motion and the larger bead is pulled along with the smaller bead. We are therefore using the principle of inertia, with the difference that here we are concerned with the interaction between the bodies rather than the interaction between the bodies and water.'
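Hubert's description of the mechanism can be sketched numerically. The toy model below is not the authors' published model: it pairs bead inertia with a harmonic attraction (standing in for surface tension), a symmetrically modulated 1/r² repulsion (standing in for the magnetic field), and a quadratic drag correction standing in for mesoscale hydrodynamics. All parameters are illustrative.

```python
import math

def simulate(m1, m2, g1, g2, c1, c2, k=1.0, L0=1.0, C0=1.125,
             eps=0.3, w=2.0, dt=1e-3, cycles=30):
    """Two beads on a line: harmonic attraction (k, L0), a repulsion whose
    strength C(t) oscillates symmetrically in time, and drag with a linear
    Stokes part (g) plus a quadratic correction (c). Returns the net
    displacement of the pair's midpoint after `cycles` oscillations."""
    x1, x2, v1, v2 = 0.0, 1.5, 0.0, 0.0   # start at mechanical equilibrium
    mid0 = (x1 + x2) / 2
    steps = int(cycles * 2 * math.pi / (w * dt))
    for n in range(steps):
        r = x2 - x1
        # Force on bead 1: attraction toward bead 2 minus modulated repulsion
        f = k * (r - L0) - C0 * (1 + eps * math.sin(w * n * dt)) / r**2
        a1 = (f - g1 * v1 - c1 * v1 * abs(v1)) / m1
        a2 = (-f - g2 * v2 - c2 * v2 * abs(v2)) / m2
        v1 += a1 * dt
        v2 += a2 * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return (x1 + x2) / 2 - mid0

# Identical beads: the oscillation is mirror-symmetric and the pair stays put.
same = simulate(1.0, 1.0, 1.0, 1.0, 0.5, 0.5)
# A small bead paired with a large one reacts faster to the same forces,
# breaking the time-symmetry of the motion: the pair drifts even though
# the driving itself is perfectly symmetric.
diff = simulate(0.1, 1.0, 0.3, 1.0, 0.2, 0.5)
```

With identical beads the midpoint stays fixed, as the Scallop theorem demands; making the beads different produces a small but nonzero drift, echoing the article's "larger bead is pulled along with the smaller bead."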

Although the system won't win any prizes for speed - it moves forwards about a thousandth of its body length during each oscillation cycle - the sheer simplicity of its construction and mechanism is an important development. 'The principle that we have discovered could help us to construct tiny swimming robots,' says Maxime Hubert. 'One day they might be used to transport drugs through the blood to a precise location.'

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

3D printed replicas reveal swimming capabilities of ancient cephalopods

image: Reconstruction of the orthocone ammonite, Baculites compressus.

Image: 
David Peterman

University of Utah paleontologists David Peterman and Kathleen Ritterbush know that it's one thing to use math and physics to understand how ancient marine creatures moved through the water. It's another thing to actually put replicas of those creatures into the water and see for themselves. They're among the scientists who are, through a range of methods including digital models and 3-D printed replicas, "de-fossilizing" animals of the past to learn how they lived.

Peterman, Ritterbush and their colleagues took 3-D printed reconstructions of fossil cephalopods to actual water tanks (including a University of Utah swimming pool) to see how their shell structure may have been tied to their movement and lifestyle. Their research is published in PeerJ and in an upcoming memorial volume to the late paleontologist William Cobban. They found that cephalopods with straight shells called orthocones likely lived a vertical life, jetting up and down to catch food and evade predators. Others with spiral shells, called torticones, added a gentle spin to their vertical motions.

"Thanks to these novel techniques," says Peterman, a postdoctoral scholar in the Department of Geology and Geophysics, "we can trudge into a largely unexplored frontier in paleobiology. Through detailed modeling, these techniques help paint a clearer picture of the capabilities of these ecologically significant animals while they were alive."

The researchers are veterans of this style of "virtual paleontology," having worked with digital ammonoid models and 3-D printed versions to test hypotheses about their evolution and lifestyles. Most ammonoids have coiled shells, like today's chambered nautilus, and darted around the ocean in all directions.

But in their research published in PeerJ, Peterman and Ritterbush, an assistant professor of Geology & Geophysics, explored a different shell shape--the straight-shelled orthocone. Straight shells evolved several times in different lineages throughout the fossil record, suggesting they had some adaptive value.

"This is important because orthocones span a huge chunk of time and are represented by hundreds of genera [plural of genus]," Peterman says, and many reconstructions and dioramas show orthocones as horizontal swimmers like squid. "They were major components of marine ecosystems, yet we know very little about their swimming capabilities."

So he and Ritterbush took 3-D scans of fossils of Baculites compressus, an orthocone species that lived during the Cretaceous, and designed four different digital models, each with different physical properties. A digital orthocone model is available online.

How did they know how to weight the structures of the models? "Math," Peterman says. They adjusted the centers of mass and counterweights within the models, representing the balances of soft tissue and air-filled voids that the orthocone would likely have maintained in its life. "The resultant model is balanced the same as the living animal, allowing very detailed analyses of their movement," he says.
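The balancing act Peterman describes amounts to a weighted-average computation: distribute the model's mass so its center sits where the living animal's would have been, below the center of buoyancy. A minimal sketch with made-up masses and positions, not the study's measured values:

```python
def weighted_center(parts):
    """Center (along the shell axis) of a set of (mass, position) parts.
    Positions are in cm from the aperture (the soft-body end)."""
    total = sum(m for m, _ in parts)
    return sum(m * z for m, z in parts) / total

# Purely illustrative values:
mass_parts = [
    (300.0, 5.0),    # dense counterweight standing in for soft tissue, near the aperture
    (200.0, 25.0),   # printed shell wall, distributed higher up
]
buoyancy_parts = [
    (50.0, 40.0),    # displaced-water weight of the air-filled chambers, near the apex
    (450.0, 15.0),   # displaced-water weight of the rest of the model
]

com = weighted_center(mass_parts)      # center of mass
cob = weighted_center(buoyancy_parts)  # center of buoyancy

# The vertical, aperture-down orientation is stable when the center of
# buoyancy sits above the center of mass; the size of that gap sets the
# strength of the restoring moment Peterman describes.
stable = cob > com
```

Here com = 13.0 cm and cob = 17.5 cm, so the model rights itself when tipped, just as the printed replicas did in the pool.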

The resultant 3-D printed models were nearly two feet long. With the help of Emma Janusz and Mark Weiss at the U's George S. Eccles Student Life Center, the researchers set up a camera rig in a 7-foot-deep part of the Crimson Lagoon pool and released the models underwater to see how they naturally moved.

The results showed clearly that the most efficient method of movement was vertical, since moving side to side created a lot of drag. "I was surprised by how stable they are," Peterman says. "Any amount of rotation away from their vertical orientation is met with a strong restoring moment, so many species of living orthocones were likely unable to modify their own orientations. Furthermore, the source of jet thrust is situated so low that, during lateral movement, much energy would be lost due to rocking."

The results also showed that orthocones may have been capable of some of the highest velocities among shelled cephalopods. That could have come in handy in evading predators. Looking at the results of the pool experiments and calculating the time needed to escape modern predators (as stand-ins for the orthocones' long-extinct predators), they found that orthocones may have been able to jet upward fast enough to evade animals similar to crocodiles or whales. They may not have been as lucky against fast swimmers like sharks, however.

So most species of orthocones couldn't have lived a horizontal-swimming lifestyle. "Instead," Peterman says, "species without counterweights in their shells assumed a vertical life habit, either feeding near the seafloor or vertically migrating in the water column. While orthocones were not as athletic or active as modern squid, they could have maintained the ability to thwart predators with upward dodges."

Peterman and Ritterbush, along with recent graduate Nicholas Hebdon and Ryan Shell from the Cincinnati Museum Center, also ran a similar set of experiments with torticones, smaller cephalopods with a corkscrew-shaped shell. The results will be published in the American Association of Petroleum Geologists and Wyoming Geological Association Special Volume - Insights into the Cretaceous: Building on the Legacy of William A. Cobban (1916-2015). Although the torticones also likely preferred vertical movement, their shape caused a different result in the water, Peterman says. "While orthocones were masters of vertical movement, torticones were masters of rotation."

Many mollusks today have similar helical shells, and some researchers previously assumed that torticones may have had a similar lifestyle, crawling along the seafloor. "However," Peterman says, "the hydrostatic models demonstrate that the chambered shells of torticone ammonoids had the capacity for neutral buoyancy, which would have liberated them from the seafloor. These ammonoids experience different forms of movement only possible in a free-swimming lifestyle."

In experiments conducted in a 50-gallon water tank (no swimming pool needed for the 6-inch-long torticone models, which are also available in digital form), the team found that the torticones naturally and efficiently rotated in the water just due to the shape of the shell, gently spinning face-first when descending and spinning the opposite direction when ascending. Also, they found, the placement of the torticones' source of thrust relative to their center of mass would have improved the efficiency of active rotation.

Rotating while descending, Peterman says, may have helped the torticones feed, allowing them to graze on small planktonic organisms.

"I was surprised at how easily torticones could rotate," Peterman says. "Even small thrusts such as breathing [gill ventilation] could have produced rotation of 20 degrees per second."

Both orthocones and torticones, because of their repeated appearance throughout the fossil record, not only show that cephalopods found some advantage to a straight or helical shell, as opposed to their nautilus-shaped coiled shell, but that an uncoiled shell might have evolved in times of "ecological saturation," when the ecological niches of coiled cephalopods were full.

Peterman says this work calls for a revision of how we envision the ancient ocean.

"These experiments," he says, "transform our understanding of ancient ecosystems. Rather than crawling along the seafloor like snails, or swiftly swimming like modern squid, these animals were assuming rather unique lifestyles. These experiments refine our understanding of these animals by painting a picture of ancient seascapes dotted with pirouetting helical cephalopods and vertically-oriented orthocones."

Credit: 
University of Utah

Thinking without a brain

If you didn't have a brain, could you still figure out where you were and navigate your surroundings? Thanks to new research on slime molds, the answer may be "yes." Scientists from the Wyss Institute at Harvard University and the Allen Discovery Center at Tufts University have discovered that a brainless slime mold called Physarum polycephalum uses its body to sense mechanical cues in its surrounding environment, and performs computations similar to what we call "thinking" to decide in which direction to grow based on that information. Unlike previous studies with Physarum, these results were obtained without giving the organism any food or chemical signals to influence its behavior. The study is published in Advanced Materials.

"People are becoming more interested in Physarum because it doesn't have a brain but it can still perform a lot of the behaviors that we associate with thinking, like solving mazes, learning new things, and predicting events," said first author Nirosha Murugan, a former member of the Allen Discovery Center who is now an Assistant Professor at Algoma University in Ontario, Canada. "Figuring out how proto-intelligent life manages to do this type of computation gives us more insight into the underpinnings of animal cognition and behavior, including our own."

Slimy action at a distance

Slime molds are amoeba-like organisms that can grow to be up to several feet long, and help break down decomposing matter in the environment like rotting logs, mulch, and dead leaves. A single Physarum creature consists of a membrane containing many cellular nuclei floating within a shared cytoplasm, creating a structure called a syncytium. Physarum moves by shuttling its watery cytoplasm back and forth throughout the entire length of its body in regular waves, a unique process known as shuttle streaming.

"With most animals, we can't see what's changing inside the brain as the animal makes decisions. Physarum offers a really exciting scientific opportunity because we can observe its decisions about where to move in real-time by watching how its shuttle streaming behavior changes," said Murugan. While previous studies have shown that Physarum moves in response to chemicals and light, Murugan and her team wanted to know if it could make decisions about where to move based on physical cues in its environment alone.

The researchers placed Physarum specimens in the center of petri dishes coated with a semi-flexible agar gel, with a single small glass disc atop the gel on one side of each dish and three discs side by side on the other. They then allowed the organisms to grow freely in the dark over the course of 24 hours, and tracked their growth patterns. For the first 12 to 14 hours, the Physarum grew outwards evenly in all directions; after that, however, the specimens extended a long branch that grew directly over the surface of the gel toward the three-disc region 70% of the time. Remarkably, the Physarum chose to grow toward the greater mass without first physically exploring the area to confirm that it did indeed contain the larger object.

How did it accomplish this exploration of its surroundings before physically going there? The scientists were determined to find out.

It's all relative

The researchers experimented with several variables to see how they impacted Physarum's growth decisions, and noticed something unusual: when they stacked the same three discs on top of each other, the organism seemed to lose its ability to distinguish between the three discs and the single disc. It grew toward both sides of the dish at roughly equal rates, despite the fact that the three stacked discs still had greater mass. Clearly, Physarum was using another factor beyond mass to decide where to grow.

To figure out the missing piece of the puzzle, the scientists used computer modeling to create a simulation of their experiment to explore how changing the mass of the discs would impact the amount of stress (force) and strain (deformation) applied to the semi-flexible gel and the attached growing Physarum. As they expected, larger masses increased the amount of strain, but the simulation revealed that the strain patterns the masses produced changed, depending on the arrangement of the discs.

"Imagine that you are driving on the highway at night and looking for a town to stop at. You see two different arrangements of light on the horizon: a single bright point, and a cluster of less-bright points. While the single point is brighter, the cluster of points lights up a wider area that is more likely to indicate a town, and so you head there," said co-author Richard Novak, Ph.D., a Lead Staff Engineer at the Wyss Institute. "The patterns of light in this example are analogous to the patterns of mechanical strain produced by different arrangements of mass in our model. Our experiments confirmed that Physarum can physically sense them and make decisions based on patterns rather than simply on signal intensity."

The team's research demonstrated that this brainless creature was not simply growing toward the heaviest thing it could sense - it was making a calculated decision about where to grow based on the relative patterns of strain it detected in its environment.

But how was it detecting these strain patterns? The scientists suspected it had to do with Physarum's ability to rhythmically contract and tug on its substrate: by pulsing and then sensing the resultant changes in substrate deformation, the organism can gain information about its surroundings. Other animals have special channel proteins in their cell membranes, called TRP-like proteins, that detect stretching, and co-author and Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., had previously shown that one of these TRP proteins mediates mechanosensing in human cells. When the team applied a potent TRP channel-blocking drug to Physarum, the organism lost its ability to distinguish between high and low masses, selecting the high-mass region in only 11% of the trials and selecting both high- and low-mass regions in 71% of trials.

"Our discovery of this slime mold's use of biomechanics to probe and react to its surrounding environment underscores how early this ability evolved in living organisms, and how closely related intelligence, behavior, and morphogenesis are. In this organism, which grows out to interact with the world, its shape change is its behavior. Other research has shown that similar strategies are used by cells in more complex animals, including neurons, stem cells, and cancer cells. This work in Physarum offers a new model in which to explore the ways in which evolution uses physics to implement primitive cognition that drives form and function," said corresponding author Mike Levin, Ph.D., a Wyss Associate Faculty member who is also the Vannevar Bush Chair and serves and Director of the Allen Discovery Center at Tufts University.

The research team is continuing its work on Physarum, including investigating at what point in time it makes the decision to switch its growth pattern from generalized sampling of its environment to directed growth toward a target. They are also exploring how other physical factors like acceleration and nutrient transport could affect Physarum's growth and behavior.

"This study confirms once again that mechanical forces play as important a role in the control of cell behavior and development as chemicals and genes, and the process of mechanosensation uncovered in this simple brainless organism is amazingly similar to what is seen in all species, including humans," said Ingber. "Thus, a deeper understanding how organisms use biomechanical information to make decisions will help us to better understand our our own bodies and brains, and perhaps even provide insight into new bioinspired forms of computation." Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children's Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Credit: 
Wyss Institute for Biologically Inspired Engineering at Harvard

Removing the lead hazard from perovskite solar cells

image: A transparent phosphate crystal that, incorporated into solar cells, can instantaneously immobilize the lead in case of failure and block it from leaching out of the device.

Image: 
Endre Horváth (EPFL)

"The solar energy-to-electricity conversion of perovskite solar cells is unbelievably high, around 25%, which is now approaching the performance of the best silicon solar cells," says Professor László Forró at EPFL's School of Basic Sciences. "But their central element is lead, which is a poison; if the solar panel fails, it can wash out into the soil, get into the food chain, and cause serious diseases."

The problem is that in most of the halide perovskites lead can dissolve in water. This solubility in water and other solvents is actually a great advantage, as it makes building perovskite solar panels simple and inexpensive - another perk along with their performance. But the water solubility of lead can become a real environmental and health hazard when the panel breaks or gets wet, e.g. when it rains.

So the lead must be captured before it reaches the soil, and it must be possible to recycle it. This issue has drawn intensive research, because it is the main obstacle to regulatory authorities approving the production of perovskite solar cells on a large, commercial scale. However, attempts to synthesize non-water-soluble and lead-free perovskites have yielded poor performance.

Now, Forró's group has come up with an elegant and efficient solution: a transparent phosphate salt that does not block solar light, so it does not affect performance. And if the solar panel fails, the phosphate salt immediately reacts with lead to produce a water-insoluble compound that cannot leach out into the soil and that can be recycled. The work is published in ACS Applied Materials & Interfaces.

"A few years ago, we discovered that cheap and transparent phosphate salt crystals, like those in soil fertilizers, can be incorporated into various parts of the sandwich-like lead halide perovskite devices, like photodetectors, LEDs or solar cells," says Endre Horváth, the study's first author. "These salts instantaneously react with lead ions in the presence of water, and precipitate them into extremely non-water-soluble lead phosphates."

"The 'fail-safe' chemistry keeps lead ions from leaching out and can render perovskite devices safer to use in the environment or close to humans," says Márton Kollár, the chemist behind the growth of perovskite crystals.

"We show that this approach can be used to build functional photodetectors, and we suggest that the broad community of researchers and R&D centers working on various devices like solar cells and light-emitting diodes implements it in their respective prototypes," adds Pavao Andričevic, who characterized the sensitive photodetectors.

Forró concludes: "This is an extremely important study - I would say, a central one - for large-scale commercialization of perovskite-based solar cells."

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Extraordinary carbon emissions from El Niño-induced biomass burning estimated using Japanese aircraft and shipboard observations in Equatorial Asia

video: A movie demonstrating the aircraft and ship observations and giving an overview of the results.

Image: 
NIES

Equatorial Asia, which includes Indonesia, Malaysia, Papua New Guinea, and surrounding areas, experienced devastating biomass burning in 2015 due to severe drought conditions induced by the extreme El Niño and a positive anomaly of the Indian Ocean Dipole. This biomass burning emitted a significant amount of carbon, mainly in the form of carbon dioxide (CO2), into the atmosphere.

Equatorial Asia has very few ground-based stations observing CO2 and other related atmospheric constituents. A few satellites could observe atmospheric CO2, but their observations were limited in availability and subject to errors caused by the cumulus clouds typical of the tropics and by smoke from the biomass burning.

To estimate the fire-induced carbon emissions from Equatorial Asia for 2015, a research team from the National Institute for Environmental Studies (NIES), Japan, and the Meteorological Research Institute (MRI), Japan, exploited high-precision observations made onboard commercial passenger aircraft and a cargo ship traveling through Equatorial Asia. These observations are unique because measurements are made from moving platforms, enabling them to capture three-dimensional gradients of atmospheric CO2 concentrations. The aircraft observations are conducted by the NIES-MRI collaborative research project CONTRAIL. The shipboard observations are operated by NIES as part of the Global Environmental Monitoring project.

Using these aircraft and ship observations, the team performed an inverse analysis, based on numerical simulations of atmospheric transport, and estimated that the amount of carbon emitted from Equatorial Asia in September and October 2015 was 273 Tg C. The validity of the simulation-based analysis was carefully evaluated by comparing the simulated atmospheric concentrations with the observations, not only for CO2 but also for carbon monoxide, which served as a proxy for combustion sources. This estimate is slightly smaller than those of previous studies; even so, nearly 300 Tg C emitted in only two months is extraordinary, comparable to Japan's annual CO2 emissions.
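The inverse analysis itself can be caricatured in a few lines: if atmospheric transport maps regional emissions linearly onto concentration enhancements, c = He, then observed concentrations can be inverted for emissions by least squares. The 3x2 sensitivity matrix and the noiseless "observations" below are invented for illustration; only the 273 Tg C figure echoes the study's estimate.

```python
def invert_emissions(H, obs):
    """Least-squares inversion of a tiny linear 'transport' model c = H e,
    solving the normal equations (H^T H) e = H^T c by hand for 2 unknowns."""
    a = sum(h[0] * h[0] for h in H)
    b = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    r0 = sum(h[0] * c for h, c in zip(H, obs))
    r1 = sum(h[1] * c for h, c in zip(H, obs))
    det = a * d - b * b
    return ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)

# Two source regions, three observation points; H[i][j] is the (made-up)
# sensitivity of observation i to emissions from region j:
H = [[0.8, 0.1],
     [0.3, 0.5],
     [0.1, 0.9]]
true_e = (273.0, 50.0)  # "true" emissions in Tg C for this synthetic example
obs = [h[0] * true_e[0] + h[1] * true_e[1] for h in H]

est = invert_emissions(H, obs)  # recovers the emissions from concentrations
```

The real analysis does the same thing at vastly larger scale, with a full atmospheric transport model supplying the sensitivities and noisy aircraft and ship data supplying the observations.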

"Our analysis is the first study that used in-situ high-precision observations for estimating the fire-induced emissions from Equatorial Asia for 2015, which contributes to improving our understanding of biomass burning in this region. It is considered that biomass burning here is dominated by fires in peatland, which has a remarkably high carbon density. Because Equatorial Asia has a significant amount of peatland, the region has a distinct role in the global carbon cycle despite its small terrestrial coverage", Yosuke Niwa (NIES & MRI), the leading author of the study, said. "Peatland forms over thousands of years. Therefore, it is difficult to restore carbon in burnt peatland. Meanwhile, forests could recover by taking CO2 from the atmosphere. However, such CO2 uptake by forest regrowth would not be large enough if the burnt land was converted to a crop field or fires frequently occurred at the same place", said Akihiko Ito (NIES), a coauthor of the study who works on terrestrial biosphere simulations. "It is important to keep on monitoring atmospheric CO2 concentrations. Despite difficult circumstances due to the COVID-19 pandemic, these observations are ongoing thanks to great efforts of the commercial companies that operate the aircraft and the ship. To provide useful information for mitigating climate change, we will continue these observations for many years to come", Toshinobu Machida (NIES), the leader of the CONTRAIL project, said.

Credit: 
National Institute for Environmental Studies

Stakeholders' sentiment can make or break a new CEO

image: Dovev Lavie, Bocconi University

Image: 
Paolo Tonato

When a CEO steps down or is dismissed, the attention of the board is on how to choose the right executive to succeed that CEO. However, Bocconi University professor Dovev Lavie claims that managing the process of introducing the new CEO and curbing the negative sentiment that can arise among stakeholders in a moment of uncertainty could be a more critical task, especially when the new CEO comes from outside the firm.

The effect of such negative sentiment, which is a form of psychological bias, on a firm's performance is stronger than the effects of the new CEO's previous experience and of the fit between the CEO's corporate background and the appointing firm's characteristics.

Professor Lavie and co-authors Thomas Keil (University of Zurich) and Stevo Pavićević (Frankfurt School of Finance and Management), in a paper in press at the Academy of Management Journal, investigate the link between the appointment of an outside CEO and a firm's performance. The appointment of outside CEOs has become increasingly common in recent years, with about a third of appointed CEOs originating from outside the firm. Yet most studies report that they underperform compared to inside CEOs and exhibit greater performance variability.

Analyzing 1,275 appointments in 882 publicly listed US firms during 2001-2014, the study refines existing CEO-centric theories on the effect of a CEO's previous experience and fit, while introducing a novel stakeholder-centric perspective, which relates post-succession performance to the sentiment of stakeholders toward CEO appointments.

"Our findings reveal that, counter to expectations, the length and breadth of a CEO's executive experience do not enhance firm performance, nor explain the performance differences between inside and outside CEOs," Professor Lavie says. "Rather, the misfit between the CEO's corporate background and the appointing firm's characteristics, in terms of industry, firm size and firm age, undermines firm performance, especially following outside CEO appointments."

More importantly, the authors contend that negative sentiment originating from the firm's stakeholders toward the appointment of the CEO may hamper a new CEO's effectiveness, and hence firm performance.

They capture stakeholder sentiment using textual analysis of 27,092 press items around the announcement of CEO successions and further study the reactions of relevant stakeholders, such as analysts' buy/sell recommendations, executives' sales of their firm's stocks, and employees' ratings of their new CEOs.
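A dictionary-based scorer gives the flavor of such textual analysis, though the study's actual method is far more sophisticated; the word lists and press snippets below are invented:

```python
# Toy sentiment lexicons (illustrative, not the study's dictionaries):
NEGATIVE = {"doubt", "concern", "turmoil", "abrupt", "surprise", "risky", "unproven"}
POSITIVE = {"veteran", "proven", "confidence", "welcomed", "strong", "praised"}

def sentiment_score(text):
    """Net sentiment of one press item: (pos - neg) / total matched words,
    a crude word-list stand-in for the study's textual analysis."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

# Invented press snippets about a CEO appointment:
items = [
    "Analysts praised the veteran executive's proven track record.",
    "Investors voiced doubt over the abrupt, risky outside appointment.",
]
scores = [sentiment_score(t) for t in items]
```

Averaging such scores over thousands of press items around an announcement yields a sentiment measure that can then be related to post-succession performance.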

Negative sentiment turns out to undermine post-succession performance, and this effect is stronger than that of corporate misfit. Negative sentiment, if left unattended, can escalate, leading to increased scrutiny, eroded support, organizational resistance, and reputational damage.

Both inside and outside CEOs can suffer negative sentiment, but the effects are stronger for outside CEOs because they are less familiar with the hiring firm and cannot readily use workplace politics and personal networks to muffle it. The study finds that outside CEOs have an advantage so long as their corporate background makes a good fit with the hiring firm and the negative sentiment is effectively dealt with.

"Boards should develop practices that enable outside CEOs to quickly integrate into the firm and its social networks, while carefully and promptly managing the reactions of all stakeholders. For their part, newly appointed CEOs should consider tactics for offsetting negative sentiment before it escalates, e.g., by restructuring the top management team, leveraging public relations, or building social ties in the firm prior to taking office," the authors suggest.

Credit: 
Bocconi University

Staying on schedule

Tsukuba, Japan - A team of scientists led by Associate Professor Haruka Ozaki of the Center for Artificial Intelligence Research at the University of Tsukuba in collaboration with Dr. Koichi Takahashi from RIKEN used mathematical algorithms to optimize the schedule of automated biology laboratory robots. By analyzing the needs of time-sensitive samples that require investigation using multiple instruments, the researchers were able to maximize the number of experiments that can be performed within time and laboratory resource constraints. This work may help in the design of future automated biology labs and other workspaces.

Biology laboratories have seen increasing automation because many tasks, like pipetting solutions or moving cells from one instrument to another, can be performed by robots. Controlling the schedule for these machines to get the most experiments done is complicated by the fact that living cells and perishable reagents often come with their own special time constraints. Previous scheduling algorithms did not focus on "time constraints by mutual boundaries," in which only a limited time is allowed between the start or end of one operation and the boundary of another. These types of constraints occur frequently in biology, where, for example, a protein will become denatured or degraded if not processed right away.
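A "time constraint by mutual boundaries" is easy to state as a check on a finished schedule: the gap between one operation's boundary and another's must stay within a limit. A minimal sketch with invented operations and limits (the S-LAB scheduler itself solves the much harder problem of constructing such schedules):

```python
def violates_boundary_constraints(schedule, constraints):
    """schedule: {op: (start, end)} in minutes; constraints: list of
    (op_a, op_b, max_gap), meaning op_b must start no later than max_gap
    minutes after op_a ends (and not before op_a ends)."""
    for op_a, op_b, max_gap in constraints:
        gap = schedule[op_b][0] - schedule[op_a][1]
        if gap < 0 or gap > max_gap:
            return True
    return False

# Invented example: lysate degrades quickly, so PCR must begin within
# 3 minutes of lysis ending; the PCR product tolerates a 10-minute wait.
constraints = [("lyse_cells", "pcr", 3.0), ("pcr", "gel_loading", 10.0)]

good = {"lyse_cells": (0.0, 5.0), "pcr": (6.0, 36.0), "gel_loading": (40.0, 45.0)}
bad  = {"lyse_cells": (0.0, 5.0), "pcr": (9.0, 39.0), "gel_loading": (40.0, 45.0)}
```

In the second schedule the PCR starts 4 minutes after lysis ends, violating the 3-minute boundary constraint even though no instrument is double-booked, which is exactly the kind of failure the S-LAB formulation is designed to rule out.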

Now, researchers at the University of Tsukuba have developed a new mathematical framework that takes into account these time constraints, along with the possibility of resource conflicts, such as limited instrument capacity. "We call our approach 'S-LAB,' which stands for 'scheduling for laboratory automation in biology,' to emphasize the special time limitations encountered in these types of facilities," author Associate Professor Ozaki says.

Even in an age of multipurpose robots, several different types of laboratory instruments, such as thermal cyclers or evaporators, are required to execute even simple biology experiments. The robots must be programmed to shuttle the samples back and forth without the cells becoming exposed to the elements for too long. "By using the proposed scheduling method, automated laboratories can improve the efficiencies in a wide range of life science experiments," Associate Professor Ozaki says. This research may also be applied to other industrial processes that involve perishable materials that cannot be left out indefinitely.

Credit: 
University of Tsukuba

Emergent magnetic monopoles isolated using quantum-annealing computer

image: Researchers have used a D-Wave quantum-annealing computer as a testbed to examine the behavior of emergent magnetic monopoles. Shown here, emergent magnetic monopoles traverse a lattice of qubits in a superconducting quantum annealer. Nonzero flux programmed around the boundary creates a trapped monopole in the degenerate ground state.

Image: 
Los Alamos National Laboratory and D-Wave Systems

LOS ALAMOS, N.M., July 15, 2021-- Using a D-Wave quantum-annealing computer as a testbed, scientists at Los Alamos National Laboratory have shown that it is possible to isolate so-called emergent magnetic monopoles, a class of quasiparticles, creating a new approach to developing "materials by design."

"We wanted to study emergent magnetic monopoles by exploiting the collective dynamics of qubits," said Cristiano Nisoli, a lead Los Alamos author of the study. "Magnetic monopoles, as elementary particles with only one magnetic pole, have been hypothesized by many, and famously by Dirac, but have proved elusive so far."

They realized an artificial spin ice by using the superconducting qubits of the quantum machine as magnetic building blocks. Generating magnetic materials with exotic properties in this way is ground-breaking in many ways. Their process used Gauss's law to trap monopoles, allowing the scientists to observe their quantum-activated dynamics and their mutual interaction. This work demonstrates unambiguously that magnetic monopoles not only can emerge from an underlying spin structure, but can be controlled, isolated and studied precisely.

"It was shown in the last decade or so that monopoles can emerge as quasiparticles to describe the excitations of spin ices of various geometries. Previously, the National High Magnetic Field Laboratory's Pulsed Field Facility here at Los Alamos was able to 'listen' to monopole noise in artificial spin ices. And now, utilizing a D-Wave quantum annealing system, we have enough control to actually trap one or more of these particles and study them individually. We saw them walking around, getting pinned down, and being created and annihilated in pairs of opposite magnetic charge. And we could thus confirm our quantitative theoretical predictions, that they interact and in fact screen each other," said Nisoli.

"D-Wave's processors are designed to excel in optimization, but can also be used as quantum simulators. By programming the desired interactions of our magnetic material into D-Wave's qubits, we can perform experiments that are otherwise extremely difficult," said Andrew King, director of Performance Research at D-Wave and an author on the paper. "This collaborative, proof-of-principle work demonstrates new experimental capabilities, improving the power and versatility of artificial spin ice studies. The ability to programmatically manipulate emergent quasiparticles may become a key aspect to materials engineering and even topological quantum computing; we hope it will be foundational for future research."

Nisoli added, "We have only scratched the surface of this approach. Previous artificial spin ice systems were realized with nanomagnets, and they obeyed classical physics. This realization is instead fully quantum. To avoid leapfrogging we concentrated so far on a quasi-classical study, but in the future, we can really crank up those quantum fluctuations, and investigate very timely issues of decoherence, memory, quantum information, and topological order, with significant technological implications."

"These results also have technological consequences particularly relevant to DOE and Los Alamos, specifically in the idea of materials-by-design, to produce future nanomagnets that might show advanced and desirable functionality for sensing and computation. Monopoles, as binary information carriers, can be relevant to spintronics. They also contribute significantly to Los Alamos D-Wave investments," noted Alejandro Lopez-Bezanilla of Los Alamos, who works on the D-Wave processor and assembled the team.

Nisoli, moreover, suggests that besides fruitful applications, these results could also provide food for thought for fundamental physics.

"Our fundamental theories of particles are parametrized models. One wonders: what is a particle? We show here experimentally that not only particles but also their long-range interactions can be a higher-level description of a very simple underlying structure, one only coupled at nearest-neighbors. Could even 'real' particles and interactions that we consider fundamental, such as leptons and quarks, instead be construed as an emergent, higher-level description of a more complex lower-level binary substratum, much like our monopoles emerging from a bunch of qubits?"

Credit: 
DOE/Los Alamos National Laboratory

Diversity of US health care workers

What The Study Did: Researchers examined the diversity and representation by race/ethnicity and sex in select health care occupations in the United States from 2000 to 2019.

Authors: Anupam B. Jena, M.D., Ph.D., of Harvard Medical School in Boston, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamanetworkopen.2021.17086)

Editor's Note: The article includes conflict of interest disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflict of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

Measuring nitrogen to improve its management

The business world is familiar with Peter Drucker's assertion that "If you can't measure it, you can't improve it." For the sake of environmental sustainability and food security, there is an urgent need for agriculture to improve its use of nitrogen fertilizers, but can we properly measure it?

A new paper published in Nature Food offers the first comprehensive comparison of the most advanced international efforts to measure how nitrogen is managed in agriculture. Zhang et al. synthesize results from nearly thirty researchers from ten different research groups across the world, including universities, private sector fertilizer associations, and the United Nations Food and Agriculture Organization (FAO). They each estimated how much nitrogen is added to croplands as fertilizer and manure, how much of the added nitrogen is harvested in crops, and how much is left over as potential environmental pollution.
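The budget the groups each estimated reduces to simple bookkeeping: nitrogen surplus is inputs minus harvested nitrogen, and nitrogen use efficiency (NUE) is the harvested fraction of inputs. A sketch with made-up numbers (kg N per hectare, for illustration only):

```python
# Nitrogen budget bookkeeping: surplus and nitrogen use efficiency (NUE).
def n_budget(fertilizer: float, manure: float, harvested: float):
    inputs = fertilizer + manure
    surplus = inputs - harvested  # nitrogen left over as potential pollution
    nue = harvested / inputs      # fraction of inputs recovered in crops
    return surplus, nue

# Hypothetical field: 120 kg N/ha of fertilizer, 30 of manure, 90 harvested.
surplus, nue = n_budget(fertilizer=120.0, manure=30.0, harvested=90.0)
print(surplus)          # 60.0 kg N/ha unaccounted for
print(round(nue, 2))    # 0.6
```

The study's disagreements arise not from this arithmetic but from how each group defines and estimates the input and harvest terms.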

"This intercomparison project enables researchers, agronomists, and policy makers to identify where we can improve nitrogen budget estimates," said lead author, Associate Professor Xin Zhang of the University of Maryland Center for Environmental Science. "This knowledge is the basis for improving sustainable nitrogen management and for addressing food security and environmental pollution challenges."

Nitrogen matters because it is essential for farmers to obtain good crop yields, but when a large fraction of it is not taken up by the intended crops, it leaks into the environment as nitrate in groundwater, rivers, lakes, and estuaries, where it contributes to noxious and harmful algal blooms and can pose human health risks. Excess nitrogen can also be lost from croplands as gaseous pollutants that pose respiratory human health risks and contribute to climate change and stratospheric ozone destruction. Hence, nitrogen needs to be managed carefully to maximize food production but minimize environmental pollution.

''Learning how to monitor nitrogen use in agriculture is a fundamental component of the 2030 Sustainable Development Agenda," said coauthor Dr. Francesco Tubiello of the FAO in Italy. "This study supports the development of improved national statistics that can be used to this end.''

"At first blush, this new study demonstrated some surprising and troubling differences among the ten research groups, suggesting that our ability to measure, and thus manage this essential nutrient and potent pollutant is not as good as it needs to be," said Eric Davidson, Professor at the University of Maryland Center for Environmental Science. "Digging into the data more deeply, however, many of these differences were explained by varying definitions and methods used by the different groups."

There is widespread agreement among these experts that use of nitrogen fertilizers is still growing, the average global efficiency of their use is stagnant, and so the surplus nitrogen that is not taken up by crops is also growing at a troubling rate. The types of crops and the geographic regions where improvements are most needed were also identified, thus facilitating needed improvements in both measurement and management.

"The United Nations Environment Programme adopted a resolution in 2019 calling for a global action to promote sustainable nitrogen management," noted contributing author Dr. Luis Lassaletta of Universidad Politécnica de Madrid. "Cutting nitrogen waste in half by 2030 would be an ambitious goal that would significantly improve environmental quality," he added.

The first step to action, however, is to obtain good estimates of nitrogen budgets in agricultural systems, as demonstrated in this study, so that we can better manage what we are able to measure with greater confidence.

Credit: 
University of Maryland Center for Environmental Science

Report outlines how public transit agencies can advance equity

Austin, Texas (July 15, 2021)

Access to high-quality public transportation can make communities more equitable by increasing access to critical opportunities such as employment, health care and healthy food, particularly for low-income individuals and people of color. A new paper published today in the Transportation Research Record identifies six broad categories of equity-advancing practices that reach beyond existing guidelines and could be widely employed by public transit agencies nationwide.

"Many of the established practices for understanding and advancing public transit equity focus on precise quantitative measurements that are disconnected from riders' day-to-day experiences," said Alex Karner, an assistant professor of community and regional planning at The University of Texas at Austin and the study's lead author. "In transit, equity goes far beyond simply assessing how service is distributed. We wanted to lift up practices that agencies were using to create fairer and more just public transit systems."

The report studied eight public transit providers in various cities across the country and identified six practices that can help ensure that public transit works well for those who need it the most. These are:

Establishing advisory committees to provide more formal, regular and specialized channels for public input than can be achieved through traditional meetings;

Partnering with advocacy organizations, which can overcome barriers to public involvement and include hard-to-reach populations;

Incorporating equity into capital planning to ensure that transit vehicles, maintenance and system expansions equitably benefit population groups;

Planning with other regional transportation agencies, which are often a critical venue for equity-related conversations that cross regional boundaries, covering issues such as gentrification, housing affordability and commuter-oriented public transit;

Using ride-hailing and microtransit solutions, where appropriate, to facilitate public transit use and reduce gaps in service; and

Creating an equity culture by altering hiring, contracting and organizational practices to better weave equity principles throughout an entire agency.

In addition to establishing these broad categories, the paper assesses each method, offering insight into its limitations and opportunities by assessing real-world implementation as employed by the eight public transit organizations included in the report. Some of the highlights include the convening of a "Transit Equity Advisory Committee" by the Tri-County Metropolitan District of Oregon (TriMet) that successfully advocated for a reduced-fare program and decriminalized fare evasion; and TriMet's subsequent creation of a dedicated Department of Equity, Inclusion and Community Affairs to assist with their equity-related goals.

"At the end of the day, transportation equity is about fairness," Karner said. "There are many ways that public transit agencies can pursue this goal. Our key result is that the agencies doing the most in this space have made it their mission to incorporate equity into all aspects of their day-to-day operations. And they are the most likely to succeed."

The transit organizations included in the study are Capital Metro in Austin; the Champaign-Urbana Mass Transit District; LINK Houston, an equity-oriented nonprofit organization in Houston; the Metropolitan Transit Authority of Harris County (Houston METRO); the Massachusetts Bay Transportation Authority; the Massachusetts Department of Transportation; TriMet; and rabbittransit, a rural transit provider in southeast Pennsylvania.

Credit: 
SAGE

New research at ESMT Berlin shows potential variance in academic research

The research seeks to understand what drives decisions in data analyses and the process through which academics test a hypothesis by comparing the analyses of different researchers who tested the same hypotheses on the same dataset. Analysts reported radically different analyses and dispersed empirical outcomes, including, in some cases, significant effects in opposite directions from each other. Decisions about variable operationalizations explained the lack of consistency in results beyond statistical choices (i.e., which analysis or covariates to use).

"Our findings illustrate the importance of analytical choices and how different statistical methods can lead to different conclusions," says Martin Schweinsberg. "An academic research question can sometimes be investigated in different ways, even if the answers are derived from the same dataset and by analysts without any incentives to find a particular result, and this research highlights this."

To conduct the research, Professor Schweinsberg recruited a crowd of analysts from all over the world to test two hypotheses regarding the effects of scientists' gender and professional status on active participation in group conversations. Using the online academic forum Edge, researchers analyzed group discussion data of scientific discussions from more than two decades (1996-2014). The dataset contained more than 3 million words from 728 contributors and 150 variables related to the conversation, its contributors, or the textual level of the transcript. Then, using the new platform DataExplained, developed by co-authors Michael Feldman, Nicola Staub, and Abraham Bernstein, researchers analyzed the data in R to identify whether there was a link between a scientist's gender or professional status with their levels of verbosity.

Analysts utilized various sample sizes, statistical approaches, and covariates, which led to several different results in relation to the hypotheses. This resulted in varied yet defensible findings from the different analysts. By using DataExplained, Professor Schweinsberg and colleagues were able to understand precisely how these analytical choices differed, despite the data and hypotheses being the same. A qualitative study of the R code used by analysts revealed a process model for the psychology behind data analyses.
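How can equally defensible analyses of one dataset yield opposite-sign effects? One classic mechanism is covariate choice: whether to adjust for a grouping variable can flip the sign of an estimate (Simpson's paradox). The toy data below are invented for illustration and have nothing to do with the Edge dataset:

```python
# Two defensible analyses of the same data, opposite conclusions.
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Within each group, y falls as x rises...
g1 = [(1, 10), (2, 9), (3, 8)]
g2 = [(6, 20), (7, 19), (8, 18)]

# ...but pooling the groups yields a positive slope.
xs, ys = zip(*g1 + g2)
print(slope(xs, ys) > 0)  # unadjusted analysis: positive effect -> True

within = [slope(*zip(*g)) for g in (g1, g2)]
print(all(s < 0 for s in within))  # group-adjusted analysis: negative effect -> True
```

Neither analysis is wrong on its face, which is why the study argues for transparency about the full set of analytic paths considered.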

Professor Schweinsberg says, "Our study illustrates the benefits of transparent and open science practices. Subjective analytical choices are unavoidable, and we should embrace them because a collection of diverse analytical backgrounds and approaches can reveal the true consistency of an empirical claim."

This research shows the critical role subjective researcher decisions play in influencing reported empirical results. According to the researchers, these findings stress the importance of open data, which is publicly available, systematic robustness checks in academic research, and as much transparency as possible regarding both analytic paths taken and not taken, in order to ensure research is as accurate as possible. They also suggest humility when communicating research findings and caution in applying them to organizational decision-making.

Credit: 
ESMT Berlin

Unlocking efficient light-energy conversion with stable coordination nanosheets

image: Scientists from Japan and Taiwan designed a nanosheet material using iron and benzenehexathiol that made for a high-performance self-powered UV photodetector with a record current stability after 60 days of air exposure.

Image: 
Hiroshi Nishihara from Tokyo University of Science

Converting light to electricity effectively has been one of the persistent goals of scientists in the field of optoelectronics. While improving the conversion efficiency is a challenge, several other requirements also need to be met. For instance, the material must conduct electricity well, have a short response time to changes in input (light intensity), and, most importantly, be stable under long-term exposure.

Lately, scientists have been fascinated with "coordination nanosheets" (CONASHs), which are organic-inorganic hybrid nanomaterials in which organic molecules are bonded to metal atoms in a 2D network. The interest in CONASHs stems mainly from their ability to absorb light at multiple wavelength ranges and convert it into electrons with greater efficiency than other types of nanosheets. This feat was observed in a CONASH comprising a zinc atom bonded with a porphyrin-dipyrrin molecule. Unfortunately, that CONASH quickly became corroded due to the low stability of organic molecules in liquid electrolytes (a medium commonly used for current conduction).

"The durability issue needs to be solved to realize the practical applications of CONASH-based photoelectric conversion systems," says Prof. Hiroshi Nishihara from Tokyo University of Science (TUS), Japan, who conducts research on CONASH and has been trying to solve the CONASH stability problem.

Now, in a recent study published in Advanced Science as a result of a collaborative research between National Institute for Materials Science (NIMS), Japan and TUS, Prof. Nishihara and his colleagues, Dr. Hiroaki Maeda and Dr. Naoya Fukui from TUS, Dr. Ying-Chiao Wang and Dr. Kazuhito Tsukagoshi from NIMS, Mr. Chun-Hao Chiang and Prof. Chun-Wei Chen from National Taiwan University, Taiwan, and Dr. Chi-Ming Chang and Prof. Wen-Bin Jian from National Chiao-Tung University, Taiwan, have designed a CONASH comprising an iron (Fe) ion bonded to a benzene hexathiol (BHT) molecule that has demonstrated the highest stability under air exposure reported so far. The new FeBHT CONASH-based photodetector can retain over 94% of its photocurrent after 60 days of exposure! Moreover, the device requires no external power source.

What made such a feat possible? Put simply, the scientists made some smart choices. Firstly, they went for an all-solid architecture by replacing the liquid electrolyte with a solid-state layer of Spiro-OMeTAD, a material known to be an efficient transporter of "holes" (vacancies left behind by electrons). Secondly, they synthesized the FeBHT network from a reaction between iron ammonium sulfate and BHT, which accomplished two things: one, the reaction was slow enough to keep the sulfur group protected from being oxidized, and two, it helped the resultant FeBHT network become resilient to oxidation, as the scientists confirmed using density functional theory calculations.

In addition, the FeBHT CONASH showed high electrical conductivity, an enhanced photoresponse with a conversion efficiency of 6% (the highest efficiency previously reported was 2%), and a short response time.

With these results, the scientists are thrilled about the prospects of CONASH in commercialized optoelectronic applications. "The high performance of the CONASH-based photodetectors coupled with the fact that they are self-powered can pave the way for their practical applications such as in light-receiving sensors that can be used for mobile applications and recording the light exposure history of objects," says Prof. Nishihara excitedly.

And his vision may not be too far from being realized!

Credit: 
Tokyo University of Science

'Get out of the water!' Monster shark movies massacre shark conservation

image: 96% of shark movies overtly portray sharks as a threat to humans.

Image: 
Unsplash

Undeniably the shark movie to end all shark movies, the 1975 blockbuster, Jaws, not only smashed box office expectations, but forever changed the way we felt about going into the water - and how we think about sharks.

Now, more than 40 years (and 100+ shark movies) on, people's fear of sharks persists, with researchers at the University of South Australia concerned about the negative impact that shark movies are having on conservation efforts of this often-endangered animal.

In a world-first study, conservation psychology researchers, UniSA's Dr Briana Le Busque and Associate Professor Carla Litchfield, have evaluated how sharks are portrayed in movies, finding that 96 per cent of shark films overtly portray sharks as a threat to humans.

Dr Le Busque says sensationalised depictions of sharks in popular media can unfairly influence how people perceive sharks and harm conservation efforts.

"Most of what people know about sharks is obtained through movies, or the news, where sharks are typically presented as something to be deeply feared," Dr Le Busque says.

"Since Jaws, we've seen a proliferation of monster shark movies - Open Water, The Meg, 47 Metres Down, Sharknado - all of which overtly present sharks as terrifying creatures with an insatiable appetite for human flesh. This is just not true.

"Sharks are at much greater risk of harm from humans, than humans from sharks, with global shark populations in rapid decline, and many species at risk of extinction.

"Exacerbating a fear of sharks that's disproportionate to their actual threat, damages conservation efforts, often influencing people to support potentially harmful mitigation strategies.

"There's no doubt that the legacy of Jaws persists, but we must be mindful of how films portray sharks to capture movie-goers. This is an important step to debunk shark myths and build shark conservation."

Credit: 
University of South Australia

Protein-based vaccine candidate combined with potent adjuvant yields effective SARS-CoV-2 protection

A new protein-based vaccine candidate combined with a potent adjuvant provided effective protection against SARS-CoV-2 when tested in animals, suggesting that the combination could add one more promising COVID-19 vaccine to the list of candidates for human use. The protein antigen, based on the receptor binding domain (RBD) of SARS-CoV-2, was expressed in yeast instead of mammalian cells - which the authors say could enable a scalable, temperature-stable, low-cost production process well suited for deployment in the developing world. In a study by Maria Pino and colleagues, the adjuvant - a TLR7/TLR8 agonist named 3M-052, formulated with alum - substantially improved performance of the vaccine compared with vaccine adjuvanted with alum alone, inducing stronger antibody and T cell responses in vaccinated rhesus macaques. The vaccine and adjuvant combination also significantly reduced the quantity of virus in the respiratory tracts of macaques challenged by infection with SARS-CoV-2, and reduced lung inflammation as well. Pino et al. vaccinated 5 macaques with the RBD protein and the 3M-052/alum adjuvant and another 5 with the RBD protein and alum alone, each at 0, 4, and 9 weeks; they also included 5 unvaccinated macaques as controls. The vaccine and adjuvant combination induced more neutralizing antibodies with higher binding affinity for the virus RBD and also enhanced CD4+ and CD8+ T cell responses compared with the alum-only formulation. About one month after the third round of vaccinations, the researchers then infected the macaques with SARS-CoV-2, and noted the macaques vaccinated with the novel adjuvant formulation showed a reduced viral load in their nasal mucus and lung fluid, as well as fewer inflammatory cytokines in their plasma.

Credit: 
American Association for the Advancement of Science (AAAS)