Tech

New study finds inaccuracies in arsenic test kits in Bangladesh

ANN ARBOR--Researchers at the University of Michigan have raised serious concerns about the performance of some arsenic test kits commonly used in Bangladesh to monitor water contamination.

Their study tested eight commercially available arsenic test kits, and found that several--including the most widely used in Bangladesh--performed poorly.

"The implication is that well waters could have arsenic well above the safe drinking water limit, even though the test result says the level is much lower and safe," said Kim Hayes, U-M professor in civil and environmental engineering and one of the study's lead researchers. "These findings point to the need for manufacturers to ensure the accuracy of kits, and in particular, that the calibration color charts provided are consistent with the kit performance."

The study was published online in November in Water Research and will appear in the March 2020 print edition of the journal.

Arsenic in water is a long-standing problem in Bangladesh. About 25 million Bangladeshis face serious risks of developing skin lesions and cancers due to unsafe levels of arsenic in drinking water. Testing water quality and sharing the results with residents has been one of the most effective interventions for reducing the number of households consuming arsenic-contaminated water.

In rural arsenic-affected locations, government agencies and nongovernmental organizations often rely on field-test kits to monitor drinking water quality. Raghav Reddy, who recently obtained his doctorate from U-M in environmental engineering, learned through conversations with local users of field-test kits that there were concerns about the accuracy of the kits.

This led the U-M team, in collaboration with Asia Arsenic Network, an NGO in Bangladesh, to test eight commercially available arsenic field-test kits: Hach, Econo-Quick, Econo-Quick II, LaMotte, Quick, Quick II, Wagtech and Merck. Kit accuracy was assessed by comparing kit results with arsenic measurements by an established laboratory method (hydride generation atomic absorption spectroscopy, HG-AAS) using certified arsenic standards.

More than 300 arsenic test kit measurements were run across 21 different water samples in Bangladesh and 14 different test kit boxes of eight commercial products. For a subset of water samples, the completed test strips were also presented to a five-person panel consisting of government and NGO employees who regularly use arsenic field-test kits. By comparing the test strip colors to the calibration color blocks, the panel allowed the researchers to assess how much expert users' by-eye color matching varied.

Finally, a color scanner, employed to eliminate user-dependent color matching errors, was used to assess the accuracy and precision of each kit. The most accurate kits returned field-test values closest to lab-tested arsenic values. The most precise kits were highly consistent when repeated. Two kits (LaMotte and Quick II) provided accurate and precise estimates of arsenic; four kits (Econo-Quick, Quick, Wagtech and Merck) were either accurate or precise, but not both; and two kits (Hach and Econo-Quick II) were neither accurate nor precise.
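For readers who want a concrete sense of the distinction, the short Python sketch below separates the two ideas: bias against the HG-AAS laboratory value (accuracy) and spread across replicate readings (precision). The function name and the example numbers are hypothetical illustrations, not data from the study.

```python
import statistics

def assess_kit(kit_readings_ppb, lab_value_ppb):
    """Summarize one kit's accuracy and precision for a single water sample.

    kit_readings_ppb: replicate field-kit results for the sample (hypothetical values)
    lab_value_ppb: the HG-AAS reference concentration for the same sample
    """
    mean_kit = statistics.mean(kit_readings_ppb)
    bias = mean_kit - lab_value_ppb                # accuracy: average offset from the lab value
    spread = statistics.stdev(kit_readings_ppb)    # precision: consistency across replicates
    return {"mean_kit": mean_kit, "bias_vs_lab": bias, "replicate_sd": spread}

# Illustrative numbers only (not data from the study): a kit that reads low but consistently
print(assess_kit([22, 25, 24], lab_value_ppb=50))  # accurate? no (bias ~ -26 ppb); precise? yes (sd ~ 1.5)
```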

The Hach and Econo-Quick kits are the two most widely used field kits in Bangladesh today. The Hach kit results varied between replicate measurements and always underestimated arsenic levels across the range of concentrations tested. The Econo-Quick kit showed good repeatability between replicate measurements but tended to overestimate arsenic by a factor of two.

Arsenic detection for all kits in this study is based on the Gutzeit method, in which arsenic species form a colored complex on a paper test strip with color intensity proportional to arsenic concentration in water samples. The manufacturers provide a color calibration chart for manual color matching.

"Simply put, the Hach test strip colors are too light, in comparison to the darker color block charts provided by the manufacturer, for a verified arsenic concentration in laboratory measurements, while the Econo-Quick test strips are too dark," Hayes said.

"Particularly worrisome are kits that produce lighter color test strips than they should.

"Such an underestimation of arsenic means a well might be labeled as meeting a safe water quality standard when in fact it does not. Our results mean that individuals could be drinking and cooking with arsenic-contaminated water while believing it to be safe because of faulty test kits."

The study provides three recommendations: kit manufacturers should address reported inaccuracies; decision-makers should carefully evaluate their use of field kits for arsenic measurements; and kit users should conduct quality control checks to identify potentially erroneous results.

Credit: 
University of Michigan

Certain factors predict smoking cessation in patients with rheumatoid arthritis

Smoking doubles the risk of developing rheumatoid arthritis, and continuing to smoke after being diagnosed has negative effects on patients. In an Arthritis Care & Research study of patients with rheumatoid arthritis who smoked, certain healthcare factors were linked with a higher likelihood that patients would quit smoking.

In the study that included 507 patients who smoked, 29% quit over a median follow-up of 4.75 years. Compared with other patients, patients new to rheumatology care were 60% more likely to quit smoking and those in the rural community health system were 66% more likely to quit. Conversely, seropositive patients--who have elevated blood levels of antibodies thought to cause symptoms of their disease--were 43% less likely to quit. Demographic factors were not predictive of smoking cessation.

"Our findings point to the impact of health teams that systematically support tobacco cessation, processes that were in place at the rural clinic. Likewise, they highlight the need to engage seropositive patients who smoke and are at risk for worse rheumatoid arthritis and cardiopulmonary diseases, which we know are leading causes of death in rheumatoid arthritis," said senior author Christie M. Bartels, MD, MS, of the University of Wisconsin School of Medicine and Public Health.

Credit: 
Wiley

Cover crops can benefit hot, dry soils

image: Cover crop and winter wheat fields at the NMSU Agricultural Science Center, Clovis, New Mexico.

Image: 
Rajan Ghimire

The Southern High Plains of the United States have low annual rainfall. When it does rain, though, intense storms can cause severe soil erosion. Strong winds also strip away valuable topsoil.

Enter cover crops.

Usually grown during seasons when primary crops aren't cultivated, cover crops can include legumes such as pea and hairy vetch, or grassy crops like oats and barley.

Cover crops do more than just cover fields between growing seasons. They help soils retain rainwater and reduce erosion from wind and water.

In a new study, researchers from New Mexico State University and the United States Department of Agriculture show that cover crops can increase soil health in a semi-arid region of New Mexico.

"There was a lot of skepticism on the effectiveness of cover cropping in the hot, dry environment of the southern High Plains," says Rajan Ghimire. Ghimire is a researcher at New Mexico State University.

"Our research shows that cover crops increased the biological health of soils in the study area within two years," says Ghimire.

To determine soil health, the researchers measured soil carbon dioxide emissions. These emissions were higher in test plots with cover crops compared to fallow plots.

Soil microbes are tiny creatures that live and breathe in healthy soil. Carbon dioxide is released from soils during plant root and soil microbial respiration. "The higher the biological activity is in soils, the greater the carbon dioxide emissions," says Ghimire.

The plots were located in Clovis, New Mexico - about 200 miles east of Albuquerque. Ghimire and colleagues tested a variety of cover crops over two growing seasons. They also tested combinations of cover crops, such as growing peas and oats together.

Plots with peas alone, and a combination of peas and canola, showed the highest soil carbon dioxide emissions during one of the study years. However, the emissions trend was not consistent in the second year, making results difficult to interpret.

The researchers showed that the interaction of soil temperature and rainfall plays a major role in determining how much carbon dioxide is emitted. Therefore, those factors influence soil health.

But unchecked soil carbon dioxide emissions can be a problem. That's because carbon dioxide is a greenhouse gas. "Soil carbon dioxide release needs to be balanced with soil carbon storage," says Ghimire. Luckily, cover crops help take that gas from the atmosphere and store it in the soil.

Cover crops increase soil carbon storage in two ways. First, their root and aboveground biomass are largely made of carbon, which will eventually decompose into soil organic matter.

They also provide housing and food for the soil microbes. These microbes, especially fungi, are associated with even more carbon storage.

Biological activity also improves soil structure, and microbes can release nutrients crops need. "These changes greatly benefit both the environment and farming," says Ghimire.

Microbes living in the roots of legumes can fix atmospheric nitrogen to make it available to crops. However, this activity can also increase soil carbon dioxide emissions when legumes are grown as cover crops.

Grassy cover crops, such as oat and barley, contribute well to soil carbon accumulation without the extra emissions from fixing nitrogen. But that means plants will need to get nitrogen elsewhere, and these grasses also tend to need more water than legumes.

"Finding a balance is key," says Ghimire. "Mixing grasses with legumes may help increase soil carbon and nitrogen while minimizing carbon dioxide release."

The researchers plan to continue this experiment as a long-term study.

"Cover crops are a great way to sequester carbon, reduce global warming and increase agricultural resilience," says Ghimire. "But there is still a lot to learn about cover cropping, especially in semi-arid environments."

Credit: 
American Society of Agronomy

Supercomputers drive ion transport research

image: Scientists are using supercomputers to help understand the relatively rare event of salts in water (blue) passing through atomically-thin nanoporous membranes. A traversing chloride ion (peach) induces charge anisotropy at its rear (e.g., the light purple sodium ion in the bottom left), which pulls it backward.

Image: 
Malmir et al.

For a long time, nothing. Then all of a sudden, something. Wonderful things in nature can burst on the scene after long periods of dullness -- rare events such as protein folding, chemical reactions, or even the seeding of clouds. Path sampling techniques are computer algorithms that deal with the dullness in data by focusing on the part of the process in which the transition occurs.

Scientists are using XSEDE-allocated supercomputers to help understand the relatively rare event of salts in water passing through atomically-thin, nanoporous membranes. From a practical perspective, the rate of ion transport through a membrane needs to be minimized. In order to achieve this goal, however, it is necessary to obtain a statistically representative picture of individual transport events to understand the factors that control its rate. This research could not only help make progress in desalination for fresh water; it has applications in decontaminating the environment, better pharmaceuticals, and more.

Advanced path sampling techniques and molecular dynamics (MD) simulations captured the kinetics of solute transport through nanoporous membranes, according to a study published online in January 2020 in the Cell Press journal Matter.

"The goal was to calculate the mean first passage times for solutes irrespective of their magnitude," said study co-author Amir Haji-Akbari, an assistant professor of chemical and environmental engineering at Yale University.

The team was awarded supercomputing time through the Extreme Science and Engineering Discovery Environment (XSEDE), funded by the National Science Foundation. The XSEDE-allocated Stampede2 system at the Texas Advanced Computing Center (TACC) was used for the simulations in this study, in particular Stampede2's Skylake nodes.

"XSEDE was extremely useful and indispensable to what we did," Haji-Akbari said. "That's because the underlying trajectories that are part of the forward flux sampling method are fairly expensive atomistic simulations. We definitively couldn't have finished these studies using the resources that we have locally at the Yale lab."

MD simulations were used to calculate forces in the system studied at the atomic level. The problem with MD is that even today's most powerful supercomputers can only handle number crunching at timescales of a few hundred microseconds. The semi-permeable membranes under study that rejected certain solutes or ions had mean first passage times that could be much longer than the times accessible to MD.

"We used a technique called forward flux sampling, which can be equally used with equilibrium and non-equilibrium MD. The non-equilibrium aspect is particularly important for us because, when you're thinking about driven solute or ion transport, you're dealing with a non-equilibrium process that is either pressure-driven or is driven through external electric fields," Haji-Akbari said.

One can get an idea for this by imagining salty water being pushed by pistons against a membrane skin that only squeezes water out, leaving the sodium and chloride ions behind.

Haji-Akbari and colleagues used this set-up in their simulations, with a special membrane containing a nanopore through three layers of graphene. Surprisingly, even at that small scale, solutes that are supposed to be rejected can still fit.

"Geometrically, these solutes can enter the pores and pass the membrane accordingly," Haji-Akbari said. "However, what seems to be keeping them back from doing that is the fact that, when you have a solute that is in water, for example, there usually is a strong association between that solute and what we call its solvation shell, or in the case of aqueous solutions, the hydration shell."

In this example, solvent molecules cluster around and bind to the central solute. In order for the solute to enter the membrane, it has to shed some of these tightly bound molecules, and losing them costs energy, which amounts to a barrier to its entrance into the membrane. However, it turns out that this picture, although accurate, is not complete.

"When you have an ion that passes through a nanoporous membrane, there is another factor that pulls it back and prevents it from entering and traversing the pore," Haji-Akbari said. "We were able to identify a very interesting, previously unknown mechanism for ion transport through nanopores. That mechanistic aspect is what we call induced charge anisotropy."

To get a simple picture of what that is, imagine a chloride ion that enters a nanopore. Once it approaches and then enters the nanopore, it rearranges the remaining ions in the feed. Because of the presence of that chloride inside the pore, it will be more likely for sodium ions in the feed to be closer to the pore mouth than the chloride ions.

"That is the additional factor that pulls back the leading ion," Haji-Akbari explained. "You basically have two factors, partial dehydration, which was previously known; but also this induced charge anisotropy that as far as we know is the first time this has been identified."

The science team based their computational method on forward flux sampling, which is parallelizable because the computational components do not interact that strongly with one another. "High performance computing is very suitable for using these types of methods," Haji-Akbari said. "We have previously used it to study crystal nucleation. This is the first time that we're using it to study ion transport through membranes."

As supercomputers get better and better, they offer scientists tools to explore the unexplained in a more realistic way.

"We know that in real systems, the electronic cloud of any molecule or ion will be affected by its environment," Haji-Akbari said. "Those kinds of effects are usually accounted for in polarizable force fields, which are more accurate, but more expensive to simulate. Because the calculation that we conducted was already very expensive, we didn't afford to use those polarizable force fields. That's something that we would like to do at some point, especially if we have the resources to do so."

"Supercomputers are extremely useful in addressing questions that we can't address with regular computing resources. For example, we couldn't have done this calculation without a supercomputer. They're extremely valuable in accessing scales that are not accessible to either experiments, because of their lack of resolution; or simulations, because you need a large number of computer nodes and processors to be able to address that," Haji-Akbari concluded.

Credit: 
University of Texas at Austin, Texas Advanced Computing Center

The health of coral reefs in the largest marine protected area in the world

image: Coral reefs face many threats, including pollution, climate change, overfishing, storm damage, and outbreaks of crown-of-thorns starfish. In order to see how these threats impacted reefs, scientists on the Global Reef Expedition conducted over 400 coral and reef fish surveys and created over 406 square kilometers of detailed seafloor habitat maps in the Cook Islands.

Image: 
©Khaled bin Sultan Living Oceans Foundation

The Khaled bin Sultan Living Oceans Foundation (KSLOF) has published their latest findings from the Global Reef Expedition--the largest coral reef survey and mapping expedition in history. Released today, the Global Reef Expedition: Cook Islands Final Report contains critical information on the health and resiliency of coral reef ecosystems in the Cook Islands. It provides scientists, policymakers, and stakeholders with invaluable information they can use to protect and restore these fragile marine ecosystems.

Over the course of five years, the Global Reef Expedition nearly circumnavigated the globe collecting data on the status of coral reefs in the Atlantic, Pacific, and Indian Oceans. In 2013, the expedition arrived in the Cook Islands, where scientists worked closely with local leaders, government officials, and members of the Cook Islands Marine Park Steering Committee to study the reefs. Together, they completed over 400 surveys of the coral and reef fish communities surrounding Rarotonga, Aitutaki, and Palmerston Atoll.

Scientists on the research mission also created over 400 square kilometers of detailed marine habitat and bathymetric maps of these three island areas. This information can help managers identify priority sites for conservation action and track changes to the reef over time. The Foundation produced an award-winning film, Mapping the Blue, which shows how these maps were made and illustrates how they can inform marine spatial planning efforts in the Cook Islands. The film stars international rugby star and conservationist Kevin Iro, who helped to establish Cook Islands' marine park, Marae Moana, the largest marine protected area in the world.

"We are excited to receive the report and are most appreciative of the work done by Living Oceans foundation," said Marae Moana Ambassador Kevin Iro. "This report will definitely help with our current marine spatial planning of the Marae Moana and it also demonstrates that government and non-government organizations can work cooperatively to better understand our ocean environment."

The report released today contains a comprehensive summary of the research findings from the expedition along with conservation recommendations that can help preserve the Cook Islands' reefs into the future. Scientists on the expedition found that many coral reefs in the Cook Islands were in good shape, with high coral cover and diverse and abundant fish communities. For the most part, reefs in remote areas tended to be healthier than those near population centers. But while the reefs surrounding Palmerston Atoll were healthy, and the reefs in Rarotonga were doing alright, Aitutaki's corals were being ravaged by an outbreak of crown-of-thorns starfish (COTS).

"When we arrived in Aitutaki, it was evident immediately that there was a problem. Reefs that should have been flourishing were being eaten alive before our eyes by thousands of starfish," said Alexandra Dempsey, the Director of Science Management at the Foundation and one of the report's authors. The reef was in crisis. In some places in Aitutaki, one of the more popular island destinations in the Cook Islands, crown-of-thorns starfish had damaged 80-99% of coral on the seafloor. "We couldn't help but intervene."

Over the course of a few days, scientific divers on the Global Reef Expedition collected 540 COTS from reefs around Aitutaki. The starfish were collected by hand, a daunting task as the starfish are covered in large venomous spines. Scientists returned to the reefs in 2015 to assess the damage and remove any remaining crown-of-thorns starfish. Although the reefs have likely changed since then, reefs in the Cook Islands showed many signs of resilience. When scientists returned to Aitutaki, they noted that healthy fish populations and a diverse coral community allowed new coral to settle and grow on damaged reefs, beginning the process of recovery. Based on their experience, the scientists created a best-practices guide for dealing with future COTS outbreaks and shared their findings with government officials in the Cook Islands.

Despite the damage done by COTS in Aitutaki, the report's authors are optimistic about the future of Cook Islands' reefs. "With continued efforts to protect and preserve their reefs, coral reefs in the Cook Islands could become some of the best in the South Pacific," said KSLOF marine ecologist Renée Carlton.

The Cook Islands is regarded as a global leader in marine conservation, most notably for establishing Marae Moana marine park and expanding it to include all of their waters. The development of a zoning plan for the marine protected area is currently underway to determine which activities will be allowed where.

"It was a privilege for the Foundation to work in the Cook Islands which turned out to offer some of the more vibrant reef systems encountered on the Global Reef Expedition," said Sam Purkis, KSLOF's Chief Scientist as well as Professor and Chair of the Department of Marine Geosciences at the University of Miami's Rosenstiel School of Marine and Atmospheric Science. "It was particularly reassuring to see that the conservation initiatives already in place in the country, such as the widespread use of marine protected areas and reserves, are paying dividends. It is our sincere hope that the habitat and benthic maps that we have produced for the Cook Islands from satellite, along with the extensive portfolio of field data, will serve to bolster these ongoing efforts and catalyze even more ambitious conservation actions."

Credit: 
Khaled bin Sultan Living Oceans Foundation

'Triangle 2' plastic containers may see environmental makeover

ITHACA, N.Y. - Recyclable plastic containers with the No. 2 designation could become even more popular for manufacturers as plastic milk jugs, dish soap containers and shampoo bottles may soon get an environmental makeover.

Cornell chemists have demonstrated how to make high-density polyethylene with better control over polymer chain lengths, allowing them to improve physical properties such as processability and strength, according to research published Dec. 27, 2019, in the Journal of the American Chemical Society.

"The grand challenge has been to minimize the energy cost of plastic production and to create new ways to precisely tune the properties of consumer plastics," said Renee Sifri, doctoral candidate in chemistry in the laboratory of Brett Fors, associate professor of chemistry in the College of Arts and Sciences.

To strengthen plastic products, manufacturers might add extra material in the production and recycling process, which requires energy to melt and mold the plastic into its final form, Sifri said.

Handling high-density polyethylene - known as HDPE, which has a No. 2 recycling symbol - requires a large amount of resources.

"Developing new ways to reduce the amount of plastic production is vital to minimizing plastic waste," said Omar Padilla-Vélez, doctoral candidate in chemistry in the laboratory of Geoffrey Coates, the Tisch University Professor in the Department of Chemistry and Chemical Biology.

Plastics consist of polymers, which are bonded together chemically into chains. Long chains create stronger materials, Sifri said, while shorter chains allow for easy processing.

Commercial polymers are commonly produced with a range of chain lengths, to optimize performance and processability.

Until now, little was known about how the shape of the molecular weight distribution influences these properties. The new laboratory processes provided precise control over this variable, in turn enabling a systematic study of its influence on the material's physical properties.
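As a rough illustration of what "molecular weight distribution" means in practice, the sketch below computes the standard polymer metrics -- number-average molecular weight (Mn), weight-average molecular weight (Mw) and dispersity (Mw/Mn) -- for a made-up blend of short and long HDPE chains. The numbers are hypothetical and are not from the Cornell work.

```python
def mw_distribution_stats(chains):
    """chains: list of (molar_mass_g_per_mol, number_of_chains) pairs.

    Mn = sum(N_i * M_i) / sum(N_i)            number-average molecular weight
    Mw = sum(N_i * M_i**2) / sum(N_i * M_i)   weight-average molecular weight
    dispersity = Mw / Mn                      breadth of the distribution
    """
    n_total = sum(n for m, n in chains)
    mn = sum(n * m for m, n in chains) / n_total
    mw = sum(n * m**2 for m, n in chains) / sum(n * m for m, n in chains)
    return mn, mw, mw / mn

# Hypothetical bimodal HDPE blend: many short chains (easy processing) plus some long chains (strength)
blend = [(20_000, 800), (200_000, 200)]
mn, mw, dispersity = mw_distribution_stats(blend)
print(f"Mn = {mn:,.0f} g/mol, Mw = {mw:,.0f} g/mol, dispersity = {dispersity:.2f}")
```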

"Our work shows promise for lowering the amount of energy needed in processing without compromising polymer strength," Sifri said. "The take-home message - reduce the processing energy while ensuring the strength is not compromised. That way the plastic containers don't fall apart."

To test the tensile strength of the plastic, the researchers molded the new plastic into tiny, flat pieces shaped like dog bones. Tensile testing revealed that the shape of the molecular weight distribution (the mix of polymer chain lengths) does not affect the strain at the breaking point, which shows that this new ability to tune HDPE processing does not compromise material strength.

Said Padilla-Vélez: "We pulled them apart and noted how much force is required to break them. Our new way lost no strength."

Credit: 
Cornell University

Study finds delta helps to decrease the impact of river flooding

image: A ground-level view of the Tombigbee-Alabama Delta.

Image: 
Steve Dykstra, Dauphin Island Sea Lab

Most coastal cities and ports face a double threat from storm surge and river flooding. Infrastructure development along waterways and sea-level rise increase vulnerability for these communities. In a recent publication, The Propagation of Fluvial Flood Waves Through a Backwater-Estuarine Environment, researchers examined historical data to determine how the risk of coastal river flooding to communities can be reduced.

Usually, in rivers, large flooding events move from upstream to downstream faster than small events. This study identified a different pattern by tracking flooding events as they moved from the river to the coastal ocean. The river delta, a feature common in many natural systems, turned out to be very important for understanding when and where flooding is likely to happen.

Using years of observations (in some cases 9 decades of data), this study found that the Tombigbee-Alabama Delta (also known as the Mobile-Tensaw Delta) delays and reduces flooding for cities along the delta and bay. Amazingly, this effect is largely caused by the vegetation that naturally occurs in the delta.

Most of the delta is a densely packed tupelo-bald cypress swamp, supporting the most biodiverse location in temperate North America. For large events, the delta swamp acts like a sponge, quickly absorbing the initial floodwaters and then slowly releasing the water back to the main rivers. This gives communities more time to prepare and reduces the risk of river flooding overlapping with a storm surge during a hurricane. The slower release of water from the delta also slows the impact on the bay, delaying the initial flushing while keeping the salinity low for a longer period of time. In contrast, smaller flooding events moved downstream faster. This occurs because smaller flooding events remain within the confines of the river channel, where they are not affected by the swamps of the delta.
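A back-of-the-envelope way to picture the "sponge" effect is the classic linear-reservoir routing used in hydrology: water goes into storage and drains back out in proportion to how much is stored. The toy sketch below is not the study's model, but it shows how such storage delays and flattens a flood peak.

```python
def route_through_storage(inflow, k):
    """Route a flood hydrograph through a simple linear reservoir.

    inflow: upstream discharge at each time step (arbitrary units)
    k: storage constant in time steps; larger k means more water held back per unit outflow
    Storage S follows dS/dt = inflow - outflow, with outflow = S / k.
    """
    storage, outflow = 0.0, []
    for q_in in inflow:
        storage += q_in - storage / k   # explicit one-step update of the stored volume
        outflow.append(storage / k)
    return outflow

# Toy triangular flood wave: the routed peak arrives later and is lower than the inflow peak
flood = [0, 10, 40, 80, 40, 10, 0, 0, 0, 0, 0, 0]
print([round(q, 1) for q in route_through_storage(flood, k=4.0)])
```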

These findings indicate that the intensity of coastal flooding can be decreased, and communities given more time to prepare, by allowing inland regions of rivers to flood and/or by managing vegetation type, both of which reduce the downstream height of water.

Credit: 
Dauphin Island Sea Lab

A new model of vision

image: MIT cognitive scientists have developed a computer model of face recognition that performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face.

Image: 
MIT

When we open our eyes, we immediately see our surroundings in great detail. How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision.

Scientists who study the brain have tried to replicate this phenomenon using computer models of vision, but so far, leading models only perform much simpler tasks such as picking out an object or a face against a cluttered background. Now, a team led by MIT cognitive scientists has produced a computer model that captures the human visual system's ability to quickly generate a detailed scene description from an image, and offers some insight into how the brain achieves this.

"What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world," says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).

The new model posits that when the brain receives visual input, it quickly performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face or other object. This type of model, known as efficient inverse graphics (EIG), also correlates well with electrical recordings from face-selective regions in the brains of nonhuman primates, suggesting that the primate visual system may be organized in much the same way as the computer model, the researchers say.

Ilker Yildirim, a former MIT postdoc who is now an assistant professor of psychology at Yale University, is the lead author of the paper, which appears today in Science Advances. Tenenbaum and Winrich Freiwald, a professor of neurosciences and behavior at Rockefeller University, are the senior authors of the study. Mario Belledonne, a graduate student at Yale, is also an author.

Inverse graphics

Decades of research on the brain's visual system have examined, in great detail, how light input onto the retina is transformed into cohesive scenes. This understanding has helped artificial intelligence researchers develop computer models that can replicate aspects of this system, such as recognizing faces or other objects.

"Vision is the functional aspect of the brain that we understand the best, in humans and other animals," Tenenbaum says. "And computer vision is one of the most successful areas of AI at this point. We take for granted that machines can now look at pictures and recognize faces very well, and detect other kinds of objects."

However, even these sophisticated artificial intelligence systems don't come close to what the human visual system can do, Yildirim says.

"Our brains don't just detect that there's an object over there, or recognize and put a label on something," he says. "We see all of the shapes, the geometry, the surfaces, the textures. We see a very rich world."

More than a century ago, the physician, physicist, and philosopher Hermann von Helmholtz theorized that the brain creates these rich representations by reversing the process of image formation. He hypothesized that the visual system includes an image generator that would be used, for example, to produce the faces that we see during dreams. Running this generator in reverse would allow the brain to work backward from the image and infer what kind of face or other object would produce that image, the researchers say.

However, the question remained: How could the brain perform this process, known as inverse graphics, so quickly? Computer scientists have tried to create algorithms that could perform this feat, but the best previous systems require many cycles of iterative processing, taking much longer than the 100 to 200 milliseconds the brain requires to create a detailed visual representation of what you're seeing. Neuroscientists believe perception in the brain can proceed so quickly because it is implemented in a mostly feedforward pass through several hierarchically organized layers of neural processing.

The MIT-led team set out to build a special kind of deep neural network model to show how a neural hierarchy can quickly infer the underlying features of a scene -- in this case, a specific face. In contrast to the standard deep neural networks used in computer vision, which are trained from labeled data indicating the class of an object in the image, the researchers' network is trained from a model that reflects the brain's internal representations of what scenes with faces can look like.

Their model thus learns to reverse the steps performed by a computer graphics program for generating faces. These graphics programs begin with a three-dimensional representation of an individual face and then convert it into a two-dimensional image, as seen from a particular viewpoint. These images can be placed on an arbitrary background image. The researchers theorize that the brain's visual system may do something similar when you dream or conjure a mental image of someone's face.

The researchers trained their deep neural network to perform these steps in reverse -- that is, it begins with the 2D image and then adds features such as texture, curvature, and lighting, to create what the researchers call a "2.5D" representation. These 2.5D images specify the shape and color of the face from a particular viewpoint. Those are then converted into 3D representations, which don't depend on the viewpoint.
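The published model is far more elaborate, but a minimal sketch of the staged idea (image in, viewpoint-dependent 2.5D maps in the middle, viewpoint-invariant 3D code out) might look like the toy PyTorch network below. The layer choices, channel counts and names are illustrative assumptions, not the architecture of the EIG model.

```python
import torch
import torch.nn as nn

class InverseGraphicsSketch(nn.Module):
    """Toy staged encoder: image -> 2.5D maps -> viewpoint-invariant 3D latent code.
    Layer sizes and names are illustrative, not those of the published EIG model."""
    def __init__(self, latent_dim=128):
        super().__init__()
        # Stage 1: 2D image to "2.5D" maps (here: 4 channels, e.g. depth plus albedo RGB)
        self.to_2p5d = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),
        )
        # Stage 2: 2.5D maps to a viewpoint-invariant 3D latent (e.g. a face shape/texture code)
        self.to_3d = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(4 * 8 * 8, latent_dim),
        )

    def forward(self, image):
        maps_2p5d = self.to_2p5d(image)    # viewpoint-dependent intermediate representation
        latent_3d = self.to_3d(maps_2p5d)  # viewpoint-invariant scene/face description
        return maps_2p5d, latent_3d

model = InverseGraphicsSketch()
maps, latent = model(torch.randn(1, 3, 64, 64))  # a dummy 64x64 RGB "face" image
print(maps.shape, latent.shape)                  # [1, 4, 64, 64] and [1, 128]
```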

"The model gives a systems-level account of the processing of faces in the brain, allowing it to see an image and ultimately arrive at a 3D object, which includes representations of shape and texture, through this important intermediate stage of a 2.5D image," Yildirim says.

Model performance

The researchers found that their model is consistent with data obtained by studying certain regions in the brains of macaque monkeys. In a study published in 2010, Freiwald and Doris Tsao of Caltech recorded the activity of neurons in those regions and analyzed how they responded to 25 different faces, seen from seven different viewpoints. That study revealed three stages of higher-level face processing, which the MIT team now hypothesizes correspond to three stages of their inverse graphics model: roughly, a 2.5D viewpoint-dependent stage; a stage that bridges from 2.5 to 3D; and a 3D, viewpoint-invariant stage of face representation.

"What we show is that both the quantitative and qualitative response properties of those three levels of the brain seem to fit remarkably well with the top three levels of the network that we've built," Tenenbaum says.

The researchers also compared the model's performance to that of humans in a task that involves recognizing faces from different viewpoints. This task becomes harder when researchers alter the faces by removing the face's texture while preserving its shape, or distorting the shape while preserving relative texture. The new model's performance was much more similar to that of humans than computer models used in state-of-the-art face-recognition software, additional evidence that this model may be closer to mimicking what happens in the human visual system.

The researchers now plan to continue testing the modeling approach on additional images, including objects that aren't faces, to investigate whether inverse graphics might also explain how the brain perceives other kinds of scenes. In addition, they believe that adapting this approach to computer vision could lead to better-performing AI systems.

"If we can show evidence that these models might correspond to how the brain works, this work could lead computer vision researchers to take more seriously and invest more engineering resources in this inverse graphics approach to perception," Tenenbaum says. "The brain is still the gold standard for any kind of machine that sees the world richly and quickly."

Credit: 
Massachusetts Institute of Technology

Tunnel fire safety

image: Fire accidents in tunnels can have disastrous consequences in terms of human losses and structural damage.

Image: 
Pixabay/wal_172619

Global risk management experts are calling for fire education initiatives to be included in driver safety programs so that drivers are better prepared for an emergency if faced with it on the roads.

The call follows a new research study in which researchers from the University of South Australia and the National Technical University of Athens assessed the fire safety mechanisms of road tunnels, finding that risks to human life could be reduced through greater awareness and education.

Using a newly developed evacuation model, researchers were able to simulate the behaviours of trapped-commuters and their movement to estimate potential outcomes and fatalities following a fire in a road tunnel.
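The researchers' model is far more detailed, but the basic logic of an evacuation simulation can be sketched as a simple Monte Carlo comparison of the time people need (a pre-movement delay plus walking time to an exit) against the time conditions remain survivable. All parameters below are hypothetical placeholders, not values from the study.

```python
import random

def simulate_evacuations(n_trials, exit_spacing_m=250, walking_speed_ms=1.2,
                         tenable_time_s=300, seed=0):
    """Toy Monte Carlo evacuation sketch (illustrative parameters, not the study's model).

    Each trapped commuter gets a random pre-movement delay (time spent hesitating or
    gathering belongings) and a random distance to the nearest emergency exit; they
    evacuate successfully if delay plus walking time stays below the tenable time.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        pre_movement_s = rng.uniform(30, 300)              # slow responders are the main risk
        distance_m = rng.uniform(0, exit_spacing_m / 2)    # nearest exit is at most half the spacing away
        if pre_movement_s + distance_m / walking_speed_ms < tenable_time_s:
            successes += 1
    return successes / n_trials

print(f"estimated share of successful evacuations: {simulate_evacuations(100_000):.2%}")
```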

UniSA Adjunct Associate Professor Konstantinos Kirytopoulos says being able to forecast human behaviour in a fire-risk scenario provides critical information for safety analysts and tunnel managers.

"To mitigate potential fire accidents and disastrous consequences in road tunnels, safety analysts not only have to fulfil standard regulatory requirements, but also need to conduct a complex risk assessment which includes defining the issues, identifying hazards, calculating and prioritizing risks, and doing so for different environments," Assoc Prof Kirytopoulos says.

"An evacuation simulation model such as ours is particularly valuable because it lets analysts thoroughly inspect all parameters within an emergency.

"Uniquely, it also simulates human behaviour and movement in conjunction with the use of safety mechanisms, letting us project the likelihood of successful evacuations under different combinations of human behaviour, safety procedures implementation and safety infrastructure employed, which provide an extremely useful tool for tunnel safety analysts.

"Safety levels are dictated by the operation of the whole system - organisation, technical and human elements - so anything we can do to increase the success rates of these individual factors can have a massive impact on the whole.

"Having a familiarity with emergency protocols in a confined or enclosed space such as a road tunnel can help trapped-commuters to appropriately respond, and this, we believe will improve successful evacuations."

Road tunnels are fundamental elements of road transport systems contributing to profitable economies and societies. Given the sheer number of vehicles on the road - about 262 million passenger cars in the European Union; more than 272 million vehicles in the United States; and an estimated 19 million vehicles in Australia - safety is paramount.

Fire accidents in tunnels can have disastrous consequences in terms of human losses and structural damage. Although tunnel safety has increased significantly since previous tunnel fire tragedies such as the Mont Blanc Tunnel fire (France, 1999, with 39 fatalities), the Fréjus tunnel fire (France, 2005, with 2 fatalities and 21 injuries) and the Yanhou Tunnel fire (China, 2014, with 40 fatalities), these incidents highlight the severity of potential impacts and provide insights for mitigating future risks.

Assoc Prof Kirytopoulos says commuters are the most variable factor in the event of a tunnel fire because they're the first to confront the consequences of the fire and in most cases are inadequately trained for such circumstances.

"When the tunnel operator calls for an emergency evacuation through public address systems, radio rebroadcast or electronic tunnel message signs, people should respond immediately and evacuate, without any delay," Assoc Prof Kirytopoulos says.

"Fires are very complex phenomena, and when in an enclosed tunnel environment, they're characterised by turbulence, combustion irregularity and high radiation. Tunnels fires are also known to have an incredibly intense heat release rate - even four times the intensity of fires in an open environment - as well as large amounts of toxic fumes and smoke.

"In an emergency, time is crucial. The evacuation of the tunnel should be as quick and efficient as the evacuation of an airplane after crash landing. Educating drivers on what they should do as well as making them aware of the evacuation systems which are in place are the best means for mitigating risk and ensuring a safe outcome."

Credit: 
University of South Australia

Drinking weakens bones of people living with HIV: BU study

For people living with HIV, any level of alcohol consumption is associated with lower levels of a protein involved in bone formation, raising the risk of osteoporosis, according to a new study by researchers from the Boston University School of Public Health (BUSPH) and School of Medicine (BUSM) and published in the journal Alcoholism: Clinical and Experimental Research.

"We did not find an amount of alcohol consumption that appeared 'safe' for bone metabolism," says study lead author Dr. Theresa W. Kim, an assistant professor at BUSM and a faculty member of the Clinical Addiction Research Education (CARE) program at Boston Medical Center.

"As you get older, your ability to maintain adequate bone formation declines," Kim says. "These findings suggest that for people with HIV, alcohol may make this more difficult."

Low bone density is common among people living with HIV, even those who have successfully suppressed their viral loads with antiretroviral therapy.

"Our finding highlights an under-recognized circumstance in which people with HIV infection often find themselves: Their viral load can be well controlled by efficacious, now easier-to-take medications, while other health conditions and risks that commonly co-occur--like substance use and other medical conditions--are less well-addressed," says Dr. Richard Saitz, professor of community health sciences at BUSPH and the study's senior author.

The researchers used data from 198 participants in the Boston ARCH cohort, a long-running study led by Saitz and funded by the National Institute on Alcohol Abuse and Alcoholism that includes people living with HIV and current or past alcohol or drug use disorder. For the current study, the researchers analyzed participants' blood samples, looking at biomarkers associated with bone metabolism (a life-long process of absorbing old bone tissue and creating new bone tissue) and a biomarker associated with recent alcohol consumption. They also used data from interviews in their analyses, and controlled for other factors such as age, sex, race/ethnicity, other substance use, medications, vitamin D levels, and HIV viral suppression.

The researchers found a significant association between a participant's drinking and their levels of serum procollagen type 1 N-terminal propeptide (P1NP), a marker of bone formation. For every additional drink per day on average, a participant's P1NP levels dropped by 1.09 ng/mL (the range for healthy P1NP levels is 13.7 to 42.4 ng/mL). Participants who drank more than 20 days out of each month also had lower P1NP levels than those who drank fewer than 20 days per month, and participants with high levels of the alcohol-associated biomarker also had lower P1NP levels.
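For scale, the reported association can be read as a simple linear projection, as in the sketch below; this is illustrative arithmetic only, not a clinical calculator.

```python
def projected_p1np_drop(drinks_per_day, slope_ng_per_ml_per_drink=1.09):
    """Projected reduction in serum P1NP implied by the reported association
    (about 1.09 ng/mL lower per additional drink per day, on average)."""
    return drinks_per_day * slope_ng_per_ml_per_drink

healthy_range = (13.7, 42.4)  # ng/mL, as cited in the study summary
for drinks in (1, 2, 4):
    print(f"{drinks} drink(s)/day -> ~{projected_p1np_drop(drinks):.2f} ng/mL lower P1NP "
          f"(healthy range {healthy_range[0]}-{healthy_range[1]} ng/mL)")
```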

"If I were counseling a patient who was concerned about their bone health, besides checking vitamin D and recommending exercise, I would caution them about alcohol use, given that alcohol intake is a modifiable risk factor and osteoporosis can lead to fracture and functional decline," says Kim, who is also a primary care physician at the Boston Health Care for the Homeless Program.

Credit: 
Boston University School of Medicine

A PLOS Medicine special issue devoted to refugee and migrant health

image: This week, the open-access journal PLOS Medicine launches its latest Special Issue, focused on research and commentary about the health of refugees and migrants.

Image: 
geralt, Pixabay

This week, the open-access journal PLOS Medicine launches its latest Special Issue, focused on research and commentary about the health of refugees and migrants.

As discussed in a recent Editorial in the journal, in recent years there has been a large increase in the number of people migrating both within countries and internationally, voluntarily or otherwise. Migration can be motivated by diverse factors, including conflict and violence, population growth and environmental degradation, and escape from economic hardship. The health of those people migrating can be adversely affected by the situation which led to their displacement, and further threats to their safety, health and wellbeing may arise during the process of migration or in a destination country.

Continued growth in migration is expected, and the aim of the Special Issue is to document the health challenges faced by refugees and migrants in different settings worldwide, and to highlight opportunities to develop policy and practice aimed at improving their health and wellbeing. Guest editors Paul Spiegel, Terry McGovern and Kolitha Wickramage have advised on the content of the issue, which begins with a report from Megan Doherty of the Children's Hospital of Eastern Ontario, Ottawa, Canada, and colleagues.

In the past 5 years, large numbers of Rohingya people have been displaced from Myanmar into neighbouring Bangladesh. Doherty and co-authors studied 311 people, 156 with serious health problems and 155 caregivers, living in refugee camps in 2017. Among those reporting health problems, 64% had a significant physical disability, 21% reported having treatment-resistant tuberculosis, and 10% had cancer. Caregivers were often family members, and reported providing more than 13 hours of care per day, on average. Importantly, 62% of those with health problems reported experiencing significant pain, and the majority of available pain treatments were judged to be ineffective, indicating a need to provide palliative care components in humanitarian settings.

In a second research study, Sarah Crede and colleagues from the University of Sheffield, UK report on the use of paediatric emergency care by mothers in Bradford who were born outside the UK or Ireland, compared with mothers born in the UK or Ireland. Among 10,168 mothers in the Born in Bradford study who gave birth to children in the period from April 2007 to June 2011, about one third were born outside the UK.

Crede and colleagues found that mothers born outside the UK were less likely to make a first visit to the emergency department with their children (odds ratio 0.88, 95% CI 0.80-0.97, p=0.012), which could indicate a limited awareness of, or hesitancy about attending, health care by some migrant groups. Reasons for attending and the proportions admitted to hospital were similar across the two populations. On the other hand, among the study participants using the emergency department, utilization rates were higher for children of migrant mothers (incidence rate ratio 1.19, 95% CI 1.01-1.40, p=0.04). Migrants from Europe and those who had been in the UK for longer than 5 years had higher utilization rates. Studies of this type in high-income countries can help to guide provision of appropriate health services for children, and highlight populations for whom access should be improved.

Credit: 
PLOS

Researchers develop new explanation for destructive earthquake vibrations

image: Research suggests that rocks colliding inside fault zones, like this one in Maine, may contribute to damaging high-frequency earthquake vibrations.

Image: 
Julia Carr

PROVIDENCE, R.I. [Brown University] -- Earthquakes produce seismic waves with a range of frequencies, from the long, rolling motions that make skyscrapers sway, to the jerky, high-frequency vibrations that cause tremendous damage to houses and other smaller structures. A pair of Brown University geophysicists has a new explanation for how those high-frequency vibrations may be produced.

In a paper published in Geophysical Research Letters, Brown faculty members Victor Tsai and Greg Hirth propose that rocks colliding inside a fault zone as an earthquake happens are the main generators of high-frequency vibrations. That's a very different explanation than the traditional one, the researchers say, and it could help explain puzzling seismic patterns made by some earthquakes. It could also help scientists predict which faults are likely to produce the more damaging quakes.

"The way we normally think of earthquakes is that stress builds up on a fault until it eventually fails, the two sides slip against each other, and that slip alone is what causes all the ground motions we observe," said Tsai, an associate professor in Brown's Department of Earth, Environmental and Planetary Sciences. "The idea of this paper is to evaluate whether there's something other than just slip. The basic question is: If you have objects colliding inside the fault zone as it slips, what physics could result from that?"

Drawing from mathematical models that describe the collisions of rocks during landslides and other debris flows, Tsai and Hirth developed a model that predicts the potential effects of rock collisions in fault zones. The model suggested the collisions could indeed be the principal driver of high-frequency vibrations. And combining the collision model with more traditional frictional slip models offers reasonable explanations for earthquake observations that don't quite fit the traditional model alone, the researchers say.

For example, the combined model helps explain repeating earthquakes -- quakes that happen at the same place in a fault and have nearly identical seismic wave forms. The odd thing about these quakes is that they often have very different magnitudes, yet still produce ground motions that are nearly identical. That's difficult to explain by slip alone, but makes more sense with the collision model added, the researchers say.

"If you have two earthquakes in the same fault zone, it's the same rocks that are banging together -- or at least rocks of basically the same size," Tsai said. "So if collisions are producing these high-frequency vibrations, it's not surprising that you'd get the same ground motions at those frequencies regardless of the amount of slip that occurs."

The collision model also may help explain why quakes at more mature fault zones -- ones that have had lots of quakes over a long period of time -- tend to produce less damage compared to quakes of the same magnitude at more immature faults. Over time, repeated quakes tend to grind down the rocks in a fault, making the faults smoother. The collision model predicts that smoother faults with less jagged rocks colliding would produce weaker high-frequency vibrations.

Tsai says that more work needs to be done to fully validate the model, but this initial work suggests the idea is promising. If the model does indeed prove valid, it could be helpful in classifying which faults are likely to produce more or less damaging quakes.

"People have made some observations that particular types of faults seem to generate more or less high-frequency motion than others, but it has not been clear why faults fall into one category or the other," he said. "What we're providing is a potential framework for understanding that, and we could potentially generalize this to all faults around the world. Smoother faults with rounded internal structures may generally produce less high-frequency motions, while rougher faults would tend to produce more."

The research also suggests that some long-held ideas about how earthquakes work might need revising.

"In some sense it might mean that we know less about certain aspects of earthquakes than we thought," Tsai said. "If fault slip isn't the whole story, then we need a better understanding of fault zone structure."

Credit: 
Brown University

Team sheds new light on design of inorganic materials for brain-like computing

Ever wish your computer could think like you do or perhaps even understand you?

That future may not be now, but it's one step closer, thanks to a Texas A&M University-led team of scientists and engineers and their recent discovery of a materials-based mimic for the neural signals responsible for transmitting information within the human brain.

The multidisciplinary team, led by Texas A&M chemist Sarbajit Banerjee in collaboration with Texas A&M electrical and computer engineer R. Stanley Williams and additional colleagues across North America and abroad, has discovered a neuron-like electrical switching mechanism in the solid-state material β'-CuxV2O5 -- specifically, how it reversibly morphs between conducting and insulating behavior on command.

The team was able to clarify the underlying mechanism driving this behavior by taking a new look at β'-CuxV2O5, a remarkable chameleon-like material that changes with temperature or an applied electrical stimulus. In the process, they zeroed in on how copper ions move around inside the material and how this subtle dance in turn sloshes electrons around to transform it. Their research revealed that the movement of copper ions is the linchpin of an electrical conductivity change which can be leveraged to create electrical spikes in the same way that neurons function in the cerebral nervous system -- a major step toward developing circuitry that functions like the human brain.

Their resulting paper, which features Texas A&M chemistry graduate students Abhishek Parija (now at Intel Corporation), Justin Andrews and Joseph Handy as first authors, is published today (Feb. 27) in the Cell Press journal Matter.

In their quest to develop new modes of energy efficient computing, the broad-based group of collaborators is capitalizing on materials with tunable electronic instabilities to achieve what's known as neuromorphic computing, or computing designed to replicate the brain's unique capabilities and unmatched efficiencies.

"Nature has given us materials with the appropriate types of behavior to mimic the information processing that occurs in a brain, but the ones characterized to date have had various limitations," Williams said. "The importance of this work is to show that chemists can rationally design and create electrically active materials with significantly improved neuromorphic properties. As we understand more, our materials will improve significantly, thus providing a new path to the continual technological advancement of our computing abilities."

While smart phones and laptops seemingly get sleeker and faster with each iteration, Parija notes that new materials and computing paradigms freed from conventional restrictions are required to meet continuing speed and energy-efficiency demands that are straining the capabilities of silicon computer chips, which are reaching their fundamental limits in terms of energy efficiency. Neuromorphic computing is one such approach, and manipulation of switching behavior in new materials is one way to achieve it.

"The central premise -- and by extension the central promise -- of neuromorphic computing is that we still have not found a way to perform computations in a way that is as efficient as the way that neurons and synapses function in the human brain," said Andrews, a NASA Space Technology Research Fellow. "Most materials are insulating (not conductive), metallic (conductive) or somewhere in the middle. Some materials, however, can transform between the two states: insulating (off) and conductive (on) almost on command."

Using an extensive combination of computational and experimental techniques, Handy said, the team was able not only to demonstrate that this material undergoes a transition driven by changes in temperature, voltage and electric field strength that can be used to create neuron-like circuitry, but also to explain comprehensively how this transition happens. Unlike other materials that exhibit a metal-insulator transition (MIT), this one relies on the movement of copper ions within a rigid lattice of vanadium and oxygen.

"We essentially show that a very small movement of copper ions within the structure brings about a massive change in conductance in the whole material," Handy added. "Because of this movement of copper ions, the material transforms from insulating to conducting in response to external changes in temperature, applied voltage or applied current. In other words, applying a small electrical pulse allows us to transform the material and save information inside it as it works in a circuit, much like how neurons function in the brain."

Andrews likens the relationship between the copper-ion movement and electrons on the vanadium structure to a dance.

"When the copper ions move, electrons on the vanadium lattice move in concert, mirroring the movement of the copper ions," Andrews said. "In this way, incredibly small movements of the copper ions induce large electronic changes in the vanadium lattice without any observable changes in vanadium-vanadium bonding. It's like the vanadium atoms 'see' what the copper is doing and respond."

Transmitting, storing and processing data currently accounts for about 10 percent of global energy use, but Banerjee says extrapolations indicate that by 2040 the energy required for computation will be many times greater than the projected global energy supply can deliver. Exponential increases in computing capability are therefore required for transformative visions, including the Internet of Things, autonomous transportation, disaster-resilient infrastructure, personalized medicine and other societal grand challenges that would otherwise be throttled by the inability of current computing technologies to handle the magnitude and complexity of human- and machine-generated data. He says one way to break out of the limitations of conventional computing technology is to take a cue from nature -- specifically, the neural circuitry of the human brain, which vastly surpasses conventional computer architectures in energy efficiency and also offers new approaches for machine learning and advanced neural networks.

"To emulate the essential elements of neuronal function in artificial circuitry, we need solid-state materials that exhibit electronic instabilities, which, like neurons, can store information in their internal state and in the timing of electronic events," Banerjee said. "Our new work explores the fundamental mechanisms and electronic behavior of a material that exhibits such instabilities. By thoroughly characterizing this material, we have also provided information that will instruct the future design of neuromorphic materials, which may offer a way to change the nature of machine computation from simple arithmetic to brain-like intelligence while dramatically increasing both the throughput and energy efficiency of processors."

Because the components that handle logic operations, memory storage and data transfer are all separate from one another in conventional computer architecture, Banerjee says, they are plagued by inherent inefficiencies in both the time it takes to process information and how closely device elements can be packed before waste heat and electrons "accidentally" tunneling between components become major problems. By contrast, in the human brain, logic, memory storage and data transfer are integrated simultaneously into the timed firing of neurons that are densely interconnected in 3-D fanned-out networks. As a result, the brain's neurons process information at roughly 10 times lower voltage and almost 5,000 times lower energy per synaptic operation than silicon computing architectures. To come close to this kind of energetic and computational efficiency, he says, new materials are needed that can undergo rapid internal electronic switching in circuits in a way that mimics how neurons fire in timed sequences.

Handy notes that the team still needs to optimize many parameters, such as the transition temperature and switching speed, along with the magnitude of the change in electrical resistance. By determining the underlying principles of the MIT in β'-CuxV2O5 as a prototype material within an expansive field of candidates, however, the team has identified design motifs and tunable chemical parameters that should prove useful in the design of future neuromorphic computing materials, a major endeavor seeded by the Texas A&M X-Grant Program.

"This discovery is very exciting because it provides fertile ground for the development of new design principles for tuning materials properties and also suggests exciting new approaches to researchers in the field for thinking about energy efficient electronic instabilities," Parija said. "Devices that incorporate neuromorphic computing promise improved energy efficiency that silicon-based computing has yet to deliver, as well as performance improvements in computing challenges like pattern recognition -- tasks that the human brain is especially well-equipped to tackle. The materials and mechanisms we describe in this work bring us one step closer to realizing neuromorphic computing and in turn actualizing all of the societal benefits and overall promise that comes with it."

Credit: 
Texas A&M University

How our brains create breathing rhythm is unique to every breath

image: Synchronized firing of neuron 'choir' that signals inhalation.

Image: 
UCLA/Feldman lab

Breathing propels everything we do--so its rhythm must be carefully organized by our brain cells, right?

Wrong. Every breath we take arises from a disorderly group of neurons - each like a soloist belting out its song before uniting as a chorus to harmonize on a brand-new melody. Or, in this case, a fresh breath.

That's the gist of a new UCLA study published in this week's online edition of Neuron.

"We were surprised to learn that how our brain cells work together to generate breathing rhythm is different every time we take a breath," explained senior author Jack Feldman, a professor of neurobiology at the David Geffen School of Medicine at UCLA and a member of the UCLA Brain Research Institute. "Each breath is a like a new song with the same beat."

Feldman and his colleagues studied a small network of neurons called the preBötzinger Complex. Early in his career, he'd named the region after suggesting it was the chief driver of breathing rhythm in the brain.

In 2015, Feldman's lab found that surprisingly low levels of activity in the preBötzinger Complex were driving breathing rhythm. The discovery left a riddle in its wake: how could such minor cues generate a foolproof breathing rhythm - whose failure means death?

To answer that puzzle, the UCLA team studied slices of brain tissue from mice and meticulously isolated preBötzinger Complex neurons from the brainstem.

By recording the cells' electrical activity in a dish, the team could eavesdrop on the neurons' conversations with their network neighbors.

According to first author Sufyan Ashhad, the neurons' activity resembled a choir whose members are practicing and singing over each other without benefit of a conductor.

"It's like each neuron is clearing its throat and rehearsing its tune off-key, so their collective sound does not make sense," said Ashhad, a postdoctoral researcher in Feldman's lab. "As the neurons interact, though, they quickly synchronize to sing in tune, transforming their individual solos from cacophony into harmony."

Each breath begins as hundreds of individual neurons haphazardly fire at low levels, then quickly synchronize. The synchronized effort prompts a burst of activity that signals muscles in the diaphragm and chest to contract, causing the chest to expand. Air rushes in and fills the lungs for inhalation.

As the signal subsides, the chest pushes air out of the lungs for exhalation. The cycle repeats, generating the rhythm of breathing.

"Given the reliability of breathing, we were stunned to discover that how these neurons move to synchronize and generate rhythm is different in every breathing cycle," said Feldman.

Why is this important? Consider all the times your breathing adjusts. It quickens when you are anxious or exercising, and slows as you fall asleep.

"Breathing rhythm changes constantly--from when you rise from seated to standing and walk out of your house," said Feldman. "If your brain couldn't automatically adapt, you'd pass out from lack of oxygen before reaching the street."

Breathing underlies all aspects of brain function, he added. The UCLA findings could suggest new approaches to treating sleep apnea and the breathing disorders seen in autistic children.

Understanding how breathing rhythm is generated may also help scientists combat the rising death rate from opioid use; opioids suppress the brain's ability to regulate breathing.

"Our take-home message is that it's important to study neurons' effect at the collective level, not just in individual cells," said Ashhad. "We're optimistic that this finding will open up new directions for research and resolve a question that's persisted for centuries."

Credit: 
University of California - Los Angeles Health Sciences

Scientists created an 'impossible' superconducting compound

image: The sample inside the diamond anvil cell connected with four electrodes.

Image: 
Science Advances

Scientists have created new superconducting compounds of hydrogen and praseodymium, a rare-earth metal; one of the substances is quite a surprise from the perspective of classical chemistry. The study also helped identify which metals are the most promising ingredients for room-temperature superconductors. The results were published in Science Advances.

A theory that has evolved over the past fifteen years holds that hydrogen compounds (hydrides) can make excellent superconductors: substances that, when cooled below a certain temperature, have zero electrical resistance and can carry electricity without any losses, which is particularly valuable for power networks. The sticking point that scientists are still striving to work out is the temperature at which a substance becomes superconducting. For most compounds it is very low, so superconductors used in practice are typically cooled with liquid helium, using complex and costly equipment. Physicists are therefore searching for a substance that becomes superconducting at room temperature. One likely candidate is metallic hydrogen, but the pressure required to produce it exceeds 4 million atmospheres.

A group of Russian scientists from Skoltech and Chinese researchers from Jilin University, with Dmitry Semenok and Di Zhou as first authors, created compounds of hydrogen and praseodymium, a metal from the lanthanide series, and studied their physical properties. The authors synthesized several compounds with different ratios of the two elements. To do this, they placed praseodymium and hydrogen samples in a special chamber, squeezed them between two cone-shaped diamonds until the pressure reached 40 GPa, and heated them with a laser. The compressed elements reacted to form the compound PrH3.

A downside of this approach is that diamonds tend to become fragile and break up on contact with hydrogen. The scientists therefore replaced pure hydrogen with ammonium borane, a compound that contains a large amount of hydrogen and readily releases it when heated to react with praseodymium. They found this method more effective and continued to use it in further experiments. By increasing the pressure, they obtained PrH9. Earlier, they had synthesized compounds of hydrogen and lanthanum, another metal from the same series, using the same technique.

The compounds they obtained are "outlaws" of classical chemistry: they do not obey its rules. Formally, the praseodymium atom's electronic structure should not allow it to bond with so many other atoms, yet the existence of such "improper" compounds can be predicted by complex quantum calculations and confirmed by experiments.

The scientists also investigated the superconductivity of the new substances by measuring their electrical resistance at different temperatures and pressures. They found that praseodymium hydride becomes superconducting at -264 °C, a much lower transition temperature than that of LaH10, even though the two compounds are chemically and structurally similar. By comparing their results with other studies, the authors found that the metal's position in the periodic table and its properties play a pivotal role. Praseodymium atoms act as electron donors but, unlike their neighbors lanthanum and cerium, carry small magnetic moments that suppress superconductivity; superconductivity can still occur, though only at lower temperatures.
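For readers more used to the absolute temperature scale, a quick conversion (standard Celsius-to-kelvin arithmetic, shown here as a minimal Python snippet) puts that figure in context:

    # Convert the reported transition temperature of -264 °C to kelvin.
    def celsius_to_kelvin(t_c):
        return t_c + 273.15

    print(f"{celsius_to_kelvin(-264):.2f} K")   # about 9 K, far below room temperature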

"We applied the method used previously to synthesize lanthanum hydrides and succeeded in creating new superconducting metallic praseodymium hydrides. We made two main conclusions. First, you can get abnormal compounds with compositions having nothing to do with valence, that is, the number of bonds an atom can have with other atoms. Second, we validated the new principle for creating superconductors. We found that the metals from the "lability zone" located between groups II and III of the periodic table are the best candidates. The elements nearest to the "lability zone" are lanthanum and cerium. Going forward, we will proceed from this finding to obtain new high-temperature superconductors," said Skoltech and MIPT professor, Artem Oganov.

Credit: 
Skolkovo Institute of Science and Technology (Skoltech)