Earth

Newly discovered rare dinosaur embryos show sauropods had rhino-like horns

image: Sauropod embryo after acid preparation

Image: 
Martin Kundrat, Evolutionary Biodiversity Research Group Pavol Jozef Šafárik University

An incredibly rare dinosaur embryo discovered perfectly preserved inside its egg has shown scientists new details of the development and appearance of sauropods which lived 80 million years ago.

Sauropods were the giant herbivores made famous as being 'veggie-saurs' in the 1993 film Jurassic Park. The incredible new find of an intact embryo has shown for the first time that these dinosaurs had stereoscopic vision and a horn on the front of the face which was then lost in adulthood.

The international research team say that this is the most complete and articulated skull known from any titanosaur, the last surviving group of long-necked sauropods and the largest land animals known to have ever existed.

The sauropod egg was discovered in Patagonia, Argentina, in an area not previously known to provide evidence of dinosaur fossils. The egg had to be repatriated to Argentina, however, as it is illegal to permanently remove fossils from the country.

Dr John Nudds from The University of Manchester said: "The preservation of embryonic dinosaurs preserved inside their eggs is extremely rare. Imagine the huge sauropods from Jurassic Park and consider that the tiny skulls of their babies, still inside their eggs, are just a couple of centimetres long.

"We were able to reconstruct the embryonic skull prior to hatching. The embryos possessed a specialised craniofacial anatomy that precedes the post-natal transformation of the skull in adult sauropods. Part of the skull of these embryonic sauropods was extended into an elongated snout or horn, so that they possessed a peculiarly shaped face."

The examination of the amazing specimen enabled the team to revise opinions on how the babies of these giant dinosaurs may have hatched and to test previously held ideas about sauropodomorph reproduction. The elongated horn is now thought to have been used as an 'egg tooth' on hatching, allowing babies to break through their shell.

The findings, published today in Current Biology, were the result of a novel technique to reveal embryonic dinosaurs in their shells. The embryo within the egg was revealed by carefully dissolving the egg around it using an acid preparation. The team were then able to perform a virtual dissection of the specimen at the European Synchrotron Radiation Facility (ESRF) in Grenoble.

Sauropod embryology remains one of the least explored areas of the life history of dinosaurs. The first definitive discovery of sauropod embryos came with the finding of an enormous nesting ground of titanosaurian dinosaurs in Upper Cretaceous deposits of northern Patagonia, Argentina, 25 years ago. This new discovery, however, is the first time a fully intact embryo has been available for study.

Other eggs were also found at the Argentinian site which the scientists now aim to examine in a similar fashion. It is thought that some of the eggs could contain well-preserved dinosaur skin which could help further piece together the mysteries of some of the most fascinating animals to ever walk the Earth.

Credit: 
University of Manchester

Trapping of acetylene

Ethylene, a key feedstock in the chemical industry, often includes traces of acetylene contaminants, which need to be removed. In the journal Angewandte Chemie, researchers describe a robust and regenerable porous metal-organic framework that captures acetylene with extraordinary efficiency and selectivity. Its synergistic combination of tailor-made pore sizes and chemical docking sites makes the material especially efficient, the study says.

Ethylene is the most important chemical precursor for ethanol and polyethylene and is mainly produced by steam cracking. Although the ethylene fraction is usually very pure (more than 99%), remaining traces of acetylene contaminants can destroy the catalysts used in downstream processes.

As ethylene and acetylene are very similar and only differ in the number of hydrogen atoms--ethylene has four hydrogen atoms bound to two carbon atoms, acetylene has two--the separation of the two gases is elaborate and difficult. The current industrial processes rely on distillation, which consumes a huge amount of energy.

However, hydrocarbon compounds bind to porous substances called metal-organic frameworks (MOFs). MOFs are made of metal ions and organic ligands and contain pores and chemical docking sites that can be designed to capture specific molecules from a stream of gas at ambient conditions. Yet for the separation of ethylene and acetylene, industry demands robust, regenerable, highly selective, and cheap materials, which have not been found so far.

Dan Zhao and his colleagues at the National University of Singapore have now developed a MOF specific for acetylene capture that may meet the demands of extraordinary selectivity and robustness. The scientists focused on an established MOF with nickel sites, but they "opened" up these nickel sites for the binding of more molecules by activating them and exposing them to the pores so that they were able to bind two guest molecules at once.

In addition, the scientists adjusted the pore sizes of the MOF to allow entry only for very small gas molecules, and filled the pore walls with chemical groups that would attract acetylene over ethylene through their stronger electrostatic and chemical interactions.

Thus, combining small pore sizes with the open nickel sites and sites for preferential acetylene binding, the scientists have created a Ni-MOF called Ni(3)(pzdc)(2)(7Hade)(2) that is extraordinarily selective, robust, stable, and can be regenerated. According to the study, the Ni-MOF purified the ethylene stream by a factor of a thousand and kept the selectivity high across a range of pressures and regeneration cycles. In addition, the Ni-MOF can be prepared in a standard hydrothermal procedure, the scientists say.

The authors point out that the synergy of pore geometry and size, combined with chemical interactions, can be further enhanced and may lead to even more effective separations, which would make the approach attractive for industrial application.

Credit: 
Wiley

Elderly in the US: Risk of dementia has been rising for years - instead of falling

image: Odds ratios and 95% confidence intervals for time from logistic regression models of dementia. Model 1 is the base model, Model 2 takes the number of tests into account.

Image: 
MPIDR

Over the past 20 years or so, the risk for US men and women of suffering from cognitive impairment and dementia has increased. This is the conclusion of a new study by researchers at the Max Planck Institute for Demographic Research (MPIDR) and colleagues that takes into account learning effects when repeating the same dementia test.

The burden might be heavier than long assumed: for years, most studies using survey data suggested that the risk of suffering from cognitive impairment is declining in high-income countries. These studies often use longitudinal surveys in which the same individuals take the same test over and over. This results in learning effects which, if not taken into account, may bias the results.

That is why Mikko Myrskylä, Joanna Hale, Jutta Gampe, Neil Mehta and Daniel Schneider analyzed the prevalence of cognitive impairment in the United States from 1996 to 2014, taking into account testing experience and selective mortality.

"Results based on models that do not control for test experience suggest that risk of cognitive impairment and dementia decreases over the study period. However, when we controlled for testing experience in our model the trend reverses", says Mikko Myrskylä, Director at the Max Planck Institute for Demographic Research (MPIDR) in Rostock, Germany.

In their models, the prevalence of any cognitive impairment increases for both women and men. The increase was particularly strong among Latinas, the least educated, and people over 85. Some of the increase may be driven by people living longer with dementia.

For their study, recently published in the journal Epidemiology, the researchers used data from more than 32,000 participants from the Health and Retirement Study (HRS). That is a nationally representative, biennial panel survey of US residents age 50 and older and their spouses. Among other things, it contains a version of the Telephone Interview for Cognitive Status (TICS-M), specifically modified to detect decline of cognitive abilities.
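
The mechanism can be illustrated with a small, fully synthetic sketch. Everything below is invented for illustration (variable names, sample size, effect strengths, and the use of statsmodels); it is not the authors' model or the HRS data. It simply shows how, when respondents in later waves have taken the test more often and practice improves scores, a logistic regression that ignores test experience can show a flat or falling trend while one that controls for it recovers the underlying increase.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20000
year = rng.integers(1996, 2015, n)                        # survey wave, 1996-2014
tests = rng.integers(0, 1 + (year - 1996) // 2)           # later waves: more prior test exposure
age = rng.normal(70, 8, n)

# Assumed data-generating process: true risk rises with calendar time and age,
# while each earlier test lowers the chance of scoring as "impaired" (practice effect).
logit = -5.0 + 0.04 * (year - 1996) + 0.08 * (age - 70) - 0.20 * tests
impaired = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame(dict(year=year, tests=tests, age=age, impaired=impaired))

m1 = smf.logit("impaired ~ I(year - 1996) + age", data=df).fit(disp=0)           # "Model 1": ignores test experience
m2 = smf.logit("impaired ~ I(year - 1996) + age + tests", data=df).fit(disp=0)   # "Model 2": controls for it
print("time trend, no control:  ", round(m1.params["I(year - 1996)"], 3))
print("time trend, with control:", round(m2.params["I(year - 1996)"], 3))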

Credit: 
Max-Planck-Gesellschaft

Duchenne: "Crosstalk" between muscle and spleen

image: The figure shows an overview of the potential protein-protein interaction patterns at reduced (left) and increased (right) protein concentrations in the spleen of Duchenne mice.

Image: 
© Maynooth University

Duchenne muscular dystrophy (DMD) is the most common muscle disease in children and is passed on by X-linked recessive inheritance. Characteristic is a progressive muscular atrophy. The disease often results in death before the third decade of life. Researchers of the Universities of Maynooth (Ireland) and Bonn have found a connection between dystrophic muscles and the lymphatic system in mice with Duchenne disease. The results have now been published in the journal iScience.

The muscular atrophy in Duchenne disease is caused by a lack of dystrophin, a protein of the cytoskeleton. In vertebrates, dystrophin is found in the muscle fiber membrane and is important for muscle contraction. Although the disease is principally caused by a defective single gene (DMD gene), as a primarily neuromuscular disease it also has far-reaching and complex health-relevant effects on non-muscular tissues and organ systems.

In recent years, the research groups associated with Prof. Dr. Dieter Swandulla, Physiological Institute of the University of Bonn, and Prof. Dr. Kay Ohlendieck, National University of Ireland, Maynooth, have used mass spectrometric protein analysis (proteomics) to show that Duchenne muscular dystrophy causes changes in the respective set of proteins (proteome) in a number of organs including heart, brain, kidney and liver as well as in saliva, serum and urine.

Search for disease-specific marker proteins

"Proteomics is a reliable and effective analytical method for identifying disease-specific marker proteins that provide information about the course of the disease, possible therapeutic targets and the effectiveness of therapeutic interventions," says senior author Prof. Swandulla.

In the current study, the researchers used proteomics in mice suffering from Duchenne muscular dystrophy to model how the skeletal muscles and the spleen influence each other in view of the dystrophin deficiency. The spleen plays a key role in the immune response and is located in the abdominal cavity near the stomach. It ensures the proliferation of lymphocytes, which are white blood cells, and also stores monocyte-type immune cells and disposes of worn-out red blood cells.

The researchers used the Duchenne mice to decode for the first time the set of proteins (proteome) of the spleen in comparison to healthy control animals and created a comprehensive protein archive for this organ. "The mice with Duchenne disease showed numerous changes in the proteomic signature of the spleen compared to the controls," says Prof. Kay Ohlendieck of the National University of Ireland, Maynooth.

Furthermore, the researchers found for the first time a shorter form of dystrophin (DP71), which is synthesized as a protein in the spleen. "This dystrophin variant is apparently not affected by the disease because it occurs unchanged in Duchenne mice," says Swandulla. The "crosstalk" is expressed especially by the fact that a large number of proteins in the spleen are drastically reduced due to the loss of the long form of dystrophin. "This includes proteins that are involved in lipid transport and metabolism and in the immune response and inflammatory processes."

Secondary effects in the lymphatic system

Furthermore, the study provides evidence that the loss of the long form of dystrophin, as observed in Duchenne muscular dystrophy in skeletal muscle, apparently causes secondary effects in the lymphatic system. "It's a real 'crosstalk' between skeletal muscles and the lymphatic system," says lead author Dr. Paul Dowling of Maynooth University.

The term "crosstalk" is used, for example, when there is a disruptive overlay of another conversation on the phone that can be heard in the background. In the specific case of Duchenne muscular dystrophy, the "crosstalk" was particularly expressed by the fact that the short form of the dystrophin was still produced as normal in the spleen, but there were disruptive changes of the proteomic signature in the other protein species.

The researchers point out that the results of the study suggest that the mechanisms of the inflammatory processes which occur in the course of Duchenne muscular dystrophy merit special attention. This is because these inflammatory mechanisms are an important feature of muscle fiber degeneration and contribute significantly to the progression of the disease. "The specific interactions of the dystrophin deficiency with the immune system might therefore open up new therapeutic approaches," says Prof. Swandulla.

Credit: 
University of Bonn

Engineers use heat-free technology to make metallic replicas of a rose's surface texture

image: This lab demonstration shows how a rose petal and a metallic replica of the petal's surface texture repel water. The replica was created using the "frugal science/innovation" of Martin Thuo and his research group.

Image: 
Photo courtesy of Martin Thuo and his research group.

AMES, Iowa - Nature has worked for eons to perfect surface textures that protect, hide and otherwise help all kinds of creatures survive.

There's the shiny, light-scattering texture of blue morpho butterfly wings, the rough, drag-reducing texture of shark skin and the sticky, yet water-repelling texture of rose petals.

But how can those natural textures and properties be used in the engineered world? Could the water-repelling, ultrahydrophobic texture of a lotus plant somehow be applied to an aircraft wing as an anti-icing device? Previous attempts have involved molding polymers and other soft materials, or etching patterns on hard materials, approaches that lacked accuracy and relied on expensive equipment. But what about making inexpensive, molded metallic biostructures?

Iowa State University's Martin Thuo and the students in his research group have found a way in their pursuit of "frugal science/innovation," what he describes as "the ability to minimize cost and complexity while providing efficient solutions to better the human condition."

For this project, they're taking their previous development of liquid metal particles and using them to make perfectly molded metallic versions of natural surfaces, including a rose petal. They can do it without heat or pressure, and without damaging a petal.

They describe the technology they're calling BIOMAP in a paper recently published online by Angewandte Chemie, a journal of the German Chemical Society. Thuo, an associate professor of materials science and engineering with a courtesy appointment in electrical and computer engineering, is the corresponding author. Co-authors are all Iowa State students in materials science and engineering: Julia Chang, Andrew Martin and Chuanshen Du, doctoral students; and Alana Pauls, an undergraduate.

Iowa State supported the project with intellectual property royalties generated by Thuo.

"This project comes from an observation that nature has a lot of beautiful things it does," Thuo said. "The lotus plant, for example, lives in water but doesn't get wet. We like those structures, but we've only been able to mimic them with soft materials, we wanted to use metal."

Key to the technology are microscale particles of undercooled liquid metal, originally developed for heat-free soldering. The particles are created when tiny droplets of metal (in this case Field's metal, an alloy of bismuth, indium and tin), are exposed to oxygen and coated with an oxidation layer, trapping the metal inside in a liquid state, even at room temperature.

The BIOMAP process uses particles of varying sizes, all of them just a few millionths of a meter in diameter. The particles are applied to a surface, covering it and form-fitting all the crevices, gaps and patterns through the autonomous processes of self-filtration, capillary pressure and evaporation.

A chemical trigger joins and solidifies the particles to each other and not to the surface. That allows solid metallic replicas to be lifted off, creating a negative relief of the surface texture. Positive reliefs can be made by using the inverse replica to create a mold and then repeating the BIOMAP process.

"You lift it off, it looks exactly the same," Thuo said, noting the engineers could identify different cultivars or roses through subtle differences in the metallic replicas of their textures.

Importantly, the replicas kept the physical properties of the surfaces, just like in elastomer-based soft lithography.

"The metal structure maintains those ultrahydrophobic properties - exactly like a lotus plant or a rose petal," Thuo said. "Put a droplet of water on a metal rose petal, and the droplet sticks, but on a metal lotus leaf it just flows off."

Those properties could be applied to airplane wings for better de-icing or to improve heat transfer in air conditioning systems, Thuo said.

That's how a little frugal innovation "can mold the delicate structures of a rose petal into a solid metal structure," Thuo said. "This is a method that we hope will lead to new approaches of making metallic surfaces that are hydrophobic based on the structure and not the coatings on the metal."

Credit: 
Iowa State University

High walk and bike scores associated with greater crash risk

image: UBC civil engineering professor Tarek Sayed

Image: 
UBC

Neighbourhoods with high bikeability and walkability scores actually present higher crash risks to cyclists and pedestrians in Vancouver, according to new research from the University of British Columbia.

In a study outlined in Transportation Research Record, researchers used five years of crash data from provincial auto insurer ICBC to identify high crash-risk zones in the city of Vancouver. They compared these zones to their walk scores and bike scores--popular metrics used by researchers, transportation planners and real estate agents to indicate how conducive an area is to walking or biking--and discovered that although these zones were deemed to be highly walkable or bikeable, they were associated with higher risks of pedestrian and cyclist crashes, respectively.

"Among the public, there is an implicit assumption that 'walkable' and 'bikeable' means safe for walking and biking, but these indices do not actually include objective measures of safety," said principal investigator Tarek Sayed, a UBC professor in the department of civil engineering. "When we objectively analyzed the data, we found that the zones with better bikeability and walkability scores had higher collision risks. We controlled for traffic volume and pedestrian and cyclist numbers, so this reflects actual collision risks to individuals."

Collision hot spots

Areas of the city found to pose the highest collision risk to cyclists included zones in the downtown core and in Strathcona and Mount Pleasant, which are rated highly on the Bike Score index. Hot spots for pedestrian-involved collisions were found in zones within the downtown core, Fairview, Mount Pleasant, Strathcona and Grandview-Woodland, which have high walk scores. A big part of what makes neighbourhoods walkable and bikeable is a high density of attractive destinations, which increases walking and cycling trips, but also creates conflicts among road users. Other areas of the city may be safer for walking and cycling, but with few destinations to walk or cycle to.

Sayed acknowledges walk score and bike score indices do not claim to reflect safety.

"If these indices are clearly defined as a reflection of the ease of reaching a destination, they may be good. But we want to make clear to the public that they do not indicate safer areas for walking and biking," he said. "We need to have two indices--one for bike attractiveness and another one for safety."

New bike safety index

Sayed and colleagues are proposing a new composite Bike Safety Index that reflects both biking appeal and safety. Areas of Vancouver that rate highest in this index include zones within Point Grey, Stanley Park, False Creek, the River District, Kerrisdale, and along the Fraser River in Marpole.

The index, described in a paper published earlier this year, takes into account a number of elements, including the complexity of routes, density of traffic signals, kilometres travelled by vehicles, bike network coverage and average link length.
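
As a rough illustration of how such a composite index could be assembled from the elements listed above, here is a hedged Python sketch. The feature names follow the article, but the example values, min-max scaling, weights and sign conventions are all invented; this is not the published Bike Safety Index formula.

import pandas as pd

zones = pd.DataFrame({
    "zone": ["A", "B", "C"],
    "route_complexity": [0.4, 0.9, 0.2],      # higher = more complex routes
    "signal_density": [3.1, 7.5, 1.2],        # traffic signals per km
    "vehicle_km": [1200, 5400, 800],          # vehicle-kilometres travelled
    "bike_network_coverage": [0.6, 0.3, 0.8], # share of streets with bike facilities
    "avg_link_length": [110, 60, 150],        # metres between intersections
})

def minmax(s):
    # Rescale a column to the 0-1 range so features with different units are comparable.
    return (s - s.min()) / (s.max() - s.min())

# Assumed directions: coverage and link length push the index up,
# complexity, signal density and traffic volume push it down.
score = (
    + 0.30 * minmax(zones["bike_network_coverage"])
    + 0.20 * minmax(zones["avg_link_length"])
    - 0.20 * minmax(zones["route_complexity"])
    - 0.15 * minmax(zones["signal_density"])
    - 0.15 * minmax(zones["vehicle_km"])
)
zones["bike_safety_index"] = minmax(score)    # final 0-1 index per zone
print(zones[["zone", "bike_safety_index"]])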

"The very low correlation between bike safety and bike attractiveness in our research indicates the need for this composite index," said Sayed. "If you compare the maps using the Bike Safety Index ratings compared to bike attractiveness ratings, they look very different."

Sayed said he hopes to see this index widely adopted by cities around the globe. "By providing the public with an objective measure of safety, my hope is that cyclists and pedestrians will be better equipped to navigate their cities safely, and with an accurate understanding of where they are at greater risk of injuries from car collisions," said Sayed.

Credit: 
University of British Columbia

How sticklebacks dominate perch

image: This animated gif shows how stickleback domination (in red) moves ever closer towards the coastline. The image is based on data gathered between 1979 and 2017.

Image: 
Mårten Erlandsson, Swedish University of Agricultural Sciences

A research project on algal blooms along the Swedish coast, caused by eutrophication, revealed that large predators such as perch and pike are also necessary to restrict these blooms. Ecologist Britas Klemens Eriksson from the University of Groningen and his colleagues from Stockholm University and the Swedish University of Agricultural Sciences, Sweden have now shown that stickleback domination moves like a wave through the island archipelagos, changing the ecosystem from predator-dominated to algae-dominated. Their study was published on 27 August in the journal Communications Biology.

Eriksson experimented with the effects of nutrients on algal blooms while working as a postdoctoral researcher in Sweden. When he added nutrients to exclusion cages in the brackish coastal waters, algae began to dominate. This was no surprise. However, when he excluded large predators, he saw similar algal domination. 'Adding nutrients and excluding large predators had a huge effect,' he recalls, 10 years later.

Food web

The big question that arose from these results using small exclusion cages was whether the results would be the same for the real Swedish coastal ecosystem. This coast consists of countless archipelagos that stretch up to 20 kilometres into the sea, creating a brackish environment. Here, perch and pike are the top predators, feeding on sticklebacks, which themselves eat the small crustaceans that live off algae.

To investigate how this food web developed over the past 40 years, Eriksson (who had moved to the University of Groningen in the Netherlands) connected with his colleagues at Stockholm University and the Swedish University of Agricultural Sciences to gather data on fish abundances and to carry out a series of field studies. They were inspired by recent suggestions that regime shifts can occur in closed systems such as lakes and wondered whether algal blooms in the Baltic sea could also be a consequence of such a regime change.

Grazers

Eriksson and his colleagues sampled 32 locations along a 400-kilometre stretch of coastline. 'We visited these sites in the spring and autumn of 2014 and sampled all levels of the food web, from algae to top predators.' These data were subsequently entered into a food web model, which helped them to find connections between species. The models showed that the small sticklebacks were important for the reproduction of the larger predators. And a local increase in sticklebacks means that a lot of the grazers in the ecosystem are eaten, which drives algal domination.

'If you just look at the abundances of fish, you find a mixed system in which different species dominate,' Eriksson explains. But looking at the changes in these fishery data over time showed an increase in sticklebacks that started in the late 1990s, initially in the outer parts of the archipelagos. 'This is presumably caused by a reduction in the number of large predators. The reduction is the combined result of habitat destruction, fishing and increased predation by cormorants and seals.' Sticklebacks migrate from the outer archipelagos inwards to reproduce, linking coastal and offshore processes.

Predation

Reduced predation increases the survival of sticklebacks, while both eutrophication and warming help to increase their numbers even further. As the sticklebacks reduced the number of grazers, algae began to replace seagrass and other vegetation. Furthermore, the sticklebacks also fed on the larvae of perch and pike, thereby further reducing their numbers. 'This is a case of predator-prey reversal,' explains Eriksson. Instead of top predators eating sticklebacks, the smaller fish strongly reduced the number of perch and pike larvae.

Over time, the stickleback domination moved inwards like a wave: regional change propagated throughout the entire ecosystem. This has important consequences for ecosystem restoration. 'To counter algal blooms, you should not only reduce the eutrophication of the water but also increase the numbers of top predators.' It means that those organizations that manage fisheries must start working together with those that manage water quality. 'We should not look at isolated species but at the entire food web,' says Eriksson. 'This is something that the recent EU fishery strategy is slowly starting to implement.'

Furthermore, the propagation of local changes throughout a system has wider implications in ecology, especially in natural ecosystems that have complex interaction and information pathways. 'And we know this from politics and human behaviour studies. A good example is the Arab Spring, which started locally and then propagated across the Middle East.'

Credit: 
University of Groningen

Quantum simulation of quantum crystals

The quantum properties underlying crystal formation can be replicated and investigated with the help of ultracold atoms. A team led by Dr. Axel U. J. Lode from the University of Freiburg's Institute of Physics has now described in the journal Physical Review Letters how the use of dipolar atoms enables even the realization and precise measurement of structures that have not yet been observed in any material. The theoretical study was a collaboration involving scientists from the University of Freiburg, the University of Vienna and the Technical University of Vienna in Austria, and the Indian Institute of Technology in Kanpur, India.

Crystals are ubiquitous in nature. They are formed by many different materials - from mineral salts to heavy metals like bismuth. Their structures emerge because a particular regular ordering of atoms or molecules is favorable, as it requires the smallest amount of energy. A cube with one constituent on each of its eight corners, for instance, is a crystal structure that is very common in nature. A crystal's structure determines many of its physical properties, such as how well it conducts a current or heat, or how it cracks and behaves when it is illuminated by light. But what determines these crystal structures? They emerge as a consequence of the quantum properties of, and the interactions between, their constituents, which, however, are often scientifically hard to understand and also hard to measure.

To nevertheless get to the bottom of the quantum properties of the formation of crystal structures, scientists can simulate the process using Bose-Einstein condensates - trapped ultracold atoms cooled down to temperatures close to absolute zero or minus 273.15 degrees Celsius. The atoms in these highly artificial and highly fragile systems are extremely well under control.
With careful tuning, the ultracold atoms behave exactly as if they were the constituents forming a crystal. Although building and running such a quantum simulator is a more demanding task than just growing a crystal from a certain material, the method offers two main advantages: First, scientists can tune the properties of the quantum simulator almost at will, which is not possible for conventional crystals. Second, the standard readout of cold-atom quantum simulators consists of images containing information about all crystal particles. For a conventional crystal, by contrast, only the exterior is visible, while the interior - and in particular its quantum properties - is difficult to observe.

The researchers from Freiburg, Vienna, and Kanpur describe in their study that a quantum simulator for crystal formation is much more flexible when it is built using ultracold dipolar quantum particles. Dipolar quantum particles make it possible to realize and investigate not just conventional crystal structures, but also arrangements that were hitherto not seen for any material. The study explains how these crystal orders emerge from an intriguing competition between kinetic, potential, and interaction energy and how the structures and properties of the resulting crystals can be gauged in unprecedented detail.

Original Publication:
Budhaditya Chatterjee, Camille Lévêque, Jörg Schmiedmayer, Axel U. J. Lode: Detecting One-Dimensional Dipolar Bosonic Crystal Orders via Full Distribution Functions. Physical Review Letters 125, 093602. https://doi.org/10.1103/PhysRevLett.125.093602

Credit: 
University of Freiburg

Russian scientists predicted increased unrest in the United States back in 2010

Beginning in May 2020, after the police killing of George Floyd, a Black American man, 'Black Lives Matter' demonstrations and riots engulfed the United States, the United Kingdom, and several European countries. Though Mr. Floyd's killing served as the immediate catalyst for the unrest, many scholars suggest that the COVID-19 pandemic and the resulting economic crisis played a deeper, more pivotal role in creating conditions that led to the protests.

There has been a steady increase in protests in the United States and Great Britain since 2011, which, as Peter Turchin and other scientists suggest, is the result of a predictable 50-year cycle of socio-political dynamics that has culminated with a surge of violence. This cycle was identified by Russian experts in cliodynamics and structural-demographic theory. Back in 2010, they predicted the current course of events. And now they have been able to verify their mathematical models.

In 2010, the Russian-American scientist Peter Turchin used structural-demographic theory (SDT) to predict the dynamics of socio-political conditions in the United States and Western Europe until 2020. His model predicted that, over the next decade, political instability and an increase in social conflicts would occur in Western democracies. In a new article, Turchin, together with Andrey Korotayev, another leading specialist in SDT at HSE University, conducted a retrospective assessment of the forecasts made in 2010-2012 and confirmed the accuracy of the conclusions. The paper was published in the journal PLoS ONE: https://doi.org/10.1371/journal.pone.0237458.

The approach works as follows: a postulated historical hypothesis is turned into a mathematical model, the model is computed, a specific prediction is extracted from it, and that forecast is then tested against real historical events. In this way, mathematical models can be tweaked and fine-tuned until they provide fairly accurate predictive analytics.

Historians are helped by the theory of complex systems, originally developed by physicists to describe nonlinear, chaotic processes, which can be used for climate modelling and weather prediction, for example. The American sociologist and historian Jack Goldstone was the first scholar to apply a mathematical apparatus from the theory of complex systems to historical processes. He developed the structural-demographic theory (SDT), which made it possible to take into account the many forces interacting in society that put pressure on it and lead to riots, revolutions, and civil wars.

Using the SDT, Goldstone established that every major coup or revolution is preceded by a surge in fertility. As a result, the size of the population exceeds its economic possibilities for self-sufficiency. A crisis comes, the population's standard of living drops sharply, and unrest begins. At the same time, the state loses political flexibility and the elites split, with some of them siding with the protesters against the current system. A coup takes place, usually accompanied by an explosion of violence and a civil war.

Later, Goldstone's ideas were picked up and developed by Russian scientists and scholars, including not only Peter Turchin but also Sergei Nefyodov, Leonid Grinin and HSE Professor Andrei Korotayev. They applied their developments to predict socio-historical dynamics in the United States and Great Britain, as well as other Western European countries.

Structural demographic theory consists of four main components:

the state (size, income, expenses, debts, the legitimacy of power, etc.);

population (size, age structure, urbanization, wage level, social optimism, etc.);

elites (number and structure, sources of their income and current welfare, conspicuous consumption, internal competition, social norms);

factors of instability (radical ideologies, terrorist and revolutionary movements, acts of violence, riots, and revolutions).

Goldstone himself also proposed methods to operationalize and measure these components, as well as a general integral indicator that allows future unrest to be predicted--the political stress indicator Ψ (PSI). Retrospective studies have shown that Ψ was off the charts before the French Revolution, the English Civil War, and the crisis of the Ottoman Empire. Therefore, if the mathematical model shows the Ψ curve rising over some future time interval, we can speak with confidence about coming socio-political instability at that time in that region.

In general terms, the equation for calculating Ψ looks like this:

Ψ = MMP * EMP * SFD

Here, MMP stands for Mass Mobilization Potential, EMP stands for Elite Mobilization Potential, and SFD represents the level of State Fiscal Distress in the state. Each of the equation indicators is calculated separately using many other socio-demographic variables and various mathematical tools, including differential equations.
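
For illustration only, a minimal Python sketch of how Ψ is assembled once the three components are available. The component trajectories below are invented placeholders; in the real analysis, MMP, EMP and SFD are each computed from many socio-demographic variables and differential equations, as noted above.

import numpy as np

years = np.arange(2010, 2021)
mmp = np.linspace(1.0, 2.2, len(years))   # assumed rising mass mobilization potential
emp = np.linspace(1.0, 1.8, len(years))   # assumed rising elite mobilization potential
sfd = np.linspace(1.0, 1.5, len(years))   # assumed rising state fiscal distress

psi = mmp * emp * sfd                     # Ψ = MMP * EMP * SFD

for y, v in zip(years, psi):
    print(y, round(float(v), 2))          # Ψ grows as all three components grow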

In a new paper, scientists drew information from the Cross-National Time-Series Data Archive (CNTS) database. It contains information on the 200 most important indicators for more than 200 countries around the world from 1815 to the present. The researchers were most interested in data on anti-government demonstrations, riots, government crises, revolutions and purges (although for the United States and Great Britain there is little data for reliable statistical analysis with regard to the last two phenomena). An independent dataset from the US Political Violence Database (USPVD) and an archive of publications from The New York Times were also used to check and correct the information.

It turned out that, in full accordance with the forecasts made in 2010-2012, the number of anti-government demonstrations in the United States has risen sharply over the past 10 years, and the number of street riots has increased significantly. It is important to note that the prediction made at that point in time was completely at variance with the trends prevailing at the time and could not have been a simple extrapolation, since from the early 1980s to 2010 the level of social unrest remained consistently low.

It is important to note that the events of 2020 do not affect or change the simulation results in any way. All the trends that have clearly manifested themselves in the USA, Great Britain, and a number of European countries have been slowly but steadily growing throughout the decade. The COVID-19 pandemic, of course, has also had an impact, and it was impossible to predict it based on historical data (although virologists and epidemiologists have regularly written about the potential danger of coronaviruses in scientific periodicals since the 2000s). But epidemics of dangerous diseases often arise during periods of social crisis and hit the most vulnerable sectors of society (as happened in the United States), which only mobilizes the masses even more and takes them to the streets.

Credit: 
National Research University Higher School of Economics

Brain-inspired electronic system could vastly reduce AI's carbon footprint

image: A wafer filled with memristors

Image: 
Courtesy of UCL

Extremely energy-efficient artificial intelligence is now closer to reality after a study by UCL researchers found a way to improve the accuracy of a brain-inspired computing system.

The system, which uses memristors to create artificial neural networks, is at least 1,000 times more energy efficient than conventional transistor-based AI hardware, but has until now been more prone to error.

Existing AI is extremely energy-intensive - training one AI model can generate 284 tonnes of carbon dioxide, equivalent to the lifetime emissions of five cars. Replacing the transistors that make up all digital devices with memristors, a novel electronic device first built in 2008, could reduce this to a fraction of a tonne of carbon dioxide - equivalent to emissions generated in an afternoon's drive.
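
A quick back-of-the-envelope check of the equivalence quoted above, using only the article's own figures (284 tonnes per trained model, five car lifetimes):

# Arithmetic check using only the figures quoted in the article.
training_emissions_tonnes = 284          # CO2 from training one AI model, as stated
cars = 5                                 # stated lifetime-emissions equivalence
print(training_emissions_tonnes / cars)  # implied lifetime emissions per car: 56.8 tonnes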

Since memristors are so much more energy-efficient than existing computing systems, they can potentially pack huge amounts of computing power into hand-held devices, removing the need to be connected to the Internet.

This is especially important as over-reliance on the Internet is expected to become problematic in future due to ever-increasing data demands and the difficulties of increasing data transmission capacity past a certain point.

In the new study, published in Nature Communications, engineers at UCL found that accuracy could be greatly improved by getting memristors to work together in several sub-groups of neural networks and averaging their calculations, meaning that flaws in each of the networks could be cancelled out.
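
A minimal numerical sketch of why averaging helps, under invented assumptions: each "sub-network" is modelled here as an ideal weight matrix corrupted by independent multiplicative noise, standing in for memristor non-idealities. This is a toy illustration, not the authors' simulation setup or their memristor model.

import numpy as np

rng = np.random.default_rng(1)
ideal_w = rng.normal(size=(10, 100))          # ideal weights of one layer
x = rng.normal(size=(100, 32))                # a batch of 32 inputs
target = ideal_w @ x                          # what a perfect network would output

def noisy_copy(w, rel_noise=0.15):
    # One memristor-programmed copy: weights disturbed by independent relative noise.
    return w * (1 + rel_noise * rng.normal(size=w.shape))

def error(outputs):
    return float(np.mean((outputs - target) ** 2))

single = noisy_copy(ideal_w) @ x
committee = np.mean([noisy_copy(ideal_w) @ x for _ in range(5)], axis=0)

print("single noisy network MSE:   ", error(single))
print("average of 5 noisy networks:", error(committee))

Because the noise in each copy is assumed independent and zero-mean, averaging five copies reduces the error variance roughly five-fold, which is the intuition behind the committee approach described in the study.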

Memristors, described as "resistors with memory" because they remember the amount of electric charge that has flowed through them even after being turned off, were considered revolutionary when they were first built over a decade ago: a "missing link" in electronics to supplement the resistor, capacitor and inductor. They have since been manufactured commercially in memory devices, but the research team say they could be used to develop AI systems within the next three years.

Memristors offer vastly improved efficiency because they operate not just in a binary code of ones and zeros, but at multiple levels between zero and one at the same time, meaning more information can be packed into each bit.

Moreover, memristors are often described as a neuromorphic (brain-inspired) form of computing because, like in the brain, processing and memory are implemented in the same adaptive building blocks, in contrast to current computer systems that waste a lot of energy in data movement.

In the study, Dr Adnan Mehonic, PhD student Dovydas Joksas (both UCL Electronic & Electrical Engineering), and colleagues from the UK and the US tested the new approach in several different types of memristors and found that it improved the accuracy of all of them, regardless of material or particular memristor technology. It also worked for a number of different problems that may affect memristors' accuracy.

Researchers found that their approach increased the accuracy of the neural networks for typical AI tasks to a comparable level to software tools run on conventional digital hardware.

Dr Mehonic, director of the study, said: "We hoped that there might be more generic approaches that improve not the device-level, but the system-level behaviour, and we believe we found one. Our approach shows that, when it comes to memristors, several heads are better than one. Arranging the neural network into several smaller networks rather than one big network led to greater accuracy overall."

Dovydas Joksas further explained: "We borrowed a popular technique from computer science and applied it in the context of memristors. And it worked! Using preliminary simulations, we found that even simple averaging could significantly increase the accuracy of memristive neural networks."

Professor Tony Kenyon (UCL Electronic & Electrical Engineering), a co-author on the study, added: "We believe now is the time for memristors, on which we have been working for several years, to take a leading role in a more energy-sustainable era of IoT devices and edge computing."

Credit: 
University College London

Cardiology compensation continues to rise; new interventional measures reported

MedAxiom, an American College of Cardiology Company and the premier source for cardiovascular organizational performance solutions, has released its eighth annual Cardiovascular Provider Compensation and Production Survey. The report reveals trends across cardiology, surgery, advanced practice providers (APPs) and non-clinical compensation that help cardiovascular organizations as they face a new normal and are reevaluating compensation models and the definition of work productivity.

Key findings include:

All regions of the country reported increases in median total cardiology compensation with the South remaining in the lead.

Electrophysiologists ($678,495) and interventional physicians ($674,910) are the top earners. Although the gap has narrowed over the years, cardiologists in integrated ownership models out-earn private physicians at every subspecialty level.

Data showed groups in the top quartile for their deployment of APPs per cardiologist were able to maintain significantly larger (22%) patient panel sizes.

Advanced heart failure physicians reported the lowest Work Relative Value Unit (wRVU) production per FTE, yet their compensation per wRVU is high compared to other specialties.

There was a decline in discharge volumes per cardiologist potentially due to procedures like elective percutaneous coronary intervention (PCI) moving to the ambulatory setting.

"There are several national physician surveys that provide good data for cardiovascular provider compensation and wRVU production," noted report author Joel Sauer, MBA, MedAxiom's executive vice president of consulting. "At MedAxiom, we work hard to go beyond just providing the numbers to tell you what the data mean, digging deep into cardiovascular production irrespective of the location - be it hospital, office, ambulatory surgery center and even at home. Looking ahead we see virtual care, hardly utilized in cardiovascular medicine prior to the pandemic, will play a prominent role in our survey beginning in 2021. This is an example of the continual evolution of MedAxiom's survey and rich member data."

The comprehensive report provides data and expert analysis on compensation and production trends by subspecialty, geographic region, ownership model and more. It looks at a diverse set of data points and factors including compensation per wRVU, key cardiology volumes and ratios, diagnostic testing trends, and the roles of APPs, part-time physicians and non-clinical roles. Making their debut in the 2020 report are two new interventional measures: PCI for acute myocardial infarction only and PCI for chronic total occlusion only.

"Before virtual care delivery entered the spotlight, access was trending as one of the top issues for cardiovascular care," said Jerry Blackwell, MD, MBA, FACC, MedAxiom's president and CEO. "We face an impending shortage of physicians to care for the cardiovascular patient population and, in many programs, the inability to get patients into the practice due to location and/or capacity constraints. Practice managers and health systems alike are reexamining their approach to team-based care and expansion of care delivery into the ambulatory setting. In this climate, reliable and comprehensive data that go deep into the cardiovascular program are critical. The importance has been amplified as a result of the public health emergency."

At the beginning of each year MedAxiom surveys its membership which represents more than one-third of all cardiology and cardiovascular groups in the country. The data collected contain financial, staffing, productivity and compensation metrics, and a number of demographic measures. Data for the 2020 report were collected over the 2008-2019 timeframe and include 168 groups, representing 2,363 full-time physicians, 1,458 APPs and 119 part-time physicians (3,940 total physicians and APPs).

Credit: 
American College of Cardiology

Study examines link between sperm quality and light from devices at night

DARIEN, IL - Men might want to think twice before reaching for their smartphone at night. A new study found correlations between electronic media use at night and poor sperm quality.

Preliminary results show that greater self-reported exposure to light-emitting media devices in the evening and after bedtime is associated with a decline in sperm quality. Sperm concentration, motility and progressive motility -- the ability of sperm to "swim" properly -- were all lower, and the percentage of immotile sperm that are unable to swim was higher, in men who reported more smartphone and tablet usage at night.

"Smartphone and tablet use in the evening and after bedtime was correlated with decline in sperm quality. Furthermore, smartphone use in the evening, tablet use after bedtime, and television use in the evening were all correlated with the decline of sperm concentration," said principal investigator Amit Green, PhD, head of research and development at the Sleep and Fatigue Institute at the Assuta Medical Center in Tel-Aviv, Israel. "To the best of our knowledge, this is the first study to report these types of correlations between sperm quality and exposure time to short-wavelength light emitted from digital media, especially smartphones and tablets, in the evening and after bedtime."

The researchers obtained semen samples from 116 men between the ages of 21 and 59 years who were undergoing fertility evaluation. Participants completed questionnaires about their sleep habits and use of electronic devices.

The study also found a correlation between longer sleep duration and both a higher total sperm count and greater progressive motility. In contrast, greater sleepiness was associated with poorer sperm quality.

The research abstract was published recently in an online supplement of the journal Sleep for Virtual SLEEP 2020, which will be held Aug. 27-30. SLEEP is the annual meeting of the Associated Professional Sleep Societies, a joint venture of the American Academy of Sleep Medicine and the Sleep Research Society.

Credit: 
American Academy of Sleep Medicine

New tech extracts potential to identify quality graphene cheaper and faster

Engineers at Australia's Monash University have developed world-first technology that can help industry identify and export high quality graphene cheaper, faster and more accurately than current methods.

In the study, published today in the international journal Advanced Science, researchers used image data from an optical microscope to develop a machine-learning algorithm that can characterise graphene properties and quality, without bias, within 14 minutes.

This technology is a game changer for hundreds of graphene or graphene oxide manufacturers globally. It will help them boost the quality and reliability of their graphene supply in quick time.

Currently, manufacturers can only detect the quality and properties of graphene used in a product after it has been manufactured.

Through this algorithm, which has the potential to be rolled out globally with commercial support, graphene producers can be assured of a quality product and can avoid the time-intensive and costly series of characterisation techniques otherwise needed to identify graphene properties, such as the thickness and size of the atomic layers.

Professor Mainak Majumder from Monash University's Department of Mechanical and Aerospace Engineering and the Australian Research Council's Hub on Graphene Enabled Industry Transformation led this breakthrough study.

Study co-authors are Md. Joynul Abedin and Dr Mahdokht Shaibani (Monash, Department of Mechanical and Aerospace Engineering), and Titon Barua (Vimmaniac Ltd., Bangladesh).

"Graphene possesses extraordinary capacity for electric and thermal conductivity. It is widely used in the production of membranes for water purification, energy storage and in smart technology, such as weight loading sensors on traffic bridges," Professor Majumder said.

"At the same time, graphene is rather expensive when it comes to usage in bulk quantities. One gram of high quality graphene could cost as much as $1,000 AUD ($720 USD) a large percentage of it is due to the costly quality control process.

"Therefore, manufacturers need to be assured that they're sourcing the highest quality graphene on the market. Our technology can detect the properties of graphene in under 14 minutes for a single dataset of 1936 x 1216 resolution. This will save manufacturers vital time and money, and establish a competitive advantage in a growing marketplace."

Discovered in 2004, graphene is touted as a wonder material for its outstanding lightweight, thin and ultra-flexible properties. Graphene is produced through the exfoliation of graphite. Graphite, a crystalline form of carbon with atoms arranged hexagonally, comprises many layers of graphene.

However, the translation of this potential to real-life and usable products has been slow. One of the reasons is the lack of reliability and consistency of what is commercially often available as graphene.

The most widely used method of producing graphene and graphene oxide sheets is through liquid phase exfoliation (LPE). In this process, the single layer sheets are stripped from its 3D counterpart such as graphite, graphite oxide film or expanded graphite by shear-forces.

But such graphene can only be imaged using a dry sample (i.e. once it has been coated on a glass slide).

"Although there has been a strong emphasis on standardisation guidelines of graphene materials, there is virtually no way to monitor the fundamental unit process of exfoliation, product quality varies from laboratory to laboratory and from one manufacturer to other," Dr Shaibani said.

"As a result, discrepancies are often observed in the reported property-performance characteristics, even though the material is claimed to be graphene.

"Our work could be of importance to industries that are interested in delivering high quality graphene to their customers with reliable functionality and properties. There are a number of ASX listed companies attempting to enter this billion-dollar market, and this technology could accelerate this interest."

Researchers applied the algorithm to an assortment of 18 graphene samples - eight of which were acquired from commercial sources and the rest produced in a laboratory under controlled processing conditions.

Using a quantitative polarised optical microscope, researchers identified a technique for detecting, classifying and quantifying exfoliated graphene in its natural form of a dispersion.

To maximise the information generated from hundreds of images and large numbers of samples in a fast and efficient manner, researchers developed an unsupervised machine-learning algorithm to identify data clusters of similar nature, and then use image analysis to quantify the proportions of each cluster.
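
The general shape of such a pipeline can be sketched in a few lines of Python. Everything below is hypothetical: the feature columns, cluster count and synthetic data are invented stand-ins, and scikit-learn's k-means is used only as a generic example of unsupervised clustering, not as the algorithm the Monash team actually implemented.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Pretend per-flake features extracted from polarised-microscope images of a dispersion:
# columns = [apparent size (um), birefringence intensity, estimated layer count]
features = np.vstack([
    rng.normal([1.0, 0.8, 1.5], 0.2, size=(300, 3)),   # mostly few-layer flakes
    rng.normal([4.0, 0.3, 12.0], 0.8, size=(150, 3)),  # thicker, graphite-like stacks
    rng.normal([0.5, 0.9, 1.0], 0.1, size=(50, 3)),    # small monolayer-like flakes
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Quantify what fraction of detected flakes falls into each cluster.
counts = np.bincount(labels, minlength=3)
for k, c in enumerate(counts):
    print(f"cluster {k}: {c / len(labels):.1%} of flakes")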

Mr Abedin said this method has the potential to be used for the classification and quantification of other two-dimensional materials.

"The capability of our approach to classify stacking at sub-nanometer to micrometer scale and measure the size, thickness, and concentration of exfoliation in generic dispersions of graphene/graphene oxide is exciting and holds exceptional promise for the development of energy and thermally advanced products," Mr Abedin said.

Professor Dusan Losic, Director of the Australian Research Council's Hub on Graphene Enabled Industry Transformation, said: "These outstanding outcomes from our ARC Research Hub will make a significant impact on the emerging multibillion-dollar graphene industry, giving graphene manufacturers and end-users a new, simple quality control tool, currently missing, to define the quality of the graphene materials they produce."

Credit: 
Monash University

New study warns: We have underestimated the pace at which the Arctic is melting

Temperatures in the Arctic Ocean between Canada, Russia and Europe are warming faster than researchers' climate models have been able to predict.

Over the past 40 years, temperatures have risen by one degree every decade, and even more so over the Barents Sea and around Norway's Svalbard archipelago, where they have increased by 1.5 degrees per decade throughout the period.

This is the conclusion of a new study published in Nature Climate Change.

"Our analyses of Arctic Ocean conditions demonstrate that we have been clearly underestimating the rate of temperature increases in the atmosphere nearest to the sea level, which has ultimately caused sea ice to disappear faster than we had anticipated," explains Jens Hesselbjerg Christensen, a professor at the University of Copenhagen's Niels Bohr Institutet (NBI) and one of the study's researchers.

Together with his NBI colleagues and researchers from the Universities of Bergen and Oslo, the Danish Meteorological Institute and the Australian National University, he compared current temperature changes in the Arctic with climate fluctuations known from, for example, Greenland during the ice age, between 120,000 and 11,000 years ago.

"The abrupt rise in temperature now being experienced in the Arctic has only been observed during the last ice age. During that time, analyses of ice cores revealed that temperatures over the Greenland Ice Sheet increased several times, between 10 to 12 degrees, over a 40 to 100-year period," explains Jens Hesselbjerg Christensen.

He emphasizes that the significance of this steep rise in temperature has yet to be fully appreciated, and that an increased focus on the Arctic, and on reducing global warming more generally, is a must.

Climate models ought to take abrupt changes into account
Until now, climate models have predicted that Arctic temperatures would increase slowly and steadily. However, the researchers' analysis demonstrates that these changes are happening at a much faster pace than expected.

"We have looked at the climate models analysed and assessed by the UN Climate Panel. Only those models based on the worst-case scenario, with the highest carbon dioxide emissions, come close to what our temperature measurements show over the past 40 years, from 1979 to today," says Jens Hesselbjerg Christensen.

In the future, there ought to be more of a focus on being able to simulate the impact of abrupt climate change on the Arctic. Doing so will allow us to create better models that can accurately predict temperature increases:

"Changes are occurring so rapidly during the summer months that sea ice is likely to disappear faster than most climate models have ever predicted. We must continue to closely monitor temperature changes and incorporate the right climate processes into these models," says Jens Hesselbjerg Christensen. He concludes:

"Thus, successfully implementing the necessary reductions in greenhouse gas emissions to meet the Paris Agreement is essential in order to ensure a sea-ice packed Arctic year-round."

Credit: 
University of Copenhagen

Proven: Historical climate changes occurred simultaneously in several parts of the world

In a kind of domino effect, rising temperatures and changing precipitation rates converged across the planet during the last ice age, which stretched from 120,000 to 11,700 years ago.

According to a study published on 21 August 2020 in the journal Science, climate changes in several areas of the world affected one another.

"By analyzing stalactite measurements from caves in South America, Asia, Europe and ice core samples from Greenland, we are able to see that 34 of the last ice age's 37 abrupt climate changes occurred simultaneously in each of these regions. In Greenland, abrupt warming caused temperatures to rapidly increase by roughly 20 degrees on 37 occasions during the last ice age, while other regions were particularly impacted by sudden changes in precipitation patterns," explains Sune Olander Rasmussen, an associate professor at the University of Copenhagen's Niels Bohr Institute.

"While we don't completely understand what caused these climatic changes, they are most likely the result of changes to the strength of the Gulf Stream," elaborates Rasmussen.

For many years, there has been a running assumption that climate changes spread like ripples across water, thereby affecting several areas of the planet at once.

But until now, such occurrences had never been mapped accurately.

By measuring radioactive elements with known half-lives in stalactites from caves, as well as by counting annual layers in ice cores drilled from the Greenland Ice Sheet, the researchers were able to pinpoint when the climate changes occurred.
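The stalactite ages rest on the decay-clock principle: an isotope with a known half-life acts as a timer. Speleothem dating in practice relies on uranium-thorium ingrowth and several corrections, so the sketch below only illustrates the basic idea; the 75,000-year half-life (roughly that of thorium-230) and the 60% remaining fraction are illustrative numbers, not values from the study.

# Decay-clock principle: if a fraction f of a parent isotope remains,
# the elapsed time is t = t_half * log2(1 / f).
import math

def elapsed_years(remaining_fraction, half_life_years):
    """Elapsed time implied by the surviving fraction of a parent isotope."""
    return half_life_years * math.log2(1.0 / remaining_fraction)

# Illustrative numbers only: 75,000-year half-life, 60% of parent remaining
print(round(elapsed_years(0.60, 75_000)))  # roughly 55,000 years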

"This is the first time that we have been able to gather so much data from across the last ice age, at so many different sites, and prove that climate change occurred simultaneously in several parts of the world -- with as little as 100 years of uncertainty. When working on climate change spanning tens of thousands of years, this is considered to be very accurate," says Sune Olander Rasmussen.

We should prepare for abrupt temperature swings

While the study does not delve directly into modern-day climate change, Rasmussen believes that its results are valuable for identifying the mechanisms behind abrupt climate change, both today and in the future.

"Abrupt changes in temperature or global sea level could transpire at rates that are difficult for us to adapt to. Denmark is no exception, even though we have more resources than many other countries for dealing with the problems associated," he says.

Much of the current policy agenda is based upon reducing gradual global warming. But a new agenda, one that deals with the risk of fundamentally different scenarios -- abrupt climate changes -- is on the way. According to Rasmussen:

"It is at least as important for us to reduce the risks associated with abrupt climate changes, as it is to work to reduce the scope of gradual climate change -- just as we as a society devote considerable resources to reduce the risks of serious, yet unlikely disasters such as nuclear accidents and plane crashes."

Dating methods confirmed

Beyond establishing that the climate changes of the last ice age occurred simultaneously, the researchers were also able to confirm that their dating methods work:

"We have worked to date ice cores from the Greenland Ice Sheet by counting annual core layers for many years. The new results demonstrate that we are better at doing so than we could have ever imagined," explains Sune Olander Rasmussen.

For example, if the researchers had missed just one or two percent of the annual layers in the ice cores, they would have miscalculated by thousands of years.
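A quick back-of-the-envelope calculation shows why: over a record spanning roughly 108,000 years (120,000 to 11,700 years ago), even a one or two percent counting error adds up to between about one and two thousand years. The assumption that annual layers are counted across that entire span is made here purely for illustration.

# Back-of-the-envelope: how a small layer-counting error compounds.
# The span of the last ice age is taken from the article; counting
# annual layers over the full span is an illustrative assumption.
record_years = 120_000 - 11_700            # about 108,300 annual layers
for miss_rate in (0.01, 0.02):             # 1% and 2% of layers missed
    error = record_years * miss_rate
    print(f"{miss_rate:.0%} of layers missed -> off by about {error:,.0f} years")
# Output:
# 1% of layers missed -> off by about 1,083 years
# 2% of layers missed -> off by about 2,166 years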

"We have steered clear of major errors and find the same patterns of climate change in both the ice cores and stalactites. The number of years between each of the 37 climate changes match so well that it confirms our interpretation and counting of the annual core layers. This is important for the future of climate change research because it means that we can combine and better use ice core and stalactite data together," concludes Sune Olander Rasmussen.

Credit: 
University of Copenhagen