Tech

Signals from inside the Earth: Borexino experiment releases new data on geoneutrinos

image: View into the interior of the Borexino detector.

Image: 
Copyright: Borexino Collaboration

Scientists involved in the Borexino collaboration have presented new results for the measurement of neutrinos originating from the interior of the Earth. The elusive "ghost particles" rarely interact with matter, making their detection difficult. With this update, the researchers have now registered 53 events - almost twice as many as in the previous analysis of the data from the Borexino detector, which is located 1,400 metres below the Earth's surface in the Gran Sasso massif near Rome. The results provide an exclusive insight into processes and conditions in the Earth's interior that remain puzzling to this day.

The Earth is shining, even if the glow is not at all visible to the naked eye. The reason for this is geoneutrinos, which are produced in radioactive decay processes in the interior of the Earth. Every second, about one million of these elusive particles penetrate every square centimetre of our planet's surface.

The Borexino detector, located in the world's largest underground laboratory, the Laboratori Nazionali del Gran Sasso in Italy, is one of the few detectors in the world capable of observing these ghostly particles. Researchers have been using it to collect data on neutrinos since 2007 - that is, for well over ten years. By 2019, they were able to register twice as many events as at the time of the last analysis in 2015, and to reduce the uncertainty of the measurements from 27 to 18 percent, thanks in part to new analysis methods.

"Geoneutrinos are the only direct traces of the radioactive decays that occur inside the Earth, and which produce an as yet unknown portion of the energy driving all the dynamics of our planet," explains Livia Ludhova, one of the two current scientific coordinators of Borexino and head of the neutrino group at the Nuclear Physics Institute (IKP) at Forschungszentrum Jülich.

By exploiting the well-known contribution from the Earth's uppermost mantle and crust -- the so-called lithosphere -- the researchers in the Borexino collaboration have extracted, with improved statistical significance, the signal of geoneutrinos coming from the Earth's mantle, which lies below the crust.

The intense magnetic field, the unceasing volcanic activity, the movement of the tectonic plates, and mantle convection: The conditions inside the Earth are in many ways unique in the entire solar system. Scientists have been discussing the question of where the Earth's internal heat comes from for over 200 years.

"The hypothesis that there is no longer any radioactivity at depth in the mantle can now be excluded at 99% confidence level for the first time. This makes it possible to establish lower limits for uranium and thorium abundances in the Earth's mantle," says Livia Ludhova.

These values are of interest for many different Earth model calculations. For example, it is highly probable (85%) that radioactive decay processes inside the Earth generate more than half of the Earth's internal heat, while the other half is still largely derived from the original formation of the Earth. Radioactive processes in the Earth therefore provide a non-negligible portion of the energy that feeds volcanoes, earthquakes, and the Earth's magnetic field.

The latest publication in Phys. Rev. D not only presents the new results, but also explains the analysis in a comprehensive way from both the physics and geology perspectives, which will be helpful for next-generation liquid scintillator detectors that will measure geoneutrinos. The next challenge for geoneutrino research is to measure geoneutrinos from the Earth's mantle with greater precision, perhaps with detectors distributed at different positions on our planet. One such detector will be the JUNO detector in China, in which the IKP neutrino group is involved. The detector will be 70 times bigger than Borexino, which will help achieve higher statistical significance in a short time span.

Credit: 
Forschungszentrum Juelich

Mapping the path of climate change

image: The maximum likelihood transition from the current temperature state to the warmer one for global warming, under the influence of Levy noise and the greenhouse effect. (y-axis is temperature in Kelvin; x-axis is time in undetermined segments).

Image: 
Yayun Zheng

WASHINGTON, January 22, 2020 - Since 1880, the Earth's temperature has risen by 1.9 degrees Fahrenheit and is predicted to continue rising, according to the NASA Global Climate Change website. Scientists are actively seeking to understand this change and its effect on Earth's ecosystems and residents.

In Chaos, by AIP Publishing, scientists Yayun Zheng, Fang Yang, Jinqiao Duan, Xu Sun, Ling Fu and Jürgen Kurths present detailed research on climate change shifts, describing the mechanisms behind abrupt transitions in global weather. Predicting a major transition, such as climate change, is extremely difficult, but the probabilistic framework developed by the authors is a first step toward identifying the most likely path of a shift between two environmental states.

The researchers developed a climate change model based on a probabilistic framework to explore the maximum likelihood climate change path for an energy balance system under the influence of the greenhouse effect and Lévy fluctuations. These fluctuations, which can present themselves as volcanic eruptions or huge solar outbreaks, for example, are suggested to be one factor that can trigger an abrupt climatic transition.
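To make the ingredients of such a model concrete, the sketch below integrates a simple zero-dimensional energy balance equation with an ice-albedo feedback and perturbs it with alpha-stable Lévy noise. It is only a minimal illustration under assumed values: the parameters Q, EPS, C_EFF, ALPHA and NOISE, the albedo function and the time units are illustrative choices, not the authors' calibrated model.

```python
# A minimal sketch (illustrative parameters, not the authors' calibrated model):
# a zero-dimensional energy balance equation for a global temperature T,
# integrated with an Euler scheme and perturbed by alpha-stable Levy noise.
import numpy as np
from scipy.stats import levy_stable

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 341.3         # mean incoming solar radiation, W m^-2
EPS = 0.61        # effective emissivity; a smaller value mimics a stronger greenhouse effect
C_EFF = 50.0      # effective heat capacity in arbitrary units (sets the relaxation speed)
ALPHA = 1.8       # Levy stability index; alpha = 2 would recover Gaussian noise
NOISE = 1.0       # noise intensity, illustrative

def albedo(temp_k):
    """Ice-albedo feedback: the planet reflects more sunlight when it is colder."""
    return 0.5 - 0.2 * np.tanh((temp_k - 265.0) / 10.0)

def simulate(t0=288.0, dt=0.05, n_steps=20_000, seed=1):
    temps = np.empty(n_steps)
    temps[0] = t0
    # Heavy-tailed Levy increments; dt**(1/alpha) is the self-similar scaling of a Levy process.
    jumps = levy_stable.rvs(ALPHA, 0.0, scale=NOISE * dt ** (1.0 / ALPHA),
                            size=n_steps - 1, random_state=seed)
    for i in range(1, n_steps):
        t_prev = temps[i - 1]
        drift = (Q * (1.0 - albedo(t_prev)) - EPS * SIGMA * t_prev ** 4) / C_EFF
        temps[i] = t_prev + drift * dt + jumps[i - 1]
    return temps

trajectory = simulate()
print(f"Temperature range visited: {trajectory.min():.1f} K to {trajectory.max():.1f} K")
```

Making ALPHA smaller produces heavier-tailed jumps, which is the mechanism by which Lévy noise can push the system abruptly from one temperature state to another.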

Examples of phenomena attributed to such noise fluctuations include the rapid climate changes that occurred 25 times during the last glacial period, a series of pauses in geophysical turbulence, and protein production in gene regulation, which occurs in bursts.

"Although the climate changes may not easily be accurately predicted, we offer insights about the most likely trend in such changes for the surface temperature," said Duan. "In the present paper, we have uncovered that the maximum likelihood path, under an enhanced greenhouse effect, is a step-like growth process when transferring from the current temperature state to the high temperature one."

By understanding this step-like growth process, the authors can map out the path that climate change may take under the greenhouse effect. The researchers found that stronger noise fluctuations can result in abrupt shifts from a cold climate state to a warmer one.

"The maximum likelihood path will be expected to be an efficient research tool, in order to better understand the climate changes under the greenhouse effect combined with non-Gaussian fluctuations in the environment," said Duan.

Credit: 
American Institute of Physics

The color of your clothing can impact wildlife

video: This is a brief explainer video on research showing that the color of your clothing can impact wildlife.

Image: 
Binghamton University, State University of New York

BINGHAMTON, N.Y. - Your choice of clothing could affect the behavioral habits of wildlife around you, according to a study conducted by a team of researchers, including faculty at Binghamton University, State University of New York.

Lindsey Swierk, assistant research professor of biological sciences at Binghamton University, collaborated on a study with the goal of seeing how ecotourists could unintentionally have an effect on wildlife native to the area. Swierk and her team went to Costa Rica to conduct this study on water anoles (Anolis aquaticus), a variety of the anole lizard. 

"I've studied water anoles for five years now, and I still find myself surprised by their unique natural history and behavior," Swierk said. "One reason water anoles were chosen is because they are restricted to a fairly small range and we could be pretty sure that these particular populations hadn't seen many humans in their lifetimes. So we had a lot of confidence that these populations were not biased by previous human interactions."

The researchers went to the Las Cruces Biological Center in Costa Rica. To collect data, they visited three different river locations wearing one of three different colored shirts: orange, green or blue. The study's focus was to see how these water anoles reacted to the different colors. Orange was chosen because the water anole has orange sexual signals. Blue was chosen as a contrast, as the water anole's body lacks the color blue. Green was selected as a similar color to the tropical forest environment of the testing site. 

"Based on previous work on how animals respond to color stimuli, we developed a hypothesis that wearing colors that are 'worn' by water anoles themselves would be less frightening to these lizards," Swierk said.

The results of the study supported that hypothesis, with researchers with orange shirts reporting more anoles seen per hour and a higher anole capture percentage. Despite predicting the result, Swierk said that she was surprised by some of the findings. "It was still very surprising to see that the color green, which camouflaged us well in the forest, was less effective than wearing a very bright orange!" 

Swierk said that one of the biggest takeaways from this study is that we may not yet quite understand how animals view the world. 

"We (both researchers and ecotourists) need to recognize that animals perceive the world differently than we do as humans," said Swierk. "They have their own 'lenses' based on their unique evolutionary histories. What we imagine is frightening for an animal might not be; conversely, what we imagine is non-threatening could be terrifying in reality."

Swierk hopes that these results may be used within the ecotourism community to reduce impacts on the wildlife that they wish to view.

She said that more research needs to be done before this study can be related to other animals with less-sophisticated abilities of color perception, such as mammals. 

Swierk collaborated with Breanna J. Putman, Department of Biology at California State University at San Bernardino; and Andrea Fondren, undergraduate researcher in the Iowa State University Department of Ecology, Evolution and Organismal Biology.

The paper, "Clothing color mediates lizard responses to humans in a tropical forest," was published in Biotropica

Credit: 
Binghamton University

A new approach to reveal the multiple structures of RNA

image: The figure represents the conformational ensemble of an RNA hairpin obtained using a combination of nuclear magnetic resonance (NMR) and molecular dynamics simulations. Although the dominant structure is the top-left one, all of them are required to explain the NMR data and some of them might be relevant for the biological function of this RNA molecule.

Image: 
Sabine Reisser

Experimental data and computer simulations come together to provide an innovative technique able to characterise the different configurations of an RNA molecule. The work, published in Nucleic Acids Research, opens new roads to study dynamic molecular systems.

The key to the extraordinary functionality of ribonucleic acid, better known as RNA, is its highly flexible and dynamic structure. Yet, the experimental characterisation of its different configurations is rather complex. A study conducted by SISSA and published in Nucleic Acids Research combines experimental data and molecular dynamics simulations to reconstruct the different dominant and minority structures of a single RNA fragment, providing an innovative method to study highly dynamic molecular systems.

"RNA plays a central role in the process of protein synthesis and in its regulation. This is also thanks to a structure that is not static but varies dynamically from one state to another," explains Giovanni Bussi, physicist at SISSA - Scuola Internazionale Superiore di Studi Avanzati and principal investigator of this work. "It is a 'set' of configurations, with a dominant structure maintained by the molecule for the majority of the time and some 'low population' structures that are rarer but equally important for its function and possible interaction with other molecules."

Experimental techniques, such as nuclear magnetic resonance (NMR), can in principle probe these sets of configurations. However, to obtain the structure of the molecule, it is often assumed that a single relevant conformation exists. Unless particularly sophisticated experimental approaches are used, such techniques provide an 'average' structure, in short a mathematical average of the multiple states present, which, in dynamic systems, does not exist in reality.

Giovanni Bussi and Sabine Reißer, in collaboration with the group of the neurobiologist Stefano Gustincich at the Italian Institute of Technology (IIT), have developed a new way to identify the different configurations of an RNA molecule. "We studied a fragment of RNA with a characteristic hairpin structure that was identified in Gustincich's group and has an important role in regulating protein synthesis. By combining the NMR data with molecular dynamics simulations, we have mimicked different states of the molecule, including those of low population, and identified those that were able to reproduce the 'average' structure determined empirically."
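The general idea of combining simulated ensembles with averaged experimental observables can be illustrated with a toy reweighting calculation: given a value of an observable predicted for each simulated conformation, find ensemble weights whose weighted average matches the measured value while staying close to uniform. The sketch below is only a conceptual illustration with invented numbers (predicted, experimental_avg and reg are assumptions), not the reweighting scheme actually used in the paper.

```python
# Toy illustration (not the authors' actual algorithm): adjust the weights of
# simulated conformations so the ensemble average reproduces an experimental
# (e.g. NMR-derived) value, while penalizing departure from uniform weights.
import numpy as np
from scipy.optimize import minimize

# Predicted value of one NMR-like observable for 5 hypothetical conformations.
predicted = np.array([1.2, 3.4, 2.1, 5.0, 2.8])
experimental_avg = 2.5   # the measured ensemble-averaged value (made up)
reg = 0.1                # strength of the penalty for non-uniform weights

def loss(theta):
    w = np.exp(theta - theta.max())   # softmax keeps weights positive...
    w /= w.sum()                      # ...and normalized to sum to 1
    mismatch = (w @ predicted - experimental_avg) ** 2
    # Relative entropy with respect to uniform weights (small when w is nearly uniform).
    entropy_penalty = reg * np.sum(w * np.log(w * len(w) + 1e-12))
    return mismatch + entropy_penalty

result = minimize(loss, x0=np.zeros(predicted.size), method="Nelder-Mead")
weights = np.exp(result.x - result.x.max())
weights /= weights.sum()
print("conformation weights:", np.round(weights, 3))
print("reweighted average:  ", round(float(weights @ predicted), 3))
```

In this spirit, low-population conformations can end up with small but non-zero weights because they are needed to reproduce the measured average.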

The scientists then searched the Protein Data Bank, a database of molecular structures acquired experimentally: "We found some of the low population configurations identified by the simulations within other RNA molecules, including the ribosomal RNAs of the ribosomes, the molecular machines responsible for protein synthesis. Thus, we confirmed the existence of these multiple structures in nature and suggested a potential role at the ribosomal level," concludes Bussi. "We have demonstrated the validity of this innovative approach, which opens new roads for the study of molecular structures in highly dynamic systems in which states with different populations co-exist."

Credit: 
Scuola Internazionale Superiore di Studi Avanzati

Old molecule, new tricks

Fifty years ago, scientists hit upon what they thought could be the next rocket fuel. Carboranes -- molecules composed of boron, carbon and hydrogen atoms clustered together in three-dimensional shapes -- were seen as the possible basis for next-generation propellants due to their ability to release massive amounts of energy when burned.

It was technology that at the time had the potential to augment or even surpass traditional hydrocarbon rocket fuel, and was the subject of heavy investment in the 1950s and 60s.

But things didn't pan out as expected.

"It turns out that when you burn these things you actually form a lot of sediment," said Gabriel Ménard, an assistant professor in UC Santa Barbara's Department of Chemistry and Biochemistry. In addition to other problems found when burning this so-called "zip fuel," its residue also gummed up the works in rocket engines, and so the project was scrapped.

"So they made these huge stockpiles of these compounds, but they actually never used them," Ménard said.

Fast forward to today, and these compounds have come back into vogue with a wide range of applications, from medicine to nanoscale engineering. For Ménard and fellow UCSB chemistry professor Trevor Hayton, as well as Tel Aviv University chemistry professor Roman Dobrovetsky, carboranes could hold the key to more efficient uranium ion extraction. And that, in turn, could enable things like better nuclear waste reprocessing and uranium (and other metal) recovery from seawater.

Their research -- the first example of applying electrochemical carborane processes to uranium extraction -- is published in a paper that appears in the journal Nature.

Key to this technology is the versatility of the cluster molecule. Depending on their composition, these structures can resemble closed cages or more open nests, a switch governed by the compound's redox activity -- its readiness to donate or gain electrons. This allows for the controlled capture and release of metal ions, which in this study was applied to uranium ions.

"The big advancement here is this 'catch and release' strategy where you can switch between two states, where one state binds the metal and another state releases the metal," Hayton said.

Conventional processes, such as the popular PUREX process that extracts plutonium and uranium, rely heavily on solvents, extractants and extensive processing.

"Basically, you could say it's wasteful," Ménard said. "In our case, we can do this electrochemically -- we can capture and release the uranium with the flip of a switch.

"What actually happens," added Ménard, "is that the cage opens up." Specifically, the formerly closed ortho-carborane becomes an opened nido- ("nest") carborane capable of capturing the positively-charged uranium ion.

The controlled release of extracted uranium ions, however, is conventionally not as straightforward and can be somewhat messy. According to the researchers, such methods are "less established and can be difficult, expensive and/or destructive to the initial material."

But here, the researchers have devised a way to reliably and efficiently flip back and forth between open and closed carboranes, using electricity. By applying an electrical potential using an electrode dipped in the organic portion of a biphasic system, the carboranes can receive and donate the electrons needed to open and close and capture and release uranium, respectively.

"Basically you can open it up, capture uranium, close it back up and then release uranium," Ménard said. The molecules can be used multiple times, he added.

This technology could be used for several applications that require the extraction of uranium and by extension, other metal ions. One area is nuclear reprocessing, in which uranium and other radioactive "trans-uranium" elements are extracted from spent nuclear material for storage and reuse (the PUREX process).

"The problem is that these trans-uranium elements are very radioactive and we need to be able to store these for a very long time because they're basically very dangerous," Ménard said. This electrochemical method could allow for the separation of uranium from plutonium, similar to the PUREX process, he explained. The extracted uranium could then be enriched and put back into the reactor; the other high-level waste can be transmuted to reduce their radioactivity.

Additionally, the electrochemical process could also be applied to uranium extraction from seawater, which would ease pressure on the terrestrial mines where all uranium is currently sourced.

"There's about a thousand times more dissolved uranium in the oceans than there are in all the land mines," Ménard said. Similarly, lithium -- another valuable metal that exists in large reserves in seawater -- could be extracted this way, and the researchers plan to take this research direction in the near future.

"This gives us another tool in the toolbox for manipulating metal ions and processing nuclear waste or doing metal capture out of oceans," Hayton said. "It's a new strategy and new method to achieve these types of transformations."

Research in this study was conducted also by Megan Keener (lead author), Camden Hunt and Timothy G. Carroll at UCSB; and by Vladimir Kampel at Tel Aviv University.

Credit: 
University of California - Santa Barbara

Tiny price gaps cost investors billions

image: Looking a bit like a bowl of spaghetti, this map shows the general scheme of the US stock market -- formally known as the National Market System -- as described by a team of scientists at the University of Vermont and The MITRE Corporation. Spread out between four communities in northern New Jersey, and with many back-and-forth flows of information, some faster than others, this complex system has contributed to some investors seeing prices earlier than other investors.

Image: 
UVM/MITRE

Imagine standing in the grocery store, looking at a pile of bananas. On your side of the pile, the manager has posted yesterday's newspaper flyer, showing bananas at 62¢ per pound--so that's what you pay at the register. But on the other side of the pile, there's an up-to-the-minute screen showing that the price of bananas has now dropped to 48¢ per pound--so that's what the guy over there pays. Exact same bananas, but the price you see depends on which aisle you're standing in.

New research from the University of Vermont and The MITRE Corporation shows that a similar situation--that the scientists call an "opportunity cost due to information asymmetry"--appears to be happening in the U.S. stock market.

And, the research shows, it's costing investors at least two billion dollars each year.

The first of three studies, "Fragmentation and inefficiencies in the US equity markets: Evidence from the Dow 30," was published on January 22 in the open-access journal PLOS ONE.

LIGHT SPEED

Instead of price discrepancies over days or even seconds, these stock market "dislocations" blink into existence for mere microseconds--far faster than a person could perceive--but still real and driven by the strange fact that information can move no faster than the speed of light.

This ultimate limit has become more important as trading computers have gotten faster--especially since 2005 when regulation changed and as various outlets of the ostensibly singular US stock market have been spread to several locations over dozens of miles across the Hudson River from Manhattan in northern New Jersey. "Even in cartoon form, some refer to our simple map of the stock market as a gigantic bowl of spaghetti," says Brian Tivnan, a research scientist with both UVM and MITRE, who co-led the new study.

This increasingly complex trading arrangement--formally known as the "National Market System"--includes the New York Stock Exchange, NASDAQ, and many other nodes including ominous-sounding private trading venues called "dark pools." Therefore, as price information, even at near the speed of light, winds about in this electronic spaghetti, it reaches some traders later than others.
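A back-of-the-envelope calculation shows why geography alone produces microsecond-scale information gaps. The distances used below are assumptions for illustration, not the measured separations between the actual exchange data centers.

```python
# Rough check (distances assumed): even at the speed of light, price information
# needs tens of microseconds to cross the separations between trading venues,
# comparable to the dislocation durations discussed later in the article.
SPEED_OF_LIGHT_KM_PER_S = 299_792.458

for distance_km in (10, 30, 60):   # assumed venue separations in kilometres
    delay_us = distance_km / SPEED_OF_LIGHT_KM_PER_S * 1e6
    print(f"{distance_km} km -> {delay_us:.0f} microseconds one-way at light speed")
```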

And, like the two aisles in the supermarket, some people buying and selling stocks use a relatively inexpensive, slower public feed of information about prices, called the Securities Information Processor, or "SIP," while other traders--millions of times each day--are shown a price earlier, if they have access to very expensive, faster, proprietary information called a "direct feed."

The result: not all traders see the best available price at any moment in time, as they should according to both leading academic theories and market regulation. "That's not supposed to happen," says UVM scientist Chris Danforth, who co-led the new study, "but our close look at the data shows that it does."

This early information presents the opportunity for what economists call "latency arbitrage," which brings us back to the bananas. Now imagine that the guy in the other aisle, who knows that bananas can be had at this moment for 48¢/pound, buys the whole bunch, steps into your aisle and sells them to all the people who can only see the 62¢ price. Each pound of bananas only profits him 14¢--but suppose he could sell a million pounds of bananas each day.

The research team, housed in UVM's Computational Finance Lab --and with crucial work by UVM doctoral students David Dewhurst, Colin Van Oort, John Ring and Tyler Gray, as well as MITRE scientists Matthew Koehler, Matthew McMahon, David Slater and Jason Veneman and research intern, Brendan Tivnan--found billions of similar opportunities for latency arbitrage in the U.S. stock market over the course of the year they studied. Using blazing-fast computers, so-called high-frequency traders can buy stocks at slightly better prices, and then, in far less than the blink of an eye, turn around and sell them at a profit.

"We're not commenting on whether this is fair. It is certainly permissible under current regulation. As scientists, we're just rigorously looking at the data and showing that it is true," says Tivnan. For the new PLOS ONE study, the research team used data from the thirty stocks that make up the Dow Jones Industrial Average--and studied every price quote and trade made for an entire year, 2016.

APPLES TO APPLE

In one case highlighted in the new PLOS study, the team looked at the sale of shares of Apple, Inc. on the morning of January 7, 2016. The scientists picked out any price dislocation greater than a penny that lasted longer than 545 millionths of a second--enough time for a high-speed trade. In one moment, "on the offer side from 9:48:55.396886 to 9:48:55.398749 (a duration of 1863 microseconds)," the researchers write, "the SIP best offer remained at $99.11 and the Direct best offer remained at $99.17. Thus, any bid orders submitted during this period stood to save $0.06 per share."

And, in fact, one hundred shares of Apple--at approximately 9:48:55.396951 in the morning--sold for $99.11 when they might have fetched six cents per share more, costing that investor a few dollars, about the price of a few bananas. But, multiplied 120 million times across just the thirty stocks that make up the Dow Jones Industrial Average--as the scientists report in their new study--this kind of price gap cost investors more than $160 million. And over the larger Russell 3000 index, the result across the market was a cost of at least $2 billion.
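The kind of screening described in the Apple example can be sketched in a few lines: walk through paired best-offer quotes from the two feeds and flag windows where they disagree by more than a penny for at least the minimum duration. The quotes below are invented for illustration; this is not the study's actual data pipeline or dataset.

```python
# Toy sketch (made-up quotes, not the study's pipeline): flag "dislocations"
# where the SIP and direct-feed best offers differ by more than a penny for
# longer than a minimum duration. Timestamps are microseconds since some epoch.
MIN_GAP = 0.01          # dollars
MIN_DURATION_US = 545   # microseconds, the threshold used in the example above

# (timestamp_us, sip_best_offer, direct_best_offer) -- hypothetical data
quotes = [
    (1_000, 99.17, 99.17),
    (1_400, 99.11, 99.17),   # feeds start to disagree here
    (3_600, 99.11, 99.11),   # ...and re-converge here
    (5_000, 99.12, 99.12),
]

def find_dislocations(quotes, min_gap=MIN_GAP, min_duration_us=MIN_DURATION_US):
    """Return (start_us, end_us, gap) for every qualifying price dislocation."""
    events, start, gap = [], None, 0.0
    for ts, sip, direct in quotes:
        diverged = abs(sip - direct) > min_gap
        if diverged and start is None:
            start, gap = ts, abs(sip - direct)          # dislocation opens
        elif not diverged and start is not None:
            if ts - start >= min_duration_us:           # long enough to trade on
                events.append((start, ts, gap))
            start = None                                # dislocation closes
    return events

for start, end, gap in find_dislocations(quotes):
    print(f"dislocation of ${gap:.2f} lasting {end - start} microseconds")
```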

The new PLOS study, and two related ones, are the first public research to make direct observation of the most comprehensive stock market dataset available to regulators and investors. With support from the Departments of Defense and Homeland Security, and the National Science Foundation, the researchers at MITRE and UVM were able to examine direct feeds that customarily cost high-end investors hundreds of thousands of dollars each month.

"In short, what we discovered is that from these momentary blips in the market, some people must have made a lot of money," say UVM's Chris Danforth, a professor in the Department of Mathematics & Statistics and Complex Systems Center.

ON WALL STREET

The Wall Street Journal broke the news on these studies last year, when they were still on a public pre-print server, the arXiv. Now the first of them has completed peer review and is being published in PLOS ONE. The second, which examines a broader pool of evidence of these market "inefficiencies" in nearly 3,000 different stocks, is in revisions and remains posted on the pre-print arXiv. And a third, even more far-reaching study, is in development by the team.

Since the Wall Street Journal article was published, the Securities and Exchange Commission appears to have grown more concerned about these price gaps and the different data streams that investors have to work with. On January 8, the SEC put out a request for comment on a newly proposed set of rules to modernize the governance of how the National Market System produces and disseminates data. Since 2005, "the speed and dispersion of trading activity have increased substantially," the commission writes, and, "there have not been adequate improvements made to address important differences between consolidated market data and proprietary data products."

The scientists in UVM's Computational Finance Lab saw this coming. "Along with others in the scientific community, we identified these same concerns, probably five years ago or more," notes Brian Tivnan. "But our study is the first to quantify the implications of these concerns."

How to fix these differences between players in the market will be difficult, the researchers think. "Dislocations are intrinsic to a fragmented market," Tivnan says, such as now exists in the U.S. stock market with multiple exchanges spread out between four New Jersey communities and with many complex back-and-forth flows of information.

"No technological upgrade will eliminate dislocations," Tivnan says, "even if the exchanges could upgrade the underlying technology to transmit information at the speed of light."

Why can't faster shared technology fix the problem? "Even when controlling for technology, such that all investors rely on the same tech, relativistic effects dictate that the location of the investor will determine what that investor may observe," says Brian Tivnan. "That is, what you see depends on where you are in the market."

Credit: 
University of Vermont

Physicists obtain atomically thin molybdenum disulfide films on large-area substrates

image: An atomic layer deposition reactor used for obtaining ultrathin molybdenum oxide films, which were subsequently sulfurized to 2D molybdenum disulfide.

Image: 
Atomic Layer Deposition Lab, MIPT

Researchers from the Moscow Institute of Physics and Technology have managed to grow atomically thin films of molybdenum disulfide spanning up to several tens of square centimeters. It was demonstrated that the material's structure can be modified by varying the synthesis temperature. The films, which are of interest to electronics and optoelectronics, were obtained at 900-1,000 degrees Celsius. The findings were published in the journal ACS Applied Nano Materials.

Two-dimensional materials are attracting considerable interest due to their unique properties stemming from their structure and quantum mechanical restrictions. The family of 2D materials includes metals, semimetals, semiconductors, and insulators. Graphene, which is perhaps the most famous 2D material, is a monolayer of carbon atoms. It has the highest charge-carrier mobility recorded to date. However, graphene has no band gap under standard conditions, and that limits its applications.

Unlike graphene, molybdenum disulfide (MoS2) has a bandgap whose width makes it suitable for use in electronic devices. Each MoS2 layer has a sandwich structure, with a layer of molybdenum squeezed between two layers of sulfur atoms. Two-dimensional van der Waals heterostructures, which combine different 2D materials, show great promise as well. In fact, they are already widely used in energy-related applications and catalysis. Wafer-scale (large-area) synthesis of 2D molybdenum disulfide shows the potential for breakthrough advances in the creation of transparent and flexible electronic devices, optical communication for next-generation computers, as well as in other fields of electronics and optoelectronics.

"The method we came up with to synthesize MoS2 involves two steps. First, a film of MoO3 is grown using the atomic layer deposition technique, which offers precise atomic layer thickness and allows conformal coating of all surfaces. And MoO3 can easily be obtained on wafers of up to 300 millimeters in diameter. Next, the film is heat-treated in sulfur vapor. As a result, the oxygen atoms in MoO3 are replaced by sulfur atoms, and MoS2 is formed. We have already learned to grow atomically thin MoS2 films on an area of up to several tens of square centimeters," explains Andrey Markeev, the head of MIPT's Atomic Layer Deposition Lab.

The researchers determined that the structure of the film depends on the sulfurization temperature. The films sulfurized at 500 °C contain crystalline grains, a few nanometers each, embedded in an amorphous matrix. At 700 °C, these crystallites are about 10-20 nm across and the S-Mo-S layers are oriented perpendicular to the surface. As a result, the surface has numerous dangling bonds. Such a structure demonstrates high catalytic activity in many reactions, including the hydrogen evolution reaction. For MoS2 to be used in electronics, the S-Mo-S layers have to be parallel to the surface, which is achieved at sulfurization temperatures of 900-1,000 °C. The resulting films are as thin as 1.3 nm, or two molecular layers, and have a commercially significant (i.e., large enough) area.

The MoS2 films synthesized under optimal conditions were introduced into metal-dielectric-semiconductor prototype structures, which are based on ferroelectric hafnium oxide and model a field-effect transistor. The MoS2 film in these structures served as a semiconductor channel. Its conductivity was controlled by switching the polarization direction of the ferroelectric layer. When in contact with MoS2, the La:(HfO2-ZrO2) material, which was earlier developed in the MIPT lab, was found to have a residual polarization of approximately 18 microcoulombs per square centimeter. With a switching endurance of 5 million cycles, it topped the previous world record of 100,000 cycles for silicon channels.

Credit: 
Moscow Institute of Physics and Technology

Pitt researchers propose solutions for networking lag in massive IoT devices

image: Wei Gao, Ph.D., associate professor in the Department of Electrical and Computer Engineering.

Image: 
University of Pittsburgh

PITTSBURGH (Jan 22, 2020) -- The internet of things (IoT) spans everything from smart speakers and Wi-Fi-connected home appliances to manufacturing machines that use connected sensors to time tasks on an assembly line, warehouses that rely on automation to manage inventory, and surgeons who can perform extremely precise surgeries with robots. For these applications, timing is everything: a lagging connection could have disastrous consequences.

Researchers at the University of Pittsburgh's Swanson School of Engineering are taking on that task, proposing a system that would use currently underutilized resources in an existing wireless channel to create extra opportunities for lag-free connections. The process, which wouldn't require any additional hardware or wireless spectrum resources, could alleviate traffic backups on networks with many wireless connections, such as those found in smart warehouses and automated factories.

The researchers announced their findings at the Association for Computing Machinery's 2019 International Conference on Emerging Networking Experiments and Technologies, one of the leading research conferences in networking. The paper, titled "EasyPass: Combating IoT Delay with Multiple Access Wireless Side Channels" (DOI: 10.1145/3359989.3365421), was named Best Paper at the conference. It was authored by Haoyang Lu, PhD, Ruirong Chen, and Wei Gao, PhD.

"The network's automatic response to channel quality, or the signal-to-noise ratio (SNR), is almost always a step or two behind," explains Gao, associate professor in the Department of Electrical and Computer Engineering. "When there is heavy traffic on a channel, the network changes to accommodate it. Similarly, when there is lighter traffic, the network meets it, but these adaptations don't happen instantaneously. We used that lag - the space between the channel condition change and the network adjustment - to build a side channel solely for IoT devices where there is no competition and no delay."

This method, which the authors call "EasyPass," would exploit the existing SNR margin, using it as a dedicated side channel for IoT devices. Lab tests have demonstrated a 90 percent reduction in data transmission delay in congested IoT networks, with a throughput up to 2.5 Mbps over a narrowband wireless link that can be accessed by more than 100 IoT devices at once.

"The IoT has an important future in smart buildings, transportation systems, smart manufacturing, cyber-physical health systems, and beyond," says Gao. "Our research could remove a very important barrier holding it back."

Credit: 
University of Pittsburgh

Traces of the European Enlightenment found in the DNA of Western sign languages

image: An 1886 version of the American Sign Language manual alphabet.

Image: 
Gordon, Joseph C. (1886). Washington, D.C.: Bretano Bros.

AUSTIN, Texas -- Sign languages throughout North and South America and Europe have centuries-long roots in five European locations, a finding that gives new insight into the influence of the European Enlightenment on many of the world's signing communities and the evolution of their languages.

Linguists and biologists from The University of Texas at Austin and the Max Planck Institute for the Science of Human History adapted techniques from genetic research to study the origins of Western sign languages, identifying five European lineages -- Austrian, British, French, Spanish and Swedish -- that began spreading to other parts of the world in the late 18th century. The study, published in Royal Society Open Science, highlights the establishment and growth of European deaf communities during an age of widespread enlightenment and their impact on sign languages used today.

"While the evolution of spoken languages has been studied for more than 200 years, research on sign language evolution is still in its infancy," said Justin Power, a doctoral candidate in linguistics at UT Austin and the study's first author, noting that most research has been based on historical records from deaf educators and institutions. "But there is no a priori reason that one should only study spoken languages if one wants to better understand human language evolution in general."

Much like a geneticist would look to DNA to study traits passed down through generations, the study's researchers investigated data from dozens of Western sign languages, in particular, the manual alphabets, or sets of handshapes signers use to spell written words. As certain handshape forms were passed on within a lineage, they became characteristic of the lineage itself and could be used as markers to identify it.

To identify such markers and decipher the origins of each language, the researchers built the largest cross-linguistic comparative database to map the complex evolutionary relationships among 40 contemporary and 36 historical manual alphabets.

"In both biological and linguistic evolution, traits are passed on from generation to generation. But the types of traits that are passed on and the ways in which they are passed on differ. So, we might expect many differences in the ways that humans and their languages evolve," Power said. "This database allowed us to analyze a variety of reasons for commonalities between languages, such as inheriting from a common ancestral language, borrowing from an unrelated language, or developing similar forms independently."

The researchers grouped the sign languages into five main evolutionary lineages, which developed independently of one another between the mid-18th and early 19th centuries. They were also interested to find that three of the main continental lineages -- Austrian, French and Spanish -- all appeared to have been influenced by early Spanish manual alphabets, which represented the 22 letters of the Latin alphabet.

"It's likely that the early Spanish manual alphabets were used in limited ways by clergy or itinerant teachers of the deaf, but later signing communities added new handshapes to represent letters in the alphabets of their written languages," Power said. "When large-scale schools for the deaf were established, the manual alphabets came into use in signing communities by relatively large groups of people. It is at this point where we put the beginnings of most of these five lineages."

Data from the languages themselves confirmed many of the sign language dispersal events known from historical records, such as the influence of French Sign Language on deaf education and signing communities in many countries, including in Western Europe and the Americas. However, the researchers were surprised to trace the dispersal of Austrian Sign Language to central and northern Europe, as well as to Russia -- a lineage about which little was previously known.

"The study's findings give us a clearer picture about an important period in the histories of many signing communities and their languages," Power said. "We've also provided a blueprint for how methods from evolutionary biology can be applied to sign language data to gain new insights into sign language evolution and to generate new hypotheses that researchers can now test to push our understanding of the historical development of sign languages forward."

Credit: 
University of Texas at Austin

Climate-friendly food choices protect the planet, promote health, reduce health costs

image:  Jono Drew is lead researcher and Otago medical student.

Image: 
Jono Drew

Increased uptake of plant-based diets in New Zealand could substantially reduce greenhouse gas emissions while greatly improving population health and saving the healthcare system billions of dollars in the coming decades, according to a new University of Otago study.

Lead researcher and Otago medical student Jono Drew explains the global food system is driving both the climate crisis and the growing burden of common chronic diseases like cardiovascular disease, diabetes, and cancer.

"International research has highlighted the climate and health co-benefits that arise from consuming a diet that is rich in plant foods like vegetables, fruits, whole grains and legumes. We wanted to understand if this holds true here in New Zealand, and to tease out which eating patterns could offer the greatest co-benefits in this context."

The research team developed a New Zealand-specific food emissions database that, in estimating greenhouse gas emissions arising from foods commonly consumed in New Zealand, considers important parts of the 'lifecycle' of each food, including farming and processing, transportation, packaging, warehouse and distribution, refrigeration needs, and supermarket overheads. Using their database, the team was then able to model climate, health, and health system cost impacts stemming from a range of dietary scenarios.
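The basic mechanics of such scenario modelling can be sketched with a toy calculation: multiply per-food emissions factors by the amounts consumed under each dietary scenario and compare the annual totals. All numbers below are invented placeholders, not values from the Otago food emissions database, and the sketch ignores the health and cost modelling entirely.

```python
# Toy sketch (invented numbers, not the Otago database): combine per-food
# emissions factors with dietary scenarios to compare annual footprints.
# Emissions factors in kg CO2-equivalent per kg of food, purely illustrative.
emissions_factors = {"beef": 30.0, "chicken": 6.0, "legumes": 1.0,
                     "vegetables": 0.8, "whole_grains": 1.2}

# Daily intake per person in kg for two hypothetical eating patterns.
scenarios = {
    "baseline":    {"beef": 0.10, "chicken": 0.08, "legumes": 0.02,
                    "vegetables": 0.25, "whole_grains": 0.15},
    "plant_based": {"beef": 0.00, "chicken": 0.00, "legumes": 0.15,
                    "vegetables": 0.40, "whole_grains": 0.25},
}

def annual_footprint(diet, factors):
    """Annual per-person diet emissions in kg CO2e, assuming the same intake every day."""
    return 365 * sum(factors[food] * amount for food, amount in diet.items())

footprints = {name: annual_footprint(diet, emissions_factors)
              for name, diet in scenarios.items()}
for name, kg_co2e in footprints.items():
    print(f"{name}: {kg_co2e:,.0f} kg CO2e per person per year")
saving = 1 - footprints["plant_based"] / footprints["baseline"]
print(f"relative saving: {saving:.0%}")
```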

Senior author Dr Alex Macmillan, Senior Lecturer in Environmental Health, says results from the study show that greenhouse gas emissions vary considerably between different foods in New Zealand. As a general rule, the climate impact of animal-based foods, particularly red and processed meats, tends to be substantially higher than that of whole plant-based foods, including vegetables, fruits, legumes, and whole grains.

"Fortunately, foods that are health-promoting tend also to be those that are climate friendly. Conversely, certain foods that carry known health risks are particularly climate-polluting. Red and processed meat intake, for instance, is associated with an increased risk of cardiovascular disease, type-2 diabetes and certain cancers," Dr Macmillan says.

The research ultimately shows that a population-level dietary shift could, depending on the extent of changes made, offer diet-related emissions savings of between 4 and 42 per cent annually, along with health gains of between 1.0 and 1.5 million quality-adjusted life-years (a single quality-adjusted life-year is equal to one year of optimal health) and cost savings to the health system of NZD $14 to $20 billion over the lifetime of the current New Zealand population.

Mr Drew says the analysis reveals emissions savings equivalent to a 59 per cent reduction in New Zealand's annual light passenger vehicle emissions could be possible if New Zealand adults consumed an exclusively plant-based diet and avoided wasting food unnecessarily.

"All of our scenarios were designed to meet New Zealand's dietary guidelines. We began with a baseline scenario where we looked at minimal dietary changes required, relative to what New Zealanders are consuming now, to meet the guidelines. These changes included increased intake of vegetables, fruits, whole grains and milk, along with decreased intake of highly processed foods. From there, we tailored our dietary scenarios to be progressively more plant-based- that is, substituting animal-based foods with plant-based alternatives.

"We thought it was important to show what was possible if people were willing to make changes to their eating pattern, and what would be possible if our entire population made a significant shift in that same direction," Mr Drew says.

"As our modelled dietary scenarios became increasingly plant-based and therefore more climate-friendly, we found that associated population-level health gains and healthcare cost savings tended also to increase. A scenario that replaced all meat, seafood, eggs, and dairy products with plant-based alternatives, and that also required people to cut out all unnecessary household food waste, was found to offer the greatest benefit across all three of these parameters," he says.

Mr Drew says this is exciting because we can now better understand what it means to promote a climate-friendly eating pattern in the New Zealand context. "Essentially, the message is highly comparable to that being delivered in other countries already, and we should be rapidly looking for ways to effectively support our population in making eating pattern changes."

The researchers argue that these findings should prompt national policy action, including revising the New Zealand dietary guidelines to include messaging on climate-friendly food choices. They also advocate for the implementation of other policy tools, such as pricing strategies, labeling schemes, and food procurement guidelines for public institutions.

"Well-designed public policy is needed worldwide to support the creation of a global food system that no longer exacerbates the climate crisis, nor the burden of non-communicable disease," Mr Drew says.

Credit: 
University of Otago

Banning food waste: Lessons for rural America

image: As Vermont prepares to implement the first statewide food waste ban, new research shows rural communities could lead the way when it comes to reducing food waste.

Image: 
Brian Jenkins/UVM

While Vermonters support banning food waste from landfills - and a whopping 72 percent already compost or feed food scraps to their pets or livestock - few say they are willing to pay for curbside composting pick-up, new University of Vermont research shows.

The study comes as Vermont prepares to implement a mandatory law that makes it illegal to throw food items in the trash beginning July 1, 2020. Several large cities including San Francisco and Seattle have implemented similar policies, but Vermont is the first state to ban household food waste from landfills. The policy is the last phase of a universal state recycling law passed in 2012 that bans all food waste, "blue bin" recyclables and yard debris from landfills statewide by 2020.

"Reducing household food waste is a powerful way individuals can help reduce the impacts of climate change and save money," said Meredith Niles, UVM Food Systems and Nutrition and Food Sciences assistant professor and lead author of the study. "Vermont has made a significant commitment to this effort and it's exciting to see the majority of Vermonters are already composting to do their part."

Previous research by Niles and other UVM colleagues showed Americans waste nearly a pound of food daily, roughly one third of a person's recommended daily calories. When disposed of in a landfill, food waste rots and produces methane, a greenhouse gas 25 times more powerful than carbon dioxide over a 100-year period, according to the U.S. Environmental Protection Agency. Conversely, composting can aid in carbon sequestration and creates a natural fertilizer for farms and gardens.

While several states and municipalities are exploring food waste strategies, few studies have examined food waste perceptions and behaviors in rural communities.

"The trend in big cities has been to offer curbside compost pickup programs, especially in densely populated areas, but there isn't a one size fits all for how we manage food waste," said Niles. "Our study suggests that, especially in more rural areas, people may already be managing their food waste in a way that leaves it out of the landfills."

Niles surveyed nearly 600 households through the 2018 state Vermonter Poll, conducted annually by UVM's Center for Rural Studies. The study showed support for the new food waste ban, but only a minority of residents indicated they would be willing to pay for a future curbside compost pickup program. People in urban counties were significantly more likely to want curbside compost pickup compared to those managing their food waste through backyard composting or by feeding to pets or livestock.

"In a rural state like Vermont, households are generally further apart, which can increase food waste transport costs and have a negative environmental impact, especially if participation in a curbside compost program is low," said Niles, who is also a fellow at UVM's Gund Institute for Environment. "Instead, focusing curbside programs in densely populated areas may be more cost and environmentally effective and also garner greater household participation."

Research has shown the rates of home composting in Vermont are much higher than in other regions. One third of Vermonters indicated they are exclusively composting or feeding food scraps to pets or livestock, with no food scraps ending up in the trash. This research suggests that investing in education, outreach and infrastructure to help households manage their own food waste could have significant environmental and economic impacts in other rural regions seeking food waste management solutions.

Credit: 
University of Vermont

Global study finds predators are most likely to be lost when habitats are converted for human use

image: A Malaysian spider, one of the small predators found in our study to be most affected by habitat loss.

Image: 
Credit Tim Newbold

A first of its kind, global study on the impacts of human land-use on different groups of animals has found that predators, especially small invertebrates like spiders and ladybirds, are the most likely to be lost when natural habitats are converted to agricultural land or towns and cities. The findings are published in the British Ecological Society journal Functional Ecology.

Small ectotherms (cold blooded animals such as invertebrates, reptiles and amphibians), large endotherms (mammals and birds) and fungivores (animals that eat fungi) were also disproportionally affected, with reductions in abundance of 25-50% compared to natural habitats.

The researchers analysed over one million records of animal abundance at sites ranging from primary forest to intensively managed farmland and cities. The data represented over 25,000 species across 80 countries. Species were grouped by size, whether they were warm or cold blooded and by what they eat. Species ranged from the oribatid mite, weighing only 2×10⁻⁶ g, to an African elephant weighing 3,825 kg.

Dr. Tim Newbold at UCL (University College London) and lead author of the research said: "Normally when we think of predators, we think of big animals like lions or tigers. These large predators did not decline as much as we expected with habitat loss, which we think may be because they have already declined because of human actions in the past (such as hunting). We find small predators - such as spiders and ladybirds - to show the biggest declines."

The results indicate that the world's ecosystems are being restructured with disproportionate losses at the highest trophic levels (top of the food chain). Knowing how different animal groups are impacted by changing land-use could help us better understand how these ecosystems function and the consequences of biodiversity change.

"We know that different types of animals play important roles within the environment - for example, predators control populations of other animals. If some types of animals decline a lot when we lose natural habitats, then they will no longer fulfil these important roles." said Dr. Tim Newbold.

The conversion of land to human use is associated with the removal of large amounts of natural plant biomass, usually to make space for livestock and crops. Limiting the quantity and diversity of resources available at this base level potentially explains the disproportionate reductions in predators seen in this study: as you move up the trophic levels (the food chain), resource limitations are compounded through a process known as bottom-up resource limitation.

The study is part of the PREDICTS project which explores how biodiversity responds to human pressures. The researchers analysed 1,184,543 records of animal abundance in the PREDICTS database, gathered from 460 published scientific studies. This database included all major terrestrial vertebrate taxa and many invertebrate taxa (25,166 species, 1.8% of described animals).

Species were sorted into functional groups defined by their size, trophic level (what they consumed) and thermal regulation strategy (warm or cold blooded). The type of land-use at each of the 13,676 sample sites was classified from the description of the habitat in the source publication. The six broad categories were primary vegetation, secondary vegetation, plantation forest, cropland, pasture and urban. Three levels of human use intensity were also recorded: minimal, light and intense.
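The core of such an analysis can be illustrated with a toy aggregation: average the abundance records within each functional group and land-use class, then express each class relative to the primary-vegetation baseline. The records and column names below are invented assumptions; this is not the PREDICTS database schema or the study's full statistical model.

```python
# Toy sketch (invented records, not the actual PREDICTS schema or analysis):
# average abundance per functional group in each land-use class, expressed
# relative to the primary-vegetation baseline.
import pandas as pd

records = pd.DataFrame({
    "functional_group": ["small_ectotherm_predator"] * 4 + ["large_endotherm_herbivore"] * 4,
    "land_use":  ["primary", "cropland", "urban", "pasture"] * 2,
    "abundance": [10.0, 5.5, 4.8, 6.0,   3.0, 2.4, 2.1, 2.6],
})

mean_abundance = (records
                  .groupby(["functional_group", "land_use"])["abundance"]
                  .mean()
                  .unstack("land_use"))
# Express each land-use class relative to primary vegetation (1.0 = no change).
relative = mean_abundance.div(mean_abundance["primary"], axis=0)
print(relative.round(2))
```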

Dr. Tim Newbold explained that studies like this are limited by the available data: "As with all global studies, we are limited in the information that is available to us about where animals are found and what they eat. We were able to get information for more animals than ever before, but this was still only around 1 out of every 100 animals known to science."

The researchers also observed biases in the spread of data across types of land-use, animal groups and parts of the world. "Natural habitats and agricultural areas have been studied more than towns and cities. We think this is because ecologists tend to find these environments more interesting than urban areas as there tend to be more animals in them." said Dr. Tim Newbold. The researchers also found that large parts of Asia were under sampled for several functional groups. Birds were also better represented among vertebrates and insects better represented among invertebrates.

The researchers are now interested in exploring how groups of animals that play particularly important roles for agriculture, such as pollinating or controlling crop pests, are affected by habitat loss.

Credit: 
British Ecological Society

Examining low-carbohydrate, low-fat diets, risk of death

What The Study Did: An analysis of self-reported national dietary data from more than 37,000 U.S. adults suggests associations between low-carbohydrate and low-fat diets and the risk of death may depend on the quality and food sources of the carbohydrates, proteins and fats people eat. The diet scores in this observational study don't mimic particular versions of diets, so the results cannot be used to assess the health benefits or risks of popular diets. Researchers report that overall low-carbohydrate-diet and low-fat-diet scores weren't associated with risk of total mortality; however, unhealthy low-carbohydrate-diet and low-fat-diet scores were associated with higher total mortality, while healthy low-carbohydrate-diet and low-fat-diet scores were associated with lower total mortality.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

Authors: Zhilei Shan, M.D., Ph.D., of the Harvard T.H. Chan School of Public Health in Boston, is the corresponding author.

(doi:10.1001/jamainternmed.2019.6980)

Editor's Note: The article includes conflict of interest and funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, financial disclosures, funding and support, etc.

Credit: 
JAMA Network

Emissions of potent greenhouse gas have grown, contradicting reports of huge reductions

Despite reports that global emissions of the potent greenhouse gas HFC-23 were almost eliminated in 2017, an international team of scientists, led by the University of Bristol, has found that atmospheric levels are growing at record rates.

Over the last two decades, scientists have been keeping a close eye on the atmospheric concentration of a hydrofluorocarbon (HFC) gas, known as HFC-23.

This gas has very few industrial applications. However, levels have been soaring because it is vented to the atmosphere during the production of another chemical widely used in cooling systems in developing countries.

Scientists are concerned, because HFC-23 is a very potent greenhouse gas, with one tonne of its emissions being equivalent to the release of more than 12,000 tonnes of carbon dioxide.

Starting in 2015, India and China, thought to be the main emitters of HFC-23, announced ambitious plans to abate emissions in factories that produce the gas. As a result, they reported that they had almost completely eliminated HFC-23 emissions by 2017.

In response to these measures, scientists were expecting to see global emissions drop by almost 90 percent between 2015 and 2017, which should have seen growth in atmospheric levels grind to a halt.

Now, an international team of researchers have shown, in a paper published today in the journal Nature Communications, that concentrations were increasing at an all-time record by 2018.

Dr Matt Rigby, who co-authored the study, is a Reader in Atmospheric Chemistry at the University of Bristol and a member of the Advanced Global Atmospheric Gases Experiment (AGAGE), which measures the concentration of greenhouse gases around the world. He said: "When we saw the reports of enormous emissions reductions from India and China, we were excited to take a close look at the atmospheric data.

"This potent greenhouse gas has been growing rapidly in the atmosphere for decades now, and these reports suggested that the rise should have almost completely stopped in the space of two or three years. This would have been a big win for climate."

The fact that this reduction has not materialised, and that, instead, global emissions have actually risen, is a puzzle, and one that may have implications for the Montreal Protocol, the international treaty that was designed to protect the stratospheric ozone layer.

In 2016, Parties to the Montreal Protocol signed the Kigali Amendment, aiming to reduce the climate impact of HFCs, whose emissions have grown in response to their use as replacements to ozone depleting substances.

Dr Kieran Stanley, the lead author of the study, visiting research fellow in the University of Bristol's School of Chemistry and a post-doctoral researcher at the Goethe University Frankfurt, added: "To be compliant with the Kigali Amendment to the Montreal Protocol, countries who have ratified the agreement are required to destroy HFC-23 as far as possible.

"Although China and India are not yet bound by the Amendment, their reported abatement would have put them on course to be consistent with Kigali. However, it looks like there is still work to do.

"Our study finds that it is very likely that China has not been as successful in reducing HFC-23 emissions as reported. However, without additional measurements, we can't be sure whether India has been able to implement its abatement programme."

Had these HFC-23 emissions reductions been as large as reported, the researchers estimate that the equivalent of a whole year of Spain's CO2 emissions could have been avoided between 2015 and 2017.

Dr Rigby added: "The magnitude of the CO2-equivalent emissions shows just how potent this greenhouse gas is.

"We now hope to work with other international groups to better quantify India and China's individual emissions using regional, rather than global, data and models."

Dr Stanley added: "This is not the first time that HFC-23 reduction measures attracted controversy.

"Previous studies found that HFC-23 emissions declined between 2005 and 2010, as developed countries funded abatement in developing countries through the purchase of credits under the United Nations Framework Convention on Climate Change Clean Development Mechanism.

"However, whilst in that case, the atmospheric data showed that emissions reductions matched the reports very well, the scheme was thought to create a perverse incentive for manufacturers to increase the amount of waste gas they generated, in order to sell more credits".

Credit: 
University of Bristol

Preparing land for palm oil causes most climate damage

image: Peat swamp deforestation and drainage for new oil palm plantations in North Selangor Peat Swamp Forest, Malaysia.

Image: 
Photo taken by researcher Stephanie Evers

New research has found that preparing land for palm oil plantations and the growth of young plants cause significantly more damage to the environment than mature plantations, emitting double the amount of greenhouse gases.

This is the first study to examine the three main greenhouse gas emissions across the different age stages of palm oil plantations. It was carried out by plant scientists from the University of Nottingham in the North Selangor peat swamp forest in Malaysia with support from the Selangor State Forestry Department. It has been published today in Nature Communications.

Palm oil is the most consumed and widely traded vegetable oil in the world. Global demand has more than tripled in the last eighteen years, from around 20 million tonnes in 2000 to over 70 million tonnes in 2018, and Malaysia is the world's second largest producer.

The University of Nottingham researchers analysed five sites at four different stages of land use: secondary forest, recently drained but uncleared forest, cleared and recently planted young oil palm plantation and mature oil palm plantation.

Laboratory analysis of soil and gas from these sites showed that the greatest fluxes of CO2 occurred during the drainage and young oil palm stages with 50% more greenhouse gas emissions than the mature oil palms. These emissions also account for almost a quarter of the total greenhouse emissions for the region.

Intensive drainage

Tropical peat swamp forests hold around 20% of global peatland carbon. However, the contribution of peat swamp forests to carbon storage is currently under threat from large-scale expansion of drainage-based agriculture including oil palm and pulp wood production on peatlands.

Draining peatlands increases the oxygen levels in the soil, which in turn increases the rate of decomposition of organic material, resulting in high CO2 emissions from drained peatlands. In addition to CO2, peatlands also emit the powerful greenhouse gases methane (CH4) and nitrous oxide (N2O).

Dr Sofie Sjogersten from the University of Nottingham's School of Biosciences led the research and said: "Tropical peat swamps have historically been avoided by palm oil growers due to the amount of preparation and drainage the land needs, but as land becomes more scarce there has been an increased demand to convert sites and the periphery of North Selangor is being heavily encroached upon by palm oil plantations. Our research shows that this conversion comes at a heavy cost to the environment with greater carbon and greenhouse gas emissions being caused by the early stages of the growth of palm oil."

Credit: 
University of Nottingham