Tech

Researchers build transistor-like gate for quantum information processing -- with qudits

image: A two-qudit gate, among the first of its kind, maximizes the entanglement of photons so that quantum information can be manipulated more predictably and reliably.

Image: 
Purdue University image/Allison Rice

WEST LAFAYETTE, Ind. -- Quantum information processing promises to be much faster and more secure than what today's supercomputers can achieve, but doesn't exist yet because its building blocks, qubits, are notoriously unstable.

Purdue University researchers are among the first to build a gate - what could be a quantum version of a transistor, used in today's computers for processing information - with qudits. Whereas a qubit is limited to superpositions of two states, 0 and 1, a qudit can occupy superpositions of more states, such as 0, 1 and 2. More states mean that more data can be encoded and processed.

The gate would not only be inherently more efficient than qubit gates, but also more stable because the researchers packed the qudits into photons, particles of light that aren't easily disturbed by their environment. The researchers' findings appear in npj Quantum Information.

The gate also creates one of the largest entangled states of quantum particles to date - in this case, photons. Entanglement is a quantum phenomenon that allows measurements on one particle to automatically affect measurements on another particle, bringing the ability to make communication between parties unbreakable or to teleport quantum information from one point to another, for example.

The more entanglement in the so-called Hilbert space - the realm where quantum information processing can take place - the better.

Previous photonic approaches were able to reach 18 qubits encoded in six entangled photons in the Hilbert space. Purdue researchers maximized entanglement with a gate using four qudits - the equivalent of 20 qubits - encoded in only two photons.

In quantum communication, less is more. "Photons are expensive in the quantum sense because they're hard to generate and control, so it's ideal to pack as much information as possible into each photon," said Poolad Imany, a postdoctoral researcher in Purdue's School of Electrical and Computer Engineering.

The team achieved more entanglement with fewer photons by encoding one qudit in the time domain and the other in the frequency domain of each of the two photons. They built a gate using the two qudits encoded in each photon, for a total of four qudits in 32 dimensions, or possibilities, of both time and frequency. The more dimensions, the more entanglement.

Starting from two photons entangled in the frequency domain and then operating the gate to entangle the time and frequency domains of each photon generates four fully-entangled qudits, which occupy a Hilbert space of 1,048,576 dimensions, or 32 to the fourth power.
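As a quick check of that dimension counting, the sketch below simply assumes each qudit spans 32 levels, as quoted above; the snippet and its variable names are illustrative only and are not taken from the published work.

    # Back-of-the-envelope dimension counting for the two-photon, four-qudit gate.
    # Assumes each qudit spans d = 32 levels (time bins or frequency bins), as quoted above.
    import math

    d = 32                  # levels per qudit (time-bin or frequency-bin)
    qudits = 4              # two qudits (time + frequency) on each of two photons

    hilbert_dim = d ** qudits                   # dimension of the joint state space
    qubit_equivalent = math.log2(hilbert_dim)   # qubits needed for the same dimension

    print(hilbert_dim)       # 1048576, i.e. 32 to the fourth power
    print(qubit_equivalent)  # 20.0 -> "the equivalent of 20 qubits"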

Typically, gates built on photonic platforms to manipulate quantum information encoded in separate photons work only some of the time because photons naturally don't interact with each other very well, making it extremely difficult to manipulate the state of one photon based on the state of another. By encoding quantum information in the time and frequency domains of photons, Purdue researchers made operating the quantum gate deterministic as opposed to probabilistic.

The team implemented the gate with a set of standard off-the-shelf equipment used daily in the optical communication industry.

"This gate allows us to manipulate information in a predictable and deterministic way, which means that it could perform the operations necessary for certain quantum information processing tasks," said Andrew Weiner, Purdue's Scifres Family Distinguished Professor of Electrical and Computer Engineering, whose lab specializes in ultrafast optics.

Next, the team wants to use the gate in quantum communications tasks such as high-dimensional quantum teleportation as well as for performing quantum algorithms in applications such as quantum machine learning or simulating molecules.

Credit: 
Purdue University

University of Guelph researchers track how cats' weights change over time

video: University of Guelph researchers have become the first to access data on more than 19 million North American cats and have discovered that most cats continue to put on weight as they age.

Image: 
University of Guelph

Are cats getting fatter?

Until now, pet owners and veterinarians didn't know for sure. Now University of Guelph researchers have become the first to access data on more than 19 million cats to get a picture of typical weight gain and loss over their lifetimes.

The researchers at U of G's Ontario Veterinary College (OVC) discovered most cats continue to put on weight as they age, and their average weight is on the rise.

The findings, published in the Journal of the American Veterinary Medical Association, reveal that even after cats mature from the kitten phase, their weight still creeps up until they are, on average, eight years old.

This research -- the first of its kind to use such a large data pool -- provides important baseline information for vets and pet owners about cat weight changes, said Prof. Theresa Bernardo, the IDEXX Chair in Emerging Technologies and Bond-Centered Animal Healthcare.

"As humans, we know we need to strive to maintain a healthy weight, but for cats, there has not been a clear definition of what that is. We simply didn't have the data," said Bernardo. "Establishing the pattern of cat weights over their lifetimes provides us with important clues about their health."

Lead author Dr. Adam Campigotto, along with Bernardo and colleague Dr. Zvonimir Poljak, analyzed 54 million weight measurements taken at vets' offices on 19 million cats as part of his PhD research. The research team broke down the data to examine differences by sex, neuter status and breed.

They found male cats tended to reach higher weight peaks than females, and spayed or neutered cats tended to be heavier than unaltered cats. Among the four most common purebred breeds (Siamese, Persian, Himalayan and Maine Coon), the mean weight peaked between six and 10 years of age. Among common domestic cats, it peaked at eight years.
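For readers curious what such a stratified summary might look like in practice, here is a minimal sketch using pandas; the column names and the tiny table are hypothetical illustrations, not the study's 54 million measurements.

    # Illustrative sketch only: a stratified summary of the kind described above.
    # Column names ("breed", "age_years", "weight_kg") and the data are hypothetical.
    import pandas as pd

    def peak_weight_age(visits: pd.DataFrame) -> pd.DataFrame:
        """Mean weight by age within each breed, and the age at which it peaks."""
        mean_by_age = (
            visits.groupby(["breed", "age_years"])["weight_kg"].mean().reset_index()
        )
        return mean_by_age.loc[mean_by_age.groupby("breed")["weight_kg"].idxmax()]

    visits = pd.DataFrame({
        "breed": ["Domestic"] * 4 + ["Siamese"] * 4,
        "age_years": [2, 5, 8, 12, 2, 5, 8, 12],
        "weight_kg": [4.1, 4.6, 4.9, 4.5, 3.4, 3.9, 4.0, 3.8],
    })
    print(peak_weight_age(visits))  # peak age is 8 for both hypothetical breeds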

As well, the team noted that the mean weight of neutered, eight-year-old domestic cats increased between 1995 and 2005 but remained steady between 2005 and 2015.

"We do have concerns with obesity in middle age, because we know that can lead to diseases for cats, such as diabetes, heart disease, osteoarthritis and cancer," said Campigotto.

"Now that we have this data, we can see that cat weights tend to follow a curve. We don't yet know the ideal weight trajectory, but it's at least a starting point to begin further studies."

The team noted that 52 per cent of the cats among the study group had only one body weight measurement on file, which may suggest their owners did not bring the animals back in for regular vet checkups or took them to a different veterinary clinic.

Bernardo said just as humans need to be aware of maintaining a healthy weight as they age, it's important to monitor weight changes in cats.

"Cats tend to be overlooked because they hide their health problems and they don't see a vet as often as dogs do. So one of our goals is to understand this so that we can see if there are interventions that can provide more years of healthy life to cats."

Discussions about body weight throughout a pet's lifetime could be a useful gateway for veterinarians to engage more cat owners in the health of their pets, she added.

"The monitoring of body weight is an important indicator of health in both humans and animals. It's a data point that is commonly collected at each medical appointment, is simple to monitor at home and is an easy point of entry into data-driven animal wellness."

For owners concerned about their cat's health or weight gain, Campigotto advises buying a scale and getting in the habit of weighing their pet.

"If your cat is gaining or losing weight, it may be an indicator of an underlying problem," he said.

The research team plans to study ways of reducing cat obesity including looking at the use of automated feeders that could dispense the appropriate amount of food for a cat. These feeders could even be equipped with built-in scales.

"We are ultimately changing the emphasis to cat health rather than solely focusing on disease," said Campigotto.

"As we investigate the data and create new knowledge, it will enable veterinarians to offer clients evidence-based wellness plans, allow for earlier identification and treatment of disease and an enhanced quality of life for their animals."

Credit: 
University of Guelph

Supernova observation first of its kind using NASA satellite

COLUMBUS, Ohio--When NASA's Transiting Exoplanet Survey Satellite launched into space in April 2018, it did so with a specific goal: to search the universe for new planets.

But in recently published research, a team of astronomers at The Ohio State University showed that the survey, nicknamed TESS, could also be used to monitor a particular type of supernova, giving scientists more clues about what causes white dwarf stars to explode--and about the elements those explosions leave behind.

"We have known for years that these stars explode, but we have terrible ideas of why they explode," said Patrick Vallely, lead author of the study and an Ohio State astronomy graduate student. "The big thing here is that we are able to show that this supernova isn't consistent with having a white dwarf (take mass) directly from a standard star companion and explode into it--the kind of standard idea that had led to people trying to find hydrogen signatures in the first place. That is, because the TESS light curve doesn't show any evidence of the explosion slamming into the surface of a companion, and because the hydrogen signatures in the SALT spectra don't evolve like the other elements, we can rule out that standard model."

Their research, detailed in the Monthly Notices of the Royal Astronomical Society, represents the first published findings about a supernova observed using TESS, and adds new insights to long-held theories about the elements left behind after a white dwarf star explodes into a supernova.

Those elements have long troubled astronomers.

A white dwarf explodes into a specific type of supernova, a Type Ia, after gathering mass from a nearby companion star and growing too big to remain stable, astronomers believe. But if that is true, then the explosion should, astronomers have theorized, leave behind trace elements of hydrogen, a crucial building block of stars and the entire universe. (White dwarf stars, by their nature, have already burned through their own hydrogen and so would not be a source of hydrogen in a supernova.)

But until this TESS-based observation of a supernova, astronomers had never seen those hydrogen traces in the explosion's aftermath: This supernova is the first of its type in which astronomers have measured hydrogen. That hydrogen, first reported by a team from the Observatories of the Carnegie Institution for Science, could change the nature of what astronomers know about white dwarf supernovae.

"The most interesting thing about this particular supernova is the hydrogen we saw in its spectra (the elements the explosion leaves behind)," Vallely said. "We've been looking for hydrogen and helium in the spectra of this type of supernova for years--those elements help us understand what caused the supernova in the first place."

The hydrogen could mean that the white dwarf consumed a nearby star. In that scenario, the second star would be a normal star in the middle of its lifespan--not a second white dwarf. But when astronomers measured the light curve from this supernova, the curve indicated that the second star was in fact a second white dwarf. So where did the hydrogen come from?

Professor of Astronomy Kris Stanek, Vallely's adviser at Ohio State and a co-author on this paper, said it is possible that the hydrogen came from a companion star--a standard, regular star--but he thinks it is more likely that the hydrogen came from a third star that happened to be near the exploding white dwarf and was consumed in the supernova by chance.

"We would think that because we see this hydrogen, it means that the white dwarf consumed a second star and exploded, but based on the light curve we saw from this supernova, that might not be true," Stanek said.

"Based on the light curve, the most likely thing that happened, we think, is that the hydrogen might be coming from a third star in the system," Stanek added. "So the prevailing scenario, at least at Ohio State right now, is that the way to make a Type Ia (pronounced 1-A) supernova is by having two white dwarf stars interacting--colliding even. But also having a third star that provides the hydrogen."

For the Ohio State research, Vallely, Stanek and a team of astronomers from around the world combined data from TESS, a 10-centimeter-diameter telescope, with data from the All-Sky Automated Survey for Supernovae (ASAS-SN for short.) ASAS-SN is led by Ohio State and is made up of small telescopes around the world watching the sky for supernovae in far-away galaxies.

TESS, by comparison, is designed to search the sky for planets around nearby stars in our galaxy--and to provide data much more quickly than previous satellite telescopes. That means that the Ohio State team was able to use data from TESS to see what was happening around the supernova in the first moments after it exploded--an unprecedented opportunity.

The team combined data from TESS and ASAS-SN with data from the Southern African Large Telescope (SALT) to evaluate the elements left behind in the supernova's wake. They found both hydrogen and helium there, two indicators that the exploding star had somehow consumed a nearby companion star.

"What is really cool about these results is, when we combine the data, we can learn new things," Stanek said. "And this supernova is the first exciting case of that synergy."

The supernova this team observed was a Type Ia, a type of supernova that can occur when two stars orbit one another--what astronomers call a binary system. In a Type Ia supernova, one of those stars is a white dwarf.

A white dwarf has burned off all its nuclear fuel, leaving behind only a very hot core. (White dwarf temperatures exceed 100,000 kelvins--about 180,000 degrees Fahrenheit.) Unless the star grows bigger by stealing bits of energy and matter from a nearby star, the white dwarf spends the next billion years cooling down before turning into a lump of black carbon.

But if the white dwarf and another star are in a binary system, the white dwarf slowly takes mass from the other star until, eventually, the white dwarf explodes into a supernova.

Type Ia supernovae are important for space science--they help astronomers measure distance in space, and help them calculate how quickly the universe is expanding (a discovery so important that it won the Nobel Prize in Physics in 2011).

"These are the most famous type of supernova--they led to dark energy being discovered in the 1990s," Vallely said. "They are responsible for the existence of so many elements in the universe. But we don't really understand the physics behind them that well. And that's what I really like about combining TESS and ASAS-SN here, that we can build up this data and use it to figure out a little more about these supernovae."

Scientists broadly agree that the companion star leads to a white dwarf supernova, but the mechanism of that explosion, and the makeup of the companion star, are less clear.

This finding, Stanek said, provides some evidence that the companion star in this type of supernova is likely another white dwarf.

"We are seeing something new in this data, and it helps our understanding of the Ia supernova phenomenon," he said. "And we can explain this all in terms of the scenarios we already have--we just need to allow for the third star in this case to be the source of the hydrogen."

Credit: 
Ohio State University

Limitation exposed in promising quantum computing material

image: Vikram Deshpande, assistant professor in the Department of Physics & Astronomy (left) and doctoral candidate Su Kong Chong (right) stand in the "coolest lab on campus." Deshpande leads a lab that can cool topological materials down to just a few fractions of a degree above absolute zero at -273.15°C (-459.67°F). It is literally the coldest laboratory on campus.

Image: 
Lisa Potter/University of Utah

Quantum computers promise to perform operations of great importance believed to be impossible for our technology today. Current computers process information via transistors carrying one of two units of information, either a 1 or a 0. Quantum computing is based on the quantum mechanical behavior of the logic unit. Each quantum unit, or "qubit," can exist in a quantum superposition rather than taking discrete values. The biggest hurdles to quantum computing are the qubits themselves--it is an ongoing scientific challenge to create logic units robust enough to carry instructions without being impacted by the surrounding environment and resulting errors.

Physicists have theorized that a new type of material, called a three-dimensional (3-D) topological insulator (TI), could be a good candidate from which to create qubits that will be resilient from these errors and protected from losing their quantum information. This material has both an insulating interior and metallic top and bottom surfaces that conduct electricity. The most important property of 3-D topological insulators is that the conductive surfaces are predicted to be protected from the influence of the surroundings. Few studies exist that have experimentally tested how TIs behave in real life.

A new study from the University of Utah found that in fact, when the insulating layers are as thin as 16 quintuple atomic layers across, the top and bottom metallic surfaces begin to influence each other and destroy their metallic properties. The experiment demonstrates that the opposite surfaces begin influencing each other at a much thicker insulating interior than previous studies had shown, possibly approaching a rare theoretical phenomenon in which the metallic surfaces also become insulating as the interior thins out.

"Topological insulators could be an important material in future quantum computing. Our findings have uncovered a new limitation in this system," said Vikram Deshpande, assistant professor of physics at the University of Utah and corresponding author of the study. "People working with topological insulators need to know what their limits are. It turns out that as you approach that limit, when these surfaces start "talking" to each other, new physics shows up, which is also pretty cool by itself."

The new study was published on July 16, 2019, in the journal Physical Review Letters.

Sloppy sandwiches built from topological insulators

Imagine a hardcover textbook as a 3-D topological insulator, Deshpande said. The bulk of the book is the pages, which form the insulating layer--they can't conduct electricity. The hardcovers themselves represent the metallic surfaces. Ten years ago, physicists discovered that these surfaces could conduct electricity, and a new topological field was born.

Deshpande and his team created devices using 3-D TIs by stacking five few-atom-thin layers of various materials into sloppy sandwich-like structures. The bulk core of the sandwich is the topological insulator, made from a few quintuple layers of bismuth antimony tellurium selenide (Bi2-xSbxTe3-ySey). This core is sandwiched between a few layers of boron nitride, and is topped off with two layers of graphite, above and below. The graphite works like metallic gates, essentially creating two transistors that control conductivity. Last year Deshpande led a study showing that this topological recipe built a device that behaved as expected - a bulk insulator that protects the metallic surfaces from the surrounding environment.

In this study, they manipulated the 3-D TI devices to see how the properties changed. First, they built van der Waals heterostructures--those sloppy sandwiches--and exposed them to a magnetic field. Deshpande's team tested many at his lab at the University of Utah, and first author Su Kong Chong, a doctoral candidate at the U, traveled to the National High Magnetic Field Lab in Tallahassee to perform the same experiments using one of the highest magnetic fields in the country. In the presence of the magnetic field, a checkerboard pattern emerged from the metallic surfaces, showing the pathways by which electrical current will move on the surface. The checkerboards, consisting of quantized conductivities versus voltages on the two gates, are well defined, with the grid lines crossing at neat intersection points, allowing the researchers to track any distortion on the surface.

They began with the insulator layer at 100 nanometers thick, about a thousandth of the diameter of a human hair, and progressively got thinner down to 10 nanometers. The pattern started distorting until the insulator layer was at 16 nanometers thick, when the intersection points began to break up, creating a gap that indicated that the surfaces were no longer conductive.

"Essentially, we've made something that was metallic into something insulating in that parameter space. The point of this experiment is that we can controllably change the interaction between these surfaces," said Deshpande. "We start out with them being completely independent and metallic, and then start getting them closer and closer until they start 'talking,' and when they're really close, they are essentially gapped out and become insulating."

Previous experiments in 2010 and 2012 had also observed the energy gap on the metallic surfaces as the insulating material thins out. But those studies concluded that the energy gap appeared with much thinner insulating layers--five nanometers in size. This study observed the metallic surface properties break down at much larger interior thickness, up to 16 nanometers. The other experiments used different "surface science" methods where they observed the materials through a microscope with a very sharp metallic tip to look at every atom individually or studied them with highly energetic light.

"These were extremely involved experiments which are pretty far removed from the device-creation that we are doing," said Deshpande.

Next, Deshpande and the team will look more closely into the physics creating that energy gap on the surfaces. He predicts that these gaps can be positive or negative depending on material thickness.

Credit: 
University of Utah

Plant protection products: More clarity about residues in food

The proposed indicator should give information on the total intake of plant protection product residues from food. Essential here are the three categories of low, moderate and high intake to which the active substances in plant protection products can then be allocated. "Consumer safety is strengthened by these valuable indicators for risk identification," says BfR President Professor Dr. Dr. Andreas Hensel. "Policymakers benefit from this too, because it is then easier to take specific measures to protect the population."

In future, foods offered for sale on the German market are to have even fewer plant protection product residues above the maximum permitted level than is currently the case. This is one of the goals of the National Action Plan, under the auspices of which the BfR proposals were made.

Although foods are allowed to contain traces of pesticides, these may not exceed a legally determined maximum level. In principle, maximum residue levels of plant protection products in foods are set so low that they do not pose a health risk to consumers. In the vast majority of cases, a toxicological limit value such as the acute reference dose (ARfD) is only reached at much higher concentrations; only when such a value is exceeded can a health risk no longer be ruled out.

The percentage of samples from German food monitoring in which the maximum levels of plant protection product active substances are exceeded has been determined annually up to now. The BfR has proposed supplementing this indicator: in future, not only the number of maximum-level exceedances should be recorded but also the number of ARfD exceedances. As a rule, fewer than ten of the several thousand food samples tested per year show an ARfD exceedance; in 2017, for example, the figure was seven. Particular attention is paid here to foods imported into Germany. Although the same maximum levels apply to them as to domestic foods, they are exceeded more often.

The BfR has also recommended the introduction of a new status indicator which gives information on total intake of plant protection product residues with food. Short and long-term uptake (exposure) are to be determined regularly on the basis of data provided by German consumption studies and food monitoring data. Monitoring is built up in six-year cycles in which the most important foods are examined so that an overall statement representative for all of Germany can be made. The next cycle ends with the monitoring data for the year 2020. Thereafter, the BfR will determine exposure for all examined plant protection product active substances and compare it with each respective toxicological limit value. Recommended courses of action for risk management can be derived from the results.
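A minimal sketch of how such an indicator could work is shown below; the exposure-to-limit ratio and the 10% and 50% category cut-offs are assumptions chosen for illustration, not the BfR's actual methodology.

    # Hypothetical sketch: classify an active substance into a low / moderate / high
    # intake category by comparing estimated exposure with its toxicological limit.
    # The 10% and 50% cut-offs are illustrative assumptions, not BfR thresholds.
    def intake_category(exposure: float, limit_value: float) -> str:
        ratio = exposure / limit_value
        if ratio < 0.10:
            return "low"
        if ratio < 0.50:
            return "moderate"
        return "high"

    # Example: exposure and ARfD both in mg per kg body weight per day.
    print(intake_category(exposure=0.002, limit_value=0.1))  # low
    print(intake_category(exposure=0.08, limit_value=0.1))   # high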

The NAP Forum, a committee which advises the national government, accepted both BfR proposals in February 2019 and recommended their approval by the national government.

As a further step, the BfR plans to develop indicators for assessing how successful measures intended to make the use of plant protection products safer have been.

Credit: 
BfR Federal Institute for Risk Assessment

NASA looks at Barry's rainfall rates

image: This color-coded GPM image shows instantaneous surface rain rates on July 14 at 02:42 UTC (July 13 at 10:42 p.m. EDT) from Hurricane Barry after coming ashore along the southern coast of Louisiana. The GPM data image revealed heavy rain bands wrapping up around the eastern side of the storm's center from the Gulf of Mexico and into central Louisiana and Mississippi. Rainfall rates in those areas were greater than 10 mm/hour (0.4 inches per hour).

Image: 
NASA/JAXA, using GPM data archived at https://pps.gsfc.nasa.gov/.

After Barry made landfall as a Category 1 hurricane, NASA's GPM core satellite analyzed the rate at which rain was falling throughout the storm. Now that Barry is a post-tropical cyclone moving through the mid-Mississippi Valley and toward the Ohio Valley, it is bringing some of that rainfall with it.

NASA Analyzes Barry's Rain Rates

The GPM core satellite passed over Barry and analyzed its rainfall rates several hours after it had made landfall on the coast of Louisiana. Instantaneous surface rain rates were derived from the Dual-frequency Precipitation Radar (DPR) onboard the GPM core satellite on July 14 at 02:42 UTC (July 13 at 10:42 p.m. EDT). The GPM data revealed heavy rain bands wrapping up around the eastern side of the storm's center from the Gulf of Mexico and into central Louisiana and Mississippi. Rainfall rates in those areas were greater than 10 mm/hour (0.4 inches per hour). Some of the storm's asymmetry was also revealed by the fact that the rain shield was much heavier and broader south of the center of circulation.

At the time of this image, the center of Barry was located about 35 miles southwest of Alexandria, Louisiana and had maximum sustained winds of 50 mph. The following day, July 14, Barry continued its northward trek into northwestern Louisiana and weakened to a tropical depression before it continued into Arkansas. Despite numerous power outages and localized flooding, there were no reports of fatalities or serious injuries due to Barry.

Barry's History

Barry formed from an area of low pressure that originated over the Tennessee Valley from a thunderstorm complex, which then drifted southward through the Florida Panhandle, off of the coast and out over the northeastern Gulf of Mexico on July 10. There it provided a focus for showers and thunderstorms, which led to its gradual intensification. Despite the warm waters of the Gulf, the system was slow to strengthen due to inhibiting northerly wind shear.  Nevertheless, it became Tropical Storm Barry on July 11 at 10:00 am CDT about 95 miles south-southeast of the mouth of the Mississippi River at which time it was drifting slowly westward.

Over the next day, Barry slowly intensified into a strong tropical storm as it continued to move west, but northerly wind shear and accompanying drier air caused Barry to remain rather asymmetrical with most of the heavier rain and thunderstorm activity located in the southern half of the storm. Fortunately, this precluded the storm from taking full advantage of the warm waters and quickly intensifying. It also meant that the heaviest rains stayed offshore. After drifting generally slowly westward to this point, Barry finally began to recurve to the northwest on Saturday, July 13 while gaining just enough intensity to become a hurricane before hitting the coast of Louisiana.

Barry became the first hurricane of the 2019 season just before making landfall on the south-central coast of Louisiana near Intracoastal City on Saturday, July 13. The storm came ashore around 1 p.m. CDT (18:00 UTC) with sustained winds reported at 75 mph by the National Hurricane Center, making Barry a minimal Category 1 hurricane. The biggest threats posed by Barry were heavy rains and flooding due to the storm's slow movement, close proximity to land, and time spent organizing over the warm waters of the Gulf of Mexico.

As Barry moved further inland, it continued to recurve more towards the north and slowly increased its forward speed while weakening back to a tropical storm.

Watches and Warnings in Effect on July 16, 2019

As Post-Tropical Cyclone Barry moves northeastward, there are several watches and warnings in effect on July 16. Flash Flood Watches are in effect from the Ark-La-Tex eastward through the Lower and Middle Mississippi Valley; the Ark-La-Tex region consists of northwestern Louisiana, northeastern Texas and southern Arkansas. Flash Flood Warnings are in effect for portions of southern Arkansas. Flood Warnings are in effect for portions of southern Louisiana, Arkansas, and Mississippi. Coastal Flood Advisories are in effect for portions of the Louisiana coast.

Where Is Barry on July 16, 2019?

At 5 a.m. (0900 UTC) on Tuesday, July 16, the center of Post-Tropical Cyclone Barry was located near latitude 37.8 degrees north and longitude 92.3 degrees west. That's about 75 miles (120 km) northeast of Springfield, Missouri. The post-tropical cyclone is moving toward the northeast near 14 mph (22 kph), and this motion is expected to continue today with a gradual turn more easterly by tonight. The estimated minimum central pressure is 1011 millibars (29.86 inches). Maximum sustained winds are near 15 mph (30 kph) with higher gusts. Little change in strength is forecast during the next 48 hours.

Expected Rainfall from Barry

The National Weather Service Weather Prediction Center in College Park, Maryland said, "Barry is expected to produce additional rain accumulations of 3 to 6 inches across portions of southern Arkansas, northern Mississippi and far southwestern Tennessee. Isolated maximum totals exceeding 10 inches are possible across southwest Arkansas. Rainfall accumulations of 1 to 3 inches, locally higher, are expected across portions of the Ohio Valley today into tonight [July 16]."

For updated forecasts, visit: http://www.nhc.noaa.gov

By Steve Lang / Rob Gutro
NASA's Goddard Space Flight Center

Credit: 
NASA/Goddard Space Flight Center

Intake of phosphates: Babies, infants and children can exceed the health guidance values

Phosphates are added to a large number of foods to perform various technological functions, e.g. as acidity regulators. These foods include soft drinks (especially cola beverages), whipped cream and cream products, milk drinks, milk powder and coffee whitener, as well as meat products.

Within the scope of a re-evaluation published on 12 June 2019, the European Food Safety Authority (EFSA) derived an acceptable daily intake (ADI) for phosphates. The ADI value of 40 mg/kg body weight per day, expressed as phosphorus, applies to the intake of phosphorus from foods which naturally contain phosphates and from those to which phosphates are added as food additives.

EFSA derived the group ADI of 40 mg/kg body weight per day for healthy adults. It does not apply to people with a moderate to severe impairment of kidney function, who constitute a special risk group.

Infants, toddlers and children can exceed this value even with average consumption quantities. This also applies to adolescents with a phosphate-rich diet.

The acceptable daily intake for phosphate is the estimated maximum amount to which individuals may be exposed daily over their lifetimes without appreciable health risk. From a toxicological point of view therefore, total intake of phosphate should not result in an exceedance of the acceptable daily intake on a regular basis. EFSA recommends the introduction of maximum levels to reduce the levels of phosphates used as additives in food supplements. The European Commission is considering measures to lower phosphate levels in food. The BfR agrees with EFSA's scientific assessment.
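To make the ADI arithmetic concrete, here is a worked example; the ADI of 40 mg phosphorus per kg body weight per day is the value quoted above, while the body weights and daily intakes below are hypothetical illustrations, not measured data.

    # Worked example of the ADI arithmetic only. Body weights and intakes are
    # hypothetical illustrations, not measured data.
    ADI_MG_PER_KG = 40.0   # acceptable daily intake, expressed as phosphorus

    def exceeds_adi(body_weight_kg: float, daily_phosphorus_mg: float) -> bool:
        """True if the daily phosphorus intake exceeds the ADI for that body weight."""
        return daily_phosphorus_mg > ADI_MG_PER_KG * body_weight_kg

    # A hypothetical 12 kg toddler may take in at most 480 mg phosphorus per day.
    print(exceeds_adi(body_weight_kg=12, daily_phosphorus_mg=600))   # True
    print(exceeds_adi(body_weight_kg=70, daily_phosphorus_mg=1400))  # False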

Consumers cannot recognise how much phosphate is contained in unprocessed foods. For processed foods, the list of ingredients indicates whether phosphate-containing food additives are present. EFSA estimates that food additives account for between 6 and 30 percent of average total phosphorus intake.

Credit: 
BfR Federal Institute for Risk Assessment

Take flight! Automating complex design of universal controller for hybrid drones

image: Hybrid unmanned aerial vehicles, or UAVs, are drones that combine the advantages of multi-copters and fixed-wing planes. These drones are equipped to vertically take off and land like multi-copters, yet also have the strong aerodynamic performance and energy-saving capabilities of traditional planes. As hybrid UAVs continue to evolve, however, controlling them remotely still remains a challenge.

Image: 
Jie Xu

CHICAGO--Hybrid unmanned aerial vehicles, or UAVs, are drones that combine the advantages of multi-copters and fixed-wing planes. These drones are equipped to vertically take off and land like multi-copters, yet also have the strong aerodynamic performance and energy-saving capabilities of traditional planes. As hybrid UAVs continue to evolve, however, controlling them remotely still remains a challenge.

A team from the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL) has devised a new approach to automatically design a mode-free, model-agnostic, AI-driven controller for any hybrid UAV. The team will present their novel computational controller design at SIGGRAPH 2019, held 28 July-1 August in Los Angeles. This annual gathering showcases the world's leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

To control hybrid UAVs, one system directs the vehicle's copter-mode rotors for hovering and a different one directs its plane-mode rotors for speed and distance. Controlling hybrid UAVs is challenging because of the complexity of the vehicle's flight dynamics, and controllers have typically been designed manually, a time-consuming process.

In this work, the team addressed how to automatically design one single controller for the different flight modes (copter mode, gliding mode, transition, etc.) and how to generalize the controller design method for any UAV model, shape, or structure.

"Designing a controller for such a hybrid design requires a high level of expertise and is labor intensive," says Jie Xu of MIT and coauthor of the research. "With our automatic controller design method, any non-expert could input their new UAV model to the system, wait a few hours to compute the controller, and then have their own customized UAVs fly in the air. This platform can make hybrid UAVs far more accessible to everyone."

The researchers' method consists of a neural network-based controller trained with reinforcement learning techniques. In their new system, users first design the geometry of a hybrid UAV by selecting and matching parts from a provided data set. The design is then used in a realistic simulator to automatically compute and test the UAV's flight performance. A reinforcement learning algorithm is then applied to automatically learn a controller that achieves the best performance in the high-fidelity simulation. The team successfully validated their method both in simulation and in real flight tests.
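The general recipe - simulate the vehicle, score the controller, update its parameters from the reward - can be sketched in a few lines. The toy one-dimensional hover dynamics, the linear policy and the random-search update below are simplifying assumptions for illustration; the CSAIL system itself uses a neural-network controller trained with reinforcement learning in a high-fidelity simulator.

    # Minimal illustration of the simulate -> score -> update loop, NOT the CSAIL system.
    import numpy as np

    def rollout(theta, steps=200, dt=0.05, g=9.81):
        """Simulate a toy 1-D hover task and return the total reward for policy theta."""
        altitude, velocity, target, reward = 0.0, 0.0, 1.0, 0.0
        for _ in range(steps):
            state = np.array([target - altitude, -velocity])
            thrust = g + theta @ state        # linear state-feedback policy
            velocity += (thrust - g) * dt     # toy vertical dynamics
            altitude += velocity * dt
            reward -= (target - altitude) ** 2 + 0.01 * velocity ** 2
        return reward

    # Simple random-search training loop: perturb the parameters, keep improvements.
    rng = np.random.default_rng(0)
    theta, best = np.zeros(2), rollout(np.zeros(2))
    for _ in range(300):
        candidate = theta + 0.1 * rng.standard_normal(2)
        score = rollout(candidate)
        if score > best:
            theta, best = candidate, score
    print("learned gains:", theta, "return:", best)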

With the continued prevalence of hybrid UAVs--in the flight industry and military sectors, for example--there is a growing need to simplify and automate controller design. In this work, the researchers aimed to deliver a novel model-agnostic method to automate the design of controllers for vehicles with vastly different configurations.

In future work, the team intends to investigate how to increase maneuverability through improved geometry design (shape, and the positions of rotors and wings) to help perfect the UAV's flight performance.

Credit: 
Association for Computing Machinery

Does ICU flexible family visitation policy reduce delirium among patients?

Bottom Line: A randomized clinical trial involving patients, family members and clinicians from 36 adult intensive care units in Brazil looked at whether flexible family visitation (up to 12 hours per day) plus family education on ICUs and delirium would reduce the occurrence of delirium compared to standard visitation of up to 4½ hours per day. The study included 1,685 patients. The authors report no significant difference in reducing the occurrence of delirium between flexible and standard visitation. Limitations of the study include that it was restricted to a single country.

Authors: Regis Goulart Rosa, M.D., Ph.D., Hospital Moinhos de Vento, Porto Alegre, Brazil, and coauthors

(doi:10.1001/jama.2019.8766)

Credit: 
JAMA Network

Stronger earthquakes can be induced by wastewater injected deep underground

image: This is Ryan M. Pollyea, of the Virginia Tech Department of Geosciences.

Image: 
Virginia Tech

Virginia Tech scientists have found that in regions where oilfield wastewater disposal is widespread -- and where injected water has a higher density than deep naturally occurring fluids -- earthquakes are getting deeper at the same rate as the wastewater sinks.

Perhaps more critically, the research team of geoscientists found that the percentage of high-magnitude earthquakes increases with depth, and may create -- although fewer in number -- greater magnitude earthquakes years after injection rates decline or stop altogether.

The study, led by Ryan M. Pollyea in the Virginia Tech College of Science's Department of Geosciences, was published July 16 in Nature Communications. It shows that in areas such as Oklahoma and southern Kansas there is evidence that oilfield wastewater injected underground into the Arbuckle formation has a much higher density than natural fluids occurring within the deeper seismogenic zone faults.

The problem: The wastewater sinks and increases fluid pressure deep underground when it has a higher density than fluids already there naturally. Pressure changes so deep -- at depths up to 5 miles or greater -- can cause more high-magnitude earthquakes even though the overall number of earthquakes is decreasing.

"Earthquakes are now common in the central United States where the number of magnitude-3 or greater earthquakes increased from about 19 per year before 2008 to more than 400 per year since," said Pollyea, an assistant professor of geosciences and director of the Computational Geofluids Laboratory at Virginia Tech. (Pollyea adds that the overall earthquake rate per year has been declining since 2016.)

"In many cases, these earthquakes occur when oilfield wastewater is disposed of by pumping it into deep geologic formations," Pollyea added. "As wastewater is injected deep underground, fluid pressure builds up and migrates away from injection wells. This destabilizes faults and causes 'injection-induced' earthquakes, such as the damaging 5.8-magnitude earthquake that struck Pawnee, Oklahoma, in 2016."

Pollyea authored the study with Martin Chapman, a research associate professor of geosciences and director of the Virginia Tech Seismological Observatory; and Richard S. Jayne and Hao Wu, both graduate students at Virginia Tech. The study used computational modeling and earthquake data analysis from across a broad region of northern Oklahoma and southern Kansas -- roughly 30,000 square miles.

"This was a surprising result," Chapman said. "It suggests that sinking wastewater increases fluid pressure at greater depths and may cause larger earthquakes."

By analyzing earthquake data, the researchers found that the number of earthquakes greater than a magnitude 4 increased more than 150 percent from 2017 to 2018 while the number of earthquakes with magnitude 2.5 or greater decreased 35 percent during the same period. More bluntly, the overall number of earthquakes is starting to decrease, but the percentage of higher-magnitude earthquakes is increasing.

"Our models show that high-density wastewater may continue sinking and increasing fluid pressure at depths of 5 or more miles for 10 or more years after injections stop," Pollyea said. "There is a larger proportion of high-magnitude earthquakes at depths greater than 5 miles in north-central Oklahoma and southern Kansas, but there are fewer total earthquakes at these depths. This implies that the rate of high-magnitude earthquakes is decreasing more slowly than the overall earthquake rate."

The study also found that fluid pressure caused by sinking wastewater remains in the environment much longer than previously considered. "Our models show that high-density wastewater continues sinking and increasing fluid pressure for 10 to 15 years after injections stop, and this may prolong the earthquake hazard in regions like Oklahoma and Kansas," Pollyea said.

It's important to note that Pollyea and his colleagues are not saying that all oilfield wastewater disposal operations cause earthquakes, nor are they predicting a large and potentially damaging earthquake in the Midwest region. Nor does the study indicate that density-driven pressure build-up occurs everywhere that oilfield wastewater operations occur.

Researchers have known since the 1960s that pumping fluids deep underground can trigger earthquakes, Pollyea said, but this study is the first to show that the density of the wastewater itself plays a role in earthquake occurrence. The heavier the fluid, the greater the effect of displacement of natural fluids and the greater the fluid pressure change. To wit: Take a cup of salty ocean water heavy with dissolved particulates and dump it into a glass of regular tap water. Before the two eventually mix, the heavier ocean water will sink to the bottom, displacing the "lighter" tap water upward.
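A back-of-the-envelope hydrostatic estimate shows why the density contrast matters: a denser fluid column adds roughly delta_rho x g x h of pressure at its base. The density contrast and depth in the sketch below are round numbers chosen for illustration, not values from the Virginia Tech models.

    # Back-of-the-envelope only: extra pressure from a denser fluid column,
    # delta_P = delta_rho * g * h. The input values are illustrative, not study data.
    G = 9.81  # gravitational acceleration, m/s^2

    def excess_pressure_mpa(density_contrast_kg_m3: float, depth_m: float) -> float:
        """Extra fluid pressure (in MPa) from a denser column of the given height."""
        return density_contrast_kg_m3 * G * depth_m / 1e6

    # e.g. wastewater 100 kg/m^3 denser than the native fluid, acting over ~8 km (~5 miles)
    print(excess_pressure_mpa(100.0, 8000.0))  # ~7.8 MPa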

"Past pore-pressure models have assumed the density of injected fluids are the same as that in host rocks, but the real world begs to differ, and injected fluids are often heavier," said Shemin Ge, a professor and chair of the Department of Geological Sciences at the University of Colorado, who was not involved with the study. "This study looks into the effect of heavier fluids on pore pressure and consequently on inducing earthquakes. Heavier injected fluids have the tendency to migrate downward, which, interestingly, tracks the occurrence of earthquakes."

The new study offers scientists, regulators, and policymakers a new framework for managing oilfield wastewater disposal and the associated earthquake hazard, according to the research team. In places such as Oklahoma, where earthquake mitigation measures are ongoing, results from this study may be particularly important because the combination of persistent and deepening fluid pressure suggests that the rate of high-magnitude earthquakes is likely decreasing slower than the overall earthquake rate.

Credit: 
Virginia Tech

Australian ants prepared for 'Insect Armageddon'

image: These are Rhytidoponera mayri workers.

Image: 
Associate Professor Heloise Gibb, La Trobe University

Researchers studied ants in the Simpson Desert for 22 years and found that local changes in climate, such as long-term increases in rainfall, combined with human efforts to restore ecosystems, may have led to increased numbers of species - rather than the declines which might be expected in such unpredictable conditions.

Lead researcher, Associate Professor Heloise Gibb, said annual rainfall in the north Australian desert varied from 79 to 570 millimetres.

"While this unpredictability in rainfall is expected in hot climates, this is the first time we've been able to understand how insects respond to such large inconsistencies in their environment," Associate Professor Gibb said.

"For many species, this unpredictability - exacerbated by climate change - would equate to increasingly difficult conditions for their survival.

"What we've found, however, in contrast to warnings of a long-term decline in insects, is that species that already like it hot may do better where it also becomes wetter."

Associate Professor Gibb said researchers discovered a boom in the population of aggressive sugar-feeding ants with every rapid increase in rainfall.

"Water is the driving factor for this species' survival," Associate Professor Gibb said.

"These tyrant ants, as we would call them, are able to adjust their time of activity so they're active only when above-ground conditions are suitable.

"While the average temperature of their environment may be increasing, their flexibility in tough environments enables them to survive until the next big rainfall."

Researchers found the increase in ant populations reflected the change in resources available to them.

"Following rainfall, plants grow, flower and seed, providing honeydew, nectar and a food source for other invertebrates that the tyrant ants consume," Associate Professor Gibb said.

While ants other than the tyrants - including furnace ants, mono ants, sugar ants and pony ants - didn't respond as clearly in the study, their populations did increase over time.

Halfway through the study, the property on which it was conducted was purchased by a conservation agency, which stopped cattle grazing on the premises.

"While it's difficult to explicitly link this management change with ant responses, we believe this change was also critical in driving ecosystem change that eventually improved conditions for ants, allowing them to boom in response to extreme rainfall events," Associate Professor Gibb said.

"Active conservation efforts, funded by the public, can have very positive effects on biodiversity.

"It's important that future research identifies the best approach and locations for these efforts to take place if we want to ensure the continued persistence of the vast diversity of life that this planet currently supports."

Credit: 
La Trobe University

Research shows black plastics could create renewable energy

image: The process by which plastics are converted to carbon nanotube material.

Image: 
Dr Alvin Orbaek White

Research from Swansea University has found how plastics commonly found in food packaging can be recycled to create new materials like wires for electricity - and could help to reduce the amount of plastic waste in the future.

While a small proportion of the hundreds of types of plastics can be recycled by conventional technology, researchers found that there are other things that can be done to reuse plastics after they've served their original purpose.

The research, published in The Journal for Carbon Research, focuses on chemical recycling which uses the constituent elements of the plastic to make new materials.

While all plastics are made of carbon, hydrogen and sometimes oxygen, the amounts and arrangements of these three elements make each plastic unique. As plastics are very pure and highly refined chemicals, they can be broken down into these elements and then bonded in different arrangements to make high value materials such as carbon nanotubes.

Dr Alvin Orbaek White, a Sêr Cymru II Fellow at the Energy Safety Research Institute (ESRI) at Swansea University, said: "Carbon nanotubes are tiny molecules with incredible physical properties. The structure of a carbon nanotube looks like a piece of chicken wire wrapped into a cylinder, and when carbon is arranged like this it can conduct both heat and electricity. These two different forms of energy are each very important to control and use in the right quantities, depending on your needs.

"Nanotubes can be used to make a huge range of things, such as conductive films for touchscreen displays, flexible electronics fabrics that create energy, antennas for 5G networks while NASA has used them to prevent electric shocks on the Juno spacecraft."

During the study, the research team tested plastics, in particular black plastics, which are commonly used as packaging for ready meals and fruit and vegetables in supermarkets, but can't be easily recycled. They removed the carbon and then constructed nanotube molecules from the bottom up using the carbon atoms and used the nanotubes to transmit electricity to a light bulb in a small demonstrator model.

Now the research team plan to make high purity carbon electrical cables using waste plastic materials and to improve the nanotube material's electrical performance and increase the output, so they are ready for large-scale deployment in the next three years.

Dr Orbaek White said: "The research is significant as carbon nanotubes can be used to solve the problem of electricity cables overheating and failing, which is responsible for about 8% of electricity is lost in transmission and distribution globally.

"This may not seem like much, but it is low because electricity cables are short, which means that power stations have to be close to the location where electricity is used, otherwise the energy is lost in transmission.

"Many long range cables, which are made of metals, can't operate at full capacity because they would overheat and melt. This presents a real problem for a renewable energy future using wind or solar, because the best sites are far from where people live."

Credit: 
Swansea University

Tending the future of data analysis with MVApp

image: The team typically uses the small flowering plant Arabidopsis thaliana as a model plant for their research.

Image: 
© 2019 KAUST

The vast datasets generated by modern plant-science technologies require clever data-mining methods to extract useful information. Now, KAUST researchers have developed MVApp--an open-source, online statistics platform for conducting multivariate analyses of these intricate data.

The recent development of high-throughput phenotyping techniques has rapidly produced huge datasets on the characteristics of plants. These multivariate data hold crucial details about plant physiology: how a plant responds to different environments and how a plant's growth patterns and potential yields change--all of which are valuable in developing sustainable agriculture and ensuring food security.

"Our experiments typically include measurements of thousands of plants every day for multiple traits, from leaf size through to salt-stress tolerance or resistance to plant pathogens," says Magdalena Julkowska, a research scientist working in Mark Tester's lab at KAUST. "These data are extremely powerful, but overwhelming to sieve through, she adds, "As a team, we know the struggles that come with large data analyses, and we figured if we struggle, then others must too."

The team built MVApp using R-language--a popular tool for statistical analyses--and incorporated the most pertinent R packages for analyzing phenotyping data. MVApp can be used with datasets of different sizes, from exploratory analyses of large-scale natural diversity to smaller-scale projects comparing mutant phenotypes to wild-type plants.

The team also incorporated a technique called quantile regression into MVApp--this specialist tool is used in other fields but has not yet reached its full potential in plant science.

"When we screen populations of hundreds of diverse plant accessions, originating from different parts of the globe, the plants that yield well might have different traits contributing to yield than the plants that produce low yields," says Julkowska. "Let's say you'd like to explain the yield of a specific plant type by its biomass and water use--quantile regression can help quantify how much each trait is contributing to your main trait of interest."

MVApp produces comprehensive, easy-to-follow outputs, and generates attractive, publication-ready figures that have clear links back to the raw data used to create them. The MVApp team are passionate about improving data transparency and streamlining data curation, ensuring that every scientist can produce valuable, reproducible results.

"We hope that MVApp will help the entire scientific community--not just plant scientists--to become familiar with various statistical methods and to know how to implement them, particularly with big data," says Julkowska. "We welcome feedback and hope users will help us improve MVApp."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Electronic chip mimics the brain to make memories in a flash

image: The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light.

Image: 
RMIT University

Researchers from RMIT University drew inspiration from an emerging tool in biotechnology - optogenetics - to develop a device that replicates the way the brain stores and loses information.

Optogenetics allows scientists to delve into the body's electrical system with incredible precision, using light to manipulate neurons so that they can be turned on or off.

The new chip is based on an ultra-thin material that changes electrical resistance in response to different wavelengths of light, enabling it to mimic the way that neurons work to store and delete information in the brain.

Research team leader Dr Sumeet Walia said the technology moves us closer towards artificial intelligence (AI) that can harness the brain's full sophisticated functionality.

"Our optogenetically-inspired chip imitates the fundamental biology of nature's best computer - the human brain," Walia said.

"Being able to store, delete and process information is critical for computing, and the brain does this extremely efficiently.

"We're able to simulate the brain's neural approach simply by shining different colours onto our chip.

"This technology takes us further on the path towards fast, efficient and secure light-based computing.

"It also brings us an important step closer to the realisation of a bionic brain - a brain-on-a-chip that can learn from its environment just like humans do."

Dr Taimur Ahmed, lead author of the study published in Advanced Functional Materials, said being able to replicate neural behavior on an artificial chip offered exciting avenues for research across sectors.

"This technology creates tremendous opportunities for researchers to better understand the brain and how it's affected by disorders that disrupt neural connections, like Alzheimer's disease and dementia," Ahmed said.

The researchers, from the Functional Materials and Microsystems Research Group at RMIT, have also demonstrated the chip can perform logic operations - information processing - ticking another box for brain-like functionality.

Developed at RMIT's MicroNano Research Facility, the technology is compatible with existing electronics and has also been demonstrated on a flexible platform, for integration into wearable electronics.

How the chip works:

Neural connections happen in the brain through electrical impulses. When tiny energy spikes reach a certain threshold of voltage, the neurons bind together - and you've started creating a memory.

On the chip, light is used to generate a photocurrent. Switching between colors causes the current to reverse direction from positive to negative.

This direction switch, or polarity shift, is equivalent to the binding and breaking of neural connections, a mechanism that enables neurons to connect (and induce learning) or inhibit (and induce forgetting).

This is akin to optogenetics, where light-induced modification of neurons causes them to either turn on or off, enabling or inhibiting connections to the next neuron in the chain.
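As a toy analogy of that mechanism (and only an analogy - this is not a model of the RMIT device), the photocurrent's sign can be treated as a potentiate-or-depress signal that accumulates into a "synaptic weight"; the light-to-current mapping and the numbers below are assumptions.

    # Toy analogy only, not a model of the RMIT chip: one colour drives a positive
    # photocurrent (strengthen), the other a negative one (weaken). Values are assumed.
    def update_weight(weight: float, light: str, step: float = 0.1) -> float:
        current = +1.0 if light == "colour_A" else -1.0   # polarity shift on colour switch
        return max(0.0, min(1.0, weight + step * current))

    w = 0.0
    for pulse in ["colour_A"] * 8 + ["colour_B"] * 3:      # learn, then partly forget
        w = update_weight(w, pulse)
    print(round(w, 2))  # 0.5 -> a partially retained "memory"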

To develop the technology, the researchers used a material called black phosphorus (BP) that can be inherently defective in nature.

This is usually a problem for optoelectronics, but with precision engineering the researchers were able to harness the defects to create new functionality.

"Defects are usually looked on as something to be avoided, but here we're using them to create something novel and useful," Ahmed said.

"It's a creative approach to finding solutions for the technical challenges we face."

Credit: 
RMIT University

Resistance is utile: Magnetite nanowires with sharp insulating transition

image: This is the concept of the study. 3D Fe3O4(100) nanowires on the 10 nm length scale were produced on a 3D MgO nanotemplate using original nanofabrication techniques. The ultrasmall nanowires exhibited a prominent Verwey transition and a lower defect concentration due to the 3D nanoconfinement effect.

Image: 
Osaka University

Osaka - Magnetite (Fe3O4) is best known as a magnetic iron ore, and is the source of lodestone. It also has potential as a high-temperature resistor in electronics. In new research led by Osaka University, published in Nano Letters, ultra-thin nanowires made from Fe3O4 reveal insights into an intriguing property of this mineral.

When cooled to around 120 K (-150°C), magnetite suddenly shifts from a cubic to a monoclinic crystal structure. At the same time, its conductivity sharply drops--it is no longer a metal but an insulator. The exact temperature of this unique "Verwey transition", which can be used for switching in electronic devices, depends on the sample's properties, like grain size and particle shape.

Magnetite can be made into thin films, but below a certain thickness--around 100 nm--the Verwey transition weakens and needs lower temperatures. Thus, for electronics at the nano-scale, preserving this key feature of Fe3O4 is a major challenge. The Osaka study used an original technique to produce magnetite nanowires just 10 nanometers in length, which showed exquisite Verwey behavior.

As described by study co-author Rupali Rakshit, "We used laser pulses to deposit Fe3O4 onto a template of MgO. We then etched the deposits into wire shapes, and finally attached gold electrodes on either side so we could measure the conductivity of the nanowires."

When the nanowires were cooled to around 110 K (-160°C), their resistance sharply increased, in line with typical Verwey behavior. For comparison, the team also produced Fe3O4 as a thin film with a large surface area on the millimeter scale. Its Verwey transition was not only weaker, but required temperatures down to 100 K.
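One way such a transition temperature can be read off resistance-versus-temperature data is to look for where the log-resistance changes fastest on cooling. The sketch below uses synthetic data with an assumed step near 110 K, not the study's measurements.

    # Illustrative analysis sketch with synthetic R(T) data, not the study's measurements.
    import numpy as np

    T = np.linspace(80, 150, 141)                        # temperature in kelvin
    R = np.exp(4.0 / (1.0 + np.exp((T - 110.0) / 1.5)))  # sharp resistance rise below ~110 K

    dlogR_dT = np.gradient(np.log(R), T)                 # slope of log-resistance
    T_transition = T[np.argmax(np.abs(dlogR_dT))]
    print(f"estimated transition temperature: {T_transition:.1f} K")  # 110.0 K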

"The nanowires were remarkably free of crystal defects," says study leader Azusa Hattori. "In particular, unlike the thin film, they were not dogged by antiphase domains, where the atomic pattern is suddenly reversed. The boundaries of these domains block conduction in the metal phase. In the insulator phase, they stop resistivity from emerging, so they flatten out the Verwey transition."

The nanowires were so pristine that the team could directly study the origin of the Verwey transition with unprecedented accuracy. The insulating properties of magnetite below 120 K are believed to come from "trimerons," repetitive structures in the low-temperature crystal. The researchers estimated the characteristic length scale of the trimerons, and it closely matched the size reported in previous research.

"The Verwey transition has a host of potential uses in energy conversion, electronics and spintronics," says Hattori. "If we can fine-tune the transition by controlling the amount of defects, we can envisage producing very low-powered, yet advanced devices to support green technology."

Credit: 
Osaka University