Tech

Video game experience, gender may improve VR learning

ITHACA, N.Y. - Students who used immersive virtual reality (VR) did not learn significantly better than those who used two more traditional forms of learning, but they vastly preferred the VR to computer-simulated and hands-on methods, a new Cornell study has found.

"We didn't know exactly what we were going to see," said Jack Madden, doctoral student in astronomy at Cornell University and first author of "Ready Student One: Exploring the Predictors of Student Learning in Virtual Reality," which published March 25 in PLOS ONE. "But it's amazing that this brand-new technology performed just as well as these tried-and-true methods that are used today in classrooms. So at least we're not harming students by using VR."

Though the virtual reality experiment didn't change learning outcomes overall, the researchers found that students with more video game experience learned better using VR than those with little video game experience - a finding that correlated closely with gender.

The study - which has new implications as learning around the world shifts online to combat the spread of coronavirus - aimed to take a step toward determining whether new educational technology tactics, while popular, are actually effective.

"There's been a big push for enhanced technology in classrooms," Madden said. "I think we can be in awe of these fancy, shiny devices and it might feel like they're helping, but we need to know if they actually are."

Males were far more likely to have video game experience, the survey found, and also learned more in the VR simulation, suggesting that either gender or prior video game experience could impact the success of VR-based learning. Reviewing prior work, the researchers found that video games requiring players to navigate 3D spaces are more popular among males than females.

"This is an interesting finding, because it could potentially imply that if you can provide learners with that experience, then you could show broad benefits from immersive learning," said co-author Andrea Stevenson Won, assistant professor of communication and director of the Virtual Embodiment Lab at Cornell. "However, more study is definitely needed."

"If you're unfamiliar with navigating this kind of 3D space, you're not going to learn as well in it, so that could be a barrier," Madden said. "One of the conclusions of our work is that we need to do a better job of asking questions around things that might be gendered, like video game experience. There's a lot of finer detail you need to know to make VR learning successful."

Credit: 
Cornell University

Solving a 50-year-old puzzle in signal processing, part two

image: Here are three examples of 16-point chirp contours on the unit circle. The ICZT algorithm developed by Iowa State engineers can work with all three while the one previously used can work only with the last contour.

Image: 
Figure courtesy of Alexander Stoytchev.

AMES, Iowa - Iowa State University's Alexander Stoytchev says it's one of the "most popular and useful" algorithms around - even though most of us have never heard of it.

But if you've used a cell phone, browsed the internet or needed a medical image, you've benefited from the fast Fourier transform (FFT).

The transform and its inverse (known as the IFFT) have been in use since 1965. For example, in your cell phone the FFT is used to analyze the signal received from the base station (or cell tower). The IFFT solves the inverse problem: it synthesizes the signal that your phone sends to the base station.
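As a minimal illustration of that analysis/synthesis pairing (a NumPy sketch for readers, not code from the Iowa State work), the FFT and IFFT are exact inverses of one another:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.standard_normal(1024)        # stand-in for a received waveform

spectrum = np.fft.fft(signal)             # analysis: time domain -> frequency domain
recovered = np.fft.ifft(spectrum).real    # synthesis: frequency domain -> time domain

assert np.allclose(recovered, signal)     # the round trip recovers the original signal
```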

In 1969, researchers developed a more useful, generalized version of the FFT known as the chirp z-transform (CZT). But nobody had come up with a generalized version of the IFFT. It was a 50-year-old puzzle in signal processing.

That is, until last fall when two Iowa State engineers - Stoytchev and Vladimir Sukhoy - announced in a research paper they had come up with a closed-form solution for the inverse chirp z-transform (ICZT) and a fast algorithm for computing it. (The paper sparked a lot of interest in the signal-processing community, tallying more than 26,000 accesses since October.)

Now Stoytchev - an associate professor of electrical and computer engineering who's also affiliated with the university's Virtual Reality Applications Center - and Sukhoy - a lecturer in electrical and computer engineering - report new research results about their algorithm.

In a paper just published online by Scientific Reports, a Nature Research journal, the two show how their algorithm functions "on the unit circle," which refers to a special case of its parameters. (Their previous paper only highlighted operations "off the unit circle.")

The paper details how the algorithm can work with frequency components that are generated by sample points from the unit circle in the complex plane. These points form a contour that is known as the chirp contour. Unlike the IFFT, which can only work with equispaced sampling points that fully cover the unit circle, the ICZT algorithm can work with contours that cover only a fraction of the unit circle. It can also work with contours that wrap around and perform multiple revolutions over the circle. This enables the use of certain (non-orthogonal) frequency components, which lifts one of the main restrictions of the IFFT and could lead to better spectrum utilization.
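To make the chirp contour concrete, here is a hedged sketch of the forward chirp z-transform, evaluated directly in O(n·m) time (illustrative code, not the authors' fast algorithm, which computes the same values in O(n log n)). The contour points are z_k = A·W^(-k); with A = 1 and W = e^(-2πi/n) the contour is the full unit circle, sampled at equispaced points, and the transform reduces to the ordinary DFT:

```python
import numpy as np

def czt_direct(x, m=None, w=None, a=1.0 + 0j):
    """Direct O(n*m) chirp z-transform: X[k] = sum_n x[n] * z_k**(-n), z_k = a * w**(-k)."""
    n = len(x)
    m = n if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w  # default: equispaced, full circle
    z = a * w ** (-np.arange(m))                     # the chirp contour sample points
    return np.array([np.sum(x * zk ** (-np.arange(n))) for zk in z])

x = np.random.default_rng(0).standard_normal(16)

# Default contour: 16 equispaced points covering the whole unit circle -> the DFT.
assert np.allclose(czt_direct(x), np.fft.fft(x))

# A contour covering only a quarter of the unit circle -- allowed by the CZT/ICZT,
# but out of reach for the FFT/IFFT.
partial = czt_direct(x, m=16, w=np.exp(-2j * np.pi / 64))
```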

The paper identifies the parameter values for which the algorithm is numerically accurate and for which it isn't, and describes how to estimate its accuracy as a function of the parameters. (Technical note: It shows that the singularities of the ICZT of size n are related to the elements of the Farey sequence of order n-1. This is an interesting connection because Farey sequences often appear in number theory.)
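The Farey sequence in that technical note is a standard object in number theory; for readers unfamiliar with it, this short sketch (illustrative only, not drawn from the paper) enumerates the reduced fractions in [0, 1] with denominator at most n using the classic next-term recurrence:

```python
from fractions import Fraction

def farey(n):
    """Farey sequence of order n: all reduced fractions in [0, 1] with denominator <= n."""
    a, b, c, d = 0, 1, 1, n                # the first two terms are 0/1 and 1/n
    seq = [Fraction(a, b)]
    while c <= n:
        k = (n + b) // d                   # standard next-term recurrence
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

print([str(f) for f in farey(4)])          # ['0', '1/4', '1/3', '1/2', '2/3', '3/4', '1']
```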

The paper demonstrates that, on the unit circle, the ICZT algorithm achieves high accuracy with only 64-bit floating-point numbers and does not require additional numerical precision, making it easier to implement. It reports the algorithm can pair well with the existing CZT algorithm to do back-to-back signal analysis and signal synthesis. And it shows that the algorithm is fast (it operates in what's known as O(n log n) time).

"This algorithm is more general than the IFFT, but maintains the same speed," Stoytchev said.

That's good news for the engineers working to solve all kinds of signal-processing challenges:

"Application domains that could benefit from this," the Iowa State engineers wrote in the paper, "include signal processing, electronics, medical imaging, radar, sonar, wireless communications, and others."

Credit: 
Iowa State University

Renewable energy developments threaten biodiverse areas

image: Guanacos in Chilean Patagonia, one of the last wilderness areas left in the region.

Image: 
Francisca Hidalgo

More than 2,000 renewable energy facilities have been built in areas of environmental significance, threatening the natural habitats of plant and animal species across the globe.

A University of Queensland research team mapped the location of solar, wind and hydropower facilities in wilderness, protected areas and key biodiversity areas.

UQ School of Earth and Environmental Sciences lead author Mr José Rehbein said he was alarmed by the findings.

"Aside from the more than 2200 renewable energy facilities already operating inside important biodiversity areas, another 900 are currently being built," Mr Rehbein said.

"Energy facilities and the infrastructure around them, such as roads and increased human activity, can be incredibly damaging to the natural environment.

"These developments are not compatible with biodiversity conservation efforts."

The majority of renewable energy facilities in western Europe and developed nations are located in biodiverse areas.

Mr Rehbein said there is still time for developers to reconsider facilities under construction in Asia and Africa.

University of Amsterdam senior author Dr James Allan said effective conservation efforts and a rapid transition to renewable energy were essential to prevent species extinctions and avoid catastrophic climate change.

"The entire team agree that this work should not be interpreted as anti-renewables because renewable energy is crucial for reducing carbon emissions," Dr Allan said.

"The key is ensuring that renewable energy facilities are built in places where they do not damage biodiversity.

"Renewable energy developments must consider biodiversity as well as carbon, and avoid any negative impacts on biodiversity to be truly sustainable."

The team urge governments, industry and development organisations to avoid expanding renewable energy facilities into conservation areas and plan for alternative locations.

Credit: 
University of Queensland

OSU research paves way to improved cleanup of contaminated groundwater

image: Lew Semprini, left, and Mitchell Rasmussen in the lab where groundwater-purifying hydrogel beads are made.

Image: 
OSU College of Engineering

CORVALLIS, Ore. - Beads that contain bacteria and a slow-release food supply to sustain them can clean up contaminated groundwater for months on end, maintenance free, research by Oregon State University shows.

The hydrogel beads, which have the consistency of gummy candy and are made with an ingredient used in processed foods, hold promise for sustained cleanup of groundwater contaminated with dangerous and widely used volatile organic compounds, many of which are listed by the Centers for Disease Control and Prevention as likely human carcinogens.

At multiple locations around the country, the chemicals are present at concentrations that far exceed state and federal standards for safe drinking water.

Among the contaminants addressed in the study are 1,1,1-trichloroethane, cis-1,2-dichloroethene, and 1,4-dioxane -- degreasers commonly used by industry and the military. The chemicals can infiltrate groundwater through leaky underground storage tanks or runoff, or by simply being dumped on the ground, as they were in the past.

The new decontamination method, developed through a collaboration between the OSU College of Engineering and North Carolina State University, works because the microbes produce an enzyme that oxidizes the toxins when groundwater contaminants diffuse into the beads.

The result is a transformation of the contaminants into harmless compounds.

"We've created a process called long-term aerobic cometabolism, which is an enclosed, passive, self-sustaining system for groundwater remediation," said OSU's Lew Semprini, distinguished professor of environmental engineering and principal investigator on the study. "The beauty of this is that everything happens inside the beads."

Current practices, Semprini explains, call for gaseous growth substrates such as propane and methane to be added directly to the subsurface. The substrates nourish indigenous microbes, which in turn produce enzymes that transform the contaminants to non-toxic byproducts.

Often, however, the growth substrates chemically compete for those crucial enzymes, which significantly inhibits the transformation process.

The new system eliminates that competition, freeing all of the enzyme to oxidize contaminants.

"We've flipped the paradigm on its head by putting the right microorganism inside hydrogel beads and supplying it with a slow-release food source," Semprini said. "To my knowledge, this is the first time it's been done."

The study appears in Environmental Science: Processes & Impacts.

Semprini and his research team co-encapsulated the bacteria culture Rhodococcus rhodochrous and a slow-release growth substrate within hydrogel beads that they produced in the lab. The cylindrical beads, made of gellan gum, a common ingredient in processed foods, are 2 millimeters long.

As groundwater flows past the beads, the contaminants diffuse into them, where the slow-release substrate reacts with groundwater to produce alcohol that sustains the Rhodococcus bacteria. The bacteria contain a monooxygenase enzyme that transforms the contaminants into harmless compounds, including carbon dioxide, water and chloride ions.

The purified water and the byproducts then diffuse out of the beads and rejoin the groundwater plume.

In bead-filled test columns supplied with a continuous flow of contaminated water, the system functioned continuously for more than 300 days (and counting) on the original growth substrate.

Semprini found that the beads removed more than 99% of the contaminants, with concentrations declining from several hundred parts per billion to less than 1 part per billion.
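The two figures are consistent, as a quick back-of-the-envelope check shows (the 300 ppb inlet concentration below is illustrative, standing in for "several hundred"):

```python
inlet_ppb = 300.0   # illustrative inlet concentration ("several hundred parts per billion")
outlet_ppb = 1.0    # reported outlet concentration (less than 1 ppb)

removal = 1.0 - outlet_ppb / inlet_ppb
print(f"removal efficiency: {removal:.1%}")   # 99.7% -> "more than 99%"
```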

The system's longevity will depend mainly on how long the bacteria live, which in turn depends on how long the growth substrate lasts. That has yet to be determined.

"It's a question for future research," Semprini said. "How do we make beads that last many years, or how do we develop systems that can easily be replaced?"

Current cometabolic remediation methods require regular additions of growth substrates to ensure that key microorganisms flourish, and that necessitates regular site monitoring, biochemical adjustments and related costs.

The next step is to scale up the system and conduct pilot studies in the field.

Semprini envisions several possibilities for deploying the beads. One option is mixing beads directly into contaminated subsurface material. Another is to dig a trench in the path of groundwater flow and fill it with beads, creating a permeable reactive barrier. A third possibility is packing beads into reactors, a simple form being mesh bags, that can be placed in wells.

"Everybody favors sustainability in this type of system: Can we just have something working in the subsurface without much maintenance?" Semprini said. "I think we've achieved that."

Credit: 
Oregon State University

APS tip sheet: Ultimate strength of metals

image: The image shows the transformation of a crystalline metal (upper left) to amorphous material (lower right), with an energy difference related to the heat of fusion. The graph in the upper right shows a representative plot of strength vs. crystallite (grain) size and includes the predictions from the model (red line) compared to experimental (x) and simulation (o) data. The graph in the lower left shows the model predictions for grain boundary energies of a variety of metals compared to density functional theory calculations from the literature.

Image: 
Chandross and Argibay, Physical Review Letters (2020)

To build safe and robust automobiles, spacecraft, and other technology, scientists attempt to know as much as possible about various metals' properties. However, these properties can be tricky to estimate without extensive testing. Now, researchers have created a theoretical model able to estimate the ultimate strength of various pure and alloyed metals -- a measure of how much force a metal can withstand before it deforms. The framework, created by Chandross and Argibay of Sandia National Laboratories, does not require fit parameters. It relies on the connection between ultimate strength and thermodynamics and accurately predicted the ultimate strengths of nearly 20 different metals. The new model could improve research and development in many industries by allowing scientists to better understand the potential maximum achievable strengths of alloys and explore new design alternatives.

Credit: 
American Physical Society

SARS-CoV-2 transmission in patients with cancer at a hospital in China

What The Study Did: Researchers estimated the infection rate of SARS-CoV-2 in patients with cancer and reported on patient outcomes at a single hospital in Wuhan, China.

Authors: Conghua Xie, M.D., of Zhongnan Hospital of Wuhan University in China, and Melvin L. K. Chua, M.B.B.S., Ph.D., of the National Cancer Centre Singapore, are the corresponding authors.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/

(doi:10.1001/jamaoncol.2020.0980)

Editor's Note: The article includes funding/support disclosures. Please see the article for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

A nanoscale device to generate high-power Terahertz waves

image: The nanoscale terahertz wave generator can be implemented on flexible substrates. © EPFL / POWERlab

Image: 
EPFL / POWERlab

Terahertz (THz) waves fall between microwave and infrared radiation in the electromagnetic spectrum, oscillating at frequencies between 100 billion and 30 trillion cycles per second. These waves are prized for their distinctive properties: they can penetrate paper, clothing, wood and walls, as well as detect air pollution. THz sources could revolutionize security and medical imaging systems. What's more, their ability to carry vast quantities of data could hold the key to faster wireless communications.

THz waves are a type of non-ionizing radiation, meaning they pose no risk to human health. The technology is already used in some airports to scan passengers and detect dangerous objects and substances.

Despite holding great promise, THz waves are not widely used because they are costly and cumbersome to generate. But new technology developed by researchers at EPFL could change all that. The team at the Power and Wide-band-gap Electronics Research Laboratory (POWERlab), led by Prof. Elison Matioli, built a nanodevice (1 nanometer = 1 millionth of a millimeter) that can generate extremely high-power signals in just a few picoseconds (one picosecond is a trillionth of a second), thereby producing high-power THz waves.

The technology, which can be mounted on a chip or a flexible medium, could one day be installed in smartphones and other hand-held devices. The work, first-authored by Mohammad Samizadeh Nikoo, a PhD student at the POWERlab, has been published in the journal Nature.

How it works

The compact, inexpensive, fully electric nanodevice generates high-intensity waves from a tiny source in next to no time. It works by producing a powerful "spark," with the voltage spiking from 10 V (or lower) to 100 V in the range of a picosecond. The device is capable of generating this spark almost continuously, meaning it can emit up to 50 million signals every second. When hooked up to antennas, the system can produce and radiate high-power THz waves.

The device consists of two metal plates situated very close together, down to 20 nanometers apart. When a voltage is applied, electrons surge towards one of the plates, where they form a nanoplasma. Once the voltage reaches a certain threshold, the electrons are emitted almost instantly to the second plate. This rapid movement, enabled by such fast switching, creates a high-intensity pulse that produces high-frequency waves.

Conventional electronic devices are only capable of switching at speeds of up to one volt per picosecond - too slow to produce high-power THz waves.

The new nanodevice, which can be more than ten times faster, can generate both high-energy and high-frequency pulses. "Normally, it's impossible to achieve high values for both variables," says Matioli. "High-frequency semiconductor devices are nanoscale in size. They can only cope with a few volts before breaking down. High-power devices, meanwhile, are too big and slow to generate terahertz waves. Our solution was to revisit the old field of plasma with state-of-the-art nanoscale fabrication techniques to propose a new device to get around those constraints."
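The "more than ten times faster" comparison follows from the figures quoted above; here is a quick check, assuming (illustratively) a switching time of one picosecond for the 10 V to 100 V spike:

```python
# Illustrative check of the switching-speed comparison quoted in the article.
v_start, v_end = 10.0, 100.0    # volts: the "spark" swings from ~10 V up to 100 V
switch_time_ps = 1.0            # assumed switching time ("in the range of a picosecond")

device_slew = (v_end - v_start) / switch_time_ps   # 90 V per picosecond
conventional_slew = 1.0                            # V per picosecond, per the article

print(device_slew / conventional_slew)   # 90x -> comfortably "more than ten times faster"
```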

According to Matioli, the new device pushes all the variables to the extreme: "High-frequency, high-power and nanoscale aren't terms you'd normally hear in the same sentence."

"These nanodevices, on one side, bring an extremely high level of simplicity and low-cost, and on the other side, show an excellent performance. In addition, they can be integrated with other electronic devices such as transistor. Considering these unique properties, nanoplasma can shape a different future for the area of ultra-fast electronics", says Samizadeh.

The technology could have wide-ranging applications beyond generating THz waves. "We're pretty sure there'll be more innovative applications to come," adds Matioli.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

Scientists get first look at cause of 'slow motion' earthquakes

An international team of scientists has for the first time identified the conditions deep below the Earth's surface that lead to the triggering of so-called 'slow motion' earthquakes.

These events, more commonly known as slow slip events, are similar to regular sudden and catastrophic earthquakes but take place on much longer timescales, usually from days to months.

By drilling down to just over 1km deep in water depths of 3.5km off the coast of New Zealand, the team have shown that the fault zone areas in which slow slip events occur are characterised by a 'mash up' of different rock types.

The results, published today in the journal Science Advances, showed that the areas comprise extremely rough sea-floor topography made of rocks that vary markedly in size, type and physical characteristics.

The lead author of the paper, Dr Philip Barnes of New Zealand's National Institute of Water and Atmospheric Research (NIWA), said that 'some rocks were mushy and weak, whilst others were hard, cemented and strong.'

This has given scientists the first-ever look at the types and properties of rocks directly involved in slow motion earthquakes and begins to answer some of the major outstanding questions surrounding these unique events, such as whether or not they can trigger larger, more damaging earthquakes and tsunamis.

Co-author of the study Dr Ake Fagereng, from Cardiff University's School of Earth and Ocean Sciences, said: "This was the first effort to sample the rocks that host slow slip events, and the striking, immediate observation is that their strengths are hugely variable. One can therefore visualise the slow slip source as a mixture of hard and weak rocks, and use this as a starting point for models of how slow slip occurs."

First discovered on the San Andreas fault in California, but since 2002 found to occur in several other locations, slow slip events remain a relative mystery to scientists, who are endeavouring to find out how, where and why they occur and what drives their behaviour.

As part of their study, the international team undertook two International Ocean Discovery Program (IODP) expeditions aboard the JOIDES Resolution research vessel to the Hikurangi subduction zone off the east coast of the North Island in 2017 and 2018.

This was the first time that scientists had studied, and directly sampled, rocks from the source region of slow slip events using ocean floor scientific drilling methods.

The Hikurangi subduction zone is New Zealand's largest earthquake fault and one of the best places in the world to study slow slip, because these events occur close to the sea floor there, which makes drilling to collect rock samples much easier.

For instance, Laura Wallace of GNS Science, New Zealand, notes that the 2016 Kaikōura earthquake triggered a series of major slow slip events on the Hikurangi subduction zone - where the Pacific Plate dives beneath the eastern North Island - the most widespread episode of slow slip seen in New Zealand since such events were first discovered in the country.

These slow slip events released a large amount of built-up tectonic energy and continued over the weeks and months following the Kaikōura earthquake.

During the expedition the team drilled two boreholes to obtain a sequence of rocks and sediments on the incoming (Pacific) plate approaching the North Island.

The drilling data were interpreted together with seismic reflection profiles - or pictures of the layers under the surface of the earth which are created at sea by sound waves.

The study has indicated that the co-existence of these contrasting rock types in the fault zone may lead to the slow slip movements observed offshore from Gisborne, and perhaps elsewhere at subduction boundaries around the world.

Indeed, Dr Barnes says that the research will have direct relevance not only to New Zealand, but to areas like Japan and Costa Rica, which sit on the Ring of Fire - the perimeter of the Pacific Ocean basin where many earthquakes and volcanic eruptions occur.

"We now know that a highly variable mixture of rock strengths is part of the recipe for slow slip. This opens for new studies of how such mixtures deform, why they can generate slow slip, and under what conditions (if any) they can also generate damaging earthquakes. This may help address the outstanding question of how earthquakes and slow slip events interact," continued Dr Fagereng.

Credit: 
Cardiff University

Assessing the global problem of poor sanitation

Experts are investigating a better way of measuring the number of people exposed to the health risks of poorly managed sanitation systems - and it will help reveal whether the world is on track to deliver UN Sustainable Development Goal 6 (SDG6).

SDG6 is the aspiration that everyone should have access to safe water and sanitation and that water should be well managed all over the world - the target is to achieve this in the next 10 years.

Progress is monitored by the World Health Organisation and UNICEF, working together as part of the Joint Monitoring Programme (JMP) for Water Supply, Sanitation and Hygiene.

The JMP is advised by high-level sector experts, a group of whom was due to meet at the University of Leeds in the UK this week, but the meeting was cancelled because of COVID-19.

Instead, the experts are being brought together in a virtual conference hosted by Barbara Evans, Professor of Public Health Engineering at Leeds and a member of the JMP advisory group.

They are meeting to support six countries that are seeking to design, test and implement new data collection systems to more accurately estimate how many people across the world use sanitation systems that are not connected to a treatment works - and whether those systems are safely managed.

Known as on-site sanitation, these systems range from septic tanks to pit latrines, and are widely used in low-to-middle income countries as well as in many rural areas of richer countries.

The JMP reports that 3.1 billion people use on-site sanitation. Very few countries keep data on how the waste material is isolated and treated. In many parts of the world, the system for safely disposing of the sludge has never existed or has broken down. That poses a major risk to human health.

Professor Evans said: "Many of the world's poorest people rely on on-site sanitation, and it is those communities that suffer the most when the disposal and treatment of waste sludge from pit latrines, for example, does not happen.

"Or the waste that is supposedly taken for treatment is just dumped into rivers or onto land.

"Ensuring the safe disposal of human waste can bring about a revolution in public health. It can prevent the communicable diseases linked to poor sanitation and dirty water which cause illness and premature death.

"In some parts of the world, death rates among young children are at levels we saw in the UK at the end of the 19th century and much of that is due to the lack of adequate sanitation.

"It is also important to realise that this discharge of huge quantities of faecal waste into the environment can also contribute to the acceleration of anti-microbial resistance in disease-causing organisms."

The JMP has secured a three-year grant from the Bill and Melinda Gates Foundation to develop a set of data collection techniques that can be rolled out to countries to better assess the scale of the challenge involved in ensuring sewage waste is safely dealt with.

Professor Evans added: "The aim of SDG6 is to give everyone access to safe sanitation within the next ten years.

"You significantly increase the chance of achieving that when you know the scale of the challenge. At the moment there is a lack of robust data on exactly how many people rely on on-site sanitation and in turn how many of those people face inadequate sludge disposal."

Previous estimates of the scale of the problem

A research study led by the University of Leeds, based on work funded by the World Bank and now supported by the Bill and Melinda Gates Foundation, conducted a rapid assessment of sanitation systems in 39 cities. It estimated that much of the waste generated by households was not safely disposed of. (That study can be found at: https://www.frontiersin.org/articles/10.3389/fenvs.2020.00001/full)

The untreated waste ended up in storm drains, open water on wasteland or unsanitary dumping grounds.

The study also found that waste sludge collection was often an informal arrangement within communities, happening outside of any city or municipal regulation. Cities have tended to focus on their more complex sewerage systems and to overlook on-site facilities.

Professor Evans was one of the first academics to draw attention to the failure of sludge collection systems worldwide. In seminal research undertaken in partnership with the World Bank, she and colleagues drew attention to what they described as the missing link in sanitation services - effective collection and disposal.

The University of Leeds is one of the major UK research centres for public health engineering, with projects investigating ways that sludge can be recycled and turned into a substance with monetary value, such as fertilizer, as well as ways of helping cities and municipalities develop sustainable sanitation management.

Credit: 
University of Leeds

How to break new records in the 200 metres?

image: From left to right: standard track, consisting of two 84.3-metre straight lines; both types of basket handle-shaped track

Image: 
Amandine Aftalion, Centre d'analyse et de mathématique sociales (CNRS/EHESS)

Usain Bolt's 200m record has not been beaten for ten years, and Florence Griffith Joyner's for more than thirty. What if the secret to beating records were mathematics? Using a mathematical model, Amandine Aftalion, CNRS researcher at the Centre d'analyse et de mathématique sociales (CNRS/EHESS), and Emmanuel Trélat, a Sorbonne Université researcher at the Laboratoire Jacques-Louis Lions (CNRS/Sorbonne Université/Université de Paris), have shown that the geometry of athletics tracks could be optimised to improve records. They recommend building future tracks with shorter straights and larger-radius bends. These findings are published in Royal Society Open Science on 25 March, 2020.

At present, three track designs can be certified by World Athletics: standard tracks (consisting of straights and semi-circles) and two types of double-bend track (where the double bend is made of three arcs of two different radii). It is generally accepted in the athletics community that the standard track is the quickest and that there is no chance of beating a record on a double-bend track. Double-bend tracks were actually designed to accommodate a football or rugby stadium, and their main drawback is that the bends have a smaller radius of curvature. The centrifugal force is therefore greater, and double-bend tracks are slower. Multi-sports arenas are thus not suited to athletics records, and there is a major disadvantage to being on the inner lanes.
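A back-of-the-envelope calculation (illustrative only, not the researchers' model; the radii below are assumed round numbers, not official specifications) shows why a tighter bend costs a runner more: the centripetal acceleration v²/r that the sprinter must supply grows as the bend radius shrinks:

```python
# Centripetal acceleration v**2 / r that a sprinter must generate in a bend.
speed = 10.0    # m/s, a typical 200m race pace
for label, radius_m in [("wide bend (standard track)", 37.0),
                        ("tight bend (double-bend track)", 24.0)]:
    accel = speed ** 2 / radius_m    # m/s^2, directed toward the bend's centre
    print(f"{label}: {accel:.2f} m/s^2")

# The tighter bend demands roughly 50% more sideways force at the same speed,
# which is why double-bend tracks and inner lanes are slower.
```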

The mathematical model developed by Amandine Aftalion and Emmanuel Trélat couples mechanics and energetics - in particular the maximal oxygen uptake (VO2max) and anaerobic energy - in a system of differential equations that combines velocity, acceleration, propulsive force and neural drive with cost and benefit parameters in order to determine the optimal strategy for running a race.

Since this model optimises the effort needed to produce the best race, it makes it possible to compute the optimal geometry of a track and to predict the discrepancy in records according to that geometry and the type of lane. For standard tracks, it shows that shorter straights and larger radii of curvature could improve the 200m record, possibly by four hundredths of a second. The constraint of accommodating other sports can be met by opting for new tracks with shorter horizontal straights and small vertical straights. The researchers' recommendation is to favour such tracks in the future in order to improve runners' performance.

They are adapting their model to horse races with the support of AMIES.

Credit: 
CNRS

Printing complex cellulose-based objects

image: The new 3D printing technology makes it possible to print filigree and robust structures from a cellulose composite material.

Image: 
Michael Hausmann / ETH Zurich / Empa

Trees and other plants lead the way: they produce cellulose themselves and use it to build complex structures with extraordinary mechanical properties. That makes cellulose attractive to materials scientists who are seeking to manufacture sustainable products with special functions. However, processing materials into complex structures with high cellulose content is still a big challenge for materials scientists.

A group of researchers at ETH Zurich and Empa have now found a way to process cellulose using 3D printing so as to create objects of almost unlimited complexity that contain high levels of cellulose particles.

Print first, then densify

To do this, the researchers combined printing via the direct ink writing (DIW) method with a subsequent densification process to increase the cellulose content of the printed object to a volume fraction of 27 percent. Their work was recently published in the journal Advanced Functional Materials.

The ETH and Empa researchers are admittedly not the first to process cellulose with a 3D printer. However, previous approaches, which also used cellulose-containing inks, have not been able to produce solid objects with such high cellulose content and complexity.

The composition of the printing ink is extremely simple. It consists only of water in which cellulose particles and fibres measuring a few hundred nanometres have been dispersed. The cellulose content is between 6 and 14 percent of the ink volume.

Solvent bath densifies cellulose

The ETH researchers used the following trick to densify the printed cellulose products: after printing the cellulose-based water ink, they put the objects in a bath of organic solvents. Because cellulose does not like organic solvents, the particles tend to aggregate. This process results in shrinkage of the printed part and, consequently, a significant increase in the relative amount of cellulose particles within the material.

In a further step, the scientists soaked the objects in a solution containing a photosensitive plastic precursor. As the solvent is removed by evaporation, the plastic precursor infiltrates the cellulose-based scaffold. Next, to convert the plastic precursor into a solid plastic, they exposed the objects to UV light. This produced a composite material with a cellulose content of the aforementioned 27 volume percent. "The densification process allowed us to start out with a 6 to 14 percent in volume water-cellulose mixture and finish with a composite object that exhibits up to 27 volume percent of cellulose nanocrystals," says Hausmann.
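The jump from 14 to 27 volume percent also implies how much a part must shrink during densification. A rough check, assuming (for illustration) that the amount of cellulose is conserved and that shrinkage is uniform in all directions:

```python
# If the cellulose itself is conserved, its volume fraction scales inversely
# with the part's volume.
phi_printed = 0.14     # cellulose volume fraction in the as-printed part
phi_densified = 0.27   # volume fraction after the solvent-bath densification

volume_ratio = phi_printed / phi_densified        # final/initial part volume, ~0.52
linear_shrinkage = 1 - volume_ratio ** (1 / 3)    # per-axis shrinkage if isotropic, ~20%

print(f"part shrinks to {volume_ratio:.0%} of its printed volume "
      f"(about {linear_shrinkage:.0%} in each dimension)")
```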

Elasticity can be predetermined

As if that were not enough, depending on the type of plastic precursor used, the researchers can adjust the mechanical properties of the printed objects, such as their elasticity or strength. This allows them to create hard or soft parts, depending on the application.

Using this method, the researchers were able to manufacture various composite objects, including some of a delicate nature, such as a type of flame sculpture that is only 1 millimetre thick. However, densification of printed parts with wall thicknesses greater than five millimetres leads to distortion of the structure, because the surface of the densifying object contracts faster than its core.

Similar fibre orientation to wood

The researchers investigated their objects using X-ray analyses and mechanical tests. Their findings showed that the cellulose nanocrystals are aligned similarly to those present in natural materials. "This means that we can control the cellulose microstructure of our printed objects to manufacture materials whose microstructure resembles those of biological systems, such as wood," says Rafael Libanori, senior assistant in ETH Professor André Studart's research group.

The printed parts are still small - laboratory scale, you could say. But there are many potential applications, from customised packaging to cartilage-replacement implants for ears. The researchers have also printed an ear based on a human model. Before such a product could be used in clinical practice, however, more research and, above all, clinical trials are needed.

This kind of cellulose technology could also be of interest to the automotive industry. Japanese carmakers have already built a prototype of a sports car for which the body parts are made almost entirely of cellulose-based materials.

Credit: 
ETH Zurich

Diet, nutrition have profound effects on gut microbiome

WASHINGTON (March 25, 2020) -- Nutrition and diet have a profound impact on microbial composition in the gut, in turn affecting a range of metabolic, hormonal, and neurological processes, according to a literature review by scientists from the George Washington University (GW) and the National Institute of Standards and Technology (NIST). The article is published in Nutrition Reviews.

Until recently, the human microbiome remained an understudied target for novel strategies to diagnose and treat disease. The prevalence of diseases that may involve disruption of the gut microbiome is increasing, and there is currently no consensus in the scientific community on what defines a "healthy gut" microbiome.

The review from GW and NIST systematically assessed the current understanding of the interactions between nutrition and the gut microbiome in healthy adults.

"As we learn more about the gut microbiome and nutrition, we are learning how influential they are to each other and, perhaps more central to public health, the role they both play in prevention and treatment of disease," said Leigh A. Frame, PhD, MHS, program director of the Integrative Medicine Programs at the GW School of Medicine and Health Sciences.

Through their review, the authors found that a bi-directional relationship between nutrition and the gut microbiome is emerging as more research is conducted on how microbiota utilize and produce both macro- and micronutrients. Research to date has mostly focused on the benefits of dietary fiber, which serves as fuel for gut microbiota; protein, in contrast, promotes microbial protein metabolism and potentially harmful byproducts that may linger in the gut, increasing the risk of negative health outcomes.

"This review reveals that the measurement tools currently in our arsenal are ineffective for identifying the microbial and molecular signatures that can serve as robust indicators of health and disease," said Scott Jackson, adjunct assistant professor of clinical research and leadership at SMHS and leader of the Complex Microbial Systems Group at NIST.

The authors suggest that future research must consider individual responses to diet and how the gut microbiome responds to dietary interventions, and must emphasize the function of the microbiome (what it does) over merely its composition (what is there).

Credit: 
George Washington University

MIPT scientists explain why new dangerous viruses are so hard to identify

image: The sea of viruses as seen through the eye of a needle.

Image: 
Daria Sokol/MIPT Press Office

In a recently published fundamental review dedicated to the diagnostics of viral infections, a Russian research team featuring MIPT researchers was the first to systematically describe and summarize the cutting-edge technologies in the rapidly developing field of genetics. A number of new effective methods of virus detection have been developed over the past few years, including those targeted at unknown pathogens. The authors described the so-called high-throughput next-generation sequencing as a potent new approach. The method promises to revolutionize the detection and analysis of new pathogenic viruses, but it will be at least several years until it is introduced into mainstream clinical practice.

In response to the rapid spread of the COVID-19 pandemic, an authoritative global scientific journal, aptly named Viruses, published a fundamental review of problems related to identifying and studying emerging pathogens, such as the notorious coronavirus.

"There are, by various statistical estimations, over 320,000 various viruses infecting mammals," said Kamil Khafizov, a researcher at MIPT's Historical Genetics, Radiocarbon Analysis and Applied Physics Lab and one of the review's authors. "But up to date, less than 1% of this vast multitude has been studied."

Most viruses, including those that cause respiratory, digestive, and other diseases in humans, remain unresearched and thus almost undetectable. The reason behind this is the narrow spectrum of viruses that the modern testing systems are designed to target.

"Metaphorically, we are attempting to look at a vast sea of threats through the eye of a needle," the authors write in the review. Among other things, they explore the shortcomings of the polymerase chain reaction method. This essential technique for microorganism molecular testing fails to identify poorly explored viruses, and this constitutes one of the key problems in modern virology.

There are, however, new methods that may potentially solve the issues of detecting and identifying new microorganisms, and the review explores these approaches. The authors consider next-generation sequencing (NGS) to be the most promising. Also known as high-throughput sequencing, it enables the analysis of multiple DNA molecules in parallel, be it a set of samples, different regions of the same genome, or both.

"Efficient mathematical algorithms are a key part of the method," says Alina Matsvay, MIPT doctoral student and the review's correspondent author. "They allow researchers to compare the genome of an unknown virus against all available references of viral genomes, and predict all of its possible characteristics, including its pathogenic potential."

The key shortcomings of NGS include the high cost of equipment and reagents needed for running such tests and the lengthy sample preparation, sequencing, and data analysis processes. These limitations, combined with high qualification requirements for lab personnel, prevent the method from being widely integrated into mainstream clinical practice. Nevertheless, the cost of the technology goes down each year, while its speed, accuracy, and efficiency keep growing.

Khafizov noted that the coronavirus pandemic has demonstrated the importance of NGS methods for identifying new pathogens in clinical samples and studying the molecular mechanisms of virus transmission from animals to humans. The technology may be certified for use in health care in the immediate future.

Credit: 
Moscow Institute of Physics and Technology

New carbon dot-based method for increasing the efficiency of solar cells and LEDs

An international group of scientists, including some from ITMO University, has proposed a method that allows for significantly increasing the efficiency of solar cells and light-emitting diodes. The scientists managed to achieve this result by augmenting the auxiliary layers of the devices responsible for electron transport rather than working with the main active layer. The work has been published in the journal Advanced Functional Materials.

The drive to protect the environment, together with sharp fluctuations in oil and gas prices, is pushing investors increasingly towards renewable energy. That is why scientists in different countries are working actively to make generating energy from renewable sources as efficient as possible. For instance, work is currently under way to increase the efficiency of solar cells, one of the most popular green energy sources in the world.

Typically, scientists work with the active layer of the cells, which is responsible for absorbing light energy; active layers are made from silicon, gallium arsenide, perovskites and other materials. But the efficiency, cost and durability of a solar cell depend not only on the active layer but also on the auxiliary ones. Increasing the auxiliary layers' efficiency while simultaneously decreasing production costs can boost a device's competitive advantages.

The auxiliary layers of a solar cell can be of an electron-transport or hole-transport type. When sunlight reaches the active layer, pairs of electrons and electron holes, in other words, a negative and a positive charge, are formed in it. After that, they need to be taken to their corresponding electrodes. And this is where the auxiliary layers come in: the electron-transport one is responsible for extracting and transferring the negative charge from the active layer, while the hole-transport layer performs the same operations with the positive one. An international group of scientists, including those from ITMO University, has proposed a new method for the creation of auxiliary layers for solar cells and light-emitting diodes based on perovskite. They used carbon dots - an environmentally friendly and relatively cheap material that can easily be obtained both in laboratory and industrial conditions.

"Carbon dots are carbon-based nanoparticles with a diameter of two to ten nanometers," explains Aleksandr Litvin, a senior research associate at ITMO University and a co-author of the research. "Their surface always contains various functional groups that largely determine the properties of this material. The application of carbon dots in solar batteries isn't something new, what's important is the modification of their surface by means of working with the functional groups. A different ratio of these groups on the surface determines the electronic configuration of carbon dots. Consequently, tailoring this allows us to get the optimal values of the working functions of the electrodes and the energy levels of the transport layers on which they are applied. This makes it possible to obtain optimal configuration with maximum efficiency. This approach is universal for different types of devices, which for the first time has allowed the use of carbon dots for increasing the operational efficiency of light-emitting diodes."

Obtained in this way, the material can be used not only for solar cells but also for the auxiliary layers of light-emitting diodes. The latter have a broadly similar structure, but the process there is reversed: electrons and holes are not removed from the active layer but, instead, injected into it in order to create electron-hole pairs whose recombination in the active layer produces luminescence. In both cases, the international colleagues of the ITMO University scientists obtained a significant increase in the efficiency of devices created with auxiliary layers made from the abovementioned carbon dots.

"Devices have been created, and their properties have been tested," concludes Aleksandr Litvin. "In the case of perovskite-based solar cells, we managed to obtain an increase in efficiency from 17.3% to 19.5%, that is, by almost 13%. In the case of light-emitting diodes, depending on the material of the emission layer, the external quantum efficiency (the ratio of the number of photons emitted by an LED to the number of electrons injected into it) increased by 2.1 - 2.7 times."

Credit: 
ITMO University

Big brains or many babies: How birds can thrive in urban environments

A new study in Frontiers in Ecology and Evolution suggests that birds have two alternative strategies for coping with the difficulties of humanity's increasingly chaotic cities - either by having large brains or through more frequent breeding.

Surviving in cities is so difficult that many bird species may be driven to extinction by the increasing urbanization of the world. But curiously, some birds cope, and even thrive, in these new environments. Understanding which species succeed and which do not has implications for conservation programs and also helps humans better understand which species they share their cities with.

"Cities are harsh environments for most species and therefore often support much lower biodiversity than natural environments," explains postdoctoral researcher Dr. Ferran Sayol of the University of Gothenburg and the Gothenburg Global Biodiversity Centre in Sweden. "The species that can tolerate cities are important because they are the ones that most humans will have contact with in their daily lives, and they can have important effects on the urban environment within our cities."

Many past studies have shown that birds with larger brains have a number of advantages. They can find new food sources and avoid human-made hazards better than smaller-brained birds. But researchers haven't yet been able to explain why some species with small brains - pigeons, for example - are also able to flourish in cities.

To understand what allows birds to adapt to urban life, Sayol and his colleagues analyzed databases containing brain and body size, maximum lifespans, global distribution and breeding frequency. They used existing databases and museum collections that contained details on more than 629 bird species across 27 cities around the world.

Their findings confirmed that brain size does play an important role, but it's not the only path to success.

"We've identified two distinct ways for bird species to become urban dwellers," explains Sayol. "On the one hand, species with large brains, like crows or gulls, are common in cities because large brain size helps them deal with the challenges of a novel environment. On the other hand, we also found that small-brained species, like pigeons, can be highly successful if they have a high number of breeding attempts over their lifetimes."

The second strategy represents an adaptation that prioritizes a species' future reproductive success over its present survival. Interestingly, their research suggests that the two strategies represent distinct ways of coping with urban environments because birds with average brain size (relative to their body) are the least likely to live in cities.

Unsurprisingly, both strategies are less common in natural environments. Researchers are working to understand how these adaptations will change the behavior and structure of urban bird communities in the future.

Sayol's study highlights that there are multiple strategies for adapting to urban habitats. When considering the impacts of our increasingly urban future on our wildlife neighbors, it will be important to consider both their reproductive strategies and their brain sizes.

"In our study, we found a general pattern, but in the future, it could be interesting to understand the exact mechanisms behind it, for instance, which aspects of being intelligent are the most useful," says Sayol. "Understanding what makes some species better able to tolerate or even exploit cities will help researchers anticipate how biodiversity will respond as cities continue to expand."

Credit: 
Frontiers