Tech

Engineers develop new fuel cells with twice the operating voltage of hydrogen fuel cells

image: The figure summarizes open circuit voltages of representative DBFCs in green and their current density at 1.5 V in orange, and distinguishes DBFCs with peak power density at high voltage (>1 V, blue columns) from those with peak power density at low voltage (below 1 V).

Image: 
Courtesy: Ramani Lab

Electrification of the transportation sector -- one of the largest consumers of energy in the world -- is critical to future energy and environmental resilience. Electrification of this sector will require high-power fuel cells (either stand alone or in conjunction with batteries) to facilitate the transition to electric vehicles, from cars and trucks to boats and airplanes.

Liquid-fueled fuel cells are an attractive alternative to traditional hydrogen fuel cells because they eliminate the need to transport and store hydrogen. They can help to power unmanned underwater vehicles, drones and, eventually, electric aircraft -- all at significantly lower cost. These fuel cells could also serve as range-extenders for current battery-powered electric vehicles, thus advancing their adoption.

Now, engineers at the McKelvey School of Engineering at Washington University in St. Louis have developed high-power direct borohydride fuel cells (DBFC) that operate at double the voltage of conventional hydrogen fuel cells. Their research was published June 17 in the journal Cell Reports Physical Science.

The research team, led by Vijay Ramani, the Roma B. and Raymond H. Wittcoff Distinguished University Professor, has pioneered a reactant-transport engineering approach: identifying an optimal range of flow rates, flow field architectures and residence times that enable high-power operation. This approach addresses key challenges in DBFCs, namely proper fuel and oxidant distribution and the mitigation of parasitic reactions.

Importantly, the team has demonstrated a single-cell operating voltage of 1.4 V or greater, double that obtained in conventional hydrogen fuel cells, with peak powers approaching 1 W/cm2. Doubling the voltage would allow for a smaller, lighter, more efficient fuel cell design, which translates to significant gravimetric and volumetric advantages when assembling multiple cells into a stack for commercial use. Their approach is broadly applicable to other classes of liquid/liquid fuel cells.
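As a rough illustration of the stack-level advantage (the 48 V stack target below is a hypothetical example; 0.7 V is simply half of the reported 1.4 V), doubling the per-cell voltage roughly halves the number of cells needed to reach a given stack voltage:

\[
N_{\text{cells}} = \frac{V_{\text{stack}}}{V_{\text{cell}}}: \qquad
\frac{48\ \text{V}}{0.7\ \text{V}} \approx 69 \text{ cells}
\quad \text{versus} \quad
\frac{48\ \text{V}}{1.4\ \text{V}} \approx 35 \text{ cells}.
\]

Fewer cells per stack is what translates into the gravimetric and volumetric advantages mentioned above.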

"The reactant-transport engineering approach provides an elegant and facile way to significantly boost the performance of these fuel cells while still using existing components," Ramani said. "By following our guidelines, even current, commercially deployed liquid fuel cells can see gains in performance."

The key to improving any existing fuel cell technology is reducing or eliminating side reactions. The majority of efforts to achieve this goal involve developing new catalysts that face significant hurdles in terms of adoption and field deployment.

"Fuel cell manufacturers are typically reluctant to spend significant capital or effort to adopt a new material," said Shrihari Sankarasubramanian, a senior staff research scientist on Ramani's team. "But achieving the same or better improvement with their existing hardware and components is a game changer."

"Hydrogen bubbles formed on the surface of the catalyst have long been a problem for direct sodium borohydride fuel cells, and it can be minimized by the rational design of the flow field," said Zhongyang Wang, a former member of Ramani's lab who earned his PhD from WashU in 2019 and is now at the Pritzker School of Molecular Engineering at the University of Chicago. "With the development of this reactant-transport approach, we are on the path to scale-up and deployment."

Ramani added: "This promising technology has been developed with the continuous support of the Office of Naval Research, which I acknowledge with gratitude. We are at the stage of scaling up our cells into stacks for applications in both submersibles and drones."

The technology and its underpinnings are the subject of a patent filing and are available for licensing.

Credit: 
Washington University in St. Louis

Without intervention, a nearly 70% reduction in the risk of stroke or death in patients with brain AVMs

Montreal, June 17, 2020--For people with a brain arteriovenous malformation, a congenital defect of the vascular system, fate has a name: stroke. To avoid this risk, patients sometimes undergo interventions to remove the malformation. But are these interventions really beneficial? Not necessarily. According to an international clinical trial, co-directed by researchers from the University of Montreal Hospital Research Centre (CRCHUM), interventional treatment--by neurosurgery, neuroradiology or radiation therapy--could be more dangerous than the disease itself.

In a study published in The Lancet Neurology, Dr. Christian Stapf, a vascular neurologist at the CHUM and the co-author of the article, and his colleagues show that the risk of having a stroke or dying falls by 68% when doctors let the malformation follow its natural course.

"In other words, the risk of patients having a stroke or dying is at least three times lower," stated Dr. Stapf, a researcher at the CRCHUM and professor at the Université de Montréal. "We wondered what was better for the patient: to remove the malformation to prevent a stroke or to live with the malformation for several years? The results of our study are clear: in the long term, standard medical care is more beneficial for the patient than any intervention. This certainly shakes up conventional thinking about how to prevent stroke in these patients."

Before joining the neurovascular program at the CHUM in 2015, Dr. Stapf worked at Lariboisière Hospital in Paris, France. He was already a principal co-author of this study and was in charge of its European component.

A second phase of the study sought to evaluate whether early surgical intervention might reduce the risk of neurological deficits. "After a five-year follow-up period, we showed that there were twice as many patients with a disabling deficit after intervention as after medical management alone," pointed out Dr. Stapf.

An Extraordinary Study

In this international clinical trial named ARUBA (acronym for A Randomized trial of Unruptured Brain AVMs), 226 adult participants with an average age of 44 were recruited between 2007 and 2013 in 39 hospital centres located in nine countries. Among the members of this collaborative network, the CHUM was the most active centre in terms of recruitment in Canada. There were two other centres in Ontario.

These volunteer patients, who had never had a stroke and whose malformation was sometimes discovered by chance, were divided into two groups: the first would get standard medical care, while the second would receive standard care combined with invasive therapies (by neurosurgery, interventional neuroradiology or radiation therapy). They were followed for average periods of between 33 and 50 months.

In 2014, under the supervision of Dr. Jean Raymond (interventional neuroradiologist), the CHUM launched TOBAS, an international study whose aim was to see whether the conclusive findings of the clinical trial ARUBA might also be valid for all patients with a neurovascular malformation, including those who had had a stroke in the past.

To date, the CHUM's neurovascular health program is the largest in Quebec and among the biggest in Canada: more than 800 stroke patients are admitted to the program every year. With its Centre de Référence des Anomalies Neurovasculaires Rares (referral centre for rare neurovascular abnormalities or iCRANIUM), the CHUM also offers a specialized multidisciplinary clinic dedicated to patients with several types of vascular malformations of the brain.

Credit: 
University of Montreal Hospital Research Centre (CRCHUM)

Brainsourcing automatically identifies human preferences

image: Brainsourcing can be applied to simple and well-defined recognition tasks. Screencap from video abstract.

Image: 
Cognitive computing research group

Monitoring electroencephalograms with the help of artificial intelligence makes it possible to determine the preferences of large groups of people from just their brain activity.

Researchers at the University of Helsinki have developed a technique that uses artificial intelligence to analyse opinions and draw conclusions from the brain activity of groups of people. This technique, which the researchers call "brainsourcing", can be used to classify images or recommend content, something that has not been demonstrated before.

Crowdsourcing is a method of breaking a complex task into smaller tasks that can be distributed to large groups of people and solved individually. For example, people can be asked whether an object can be seen in an image, and their responses are used as training data for an image recognition system. Even the most advanced image recognition systems based on artificial intelligence are not yet fully automated. Instead, training them requires the opinions of several people on the content of many sample images.

The University of Helsinki researchers experimented with the possibility of implementing crowdsourcing by analysing people's electroencephalograms (EEGs) with the help of AI techniques. Rather than asking people for their opinions, the information could be read directly from the EEG.

"We wanted to investigate whether crowdsourcing can be applied to image recognition by utilising the natural reactions of people without them having to carry out any manual tasks with a keyboard or mouse," says Academy Research Fellow Tuukka Ruotsalo from the University of Helsinki.

Computers classify images

In the study, a total of 30 volunteers were shown images of human faces on a computer display. The participants were instructed to label the faces in their mind based on what was portrayed in the images. For example, whether an image portrayed a blond or dark-haired individual, or a person smiling or not smiling. Unlike in conventional crowdsourcing tasks, they did not provide any additional information using the mouse or keyboard - they simply observed the images presented to them.

Meanwhile, the brain activity of each participant was collected using electroencephalography. From the EEGs, the AI algorithm learned to recognise images relevant to the task, such as when an image of a blond person appeared on-screen.

In the results of the experiment, the computer was able to interpret these mental labels directly from the EEG. The researchers concluded that brainsourcing can be applied to simple and well-defined recognition tasks. Highly reliable labelling results were already achieved using data collected from 12 volunteers.
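To make the idea concrete, here is a minimal, hypothetical sketch of a brainsourcing pipeline - not the authors' actual code: a simple classifier is calibrated on each participant's EEG responses to images with known labels, and the per-participant predictions for new images are then pooled by majority vote. The data layout, function names and choice of classifier (a linear discriminant analysis from scikit-learn) are assumptions made purely for illustration.

```python
# Hypothetical sketch of brainsourcing (not the study's pipeline).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def flatten(epochs):
    """Turn (n_epochs, n_channels, n_samples) EEG epochs into a 2-D feature matrix."""
    return epochs.reshape(len(epochs), -1)

def brainsource(calibration, unlabeled):
    """calibration: per participant, a tuple (epochs, labels) with labels 0/1,
    recorded while viewing images whose correct label is known.
    unlabeled: per participant, a tuple (epochs, image_ids) for new images.
    Returns {image_id: pooled_label} aggregated across all participants."""
    votes = {}
    for (cal_x, cal_y), (new_x, ids) in zip(calibration, unlabeled):
        clf = LinearDiscriminantAnalysis()       # one lightweight model per participant
        clf.fit(flatten(cal_x), cal_y)           # calibrate on that person's EEG
        for img, pred in zip(ids, clf.predict(flatten(new_x))):
            votes.setdefault(img, []).append(pred)
    # Majority vote across participants: pooling many brains is the "crowd" part.
    return {img: int(np.mean(v) > 0.5) for img, v in votes.items()}
```

In the study itself, reliable labels already emerged once responses from about a dozen participants were pooled.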

User-friendly techniques on the way

The findings can be utilised in various interfaces that combine brain and computer activity. These interfaces would require the availability of lightweight and user-friendly EEG equipment in the form of wearable electronics, as opposed to the equipment used in the study, which requires a trained technician. Lightweight wearables that measure EEG are actively being developed and may be available sometime in the near future.

"Our approach is limited by the technology available," says Keith Davis, a student and research assistant at the University of Helsinki.

"Current methods to measure brain activity are adequate for controlled setups in a laboratory, but the technology needs to improve for everyday use. Additionally, these methods only capture a very small percentage of total brain activity. As brain imaging technologies improve, it may become possible to capture preference information directly from the brain. Instead of using conventional ratings or like buttons, you could simply listen to a song or watch a show, and your brain activity alone would be enough to determine your response to it."

Credit: 
University of Helsinki

Women commuting during rush hour are exposed to higher levels of pollutants

image: Krall and colleagues conducted the first study to use personal air pollution monitors with vehicle monitors to measure women's exposure to fine particulate matter air pollution (PM2.5), one pollutant emitted by traffic. From left to right: Nada Adibah, Dr. Jenna Krall, Gabriella Cuevas.

Image: 
Photo by Lathan Goumas/Mason Strategic Communications

Studies have shown associations between exposure to traffic-related air pollution and adverse health outcomes, including preterm birth and low birthweight. However, few studies have estimated real-world exposures during personal vehicle trips for women commuters.

New research led by George Mason University’s College of Health and Human Services faculty found higher exposure to harmful pollutants during rush hour trips compared to other settings. Dr. Jenna Krall led the study published in Environmental Research.

Krall and colleagues conducted the first study to use personal air pollution monitors with vehicle monitors to measure women’s exposure to fine particulate matter air pollution (PM2.5), one pollutant emitted by traffic. They collected data across 48-hour periods among their sample of 46 women with a mean age of 26 commuting in the Washington, D.C. metropolitan area.

“Women frequently have different commute patterns compared to men, for example due to increased trips for household errands and/or transporting children,” explains Krall. “With this difference and adverse birth outcomes found in previous research, we believed it was important to focus on this population.”

The researchers did not find differences in PM2.5 exposures based on trip length, possibly because trip length does not reflect factors, such as traffic volume, that drive exposures.

“Reducing vehicle trips might be one way to reduce PM2.5 exposures, and subsequently air pollution associated health effects,” explains Krall. “This is particularly important for vulnerable populations, such as pregnant women.”

In future work, the researchers plan to use additional vehicle data such as traffic and speed to better understand these exposures.

This study was made possible by a multidisciplinary seed grant from George Mason University and the Thomas F. and Kate Miller Jeffress Memorial Trust, Bank of America, Trustee.

Zimako Chuks contributed to this story.

About George Mason University

George Mason University is Virginia's largest and most diverse public research university. Located near Washington, D.C., Mason enrolls 38,000 students from 130 countries and all 50 states. Mason has grown rapidly over the past half-century and is recognized for its innovation and entrepreneurship, remarkable diversity and commitment to accessibility. For more information, visit https://www2.gmu.edu/.

About the College of Health and Human Services

George Mason University's College of Health and Human Services prepares students to become leaders and shape the public's health through academic excellence, research of consequence and interprofessional practice. The College enrolls 1,917 undergraduate students and 950 graduate students in its nationally recognized offerings, including 5 undergraduate degrees, 12 graduate degrees, and 11 certificate programs. The College is transitioning to a college of public health in the near future. For more information, visit https://chhs.gmu.edu/.

Journal

Environmental Research

DOI

10.1016/j.envres.2020.109644

Credit: 
George Mason University

Wind farms on the Black Sea coast could endanger bat populations in Eastern Europe

image: A victim of a wind power plant

Image: 
Christian Voigt, IZW

The Via Pontica, an important migration route for birds in Eastern Europe, runs along the Black Sea coast of Romania and Bulgaria. Bats also use this route. In this region, numerous wind farms have been installed in recent years because of good wind conditions, but there has been little implementation of the legally required measures for the protection of bats. A Romanian research team cooperated with the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) in Berlin to demonstrate that this leads to high death rates of migrating bats and potentially large declines even in populations living far away in other countries. The scientists therefore recommend the widespread introduction of turn-off times during the migration months, which - as the team was able to show in a local wind farm - would massively decrease bat mortality yet produce only a marginal loss in the energy production of the turbines.

Over the past ten years, the coastal region west of the Black Sea developed into a hotspot for wind energy production in Eastern Europe. Favourable wind conditions make the operation of wind farms particularly profitable in this area. Moreover, the region is relatively sparsely populated, so conflicts between operators and residents are rare.

Because of the good wind conditions, there is also an important migration route for numerous bird and bat species - the so-called Via Pontica, named after a historical Roman road connection. This area should therefore particularly benefit from conservation measures such as carefully chosen turbine turn-off times in order to keep bat mortality to a minimum. International agreements such as the EUROBATS agreement and the European Union's Fauna Flora Habitat Directive do already provide the legal framework for that. Currently, few protective measures have been implemented and monitoring of the populations is limited or absent in many regions in Eastern Europe. Accordingly, little is known about the impact of wind farms on bat populations.

A Romanian research team, in cooperation with the Leibniz-IZW, investigated bat fatalities in a local wind farm over a period of four years. The wind farm, comprising twenty turbines, is located in the Romanian part of the Dobruja, a historical coastal region between the Danube and the Black Sea that spans the border between Romania and Bulgaria. Over the four years, the scientists collected a total of 166 dead bats of 10 different species in the wind farm. Carcasses of Nathusius' pipistrelle bats (Pipistrellus nathusii) and common noctule bats (Nyctalus noctula) were particularly common. Since many bats had open wounds and/or broken wing bones, they most likely died as a result of direct collisions with rotating rotor blades. About half as many animals died without direct collision, from barotrauma - usually fatal lung injuries caused by the large differences in air pressure close to the rotor blades.

Since the searches were selective spot checks, the scientists extrapolated the total loss of animals, taking into account the search pattern, search times and other factors such as the likely removal of carcasses by foxes and stray dogs. According to this projection, 2,394 bats died in this wind farm over the period of four years - or, in other words, 30 bats per wind turbine and year, or 14.2 bats per megawatt and year. "This fatality rate is extremely high," says Dr Christian Voigt, head of the Department of Evolutionary Ecology at the Leibniz-IZW. "For comparison: the highest fatality rates in Central Europe or the USA are 10 bats per megawatt and year." A stable isotope analysis of fur samples carried out at the Leibniz-IZW additionally revealed that 90 percent of the bats came from distant regions in the north and northeast, including Ukraine, Belarus and Russia.
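The headline rates follow directly from this projection and the wind farm's size (twenty turbines; a total capacity of 42 megawatts, as noted below):

\[
\frac{2394\ \text{bats}}{20\ \text{turbines} \times 4\ \text{years}} \approx 30\ \text{bats per turbine and year},
\qquad
\frac{2394\ \text{bats}}{42\ \text{MW} \times 4\ \text{years}} \approx 14.2\ \text{bats per MW and year}.
\]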

"This has given us a clear picture of what happened," explains Christian Voigt. "The fatality rate is so high because the wind farm under investigation is located in the middle of an important migration route for bats. Thus, the wind farm does not only negatively affect local bats, but also distant populations. This makes it all the more important to take appropriate measures to keep the fatality rate as low as possible, especially on such migration routes".

The scientists have already shown at this site that this is relatively easy to achieve. During the summer months, when there is a high level of bat migration, the operator - in consultation with the scientists - increased the threshold wind speed at which the turbines start up (the cut-in speed) to 6.5 metres per second. As a result, the fatality rate fell massively by 78 percent. "The energy production of the wind farm decreased by only 0.35 percent per year, which is a marginal loss for the operator," says Christian Voigt.

"This wind farm has a total capacity of 42 megawatts and is rather small," the Leibniz-IZW scientist continues. "The wind farms throughout the Dobruja region have a total capacity of at least 3,000 megawatts in operation. The total number of fatalities across this area could lead to a long-term decline in local bat populations as well as those from large parts of Eastern Europe. We therefore strongly recommend the widespread introduction of turn-off times and higher cut-in speeds. This will cost the operators almost nothing and could make the Via Pontica once again a largely safe flight path for bats."

Credit: 
Forschungsverbund Berlin

Data Security in Website Tracking

image: When browsing on the Internet, companies collect data not only about accessed websites, but also about the time of access or location information. (Photo: Amadeus Bramsiepe, Markus Breig, KIT)

Image: 
Amadeus Bramsiepe, Markus Breig, KIT

Tracking of our browsing behavior is part of the daily routine of Internet use. Companies use it to tailor ads to the personal needs of potential customers or to measure their reach. Many providers of tracking services advertise secure data protection, claiming that generalizing datasets anonymizes the data. Computer scientists of Karlsruhe Institute of Technology (KIT) and Technische Universität Dresden (TUD) have now studied how secure this method is and reported their findings in a scientific paper for the IEEE Security and Privacy Conference.

Tracking services collect large amounts of data about Internet users. These data include the websites accessed, but also information on the end devices used, the time of access (timestamp) and location. "As these data are highly sensitive and can easily be linked to individuals, many companies use generalization to apparently anonymize them and to bypass data protection regulations," says Professor Thorsten Strufe, head of the "Practical IT Security" research group at KIT. Generalization reduces the level of detail of the information so that identifying individuals is supposed to become impossible. For example, location information is restricted to the region, the time of access is limited to the day, or the IP address is truncated by a few digits. Strufe, together with his team and colleagues at TUD, has now studied whether this method really prevents conclusions from being drawn about individuals.

With the help of a large volume of metadata from German websites, covering 66 million users and over 2 billion page views, the computer scientists succeeded in drawing conclusions not only about the websites accessed, but also about the chains of page views, the so-called click traces. The data were made available by INFOnline GmbH, an organization that measures online audience reach in Germany.

The Sequence of Page Views Is Highly Revealing

"To test the effectiveness of generalization, we analyzed two application scenarios," Strufe says. "First, we checked all click traces for uniqueness. If a click trace, that is the course of several successive page views, can be distinguished clearly from others, it is no longer anonymous." It was found that information on the website accessed and the browser used has to be removed completely from the data to prevent conclusions to be drawn with respect to persons. "The data will only become anonymous, when the sequences of single clicks are shortened, which means that they are stored without any context, or when all information, except for the timestamp, is removed," Strufe says. "Even if the domain, the allocation to a subject, such as politics or sports, and the time are stored on a daily basis only, 35 to 40 percent of the data can be assigned to individuals." For this scenario, the researchers found that generalization does not correspond to the definition of anonymity.

A Few Observations Are Sufficient to Identify User Profiles

In addition, the researchers checked whether even subsets of a click trace allow conclusions to be drawn about individuals. "We linked the generalized information from the database to other observations, such as links shared on social media or in chats. If, for example, the time is generalized to the minute, a single observation is sufficient to clearly assign 20 percent of the click traces to a person," says Clemens Deusser, a doctoral researcher in Strufe's team who was largely involved in the study. "Another two observations increase the success rate to more than 50 percent. It is then easy to see from the database which other websites the person accessed and which contents they viewed." Even if the timestamp is stored only with the precision of a day, just five additional observations are needed to identify the person.

"Our results suggest that simple generalization is not suited for effectively anonymizing web tracking data. The data remain sharp to the person and anonymization is ineffective. To reach effective data protection, methods extending far beyond have to be applied, such as noise by the random insertion of minor misobservations into the data," Strufe recommends.

Credit: 
Karlsruher Institut für Technologie (KIT)

A step forward in solving the reactor-neutrino flux problem

image: Upper panel: Comparison of the computed (red line) and measured (black dots) spectral shapes for the decay of Xe-137. Lower panel: black dots indicate the deviation of the computed points from the data points.

Image: 
Igor Ostrovskiy/University of Alabama

A joint effort of the nuclear theory group at the University of Jyvaskyla and the international EXO-200 collaboration paves the way for solving the reactor antineutrino flux problems. The EXO-200 collaboration consists of researchers from 26 laboratories, and the experiment is designed to measure the mass of the neutrino. As a by-product of the experiment's calibration efforts, the electron spectral shape of the beta decay of Xe-137 could be measured. This particular decay is optimally well suited for testing a theoretical hypothesis to solve the long-standing and persistent reactor antineutrino anomaly. The results of the spectral-shape measurements were published in Physical Review Letters (June 2020).

Nuclear reactors are driven by fissioning uranium and plutonium fuel. The neutron-rich fission products decay by beta decay towards the beta-stability line by emitting electrons and electron antineutrinos. Each beta decay produces a continuous energy spectrum for the emitted electrons and antineutrinos up to a maximum energy (beta end-point energy).

The number of emitted electrons for each electron energy constitutes the electron spectral shape and the complement of it describes the antineutrino spectral shape.
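For orientation, the textbook form of an allowed beta spectrum (standard nuclear physics, not a formula quoted from the paper) makes this electron-antineutrino complementarity explicit:

\[
\frac{dN}{dT} \;\propto\; F(Z,T)\, p\, W\, (Q-T)^2\, C(T), \qquad E_{\bar\nu} \simeq Q - T,
\]

where T is the electron kinetic energy, p and W its momentum and total energy, Q the end-point energy, F(Z,T) the Fermi function, and C(T) a shape factor that is essentially constant for allowed decays but energy dependent for forbidden transitions - the quantity at the heart of the correction discussed below.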

Nuclear reactors emit antineutrinos with an energy distribution that is the sum of the antineutrino spectral shapes of all the beta decays occurring in the reactor. This energy distribution has been measured by large neutrino-oscillation experiments. It has also been constructed independently from the available nuclear data on the beta decays of the fission products.

The established reference for this construction is the Huber-Mueller (HM) model. Comparison of the HM-predicted antineutrino energy spectrum with that measured by the oscillation experiments revealed a deficit in the number of measured antineutrinos and an additional "bump", an extra increase in the measured number of antineutrinos between 4 and 7 MeV of antineutrino energy. The deficit was dubbed the reactor antineutrino anomaly, or the flux anomaly, and has been associated with the oscillation of ordinary neutrinos into so-called sterile neutrinos, which do not interact with ordinary matter and thus disappear from the antineutrino flux emitted by the reactors. Until recently there was no convincing explanation for the appearance of the bump in the measured antineutrino flux.

Only recently has a potential explanation for the flux anomaly and the bump been discussed quantitatively. The flux deficit and the bump could be associated with the omission of accurate spectral shapes of the so-called first-forbidden non-unique beta decays, which were taken into account for the first time in the so-called "HKSS" flux model (named after the first letters of the surnames of the authors of the related article: L. Hayen, J. Kostensalo, N. Severijns and J. Suhonen).

How to verify that the HKSS flux and bump predictions are reliable?

"One way is to measure the spectral shapes of the key transitions and compare with the HKSS predictions. These measurements are extremely hard but recently a perfect test case could be measured by the renowned EXO-200 collaboration and comparison with our theory group's predictions could be achieved in a joint publication [AlKharusi2020]. A perfect match of the measured and theory-predicted spectral shape was obtained, thus supporting the HKSS calculations and its conclusions. Further measurements of spectral shapes of other transitions could be anticipated in the (near) future", says Professor Jouni Suhonen from the Department of Physics at the University of Jyvaskyla.

Credit: 
University of Jyväskylä - Jyväskylän yliopisto

FSU researchers uncover new insights into Alzheimer's disease

A new study by Florida State University researchers may help answer some of the most perplexing questions surrounding Alzheimer's disease, an incurable and progressive illness affecting millions of families around the globe.

FSU Assistant Professor of Psychology Aaron Wilber and graduate student Sarah Danielle Benthem showed that the way two parts of the brain interact during sleep may explain symptoms experienced by Alzheimer's patients, a finding that opens new doors in dementia research. These interactions during sleep are believed to allow memories to form, so the failure of this normal system in the brain of a person with Alzheimer's disease may explain why memory is impaired.

The study, a collaboration among the FSU Program in Neuroscience, the University of California, Irvine, and the University of Lethbridge in Alberta, Canada, was published online in the journal Current Biology and will appear in the publication's July 6 issue.

"This research is important because it looks at possible mechanisms underlying the decline of memory in Alzheimer's disease and understanding how it causes memory decline could help identify treatments," Benthem said.

Wilber and Benthem's study, based on measuring brain waves in mouse models of the disease, gave researchers a number of new insights into Alzheimer's, including how interactions between two parts of the brain -- the parietal cortex and the hippocampus -- during sleep may contribute to symptoms experienced by Alzheimer's patients, such as impaired memory and cognition and getting lost in new surroundings.

The team had examined a phenomenon known as memory replay -- the playing back of activity patterns from waking experience in subsequent sleep periods -- in a mouse model of Alzheimer's disease as a potential cause of impaired spatial learning and memory.

During these memory replay periods, they found that the mice modeling aspects of Alzheimer's Disease in humans had impaired functional interactions between the hippocampus and the parietal cortex.

The hippocampal formation is crucial for the storage of "episodic" memories -- a type of long-term memory of a past experience -- and is thought to be important for assisting other parts of the brain in extracting generalized knowledge from these personal experiences.

"Surprisingly, a better predictor of performance and the first impairment to emerge was not 'memory replay' per se, but was instead the relative strength of the post-learning coupling between two brain regions known to be important for learning and memory: the hippocampus and the parietal cortex," Wilber said.

According to the Alzheimer's Association, more than 47 million people worldwide are living with the disease, a number projected to soar to 76 million over the next decade. It is currently the sixth-leading cause of death in the U.S., affecting one out of every 10 people ages 65 and older.

Credit: 
Florida State University

'SlothBot in the Garden' demonstrates hyper-efficient conservation robot

image: SlothBot is a slow-moving and energy-efficient robot that can linger in the trees to monitor animals, plants, and the environment below. It has been installed for testing in the Atlanta Botanical Garden.

Image: 
Rob Felt, Georgia Tech

For the next several months, visitors to the Atlanta Botanical Garden will be able to observe the testing of a new high-tech tool in the battle to save some of the world's most endangered species. SlothBot, a slow-moving and energy-efficient robot that can linger in the trees to monitor animals, plants, and the environment below, will be tested near the Garden's popular Canopy Walk.

Built by robotics engineers at the Georgia Institute of Technology to take advantage of the low-energy lifestyle of real sloths, SlothBot demonstrates how being slow can be ideal for certain applications. Powered by solar panels and using innovative power management technology, SlothBot moves along a cable strung between two large trees as it monitors temperature, weather, carbon dioxide levels, and other information in the Garden's 30-acre midtown Atlanta forest.

"SlothBot embraces slowness as a design principle," said Magnus Egerstedt, professor and Steve W. Chaddick School Chair in the Georgia Tech School of Electrical and Computer Engineering. "That's not how robots are typically designed today, but being slow and hyper-energy efficient will allow SlothBot to linger in the environment to observe things we can only see by being present continuously for months, or even years."

About three feet long, SlothBot's whimsical 3D-printed shell helps protect its motors, gearing, batteries, and sensing equipment from the weather. The robot is programmed to move only when necessary, and will locate sunlight when its batteries need recharging. At the Atlanta Botanical Garden, SlothBot will operate on a single 100-foot cable, but in larger environmental applications, it will be able to switch from cable to cable to cover more territory.

"The most exciting goal we'll demonstrate with SlothBot is the union of robotics and technology with conservation," said Emily Coffey, vice president for conservation and research at the Garden. "We do conservation research on imperiled plants and ecosystems around the world, and SlothBot will help us find new and exciting ways to advance our research and conservation goals."

Supported by the National Science Foundation and the Office of Naval Research, SlothBot could help scientists better understand the abiotic factors affecting critical ecosystems, providing a new tool for developing information needed to protect rare species and endangered ecosystems.

"SlothBot could do some of our research remotely and help us understand what's happening with pollinators, interactions between plants and animals, and other phenomena that are difficult to observe otherwise," Coffey added. "With the rapid loss of biodiversity and with more than a quarter of the world's plants potentially heading toward extinction, SlothBot offers us another way to work toward conserving those species."

Inspiration for the robot came from a visit Egerstedt made to a vineyard in Costa Rica where he saw two-toed sloths creeping along overhead wires in their search for food in the tree canopy. "It turns out that they were strategically slow, which is what we need if we want to deploy robots for long periods of time," he said.

A few other robotic systems have already demonstrated the value of slowness. Among the best known are the Mars Exploration Rovers that gathered information on the red planet for more than a dozen years. "Speed wasn't really all that important to the Mars Rovers," Egerstedt noted. "But they learned a lot during their leisurely exploration of the planet."

Beyond conservation, SlothBot could have applications for precision agriculture, where the robot's camera and other sensors, traveling on overhead wires, could provide early detection of crop diseases, measure humidity, and watch for insect infestation. After testing in the Atlanta Botanical Garden, the researchers hope to move SlothBot to South America to observe orchid pollination or the lives of endangered frogs.

The research team, which includes Ph.D students Gennaro Notomista and Yousef Emam, undergraduate student Amy Yao, and postdoctoral researcher Sean Wilson, considered multiple locomotion techniques for the SlothBot. Wheeled robots are common, but in the natural world they can easily be defeated by obstacles like rocks or mud. Flying robots require too much energy to linger for long. That's why Egerstedt's observation of the wire-crawling sloths was so important.

"It's really fascinating to think about robots becoming part of the environment, a member of an ecosystem," he said. "While we're not building an anatomical replica of the living sloth, we believe our robot can be integrated to be part of the ecosystem it's observing like a real sloth."

The SlothBot launched in the Atlanta Botanical Garden is the second version of a system originally reported in May 2019 at the International Conference on Robotics and Automation. That robot was a much smaller laboratory prototype.

Beyond their conservation goals, the researchers hope SlothBot will provide a new way to stimulate interest in conservation from the Garden's visitors. "This will help us tell the story of the merger between technology and conservation," Coffey said. "It's a unique way to engage the public and bring forward a new way to tell our story."

And that should be especially interesting to children visiting the Garden.

"This new way of thinking about robots should trigger curiosity among the kids who will walk by it," said Egerstedt. "Thanks to SlothBot, I'm hoping we will get an entirely new generation interested in what robotics can do to make the world better."

Credit: 
Georgia Institute of Technology

Reliable, High-speed MTJ Technology for 1X nm STT-MRAM and NV-Logic Has Wide Applications

image: Figure 1: (a) Schematic and (b) TEM image of the quad-interface MTJ structure developed in this study.

Image: 
Tohoku University

Professor Tetsuo Endoh, leading a group of researchers at Tohoku University, has announced the development of an MTJ (Magnetic Tunnel Junction) with 10 ns high-speed write operation, sufficient endurance (>10^11 cycles), and highly reliable data retention over 10 years at the 1X nm size. Realizing 1X nm STT-MRAM (Spin Transfer Torque-Magnetoresistive Random Access Memory) and NV (Non-Volatile) Logic has wide application to a variety of fields.

Because STT-MRAM and NV-Logic with MTJ/CMOS hybrid technology offer low power consumption, they are essential constituents in semiconductor memory and logic such as processors. To put spintronics technology to practical use, higher speed write operation, lower power consumption, and greater endurance are required. Additional needs include data retention exceeding 10 years, a higher operation temperature, and excellent scalability. However, there has been a significant problem with data retention, which is often achieved at the expense of operational performance such as write speed, write power, endurance and so on. This problem has seriously limited the application field of STT-MRAM and NV-Logic.

For the application of 1X nm node STT-MRAM and NV-Logic to a wide variety of fields, the research team developed a new MTJ stack design technology and highly reliable fabrication technology for Quad interface type iPMA-MTJ (Quad-MTJ).

Using the new technologies - first proposed and demonstrated by the same team last year - resulted in a successful fabrication of the advanced Quad-MTJ (Figs. 1a and 1b). The research team has now demonstrated that the write current density of the Quad-MTJ can be reduced by over 20% at a 10 ns high-speed write operation in comparison with the conventional Double-MTJ (Fig. 2a) - even though the thermal stability factor of the Quad-MTJ is 2 times larger than that of the Double-MTJ (Fig. 2b). In other words, the data retention of the Quad-MTJ can be maintained for a period exceeding 10 years and at a higher operating temperature than the Double-MTJ. Moreover, the Quad-MTJ achieved satisfactory endurance (over 10^11 cycles, Fig. 3), performing better than the Double-MTJ, even though its data retention is superior.
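For context, the link between the thermal stability factor Δ and data retention is usually described by the textbook Arrhenius-type relation (a standard rule of thumb, not a figure taken from this announcement):

\[
t_{\text{retention}} \approx \tau_0\, e^{\Delta}, \qquad \tau_0 \approx 1\ \text{ns},
\]

so a single bit needs a Δ of roughly ln(10 years / 1 ns) ≈ 40 for 10-year retention, with a larger margin required across a full memory array. Doubling Δ while simultaneously cutting the write current density is what eases the usual trade-off between retention and write performance.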

The research team states that the advanced Quad-MTJ overcomes the central issue of the conventional Double-MTJ: the dilemma between data retention and various aspects of operational performance such as write speed, write power and endurance.

As a result, the developed Quad-MTJ technologies, 1X nm STT-MRAM and NV-Logic with MTJ/CMOS hybrid technology, will open the way to new spintronics-based LSI suitable for a wide range of applications: low-end fields (such as IoT systems and sensor network systems); high-end fields (such as AI systems and image processing systems); and fields requiring tolerance of harsher environments (such as automotive parts, production facility systems and so on).

This research was supported by CIES's Industrial Affiliation with the STT MRAM program in the CIES Consortium of Tohoku University and CAO-SIP.

Results will be presented at this year's Symposia on VLSI Technology and Circuits as a virtual conference from June 14-19, 2020. In addition, the study was included in the "Technical Highlights from the 2020 Symposia on VLSI Technology & Circuits."

Credit: 
Tohoku University

Classes set by ability are hitting children's self-confidence, study finds

The way a vast number of schools are set up, with classes that group children by ability, is severely affecting pupils' self-confidence.

This is according to a substantial new study by experts from the UCL Institute of Education, Queen's University Belfast and Lancaster University, who looked at more than 9,000 12-to-13-year-old students taking part in 'setted' maths and English classes (classes grouped by children's ability).

The team, who published their results in the British Journal of Sociology of Education, found that not only is there a "worrying" self-confidence gap between students in the top and bottom sets, but, for those in maths sets, the gap in general self-confidence in fact widens over time - something the report states is "deeply concerning".

Commenting on their findings, Professor Jeremy Hodgen of UCL Institute of Education stated that the study has "potentially important implications for social justice", with the growing gap risking "cementing existing inequalities rather than dissipating them".

"Low attainers are being ill-served in schools that apply setting, and low attainment groups are shown to be disproportionately populated by pupils from low socio-economic backgrounds and from particular ethnic groups.

"Our results have important implications for interventions directed at addressing disadvantage in education.

"In terms of social in/justice, our findings suggest that setting is indeed promoting both distributional and recognitive injustice."

The research was undertaken via student surveys in 139 UK secondary schools (divided into intervention and control groups), and involved working with and monitoring student cohorts from the beginning of Year 7 (11-12 years old) to the end of Year 8 (12-13 years old), focusing on their experiences and outcomes in English and mathematics.

The analysis shows that, compared with two years previously, students placed in the top set tended to have higher self-confidence in mathematics or English, while those placed in the bottom set in mathematics had significantly lower self-confidence than an average student in the middle set. The same pattern held for general self-confidence in mathematics and for those in the top set in English - and, crucially, it remained after controlling for attainment level.

In other cases the trend was weaker, but in no case was it reversed.

Dr Becky Taylor of the IOE added that the labels associated with ability-based classes affect children's self-perception in relation to their learning and subject identification, as well as their feelings about themselves as learners and about their place in school.

"We do not think it unreasonable to hypothesise that these trends in self-confidence likely impact on pupils' dis/associations with schooling, and in turn on pupils' perceptions of their futures.

"The 'ability set' label snowballs as it builds momentum and impact via the various practices, understandings and behaviours on the part of the pupil, on teachers, parents, peers, and therefore the school and its practices."

The report acknowledges that more research is now needed to further understand how self-confidence shapes children's futures, and recognises that there may also be a range of psychological factors and processes that mediate the effects between the receipt of an 'ability label' via tracking and self-confidence in learning.

"We recognise that there may be other issues associated with bottom set groups that might also impede the development of self-confidence over time, such as absenteeism or exclusion - albeit it is worth noting that these may also be precipitated by designation to a bottom set group and the disassociation with schooling entailed," Professor Hodgen concluded.

Credit: 
Taylor & Francis Group

Repeated coughing seriously degrades face mask efficiency

video: A subject coughing in repeated cycles, from a qualitative examination of airborne droplet transmission with and without a surgical mask

Image: 
Talib Dbouk and Dimitris Drikakis

WASHINGTON, June 16, 2020 -- Face masks are thought to slow the spread of viruses, including the novel coronavirus that causes COVID-19, but little is known about how well they work.

In an issue of Physics of Fluids, by AIP Publishing, Talib Dbouk and Dimitris Drikakis, from the University of Nicosia in Cyprus, use precise computer models to map out the expected flow patterns of small droplets released when a mask-wearing person coughs repeatedly.

Previous work from this research group showed droplets of saliva can travel 18 feet in five seconds when an unmasked person coughs. This new work used an extended model to consider the effect of face masks and multiple cycles of coughing.

The results show masks can reduce airborne droplet transmission. However, the filtering efficiency of masks is adversely affected by repeated coughing, as might happen when an individual is ill. Repeated coughs reduce the efficiency, letting many more droplets through.

The model was created using complex mathematical equations for turbulence and other flow effects. A sequence of coughs was modeled by applying several cycles of forward-directed velocity pulses to the initial droplets. The researchers performed numerical simulations that account for droplet interactions with the porous filter in a surgical mask.

The results are alarming. Even when a mask is worn, some droplets can travel a considerable distance, up to 1 meter, during periods of mild coughing. Without a mask, however, droplets travel twice as far, so wearing a mask will help. A mask also decreases the number of droplets that leak out its sides, but it fails to eliminate this leakage entirely.

These calculations also revealed an effect on the droplet size due to turbulent flow encountering the mask, escaping and entering the environment.

"The droplet sizes change and fluctuate continuously during cough cycles as a result of several interactions with the mask and face," said Drikakis.

Dbouk explained how droplet sizes might change. "Masks decrease the droplet accumulation during repeated cough cycles," Dbouk said. "However, it remains unclear whether large droplets or small ones are more infectious."

"The use of a mask will not provide complete protection," Drikakis said. "Therefore, social distancing remains essential." For health care workers, the investigators recommend much more complete personal protective equipment, including helmets with built-in air filters, face shields, disposable gowns and double sets of gloves.

The investigators also urge manufacturers and regulatory authorities to consider new criteria for assessing mask performance that account for flow physics and cough dynamics. They also proposed a new criterion for mask performance assessment.

Credit: 
American Institute of Physics

Quantum material research facilitates discovery of better materials that benefit our society

image: Spin texture and vortex in quantum magnet TMGO when the material is inside the topological KT phase.

Image: 
The University of Hong Kong

A joint research team from the University of Hong Kong (HKU), the Institute of Physics at the Chinese Academy of Sciences, Songshan Lake Materials Laboratory, Beihang University in Beijing and Fudan University in Shanghai has provided a successful example of modern-era quantum material research. By means of state-of-the-art quantum many-body simulations, performed on some of the world's fastest supercomputers (Tianhe-I and the Tianhe-III prototype at the National Supercomputer Center in Tianjin, and Tianhe-II at the National Supercomputer Center in Guangzhou), they achieved accurate model calculations for the rare-earth magnet TmMgGaO4 (TMGO). They found that the material, in the correct temperature regime, could realise the long-sought-after two-dimensional topological Kosterlitz-Thouless (KT) phase, completing the half-century pursuit of identifying KT physics in quantum magnetic materials. The research work has been published in Nature Communications.

Quantum materials are becoming a cornerstone of the continued prosperity of human society. Investigations in this arena range from next-generation AI computing chips that go beyond Moore's law - the observation that the number of transistors in a dense integrated circuit doubles roughly every two years, on which the success of our PCs and smartphones is built, but which is expected to break down soon as transistors shrink to the nanometre scale and electron behaviour becomes governed by quantum mechanics - to high-speed maglev trains and topological units for quantum computers.

However, such research is by no means easy. The difficulty lies in the fact that scientists have to treat the enormous number of electrons in a material quantum mechanically (hence quantum materials are also called quantum many-body systems). This is far beyond what can be done with paper and pencil and instead requires modern quantum many-body computational techniques and advanced analysis. Thanks to the fast development of supercomputing platforms all over the world, scientists and engineers are now making great use of these computation facilities and advanced mathematical tools to discover better materials that benefit our society.

The research is inspired by the theory of the KT phase advocated by J Michael Kosterlitz, David J Thouless and F Duncan M Haldane, laureates of the Nobel Prize in Physics 2016, who were awarded the prize for their theoretical discoveries of topological phases and phase transitions of matter. Topology is a new way of classifying and predicting the properties of materials in condensed matter physics, and is now becoming the mainstream of quantum material research and industry, with broad potential applications in quantum computing, lossless transmission of signals for information technology, and more. Back in the 1970s, Kosterlitz and Thouless predicted the existence of a topological phase - hence named the KT phase after them - in quantum magnetic materials. However, although such phenomena have been found in superfluids and superconductors, the KT phase had yet to be realised in a bulk magnetic material.
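For context, the canonical setting of KT physics (textbook background, not the specific microscopic model used for TMGO in this work) is the two-dimensional XY model,

\[
H = -J \sum_{\langle i,j \rangle} \cos\!\left(\theta_i - \theta_j\right),
\]

in which bound vortex-antivortex pairs unbind at the KT temperature, separating a low-temperature phase with quasi-long-range order from the disordered phase above it - the kind of topological transition the team searched for in TMGO.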

The joint team is led by Dr Zi Yang Meng from HKU, Dr Wei Li from Beihang University and Professor Yang Qi from Fudan University. Their joint effort has revealed the comprehensive properties of the material TMGO. For example, as shown in Figure 2, by means of self-adjustable tensor network calculations they computed the properties of the model system at different temperatures and magnetic fields, and by comparing these with the corresponding experimental results for the material, they identified the correct microscopic model parameters. With the correct microscopic model in hand, they then performed quantum Monte Carlo simulations and obtained the neutron scattering magnetic spectra at different temperatures (neutron scattering is the established method for probing material structure and magnetic properties; the facility closest to Hong Kong is the China Spallation Neutron Source in Dongguan, Guangdong). As shown in Figure 3, the magnetic spectrum, with its unique signature at the M point, is the dynamical fingerprint of the topological KT phase proposed more than half a century ago.

"This research work provides the missing piece of topological KT phenomena in the bulk magnetic materials, and has completed the half-a-century pursuit which eventually leads to the Nobel Physics Prize of 2016. Since the topological phase of matter is the main theme of condensed matter and quantum material research nowadays, it is expected that this work will inspire many follow-up theoretical and experimental researches, and in fact, promising results for further identification of the topological properties in quantum magnet have been obtained among the joint team and our collaborators," said Dr Meng.

Dr Meng added: "The joint team research across Hong Kong, Beijing and Shanghai also sets up the protocol of modern quantum material research, such protocol will certainly lead to more profound and impactful discoveries in quantum materials. The computation power of our smartphone nowadays is more powerful than the supercomputers 20 years ago, one can optimistically foresee that with the correct quantum material as the building block, personal devices in 20 years' time can certainly be more powerful than the fastest supercomputers right now, with minimal energy cost of everyday battery."

Credit: 
The University of Hong Kong

Mangroves at risk of collapse if emissions not reduced by 2050, international scientists predict

image: Mangroves are amongst the most valuable of natural ecosystems, supporting coastal fisheries and biodiversity.

Image: 
Dr Nicole Khan

An international research team comprising scientists from the University of Hong Kong, the Nanyang Technological University, Singapore (NTU Singapore), Macquarie University and the University of Wollongong (Australia), as well as Rutgers University (USA), has predicted that mangroves will not be able to survive the rates of sea-level rise reached by 2050 if emissions are not reduced. The team's findings were recently published in Science, one of the world's top peer-reviewed academic journals.

Using sedimentary archives from the Earth's past, researchers estimated the probability of mangrove survival under rates of sea-level rise corresponding to two climate scenarios - low and high emissions.

When rates of sea-level rise exceeded 6 mm per year, similar to estimates under high emissions scenarios for 2050, the researchers found that mangroves very likely (more than 90% probability) stopped keeping pace. In contrast, mangroves can survive sea-level rise by building vertically when the rise remains under 5 mm per year, which is projected for low emissions scenarios during the 21st century.

The threshold of 6 mm per year is one that can be 'easily surpassed' on tropical coastlines - if society does not make concerted efforts to cut carbon emissions, said lead investigator of the study, Professor Neil Saintilan, from the Department of Earth and Environmental Sciences at Macquarie University.

Professor Saintilan said, "We know that sea-level rise is inevitable due to climate change, but not much is known about how different rates of sea-level rise affect the growth of mangroves, which is an important ecosystem for the health of the earth."

"Most of what we know about the response of mangroves to rising sea level comes from observations over the past several years to decades when rates of rise are slower than projected for later this century. This research offers new insights because we looked deeper into the past when rates of sea-level rise were rapid, reaching those projected under high emissions scenarios," said Dr Nicole Khan, Assistant Professor of Department of Earth Sciences, The Unviersity of Hong Kong.

Why mangroves matter

With their iconic roots that rise from under the mud, mangrove stands grow in a process called vertical accretion. This feature is crucial to the ecosystem as it helps to soak up greenhouse gas emissions (carbon sequestration) at densities far greater than other forests, and provides a buffer between the land and sea - helping protect people from flooding on land.

The study, which covered 78 locations over the globe, explores how mangroves responded as the rate of sea-level rise slowed down from over 10 mm per year 10,000 years ago to nearly stable conditions 4,000 years later. The drawdown of carbon as mangrove forests expanded over this time period contributed to lower greenhouse gas concentrations.

The study found that mangroves will naturally encroach inland if their ability to accrete vertically is hindered.

"Our results underscore the importance of adopting coastal management and adaptation measures that allow mangroves to naturally expand into low-lying coastal areas to protect these valuable ecosystems," said Dr Khan.

Professor Benjamin Horton, Chair of the Asian School of the Environment at NTU Singapore, who co-authored the paper, said, "In 30 years, if we continue upon a high-emissions trajectory, essentially all mangroves, including those across southeast Asia, will face a high risk of loss."

"This research therefore highlights yet another compelling reason why countries must take urgent action to reduce carbon emissions. Mangroves are amongst the most valuable of natural ecosystems, supporting coastal fisheries and biodiversity, while protecting shorelines from wave and storm attack across the tropics," Professor Horton added.

Credit: 
The University of Hong Kong

Researchers: Homes of North Zealand's elite are most likely to be preserved

According to the Danish Act on Listed Buildings, listed buildings must reflect Danish housing and working conditions throughout history. An extensive study conducted by, among others, a UCPH researcher demonstrates that the vast majority of dwellings listed from after 1945 are unique homes in North Zealand designed by well-known architects.

This is a problem for our democracy and the narrative that we create about Denmark, according to Svava Riesto, an associate professor at the University of Copenhagen's Department of Geosciences and Natural Resource Management, and one of the researchers behind the study. The study was most recently published in the journal Bolig og Fabrik, 2020.

"Our conclusion is that the state has advanced an elitist narrative about Denmark's history during the postwar period by way of our historical building preservation decisions. This is problematic for democracy because it fails to reflect housing among the general public and goes against the letter of the law. Thus, either the law or practices must be revised," she says.

Alongside Rikke Stenbro, a PhD who runs the consultancy Substrata, Riesto reviewed dwellings listed since 1945 according to their type and location, using archives from the Danish Agency for Culture and Palaces. It is the first time that such a detailed study of listed dwellings from this period has taken place.

Long live the homes of the elite

The researchers produced a map (see above) using data from Statistics Denmark, which clearly shows that the vast majority of homes listed since 1945 are located north of Copenhagen. These include family villas along the water in Hørsholm as well as detached homes along the coastal road, Strandvejen.

"The homes listed and which the state has favored since World War II are often large villas, designed by well-known architects such as Jørn Utzon and Arne Jakobsen. But where is post-war welfare housing, where 20 percent of the population lives? Or standardized single-family housing? If we are to follow the law and represent housing more broadly, the equation simply doesn't add up," says Rikke Stenbro.

Once listed, a building cannot be modified without seeking permission to do so. Nor may it be demolished. Therefore, the researchers believe that it is problematic when we only preserve the homes of the elite, homes constructed of the finest materials, with beautiful sea views.

"As of now, we have created an elitist narrative, as opposed to one that is inclusive of the history of the broader community. If building preservation is to be meaningful in terms of reflecting a historical period, it should have a larger perspective. It's about democracy and who is represented," says Rikke Stenbro.

Architectural value too narrowly defined

The Historical Buildings Survey (Det Særlige Bygningssyn) is a committee that advises the Minister of Culture and decides whether a building is worthy of conservation. They do so, based upon their estimation of a dwelling's architectural value, among other things. The researchers believe that the Survey's definition is overly narrow:

"We believe that the concept of architectural value should be broadened when considering whether dwellings should be listed. For example, consideration could be extended to include public-housing areas where planning served to ensure views and provide access to green spaces, playgrounds and common areas from buildings," says Svava Riesto. She concludes:

"The review of listed post-war homes reveals that there is a need for a discussion about the way in which the Act on Listed Buildings is applied. We cannot continue to focus on a handful of architects if the purpose of preservation is to represent the cultural heritage of an entire population."

Credit: 
University of Copenhagen