Tech

Marine biology: Whales coordinate deep dives to evade predators

Groups of beaked whales reduce predation risk through extreme diving synchronization, according to a study in Scientific Reports. This behaviour has not been observed in other deep-diving whales, and the underlying reasons have remained unclear.

Natacha Aguilar de Soto, Mark Johnson, Peter Madsen and colleagues analysed data from 26 beaked whales carrying sensors that tracked the depths they swam to, the steepness of their dives, and the sounds they made. The authors observed that the whales performed closely coordinated deep dives in order to forage using echolocation (emitting sounds to locate prey), but limited their vocalizations at shallow depths, where they are vulnerable to attack by killer whales. Beaked whales began vocalizing at an average depth of 450 metres before searching for food individually. The whales then reunited as a group at an average depth of 750 metres and ascended silently to the surface at a shallow angle, covering an average horizontal distance of one kilometre.

The authors suggest that by limiting vocalizations to depths beyond the reach of killer whale attacks, and by surfacing at unpredictable locations, beaked whales prevent killer whales from tracking them. However, the authors note that this strategy is costly: long silent ascents from dives lasting more than one hour reduce foraging time by around 35% compared with the diving strategies used by other toothed whales.

The findings suggest that predation risk may have been a strong evolutionary force driving the unique diving and vocal behaviour of beaked whales.

Credit: 
Scientific Reports

Tropical trees are living time capsules of human history

image: Settlement near useful plants on the banks of the Jaú River, Amazonas, Brazil

Image: 
Victor L. Caetano Andrade

In a new article published in Trends in Plant Science, an international team of scientists presents the combined use of dendrochronology, radiocarbon dating, and isotopic and genetic analysis as a means of investigating how human activities have affected forest disturbance and the growth dynamics of tropical tree species. The study shows how these methods can be applied to the prehistoric, historical and industrial periods of tropical forests around the world, and suggests that they have the potential to detect time-transgressive anthropogenic threats, insights that can inform and guide conservation priorities in these rapidly disappearing environments.

Led by scientists from the Max Planck Institute for the Science of Human History and co-authored by leading scientists at the National Institute for Amazonian Research, the Max Planck Institute for Biogeochemistry and the Max Planck Institute for Developmental Biology, the study shows that tropical trees store records of changing human populations and their management practices, including activities that ultimately led to a 'domestication' of tropical landscapes. The study promotes a dialogue between various fields of research to ensure that tropical trees are acknowledged for their role in both cultural and natural ecosystems.

Tropical forests as centers of past human action

Tropical forests, long thought of as barriers to human migration, agricultural experimentation, and dense sedentary populations, have until recently been considered 'Green Deserts' in the context of past human activity. However, the last two decades have seen a wealth of research from various disciplines highlight extensive and diverse evidence of plant and animal domestication, including forest management, landscape alteration, and the deliberate translocation of wild taxa by ancient human societies - including the inhabitants of some of the largest pre-industrial cities on the face of the planet.

Western colonialism and the expansion of global capitalism resulted in new human impacts on these environments, with consumer decisions in Europe driving deforestation and tropical resource exploitation as they do to this day. Understanding how different societies, economic systems, and administrative organizations changed tropical forests is essential if we are to properly develop sustainable conservation policies.

Yet, high-resolution records of human impacts on tropical ecosystems are often difficult to come by. "Amazingly, this whole story has neglected some of the largest, most ancient witnesses tropical forests have to offer: their trees," says Victor Caetano Andrade, lead author of the study at the Max Planck Institute for the Science of Human History. "Archaeological excavation and archaeobotanical analyses have led to great strides in our recognition of past human lives in the tropics, but the trees themselves standing next to the trench have things to say as well," he continues.

Tree rings - a living stratigraphy

The study of tree rings has been frequently used in temperate environments to create a picture of how changing climate and human activities have altered forests. However, such work has been limited in the tropics, due to perceptions that a lack of seasonality meant no rings would be visible. As the authors note, however, it has now been demonstrated that more than 200 tropical tree species form annual rings. This opens up a whole new avenue for the exploration of changing tropical forest conditions in the past.

Counting tree rings can, alongside radiocarbon dating, produce robust, high-resolution chronologies or 'stratigraphies' of the growth of an individual tree. A change in the size of growth rings identified across a number of trees in the same forest can provide an indicator of abrupt changes in environmental conditions. In addition, these rings can be sampled chemically to investigate how climate conditions changed over time and how such changes correlate with tree growth. Where no strong correlation between climate and growth is visible, the door opens to other potential explanations, chief among them being human activity.
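The climate-growth screening described above can be sketched numerically. The example below is purely illustrative: the rainfall record and ring-width series are synthetic, and the coefficients are invented for the sketch, not drawn from any real dendrochronological dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 100-year records (illustrative only, not real data):
# an annual rainfall series and a ring-width series that partly tracks it.
years = np.arange(1900, 2000)
rainfall = rng.normal(1000, 150, size=years.size)                    # mm per year
ring_width = 0.002 * rainfall + rng.normal(0, 0.2, size=years.size)  # mm

# Pearson correlation between climate and growth, the kind of check used
# to decide whether climate explains the ring-width signal.
r = np.corrcoef(rainfall, ring_width)[0, 1]
print(f"climate-growth correlation r = {r:.2f}")
```

Where this correlation is strong, climate is a sufficient explanation for growth changes; where it is weak across many trees in the same stand, other drivers, such as human forest management, become candidate explanations, as the text describes.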

As Victor Caetano Andrade puts it, "There are some species of special importance for humans, for example as food trees or trees used for a particular purpose. In these cases humans would be likely to undertake forest management practices, such as clearing the understory, opening up the forest, and actively protecting individual trees." By contrast, other species may have been deliberately removed for use as construction material or to make way for settlement. Combining observations of tree growth with local historical and archaeological data allows scientists to look at the relationship between tree communities and past human societies and their economic practices.

Tree genes point to pre-Columbian forest management

DNA analysis of modern trees is commonly used by companies and foresters to select trees with economically desirable traits. However, modern genetic analysis, as well as analysis of preserved specimens, can reveal important insights into how populations of a given species have changed through space and time. Where relevant, this genetic analysis can be used to look at processes of domestication, including the selection for particular traits. The ability to associate patterns of genetic diversity for economically important trees with known archaeological records promises to reveal new insights into the settlement of tropical environments in the past.

The authors' review shows that in many cases in Central and South America, maximum genetic diversity of these species is found in areas with intense pre-Columbian human occupation. However, in addition to investigations of the distant past, the present study also shows that sampling of modern trees such as mahogany can document changes in genetic diversity before and after logging episodes. The authors propose that, given the advance of full genome sequencing, applying such methods to ancient and modern trees in a given forest may make it possible to genetically reconstruct past human clearance and management events - particularly where detailed historical and archaeological information is also available.

'Tree Houses' - Homes to new data and past human societies

While the majority of ecological study on the supposedly 'pristine' tropics has focused on how changes in forest structure and tree growth are linked to climate fluctuations and natural disturbances, the present research highlights centuries of human impact. As study co-author Dr. Patrick Roberts states, "The work evaluated here demonstrates two important findings: first, that human societies, from hunter-gatherers to urban dwellers, have played a significant role in tropical tree growth in the past; and second, that this role can be observed in trees that still stand today."

Furthermore, as Victor Caetano Andrade continues, "Multidisciplinary approaches to ancient trees will enable us to look at how forest management changed in the tropics from pre-colonial to post-colonial scenarios, and from pre-industrial to 21st century threats. The resolution available is remarkable and will allow us to get a handle on the legacies of past activities, and how changing practices have placed new pressures on these highly threatened environments". The authors conclude by arguing that it is essential that archaeologists and ecologists work together to preserve not just the natural benefits of tropical trees, but also the records of human cultural heritage and knowledge that span millennia stored within them.

Credit: 
Max Planck Institute of Geoanthropology

Molecular 'switch' reverses chronic inflammation and aging

Berkeley -- Chronic inflammation, which results when old age, stress or environmental toxins keep the body's immune system in overdrive, can contribute to a variety of devastating diseases, from Alzheimer's and Parkinson's to diabetes and cancer.

Now, scientists at the University of California, Berkeley, have identified a molecular "switch" that controls the immune machinery responsible for chronic inflammation in the body. The finding, which appears online Feb. 6 in the journal Cell Metabolism, could lead to new ways to halt or even reverse many of these age-related conditions.

"My lab is very interested in understanding the reversibility of aging," said senior author Danica Chen, associate professor of metabolic biology, nutritional sciences and toxicology at UC Berkeley. "In the past, we showed that aged stem cells can be rejuvenated. Now, we are asking: to what extent can aging be reversed? And we are doing that by looking at physiological conditions, like inflammation and insulin resistance, that have been associated with aging-related degeneration and diseases."

In the study, Chen and her team show that a bulky collection of immune proteins called the NLRP3 inflammasome -- responsible for sensing potential threats to the body and launching an inflammation response -- can be essentially switched off by removing a small bit of molecular matter in a process called deacetylation.

Overactivation of the NLRP3 inflammasome has been linked to a variety of chronic conditions, including multiple sclerosis, cancer, diabetes and dementia. Chen's results suggest that drugs targeted toward deacetylating, or switching off, this NLRP3 inflammasome might help prevent or treat these conditions and possibly age-related degeneration in general.

"This acetylation can serve as a switch," Chen said. "So, when it is acetylated, this inflammasome is on. When it is deacetylated, the inflammasome is off."

By studying mice and immune cells called macrophages, the team found that a protein called SIRT2 is responsible for deacetylating the NLRP3 inflammasome. Mice that were bred with a genetic mutation that prevented them from producing SIRT2 showed more signs of inflammation at the ripe old age of two than their normal counterparts. These mice also exhibited higher insulin resistance, a condition associated with type 2 diabetes and metabolic syndrome.

The team also studied older mice whose immune systems had been destroyed with radiation and then reconstituted with blood stem cells that produced either the deacetylated or the acetylated version of the NLRP3 inflammasome. Mice given the deacetylated, or "off," version of the inflammasome showed reduced insulin resistance after six weeks, indicating that switching off this immune machinery might actually reverse the course of metabolic disease.

"I think this finding has very important implications in treating major human chronic diseases," Chen said. "It's also a timely question to ask, because in the past year, many promising Alzheimer's disease trials ended in failure. One possible explanation is that treatment starts too late, and it has gone to the point of no return. So, I think it's more urgent than ever to understand the reversibility of aging-related conditions and use that knowledge to aid a drug development for aging-related diseases."

Credit: 
University of California - Berkeley

Humanity's greatest risk: Cascading impacts of climate, biodiversity, food, water crises: scientists

image: Mean ranked likelihood and impact of global risks, and robustness of the knowledge base surrounding each risk (size of the circle), for the 30 global risks in 5 categories (colors).

Image: 
Future Earth Global Risks Scientists' Perception survey

The greatest threat to humanity hides in the potential cascading of the impacts of five highly related, highly likely risks -- a collision that could amplify these effects catastrophically, according to a new survey of 222 leading scientists from 52 countries.

Conducted by Future Earth, the international sustainability research network, the survey identifies five global risks -- failure of climate change mitigation and adaptation; extreme weather events; major biodiversity loss and ecosystem collapse; food crises; and water crises -- as the most severe in terms of impact. Four of them -- climate change, extreme weather, biodiversity loss, and water crises -- were also deemed by scientists as most likely to occur.

Business leaders and policymakers, in a survey released in January by the World Economic Forum, likewise assigned these same five risks, chosen from a set of 30, top rank positions in terms of impact.

More than one-third (82) of the scientists, however, underlined the threat posed by the synergistic interplay and feedback loops between the top five -- with global crises worsening one another "in ways that might cascade to create global systemic crisis."

Extreme heatwaves, for example, can accelerate global warming by releasing large amounts of stored carbon from affected ecosystems, and at the same time intensify water crises and/or food scarcity; the loss of biodiversity weakens the capacity of natural and agricultural systems to cope with climate extremes, increasing vulnerability to food crises.

Some 173 of the scientists surveyed volunteered additional risks, beyond the list of 30, as deserving of greater global attention. Common themes included erosion of societal trust and values; social infrastructure deterioration; rising inequality; rising political nationalism; overpopulation; and mental health decline.

Said the report: "Interestingly, the majority of these touch on issues of societal well-being and social security, suggesting that societal risks may be growing and in need of greater consideration. This is especially pertinent as we consider how society can transition to a climate-safe and equitable future..."

"Perhaps the most interesting theme to emerge from these responses was the failure to take into account feedback across different systems."

"Despite this ubiquity of connections, many scientists and policymakers are embedded in institutions that are used to thinking and acting on isolated risks, one at a time. This needs to change to thinking about risks as connected."

"As the scientific advisors for this survey, we call on the world's academics, business leaders, and policymakers to pay urgent attention to these five global risks, and to ensure that they are treated as interacting systems, rather than addressed one at a time, in isolation."

The survey (in full from Feb. 12: futureearth.org/initiatives/other-initiatives/grp), was led by Maria Ivanova, University of Massachusetts, Boston, and scientific advisors Markus Reichstein, Max Planck Institute, Germany; Matthias Garschagen, Ludwig-Maximilians-Universität München, Germany; Qian Ye, Beijing Normal University, China; Kalpana Chaudhari, Institute for Sustainable Development and Research, India; and Sylvia Wood, Science Officer, Future Earth, Canada office.

Joined-up thinking: Our Future on Earth, 2020

Publicly available at futureearth.org/publications/our-future-on-earth

The survey is presented as a chapter in a new report, Our Future on Earth, 2020, in which scientists summarize the latest peer-reviewed research on the state of our planet and distill the many, interconnected complexities into an authoritative, 50-page synthesis.

Today's environmental problems, the report says, represent a blend of physical, chemical, biological, and social change that all interact and feedback on each other.

"Trying to understand how our impacts in one area, such as river extraction, affect another, such as food provision, is a complex task," the report says. "But that's what scientists, sociologists, economists, ecologists, and others are trying to do."

"And while our problematic practices in one area can impact many other areas, the good news is that so can our restorative ones: improving biodiversity in a wetland ecosystem can also reduce water pollution and soil erosion, and protect crops against storm damage, for instance."

"We are making our own Anthropocene, and we can make it a good one."

Among the issues highlighted in the report:

The rise and impact of populism, "characterized by a denial of complexity, including the complexity of environmental damage and the systemic, multi-layered interactions required to achieve sustainability," as well as "fake news."

Increasing financial risk of climate and environmental change, now deemed by insurers their industry's top risk. The report notes the first climate-change-related bankruptcy: PG&E, California's largest electric utility company, went under last year after sparking a huge forest fire.

Trends in migration, and the impacts of digital technologies, including artificial intelligence, on sustainability.

The ecological and social science of climate change, our food systems, and the collapse in biodiversity; developments in regulating the high seas; the rise of green finance mechanisms; and the new field of studying "transformations" in society: how, exactly, can we uproot ingrained ideas about economic development or wellbeing and wrench them into new, more sustainable frameworks?

"2020 is a critical time to look at these issues," says Amy Luers, Executive Director of Future Earth. "Our actions in the next decade will determine our collective future on earth."

Our Future on Earth, 2020 is the first of a series of such reports.

Chapter summaries

Introduction: Charting the future
Author: Gaia Vince, journalist and author, London.

We are a vast global population facing unprecedented environmental challenges, yet we still have the time and the capability to prevent extreme outcomes. The past year has been one of extraordinary social awakening to the hazards of environmental change, and of demands for action towards a sustainable future.

"Green deals" have been proposed by several nations, and if passed into legislation they could prove transformative. The global population is expected to be 9.7 billion by 2050. Their future is in our hands: can we make it more sustainable, resilient and fair?

Climate: Dialing down the heat
Lead author: Diana Liverman, School of Geography and Development, University of Arizona

Over the last 18 months, major assessments by the Intergovernmental Panel on Climate Change (IPCC), the US National Climate Assessment, and the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES), have all argued that time is running out to reduce greenhouse gas emissions that are causing the climate to warm.

This has inspired declarations of a climate crisis or climate emergency by the leaders of more than 700 cities, states and governments. Yet, during 2019, the concentration of CO2 in the atmosphere reached more than 415 ppm, and the five years from 2014 to 2018 were the warmest recorded over land and ocean since 1880.

Despite the evidence, "many countries have not yet risen to the challenge or are reversing prior commitments." What is needed to dial down the heat?

This chapter also includes independently-authored boxes looking at the climate and social causes and impacts of wildfire; and the (predominantly negative) impacts of climate change on human health.

Politics: Populism versus grassroots movements
Author: Richard Calland, University of Cambridge's Institute for Sustainability Leadership; Associate Professor of Public Law, University of Cape Town.

Right-wing populism is on the rise around the world: a breed of politics that exploits people's fears during times of economic decline and growing inequality, and that focuses on nationalist tendencies to clamp down on borders and reject immigrants. Populism is often characterized by a "denial of complexity", preferring to identify simple, seductive culprits for the erosion of society, the economy, and the welfare of the masses. This often leads to a denial of climate change facts or impacts. Grassroots organizations, however, are emerging as a potentially strong countervailing force. But are they politically strong enough, and will their efforts work?

Ocean: Governing the high seas
Lead author: Robert Blasiak, Stockholm Resilience Centre, Stockholm University

Over three billion people are dependent on functioning marine ecosystems as their primary source of protein, and the livelihoods of nearly half of humanity are linked to marine and coastal biodiversity. While the ocean was once considered too big to be significantly altered through human activity, it is now clear that it too has entered the Anthropocene, an age in which humans are the dominant influence.

Stressors from climate change to pollution, fishing and shipping, have on average nearly doubled over the past decade. Officials from around the world are now negotiating a new United Nations treaty to govern the high seas (Biological diversity of areas Beyond National Jurisdiction), which may be hashed out in 2020. What are the expectations and challenges for regulating fish, seafloor mining, biodiversity and more?

This chapter includes spotlights on plastic pollution in the ocean; and the rise of conflicts over seafood resources, sometimes called "fish wars".

Forced Migration: Empowering mobility when moving isn't a choice
Lead author: David Wrathall, Oregon State University, College of Earth, Ocean and Atmospheric Sciences

As of September 2019, the Syrian Conflict had resulted in over 5.6 million refugees seeking refuge mainly in Turkey, Lebanon, and Jordan. As of 2018, 800,000 people had fled their home countries in North Africa as asylum seekers and refugees, some embarking on often-deadly boat trips across the Mediterranean.

For many observers in the wealthy, industrialized global North, the influx of migrants from Central America and the Middle East has been seen as a sign of an impending flood: their assumption is that climate change impacts will spur violence and/or push hundreds of millions of people into their borders, causing yet more violence and other problems.

The truth is more nuanced. Humanity is not at the mercy of forces seemingly beyond our control: it is policy, not climate change, behind the real crises.

This chapter also includes an independently-authored box on growing urbanization and solutions to make cities more liveable.

Media: Industrializing disinformation
Lead author: Owen Gaffney, Potsdam Institute for Climate Impact Research & Stockholm Resilience Centre.

The flow of information in the world is changing. Today, around half of the planet's 7.6 billion people are online, where they are deeply influenced by social media, search engines and eCommerce algorithms. These digital platforms tend to favour the spread of information designed to engage with emotion over reason, can cause the propagation of "fake news", and can lead to social harms like an erosion of trust in vaccines. Some politicians are now calling for the tech giants to be split up, arguing that their power and dominance is bad for democracy. Digital information technologies and media, though messy, could support global action for sustainability. Yet it remains unclear whether information technologies will drive the Earth towards a pandemic or away from it; towards a destabilized climate or a potentially-manageable 1.5°C warmer world.

Biodiversity: The unravelling web of life
Lead author: Cornelia Krug, Department of Geography, University of Zurich, Switzerland.

Humans have now "significantly altered" 75% of our planet's land area; about a quarter of species in assessed plant and animal groups are threatened. In 2018, the world's last male northern white rhino died in his Kenyan enclosure. The Brazilian blue parrot, Spix's Macaw, was declared extinct in the wild, amongst a handful of other birds. And yet studies continue to show that biodiversity helps to make landscapes more resilient to climate change. Countries are now in the process of negotiating a "Global Deal for Nature": a new global biodiversity framework to be discussed through the Convention on Biological Diversity (CBD) in 2020. Reversing the trends of loss of life on this planet will require some new ways of thinking about conservation.

This chapter also includes a box on the long-term historical perspective of biodiversity change and extinctions, to better contextualize how altered ecosystems can cope with future change.

Finance: Making money work for green goals
Lead author: Kristina Alnes, CICERO Center for International Climate Research, Norway

Finance is a risky business. But the global situation today--economic, political, and environmental, especially thanks to climate change--is conspiring to make it riskier. Examples like the 2019 bankruptcy of the Californian utility PG&E illustrate the impacts of climate on financial risk; recent reports like that by the Network for Greening the Financial System show how the financial world is starting to take this seriously. Efforts like the nascent Task Force on Climate-related Financial Disclosures are helping to grapple with the issue.

The "great acceleration" of economic growth over the 20thcentury put a lot of pressure on earth systems. There is an opportunity now to reverse this trend. This piece looks at the potential for green bonds, sustainability-linked loans and more, to promote sustainable development.

Food: Rethinking global security
Lead author: Jiaguo Qi, Center for Global Change & Earth Observations, Michigan State University, USA

The amount of food produced per person on the planet has gone up more than 40 percent since the 1960s. Yet, ironically, the prevalence of undernourishment--which had been declining for decades--has started to tip upwards again: the total number of people undernourished in 2018 stood at more than 820 million, up from a record low of 785 million in 2015. We will need to squeeze ever more food out of the same amount of land for our growing population in our changing climate.

Strains on food production are expected to increase, as a result of various forces including climate change, biodiversity loss, and a global population on the rise. Solutions may include eating less meat, precision agriculture supported by new technologies, ensuring less waste, and taking a holistic approach to food production that looks at water, ecosystem protection, social welfare and more.

Transformation: How to spur radical change
Lead author: Sandra Waddock, Carroll School of Management, Boston College

When more than 150 world leaders met in 2015 to develop the United Nations 2030 Agenda for Sustainable Development, their key phrase was "transforming our world." Transformative change goes well beyond incrementalism or reform, both of which allow existing practices, goals, and structures to stay in place.

Transformation involves a step change, often in fundamental norms or assumptions. An emerging field of research is just starting to unpick exactly how to encourage, guide, and enact such changes, from a shifting mindset about single-use plastics to a revolution in how we think about economic growth.

Digital Innovation: Harnessing technology for good
Lead author: Dirk Messner, President, German Environment Agency (UBA), Co-Director, Centre for Global Cooperation Research, University of Duisburg-Essen, Germany

Massive amounts of data, new computational abilities, and artificial intelligence are spurring disruptive progress: technical systems are becoming as good as (or even better than) humans at recognizing faces and voices, diagnosing cancer, translating languages, and producing news articles, music and paintings. Artificial general intelligence (AGI) -- a technical system able to accomplish any cognitive task at least as well as humans -- could be achieved within the 21st century.

This will all cause massive disruption to labor markets, democracies, and our understanding of the planet and humanity. So far, these technological changes have largely been used to increase consumption, economic growth and resource extraction, rather than saving the planet or promoting just and fair societies. But the digital sector has immense potential for reducing emissions and empowering people to monitor and protect ecosystems. A new field of digital sustainability could be forged to encourage positive action.

This chapter includes a spotlight on the energy use of the digital sector.

Credit: 
Terry Collins Assoc

Setting up fundamental bases for information metasurface

image: (a) A set of measurement locations in k-space. (b) Theoretical intensity distributions of the measurement points. (c, d) A sample of 1-bit disordered-phase pattern of the metasurface and the generated far-field intensity distribution. (e) Theoretical and calculated results of I2 (information of the radiation pattern) with respect to different sizes and phase patterns of the metasurfaces. (f, g) A sample of 2-bit disordered-phase pattern of metasurface and the generated far-field intensity distribution. (h) Theoretical and calculated results of I2 with respect to different sizes and phase patterns of metasurfaces. (i, j) A sample of continuous disordered-phase pattern of the metasurface and the generated far-field intensity distribution. (k) Theoretical and calculated results of I2 with respect to different sizes and phase patterns of metasurfaces.

Image: 
©Science China Press

When illuminated by electromagnetic waves, the subwavelength-scale particles of a metasurface can couple the incident energy to free space with controllable amplitude, phase and polarization, such that the transmitted wave can be manipulated flexibly with predesigned functionalities. In recent years, rapid developments in digital and information metasurfaces have stimulated many information processing applications, such as computational imaging, wireless communications, and the performance of mathematical operations. With an increasing amount of research focused on information processing with metasurfaces, a general theory that characterizes the properties of metasurfaces from the information perspective is urgently needed.

Recently, Prof. Tie Jun Cui's group at Southeast University (SEU), in collaboration with Prof. Lianlin Li at Peking University (PKU), reported a breakthrough on this topic in National Science Review, in a paper entitled "Information theory of metasurfaces" (first author: graduate student Haotian Wu). In this work, a general information theory of metasurfaces is proposed to analyze the relation between the information of a metasurface and its far-field radiation pattern.

To analyze the metasurface from the information perspective, the scientists introduced a general aperture model to characterize its amplitude and phase responses. By combining this aperture model with the uncertainty relation in L2-space, they derived an upper bound on the information contained in the radiation pattern of a metasurface, and revealed the theoretical upper limit on the number of orthogonal radiation states a metasurface can realize. The proposed theory also provides guidance for the inverse design of metasurfaces with given functionalities: once the required radiation pattern(s) are specified, the size of the metasurface must exceed the value predicted by the theory; otherwise it is impossible to realize the required pattern(s), no matter what design strategy is adopted. In addition, by investigating the information of disordered-phase-modulated metasurfaces, the scientists found an information invariance (1-γ) of the chaotic radiation patterns they generate. The result indicates that the far-field information of disordered-phase-modulated radiation patterns is always equal to 1-γ (where γ is Euler's constant), independent of the size of the metasurface, the number of metasurface elements and the particular phase pattern.

To verify the proposed theory, the scientists first analyzed the information of far-field radiation patterns generated by multiple sets of metasurface samples with different amplitude and phase modulations. The calculated results are all bounded by the inequality provided by the theory. Additionally, to verify the information invariance of disordered metasurfaces, they calculated the information of far-field patterns generated by multiple sets of disordered phase-modulated metasurface samples: 1-bit digital coding phases, with each phase taking the value 0 or π at random; 2-bit digital coding phases, with each phase taking one of the values 0, π/2, π and 3π/2 at random; and continuous random phases drawn from a uniform probability distribution. The calculated results for the speckle-like radiation patterns match the theoretical prediction (1-γ) perfectly, convincingly demonstrating the information invariance of the chaotic far-field patterns.
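The 1-γ constant has a simple statistical interpretation that can be checked numerically: for a fully developed speckle pattern, the normalized far-field intensity follows an exponential distribution with unit mean, for which E[Î ln Î] = ψ(2) = 1-γ ≈ 0.4228. The sketch below is a hypothetical illustration of that statistic (not the paper's actual definition of I2): it generates a continuous disordered-phase aperture, propagates it to the far field with an FFT, and recovers a value close to 1-γ.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

rng = np.random.default_rng(0)
N = 256  # aperture sampled on an N x N grid

# Uniformly random (continuous) phase pattern with unit amplitude --
# the "continuous disordered-phase" case described above.
phase = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))
aperture = np.exp(1j * phase)

# Far field via FFT (Fraunhofer approximation): each far-field sample is
# a sum of many random phasors, so its normalized intensity is ~ Exp(1).
far_field = np.fft.fft2(aperture)
intensity = np.abs(far_field) ** 2
i_norm = intensity / intensity.mean()

# Speckle statistic E[I ln I]; for Exp(1) this equals psi(2) = 1 - gamma.
stat = float(np.mean(i_norm * np.log(i_norm)))
print(stat, 1.0 - EULER_GAMMA)  # both ~0.4228
```

Rerunning with a different grid size N or a fresh random phase pattern gives essentially the same value, echoing the reported independence from metasurface size and phase pattern.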

"The presented theory establishes a quantitative framework to characterize the information processing capabilities of metasurface, which provides deeper physical insights in understanding metasurface from information perspective, and offers new approaches to facilitate analyses and designs of metasurface. The findings of this investigation are generally applicable in a wide range of spectra, which would be helpful to lay the groundwork for future researches into the regime of information metasurface." the scientists forecast.

Credit: 
Science China Press

Wikipedia, a source of information on natural disasters biased towards rich countries

Floods are the natural disaster that causes the most damage worldwide each year. Valerio Lorini (JRC-UPF), Javier Rando (UPF), Diego Saez-Trumper (Wikimedia) and Carlos Castillo (UPF) are the authors of a study entitled "Uneven Coverage of Natural Disasters in Wikipedia: the Case of Floods", which they are to present at the 17th International Conference on Information Systems for Crisis Response and Management (ISCRAM 2020), hosted by Virginia Tech in Blacksburg, Virginia (USA), from 24 to 27 May.

The study corresponds to a line of research led by Carlos Castillo, coordinator of the Web Science and Social Computing group (WSSC) at the Department of Information and Communication Technologies (DTIC), UPF, within the active collaboration it enjoys with the Joint Research Center (JRC), the body that advises the European Commission on scientific and technical issues. The principal investigator is Valerio Lorini (JRC-UPF), a student of the PhD programme in ICT at UPF who is being supervised by Carlos Castillo, with Javier Rando, co-author and student of the UPF bachelor's degree in Mathematical Engineering in Data Science.

In the management of natural disasters, access to unofficial data offers the opportunity to draw on information different from that available through official channels. It can also serve to detect bias in news content. "We believe that Wikipedia is a valuable, free source of information and that it could be beneficial to researchers working on reducing the risk of disasters if its biases are identified, measured and mitigated", Castillo asserts.

In their study, the authors focused on the English version of Wikipedia, which they considered by far the most complete version of this encyclopaedia. Wikipedia, an encyclopaedia that is produced collaboratively, contains detailed information on many natural and human disasters, especially when incidents result in a large number of casualties, and its editors are particularly adept at adding real-time information, as the crisis develops.

As a source of information related to natural disasters, the authors show, Wikipedia has a greater tendency to cover events in wealthy countries than in poor ones. Through careful, large-scale automated content analysis, "we show how flood coverage in Wikipedia leans towards wealthy, English-speaking countries, particularly the USA and Canada", they state in their work. "We also note that the coverage of flooding in low-income countries and in countries in South America is substantially less than the coverage of flooding in middle-income countries", they add.

For this research, the authors estimated the coverage of floods in Wikipedia taking many variables into account: gross domestic product (GDP), gross national income (GNI), geographical location, the number of English speakers, fatalities and various indices describing each country's level of vulnerability.

They have identified a set of reliable references about floods

With the support of hydrologists, one of the contributions of this work is a set of validated references from several independent organizations that collect data on floods for different purposes: insurers, government agencies, the UN, etc. They all collect data on flooding on a global scale and maintain reliable databases that can be used and compared.

Having identified the sources of information, the authors moved to the experimental phase of the study. They took 458 events reliably recorded as floods in two or three reliable data sources: Europe's FloodList, the United Nations' Emergency Events Database (EM-DAT) and the Dartmouth Flood Observatory (DFO) of the University of Colorado (USA). They then compared these events with Wikipedia's entries to locate them and check whether the entries were consistent with the contrasted data sources in terms of location and time references.
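The core comparison can be illustrated with a toy sketch. All the data below are invented for illustration; the study's real inputs are the 458 matched flood records and country-level covariates. The idea is simply to group events by country income level and compare the fraction that have a corresponding Wikipedia entry.

```python
from collections import defaultdict

# Hypothetical flood events: (country income group, has Wikipedia coverage).
# Real inputs would come from FloodList / EM-DAT / DFO records matched
# against Wikipedia articles by location and date.
events = [
    ("high", True), ("high", True), ("high", True), ("high", False),
    ("middle", True), ("middle", False), ("middle", False),
    ("low", True), ("low", False), ("low", False), ("low", False),
]

totals = defaultdict(int)
covered = defaultdict(int)
for group, has_article in events:
    totals[group] += 1
    covered[group] += has_article  # bool counts as 0/1

coverage = {g: covered[g] / totals[g] for g in totals}
print(coverage)  # {'high': 0.75, 'middle': 0.333..., 'low': 0.25}
```

A bias analysis like the study's would then relate such coverage rates to GDP, GNI, English-speaker counts and vulnerability indices rather than to a single categorical grouping.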

"The results of our analysis are consistent over several dimensions, and draw a box where Wikipedia coverage is biased towards some countries, particularly the most industrialized and where large settlements are English speaking, and at the expense of other countries, particularly lower income, more vulnerable ones", the authors suggest.

The results show that tools using data from social networks or collaborative platforms should be carefully evaluated to avoid bias, and that Wikipedia editors should make a greater effort to cover disasters suffered by the neediest countries. These results correspond to only one type of natural disaster, floods, but other types of events could also be studied.

Credit: 
Universitat Pompeu Fabra - Barcelona

Controllable functional ferroelectric domain walls under piezoresponse microscope

image: Domain patterns after (a) 3.4 V and (b) 5.8 V poling. Dark, white, light gray, and dark gray area represent domains with polarizations along [111], [111], [111], and [111], respectively. Head-head, head-tail and tail-tail DWs are colored by orange, light blue and purple, respectively. (c) Average DW movement during each poling process. Sketch of two-step poling process including scan poling by (d) lower and (e) higher electric field. (f) Well-aligned conductive tail-tail DWs are successfully produced.

Image: 
©Science China Press

Ferroelectric materials, which possess high photoelectric, piezoelectric and dielectric responses, are widely applied in industrial products such as transducers, capacitors and memory devices. As technology develops, however, miniaturization, integration and flexibility are of growing importance, requirements that traditional bulk ferroelectric materials can hardly fulfil. Hence, nanoscale ferroelectric domain walls (DWs), with their recently discovered remarkable mechanical, electrical, optical and magnetic properties distinct from those of the domains themselves, have become a research hotspot.

Despite the intriguing properties of ferroelectric domain walls, putting them to use requires understanding DW dynamics and developing approaches for DW manipulation. It is known that external stimuli such as electric fields, mechanical strain and temperature can manipulate DW morphology and stability. DW movement can also be affected by inertial properties of the sample as well as intrinsic characteristics of the DWs themselves. However, the impact of bound charges, one of the foremost characteristics of DWs, has mostly been studied theoretically.

In a new research article published in the Beijing-based journal National Science Review, scientists at Nanjing University in Nanjing, China, Rutgers University in New Jersey, USA, and the Chinese Academy of Sciences in Shenzhen, China, provide direct experimental insight into the dynamics of differently charged DWs under an electric field. Using atomic force microscopy, they found that the mobility of differently charged DWs in bismuth ferrite films varies with the electric field.

Under lower voltages, head-to-tail DWs are more mobile than other DWs, while under higher voltages, tail-to-tail DWs become rather active and attain a relatively long average length. This is attributed to the high nucleation energy and relatively low growth energy of charged DWs. Based on these results, the researchers designed a two-step poling approach: they polarized ferroelectric thin films with a lower and then a higher electric field by scanning the sample surface with the atomic force microscope tip. Arrays of well-aligned stripe-like tail-to-tail DWs were successfully produced as conductive paths, and the orientation of the DWs could be changed by varying the scanning direction of the tip. In this way, they achieved oriented growth and configuration control of ferroelectric DWs.

"Our work unveils the remarkable impact of charge accumulation around DWs on DW mobility, providing a generalizable approach for DW dynamic studies in ferroic materials. The methodology proposed here for the advanced tunability of conductive DWs makes significant progress towards their applications in functional nano-devices", they claim.

Credit: 
Science China Press

What decides the ferromagnetism in non-encapsulated few-layer CrI3?

image: Exfoliated few-layer CrI3. (a) Atomic structure of monolayer CrI3. (b, c) Rhombohedral (b) and monoclinic (c) stacking order in bilayer CrI3. The rhombohedral structure has an out-of-plane C3 axis and a symmetry center (S6 = C3 + i), while the monoclinic structure has an in-plane C2 axis and a mirror plane. (d) Optical micrograph of exfoliated few-layer CrI3. (e) Optical contrast of CrI3 samples with different numbers of layers (red circles). The blue solid line shows calculated results based on Fresnel's equations from Xu et al. (Nature, 2017, 546, 270). (f) Magnetization of bulk CrI3 as a function of temperature, indicating Tc = ~65 K. (g-i) Layer-dependent magnetic ordering in atomically thin CrI3 at 10 K: RMCD signal on 2L (g), 3L (h) and 4L (i) CrI3 flakes, showing antiferromagnetic behavior in bilayer CrI3 and ferromagnetic behavior in 3L and 4L CrI3. The blue arrows indicate the magnetization orientation in the different layers.

Image: 
©Science China Press

Since the discovery of the two ferromagnetic (FM) atomically thin materials CrI3 and Cr2Ge2Te6 in 2017 (Nature 2017, 546, 270; Nature 2017, 546, 265), intrinsic ferromagnetism in two-dimensional (2D) van der Waals (vdW) materials, which maintains long-range magnetic order at the atomic monolayer limit, has received growing attention. In CrI3, each individual layer is FM; however, adjacent layers are antiferromagnetically (AFM) coupled. The physical properties of 2D ferromagnetic CrI3 are significantly influenced by interlayer spacing and stacking order; the interlayer magnetic state can be switched between FM and AFM through electric gating, electrostatic doping or pressure.

However, the stacking order of CrI3 at low temperature remains debated. Previous studies have reported that CrI3 has a rhombohedral structure at low temperature (Nat. Mater. 2019, 18, 1303; Phys. Rev. B 2018, 98, 104307), but recent experiments and theory indicate that BN-encapsulated bi- and few-layer CrI3 and CrCl3 adopt a monoclinic structure (Nature 2019, 572, 497; Nat. Phys. 2019, 15, 1255). A complete understanding of the lattice dynamics and stacking order of CrI3 is therefore crucial for 2D vdW ferromagnetic materials; to date, however, such understanding has been lacking.

Recently, Prof. Bo Peng from the University of Electronic Science and Technology of China and his collaborators published a paper entitled "Layer dependence of stacking order in non-encapsulated few-layer CrI3" in Science China Materials, demonstrating the layer, polarization and temperature dependence of the Raman features of non-encapsulated 2-5-layer and bulk CrI3 (Fig. 1) and showing that non-encapsulated few-layer and bulk CrI3 adopt a rhombohedral stacking order at low temperature, rather than a monoclinic structure. The helicity of the incident light is maintained by the Ag modes at 10 K but reversed by the Eg modes, behaviour that is independent of the magnetic field and originates solely from the phonon symmetry. Strikingly, spin-phonon coupling occurs below ~60 K, which modifies the Hamiltonian of the Raman modes and causes the linewidth to deviate from the behaviour expected from phonon-phonon coupling alone.

This work offers insight into lattice stacking order and spin-phonon coupling in 2D ferromagnets, and highlights the feasibility of manipulating electron spins and spin waves through spin-phonon coupling for novel spintronic devices. It should have far-ranging impact and stimulate interest across several communities, including layered 2D materials, spintronics and ferromagnetic 2D materials.

Credit: 
Science China Press

Literature online: Research into reading habits almost in real time

Young people make intensive use of digital networks to read, write and comment on literary texts. But their reading behavior varies considerably depending on whether the title is from the world of popular or classic literature, as revealed by a new study that takes the reading platform Wattpad as an example. This computer-aided analysis under the direction of the University of Basel was published in the journal PLOS ONE.

Time and time again, people complain that young people no longer read enough - with the habit of deeper reading, in particular, becoming lost. But this overlooks the fact that young people not only read printed books, but also use several different forms of media to read and write literature. Many teenagers turn to networks such as Goodreads, BücherTreff and LovelyBooks in order to read literature, discuss it with other readers and even write their own literature. This is termed "social reading."

The phenomenal scale of "social reading" is clear from the Wattpad platform, on which more than 80 million predominantly young people worldwide exchange some 100,000 stories in more than 50 languages every day. Fanfiction, in which fans write continuations of famous stories such as Harry Potter, is a particularly popular genre.

Computer-aided analysis

For the first time, a team of researchers from Switzerland and Italy has investigated the use of the digital reading platform Wattpad in detail. Their research incorporated computer-aided techniques, such as network analysis and sentiment analysis, to detect patterns of reading behavior within millions of datasets.

Using statistical techniques, the researchers analyzed which books young people around the world read and comment on, and also write themselves on platforms such as Wattpad. The analysis looked at reading preferences, the emotionality and intensity of comments made about books, the networking between young readers and the potential educational impact.

Passionate reading

This revealed how intensively young people read not only youth literature - "teen fiction" - but also classic literature by, for example, Jane Austen or Hermann Hesse, commenting on individual sentences up to several hundred times and using the works as a model for stories of their own. It is also striking to see that the young readers are highly emotionally involved in this process.

Nevertheless, there are clear differences depending on whether a text is classified as popular literature or belongs to the classical literary canon. For example, teen fiction is read and commented on much more frequently on Wattpad than classic works. The researchers also observed that readers often stop reading classic works after the first few chapters, whereas teen fiction manages to captivate readers over longer sections of the plot.

Another aspect that varied by genre was the degree of interchange between users: readers of teen fiction formed networks with strong social bonds, with frequent interaction. Among readers of the classics, on the other hand, the researchers identified a more cognitively oriented style of interaction, in which users helped one another to understand and interpret the works.
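The contrast between the two reader communities can be illustrated with a minimal network-density sketch. The reader graphs below are invented for illustration; the study's actual networks are built from millions of Wattpad comments.

```python
def density(nodes, edges):
    """Fraction of possible undirected reader-reader ties that are present."""
    n = len(nodes)
    possible = n * (n - 1) // 2
    return len(edges) / possible if possible else 0.0

# Hypothetical comment networks: nodes are readers, an edge means two
# readers replied to each other at least once.
teen_nodes = {"a", "b", "c", "d"}
teen_edges = {("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("c", "d")}

classic_nodes = {"p", "q", "r", "s"}
classic_edges = {("p", "q"), ("r", "s")}

print(density(teen_nodes, teen_edges))     # 5/6: dense, strongly bonded
print(density(classic_nodes, classic_edges))  # 2/6: sparser interaction
```

On real data, a higher density among teen-fiction readers would reflect the strong social bonds described above, while sentiment analysis of the comment texts would capture the more cognitive register of the classics discussions.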

A new understanding of culture

"For the first time, we're able to analyze reading behavior almost in real time," says study leader Professor Gerhard Lauer, from the Digital Humanities Lab at the University of Basel. "Social media is ushering in a revolution in our understanding of culture. Platforms such as Wattpad, Spotify and Netflix enable culture to be understood in a density and accuracy that goes way beyond previous approaches in the humanities and social sciences."

Credit: 
University of Basel

Enjoying the View? How computer games can help evaluate landscapes

image: Ruth Swetnam, Professor of Applied Geography, has spent years analysing geographical landscapes and determining what features people from different countries find most appealing.

In a bid to engage younger audiences Ruth teamed up with Jan Korenko, Senior Lecturer in Visual Effects at Staffordshire University, to create a series of videos depicting dynamic fly-throughs of virtual landscapes.

Image: 
Staffordshire University

Geographers from Staffordshire University are stepping into the virtual world of computer games to develop exciting new ways of assessing landscapes.

Ruth Swetnam, Professor of Applied Geography, has spent years analysing geographical landscapes and determining what features people from different countries find most appealing.

This has included work on a major project for the Welsh Government to evaluate the impact of its agri-environmental scheme Glastir on rural landscapes. In this collaborative £6 million programme, coordinated by UKCEH, Ruth led the landscape team, and the findings have helped to shape government policy in Wales.

The project involved digitally mapping 300 sites across Wales and sought feedback from the public on the visual quality of the landscapes. More than two thousand responses were collected using a photographic survey. However, participants aged 25 and under were significantly underrepresented in the self-selecting sample.

In a bid to engage younger audiences Ruth teamed up with Jan Korenko, Senior Lecturer in Visual Effects at Staffordshire University, to create a series of videos depicting dynamic fly-throughs of virtual landscapes inspired by the Welsh countryside.

Ruth said: "To address the gap, we stepped out of the real-world landscapes that most geographers are comfortable with, into the virtual landscapes of gaming.

"The aim was to represent the reality of typical landscape vistas and we designed an amalgam of different sites in Wales which allowed us to easily add or remove different features such as sheep or woodland."

A second survey incorporating these virtual landscapes was targeted first at computer games design students and then at the wider public, with both groups taking the same assessment.

Overall, more than 70% of respondents were highly satisfied with the quality of the landscape visualisations. Of those who had visited rural Wales before, 64% gave a rating of at least 7 out of 10 for representativeness.

Ruth commented: "The response was really positive, especially feedback about how realistic the virtual landscapes were. There were no significant differences in overall ratings between the two groups which indicates that being familiar with gaming would not preclude the use of similar landscape visualisations in public consultation exercises."

"Perhaps we, as geographers and planners, need to explore these virtual worlds to engage with our youth in a landscape setting they are comfortable navigating. Key to success is an interdisciplinary approach, combining the technical flair of the visual effects experts with the geographical grounding provided by the landscape scientist."

She added: "With our rapidly changing climate, it is important to understand how our future landscapes will function and what they look like as this impacts on our wellbeing and culture. Young voices are essential to capture in this debate as they are the generation who will be living with these new landscape futures."

Credit: 
Staffordshire University

FEFU scientists developed method to build up functional elements of quantum computers

image: (a) Artistic representation of the HgTe QD layer coated above the laser-printed Au nanobump array. (b) Side-view (view angle of 45°) SEM image showing the Au nanobump array printed at a 1-μm pitch (scale bar corresponds to 1 μm). A close-up SEM image in the top inset demonstrates the difference between the period and the "effective" period of the nanobump array. The bottom inset shows a photograph of two large-scale (3 × 9 mm2) nanobump arrays produced on the glass-supported Au film. (c) Typical Fourier transform infrared (FTIR) reflection spectrum of the plasmonic nanobump array printed at a 1-μm pitch (green curve). The contribution of the localized surface plasmon resonance (LSPR) of the isolated nanobumps of a given shape is shown by the orange dashed curve. FLPR denotes the first-order lattice plasmon resonance. The inset provides the distribution of the z-component of the EM field (Ez/E0) calculated 50 nm above the smooth Au film surface at a 1480-nm wavelength. Circles indicate the nanobump positions. Details of the LSPR and FLPR calculations are provided in the Supporting Information. (d) Side-view (view angle of 70°) SEM image of the cross-section of a nanobump (scale bar is 200 nm). (e, f) Calculated EM-field intensity distribution (E2/E02) near the isolated nanobump (in the xz plane) and 50 nm above the smooth Au film level (in the xy plane) at an 880-nm pump wavelength (scale bars in e and f are 200 and 1000 nm, respectively).

Image: 
FEFU press office

Scientists from Far Eastern Federal University (FEFU, Vladivostok, Russia), together with colleagues from FEB RAS, China, Hong Kong and Australia, have manufactured ultra-compact bright sources based on IR-emitting mercury telluride (HgTe) quantum dots (QDs), future functional elements of quantum computers and advanced sensors. A related article is published in Light: Science & Applications.

FEFU scientists, together with colleagues from the Far Eastern Branch of the Russian Academy of Sciences and foreign experts, designed a resonant lattice, laser-printed on the surface of a thin gold film, that allows the near- and mid-IR radiation properties of a capping layer of mercury telluride (HgTe) QDs to be controlled.

The near- and mid-IR spectral range is extremely promising for optical telecommunication devices, detectors and emitters, as well as sensors and next-generation security systems. Recently developed semiconductor QDs are promising nanomaterials that emit light in exactly this range. The main issue, however, is associated with fundamental physical limitations (the Fermi golden rule, Auger recombination, etc.) that dramatically reduce the emission intensity of IR-emitting QDs.

Scientists from FEFU and the Institute of Automation and Control Processes (IACP FEB RAS), together with foreign colleagues, have overcome this limitation for the first time by applying a special resonant lattice of nanostructures, which they formed by ultra-precise direct laser printing on the surface of a thin gold film.

"The plasmon lattice we developed consists of millions of nanostructures arranged on the gold film surface. We produced such lattice using advanced direct laser processing. This fabrication technology is inexpensive comparing to existing commercial lithography-based methods, easily up-scalable, and allows facile fabrication of nanostructures over cm-scale areas. This opens up prospects for applying the developed approach to design new optical telecommunication devices, detectors, and emitters, including the first IR-emitting QD-based microlaser." - said the author of the work, Aleksander Kuchmizhak, a researcher at the FEFU Center for Virtual and Augmented Reality.

The scientist explains that the resonant lattice converts the pump radiation into a special type of electromagnetic wave known as surface plasmons. These waves, propagating over the surface of the patterned gold film within the capping layer of QDs, excite the QDs efficiently and boost the photoluminescence yield.

"For the visible spectral range, quantum dots have been synthesizing for several decades. Just a few scientific groups in the world, though, are capable of synthesizing QDs for the near and mid-IR range. Thanks to the plasmon lattice we developed, which consists of plasmon nanostructures arranged in a special way, we are able to control the main light-emitting characteristics of such unique QDs, for example, by repeatedly increasing the intensity and photoluminescence lifetime, reducing the efficiency of non-radiative recombinations, as well as by tailoring and improving emission spectrum." Said Alexander Sergeev, a senior researcher at IACP FEB RAS.

The scientist noted that quantum dots are a promising class of luminophores. Synthesized by a simple and cost-effective chemical method, the material is durable and, unlike organic molecules, does not suffer from degradation.

Credit: 
Far Eastern Federal University

A study shows growth trends in female homicide victims in Spain spanning over a century

In a groundbreaking study, research carried out between the Universitat Oberta de Catalunya (UOC) and the University of Lausanne (UNIL, Switzerland) has compiled data on homicide victims in Spain, disaggregated by gender, from 1910 to 2014. Unlike previous studies, which have focused on particular regions of the country or shorter time periods, this study gathers and analyses data corresponding to more than a century in Spain. Although it takes a look at both male and female victimization, the analysis has centred particularly on female victims. Among its most salient results, the analysis shows an increase in female homicide victims starting in the 1960s and associates it with the evolution of women's role and status in society.

The study "Female Homicide Victimization in Spain from 1910 to 2014: the Price of Equality?", published in the European Journal on Criminal Policy and Research, is the work of Antonia Linde, director of the UOC Bachelor's Degree in Criminology. Linde studied the documentation in depth and analysed the trends in female homicide victimization, irrespective of the gender of the perpetrator. In other words, the data used include women who have been victims of homicides committed by both men and women. "This is because there are no public data available on female homicide victims in which the perpetrator's sex is specified until the late 1990s", said the researcher.

Female homicide mortality does not follow a steady line over the century; the study identifies an upward trend starting in the 1960s. It was in this decade that women's daily routines began a process of rapid change: women began to spend more time out of the home, studying or working, and remained single and childless until later ages. It is also from this decade onwards that their likelihood of becoming homicide victims increases. "These new activities would have increased their exposure to the risk of victimization (they spend more time out of the home, they interact with more people, etc.)", Linde pointed out.

Evolution of women's role and status in Spain

The study reveals associations between the evolution of female homicide victims and six indicators of women's role and social status in Spain: enrolment in higher education, entry in the job market, average age when the first child is born, marriage, divorce and abortion.

The data reveal that, starting in the 1960s, the number of women enrolled in higher education institutions increased very significantly. In 1915-1916, women accounted for 2% of the total student population; by the early 1960s, this percentage had increased to 25% and, from the 1980s onwards, there were more female students than male students. In addition, the number of marriages fell 50% during the period analysed (from 7 marriages per 1,000 inhabitants in 1910 to 3.4 in 2014).

The other indicators, although available for shorter time periods, also confirmed women's transition from traditional to non-traditional roles. There were more divorces (between 1982 and 2014, the number of divorces per 100 marriages increased six-fold) and abortions (in 1990, there were 4 abortions per 1,000 women - in the 15-44 age group - and 10.5 in 2014), women's presence in the job market grew (in 1976, there were 2.7 men working for every woman but only 1.2 men for every woman in 2014) and women were older when they had their first child (the average age was 25 in 1975 and 31 in 2014).

The behaviour of mortality is different in men and women

During the century analysed, the overall trend in the number of homicide victims followed the trend in the number of male homicide victims. However, when the number of murdered women was compared with the overall trend, the two did not track each other so closely. "Male and female victimization followed different trends during several periods. This gives a variable gender gap in homicide victimization in Spain over time", Linde remarked.

The ratio between male and female homicide victims narrowed between the early 20th century and the early 21st century. In the 1910s, 7-9 men were killed for every woman; by the late 1960s, the figure had fallen to 2.7 men for every woman; and in the early 2010s, it was less than 2 men per woman.
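The narrowing gap amounts to simple ratio arithmetic on victim counts. The absolute counts below are invented for illustration; only the male-to-female ratios echo those reported in the study.

```python
# Hypothetical victim counts per period; only the ratios are meaningful.
victims = {
    "1910s": {"male": 800, "female": 100},       # ratio 8, within 7-9
    "late 1960s": {"male": 270, "female": 100},  # ratio 2.7
    "early 2010s": {"male": 190, "female": 100}, # ratio below 2
}

for period, counts in victims.items():
    ratio = counts["male"] / counts["female"]
    print(f"{period}: {ratio:.1f} male victims per female victim")
```

Tracking this ratio year by year is what reveals the variable gender gap the study describes.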

Another noteworthy result is that, while male victimization decreased after the mid-1980s, female victimization increased. "This is interesting because during this period most countries in our region show a decrease in homicide victimization in general after the 1960s, including female victimization", the expert underscored.

Before the 1960s

The study shows that there is an increase in the number of women who were killed during the 1910s, a drop during the 1920s and a spike during the 1930s, up until the Spanish Civil War. After the war, homicides fell steadily until they reached their lowest ever values in the early 1960s. "The analysis shows a consistent upward trend from the early 20th century, with dips during the two dictatorships that ruled the country between 1923 and 1930 and between 1939 and 1975. The lower death rates during these years are probably due to the restrictions on personal freedom", the researcher suggested.

To conclude, the article raises a considerable number of questions because many more analyses can be performed on the data compiled. "I have published these data now for reasons of research ethics, so that other colleagues can use them in future studies", Linde said.

Credit: 
Universitat Oberta de Catalunya (UOC)

Story tips: Fusion squeeze, global image mapping, computing mental health, Na+ batteries

image: This simulation of a fusion plasma calculation result shows the interaction of two counter-streaming beams of super-heated gas.

Image: 
David L. Green/Oak Ridge National Laboratory, U.S. Dept. of Energy

Fusion - Squeezing the code

The prospect of simulating a fusion plasma is a step closer to reality thanks to a new computational tool developed by scientists in fusion physics, computer science and mathematics at Oak Ridge National Laboratory.

Harnessing fusion power on Earth requires strong magnetic fields to hold and squeeze a super-heated gas, and the large scale experiments capable of such extreme conditions can take decades to build.

Through simulation, a team led by ORNL's David Green hopes to perform virtual investigations of how fusion devices behave using high-performance computing.

"The mathematics underlying a fusion plasma are so complex that traditional approaches test the limits of even today's largest supercomputers," Green said. The team has tested a new approach on ORNL's Summit supercomputer and they expect that its combination with ORNL's upcoming exascale Frontier supercomputer will make a virtual fusion device possible.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/Fusion_plasma_simulation.jpg
Caption: This simulation of a fusion plasma calculation result shows the interaction of two counter-streaming beams of super-heated gas. Credit: David L. Green/Oak Ridge National Laboratory, U.S. Dept. of Energy

Modeling - Efficient infrastructure mapping

A novel approach developed by scientists at Oak Ridge National Laboratory can scan massive datasets of large-scale satellite images to more accurately map infrastructure - such as buildings and roads - in hours versus days.

Comprehensive image data is useful for stakeholders to make informed decisions. The new computational workflow uses deep learning techniques to train and deploy models to better address location, environmental and time challenges when mapping structures.

"We developed a framework that divides the labor of characterization among several models, each tasked to learn to detect specific objects under near-homogeneous contexts versus one model attempting to detect objects under diverse conditions across a collection of images," said ORNL's Dalton Lunga who led the study.

The team tested the approach on 21 terabytes' worth of image data (one terabyte covers about 12,085 square miles) and reduced the computing time from 28 days to about 21 hours.

Next, they will characterize even larger image datasets on ORNL's Summit supercomputer.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/Puerto_Rico_Resflow9_0.png
Caption: A new computational approach by ORNL can more quickly scan large-scale satellite images, such as these of Puerto Rico, for more accurate mapping of complex infrastructure. Credit: Maxar Technologies and Dalton Lunga/Oak Ridge National Laboratory, U.S. Dept. of Energy

Computing - Assessing mental health

Oak Ridge National Laboratory will partner with Cincinnati Children's Hospital Medical Center, collaborating with CCHMC's Dr. John Pestian and Dr. Tracy Glauser, to explore ways to deploy expertise in health data science that could more quickly identify patients' mental health risk factors and aid in suicide prevention strategies.

ORNL and CCHMC will leverage the lab's expertise in high-performance computing - which includes access to the world's most powerful supercomputers - and proven ability to keep medical records safe and secure to research new computational methodologies using deep learning and artificial intelligence techniques.

"Our collaboration's goal is to quickly scan and detect at-risk patients' behaviors and predictive patterns that could lead to mental health solutions, all while keeping personal information protected," said ORNL's Joe Lake.

After the pilot phase, the team plans to scale up to larger patient datasets and share the results with other entities that have identified mental health conditions as a top priority, such as the National Institute of Mental Health and the Department of Veterans Affairs.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/CADES2019-P00182_0.jpg
Caption: ORNL's collaboration with Cincinnati Children's Hospital Medical Center will leverage the lab's expertise in high-performance computing and safe, secure recordkeeping. Credit: Genevieve Martin/Oak Ridge National Laboratory, U.S. Dept. of Energy

Batteries - Charged up on sodium

Researchers at Oak Ridge National Laboratory demonstrated that sodium-ion batteries can serve as a low-cost, high-performance substitute for rechargeable lithium-ion batteries commonly used in robotics, power tools, and grid-scale energy storage.

Sodium-ion batteries, or SIBs, show promise beyond lithium-ion batteries because sodium's abundance makes it more affordable compared to lithium. However, limitations in the technical design of their anode, cathode and electrolyte systems prevent SIBs from being widely used.

In a study, ORNL researchers developed SIBs by pairing a high-energy oxide or phosphate cathode with a hard carbon anode and achieved 100 usage cycles at a one-hour charge and discharge rate.

"The dedication to lithium-ion batteries over the past 20 years has eclipsed any significant development around room temperature sodium-ion batteries despite the material availability," ORNL's Ilias Belharouak said. "This research shows how SIBs can be designed for improved performance."

Media Contact: Jennifer Burke, 865.576.3212; burkejj@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/sodium-ion_batteries.jpg
Caption: ORNL researchers developed sodium-ion batteries by pairing a high-energy oxide or phosphate cathode with a hard carbon anode and achieved 100 usage cycles at a one-hour charge and discharge rate. Credit: Mengya Li/Oak Ridge National Laboratory, U.S. Dept. of Energy

Credit: 
DOE/Oak Ridge National Laboratory

New online therapy for lingering depression symptoms could fill important gap in care

image: A pioneering therapy for lingering depression that was developed by Professor Zindel Segal (above) is now available online, offering greater access to those in need

Image: 
U of T Scarborough

An online version of a pioneering therapy aimed at reducing the lingering symptoms of depression can offer additional benefits for patients receiving care, according to a new U of T Scarborough study.

When added to regular depression care, the online version of Mindfulness-based Cognitive Therapy (MBCT) can help treat depression symptoms and help prevent its return, notes U of T Scarborough Professor Zindel Segal, a clinical psychologist and lead author of the study.

"Treatments work well for many suffering from depression, but there remains a considerable group who continue to struggle with lingering symptoms such as sleep, energy or worry," he says.

Clinical data shows that in the absence of treatment, these patients face a significantly higher risk of becoming fully depressed again, notes Segal.

"Patients with these residual symptoms face a gap in care since they are not depressed enough to warrant re-treatment, but receive few resources for managing the symptom burden they still carry."

The digital version of MBCT, called Mindful Mood Balance (MMB), is an online adaptation of the effective treatment developed by Segal and his colleagues. It combines the practice of mindfulness meditation with the tools of cognitive therapy to teach patients adaptive ways of regulating their emotions.

The practice of mindfulness meditation helps patients observe, rather than react automatically to, any thought, feeling or sensation that comes to mind, setting them up to choose how best to respond, explains Segal.

"Our goal has always been for people to develop skills that they could continue to rely on once treatment had ended," he says.

While research indicates that MBCT is as effective as antidepressant medication in preventing relapse, access remains limited and nearly impossible for those living outside large cities.

"What drove us to develop MMB is to improve access to this treatment. The online version uses the same content as the in-person sessions, except people can now avoid the barriers of cost, travel or wait times, and they can get the care they need efficiently and conveniently," he says.

Segal, along with colleagues Arne Beck (Kaiser Permanente Institute for Health Research) and Sona Dimidjian (University of Colorado Boulder), received a $2 million grant in 2015 from the U.S. National Institutes of Health (NIH) to develop MMB. The program was tested in a randomized clinical trial of 460 patients in clinics at Kaiser Permanente Colorado, a large American HMO.

The results of the study, published in JAMA Psychiatry, found that adding MMB to depression care offered by Kaiser led to greater reductions in depressive and anxious symptoms, higher rates of remission and higher levels of quality of life compared to patients receiving conventional depression care alone.

"An online version of MBCT, when added with usual care, could be a real game changer because it can be offered to a wider group of patients for little cost," says Segal.

Segal admits that even with the positive results, there is work to be done. A common trade-off with online programs is that drop-out rates tend to be higher than in-person treatment. An important next step is looking at ways to cut down on the dropout rate.

"The higher rates of dropout are somewhat offset by fact that you can reach many more people with online treatment," he says. "But, there's still room for improvement and we will be looking at our user metrics and outcomes for ways to make MMB more engaging and durable."

Credit: 
University of Toronto

Drones can determine the shape of a room by listening

Imagine a loudspeaker is placed in a room with a few microphones. When the loudspeaker emits a sound impulse, the microphones receive several delayed responses as the sound reverberates from each wall in the room. These first-order echoes--heard after sound impulses have bounced only once on a wall--then bounce back from each wall to create second-order echoes and so on.

In a paper publishing next week in the SIAM Journal on Applied Algebra and Geometry, Mireille Boutin and Gregor Kemper attempt to reconstruct the shape of a room using first-order echoes received by four microphones attached to a drone. The microphones are aligned in a rigid configuration and do not lie in a common plane. Placing microphones on a drone--rather than independently throughout the room--reveals new areas of application.

"The microphones listen to a short sound impulse bouncing on finite planar surfaces -- or the 'walls,'" Boutin, a professor of mathematics and electrical and computer engineering at Purdue University, explains . "When a microphone hears a sound that has bounced on a wall, the time difference between the emission and reception of the sound is recorded. This time difference corresponds to the distance traveled by the sound during that time."

The time delay of each first-order echo provides the authors with a set of distances from every microphone to mirror images of the source reflected across each wall. The microphones cannot tell which wall each echo comes from; indeed, depending on its position and the room's geometry, a microphone may not receive an echo from a given wall at all.

The authors use a known modeling technique to focus on first-order echoes. This method interprets bounced sound as coming from a virtual source behind the wall instead of from the source, thus allowing a virtual source point to represent each wall.

"The time differences between emission and reception provide the distance between the microphone and virtual source point," Boutin says. "If we know the distance from one of these virtual source points to each of the four microphones, we can recover the coordinates of the virtual source and subsequently reconstruct four points on the wall -- and hence the plane that contains the wall."

However, the microphones cannot determine the distance that corresponds to each virtual source point, i.e., each wall. In response, Boutin and her colleagues designed a method to label the distances that correspond to each wall, a process they call "echo sorting."

The echo sorting technique uses a polynomial as a screening test: it checks whether the four distances lie on the zero set of a certain polynomial in four variables. A nonzero value reveals that the distances cannot come from echoes off the same wall; if the polynomial equals zero, the distances could come from the same wall.
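The paper's explicit four-variable polynomial is not reproduced here, but the screening idea can be mimicked numerically: a candidate 4-tuple of distances can come from one wall only if some single point satisfies all four sphere equations at once, so the residual of the remaining equation after solving the other constraints plays the role of the polynomial value (zero means "possibly the same wall"). The geometry and threshold below are illustrative stand-ins, not the authors' actual test.

```python
import numpy as np
from itertools import product

# Assumed rigid, non-coplanar microphone layout (illustrative).
mics = np.array([[0.0, 0.0, 0.0],
                 [0.3, 0.0, 0.0],
                 [0.0, 0.3, 0.0],
                 [0.0, 0.0, 0.3]])
source = np.array([0.1, 0.1, 0.1])

def mirror(src, n, h):
    """Virtual source: mirror image of src across the plane x . n = h."""
    return src + 2.0 * (h - src @ n) * n

# Two hypothetical walls -> two virtual sources.
v1 = mirror(source, np.array([0.0, 0.0, 1.0]), 2.5)
v2 = mirror(source, np.array([1.0, 0.0, 0.0]), 3.0)

# Each microphone hears one unlabeled echo distance per wall.
echoes = np.stack([np.linalg.norm(v - mics, axis=1) for v in (v1, v2)])

def residual(d):
    """Consistency score for one candidate distance 4-tuple: solve the
    linearized system from the distance differences, then check the
    leftover sphere equation. Near zero means 'possibly the same wall'."""
    A = 2.0 * (mics[1:] - mics[0])
    b = (np.sum(mics[1:]**2, axis=1) - np.sum(mics[0]**2)
         - d[1:]**2 + d[0]**2)
    p = np.linalg.solve(A, b)
    return abs(np.linalg.norm(p - mics[0]) - d[0])

# Echo sorting: screen every way of picking one echo per microphone.
scores = {combo: residual(np.array([echoes[w, i] for i, w in enumerate(combo)]))
          for combo in product(range(2), repeat=4)}
consistent = [c for c, r in scores.items() if r < 1e-6]
```

Of the 16 candidate assignments, only the two in which all four microphones use echoes from the same wall pass the screen, which is exactly the sorting the polynomial test performs.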

This study demonstrates that reconstructing a room from first-order echoes acquired by four microphones is a theoretical problem that is well-posed under generic conditions. "This is a first step towards solving the corresponding real-world problem," Boutin observes. "If the problem was not well-posed, then a practical solution would require more information. But since we know that it is well-posed, we can move on to the next step: finding a way to reconstruct the room when the echo measurements are noisy."

This task is by no means straightforward. Certain drone placements give rise to problems that are not well-posed, suggesting that the noisy version of the problem will be susceptible to ill conditioning. More work is necessary to properly solve the problem of reconstructing a room from echoes.

While the mathematical framework simply requires a rigid configuration of non-coplanar microphones, the research has a range of other potential applications. "These microphones can be placed inside a room or on any vehicle, such as a car, an underwater vehicle, or a person's helmet," Gregor Kemper, a professor in the Department of Mathematics at Technische Universität München, explains. The authors' journal paper poses examples with stationary, indoor sound sources as well as sources placed on vehicles that may get rotated and translated due to movement; these latter sources present significantly more complicated situations.

"A moving car is different from a drone or an underwater vehicle in an interesting way," Kemper adds. "Its positions have only three degrees of freedom--x-axes, y-axes, and orientation--whereas a drone has six degrees of freedom. Our work indicates that these six degrees of freedom are sufficient to almost always detect the walls, but this does not necessarily mean that three degrees will also suffice. The case of a car or any surface-based vehicle is the subject of ongoing research by our group."

Achieving computational economy for such problems is an important goal for Boutin and Kemper. Their method requires a computer algebra system to perform symbolic computations, which can become more computationally complex for other variations of the problem, thus limiting its expansion to similar problems. "Finding a less computationally expensive technique to prove the same results would be desirable, especially if this method turned out to be applicable to other cases," Kemper says. "Our mathematical framework is suitable for surface-based vehicles, but the actual computations necessary for the proof present challenges. We hope other teams will explore this issue."

Credit: 
Society for Industrial and Applied Mathematics