How quickly do flower strips in cities help the local bees?

image: The flower strip at Fockensteinstraße as an example of the urban context of the flower strips studied here.

Image: 
Susanne S. Renner

Insects rely on a mix of floral resources for survival. Populations of bees, butterflies, and flies are currently declining rapidly due to the loss of flower-rich meadows. To counter this widespread loss of fauna, the European Union supports "greening" measures, for example, the creation of flower strips.

A group of scientists from the University of Munich, led by Prof. Susanne S. Renner, has conducted the first quantitative assessment of the speed and distance over which urban flower strips attract wild bees, and published the results of the study in the open-access Journal of Hymenoptera Research.

Flower strips are human-made patches of flowering plants that provide resources for flower-visiting insects and insect- and seed-feeding birds. Previous experiments have proved their conservation value for enhancing biodiversity in agricultural landscapes.

The success of flower strips in maintaining populations of solitary bees depends on their floristic composition, their distance from suitable nesting sites, and their distance from other habitats that maintain stable bee populations. To study the attractiveness of flower strips in urban landscapes, the scientists used an experimental set-up of nine 1,000-square-meter flower strips recently established in Munich by a local bird conservation agency.

"We identified and counted the bees visiting flowers on each strip and then related these numbers to the total diversity of Munich's bee fauna and to the diversity at different distances from the strips. Our expectation was that newly planted flower strips would attract a small subset of mostly generalist, non-threatened species and that oligolectic species (species using pollen from a taxonomically restricted set of plants) would be underrepresented compared to the city's overall species pool," shared Prof. Susanne S. Renner.

Bees need time to discover new habitats, but the analysis showed that the city's wild bees managed to do that in just one year so that the one-year-old flower strips attracted one-third of the 232 species recorded in Munich between 1997 and 2017.

Surprisingly, the flower strips attracted a random subset of Munich's bee species in terms of pollen specialization. At the same time, as expected, the first-year flower-strip visitors mostly belonged to common, non-threatened species.

The results of the study indicate that flower strip plantings in cities provide extra support for pollinators and act as an effective conservation measure. The authors therefore strongly recommend the flower strip networks envisaged in the upcoming Common Agricultural Policy (CAP) reform in the European Union.

Credit: 
Pensoft Publishers

Directed species loss from species-rich forests strongly decreases productivity

image: The field trial BEF-China is carried out in Xingangshan in the province of Jiangxi in southeast China.

Image: 
Yuanyuan Huang

The forest biodiversity experiment BEF-China began in 2009 as a collaboration among institutions in China, Germany and Switzerland and is one of the world's biggest field experiments. In the subtropical forests of southeastern China, the international team planted over 500 plots of 670 square meters with 400 trees each - with each plot receiving between one and 16 tree species in various combinations. The researchers simulated both random and directed species extinction scenarios and analyzed the data.
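The difference between the two extinction scenarios can be illustrated with a minimal simulation sketch. All species names, productivity values and risk scores below are hypothetical, chosen only to show the removal logic, not the experiment's actual data:

```python
import random

def simulate_loss(species, order, steps):
    """Remove species one at a time and track the remaining community's productivity.

    species: dict mapping species name -> (productivity, extinction_risk)
    order: 'random' removes species in shuffled order;
           'directed' removes the highest-risk species first.
    """
    names = list(species)
    if order == "random":
        random.shuffle(names)
    else:  # directed: evolutionarily distinct, high-risk species go first
        names.sort(key=lambda n: species[n][1], reverse=True)
    remaining = set(species)
    productivity = []
    for name in names[:steps]:
        remaining.discard(name)
        productivity.append(sum(species[s][0] for s in remaining))
    return productivity

# Hypothetical community: the distinct species contribute more but are at higher risk.
community = {
    "sp_a": (10.0, 0.9),  # functionally distinct, high extinction risk
    "sp_b": (8.0, 0.7),
    "sp_c": (3.0, 0.2),
    "sp_d": (2.0, 0.1),
}

directed = simulate_loss(community, "directed", steps=2)
print(directed)  # productivity left after losing the two most at-risk species
```

Under the directed scenario the most productive (and most at-risk) species vanish first, so productivity falls faster than under typical random orderings of the same community.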

Directed loss of species reduces productivity

After eight years, directed species loss in species-rich forest ecosystems, in which evolutionarily distinct species had higher extinction risks, showed much stronger reductions in forest productivity than treatments subject to random species loss. "These findings have significant implications for biodiversity conservation and climate mitigation, because more productive forests also remove more carbon dioxide from the air," says Bernhard Schmid, professor at the Department of Geography of the University of Zurich (UZH) and last author of the study.

Diversity alone does not protect against losses

"Our results suggest that species loss can severely hamper ecosystem functioning even at high species richness, when many species still remain in the ecosystem. This challenges the decades-long assumption derived from studies based on random species loss," says Schmid. That assumption holds that species loss from high-diversity communities would have only a small impact on ecosystem functioning, because the remaining species could take over the functions of extinct species due to species redundancy.

Extinct species missing from the network

Why could directed species loss lead to such a strong reduction in productivity? "We think two processes associated with directed species loss might explain the results. When species loss is directed, we may lose the most functionally distinct species first, if that functional distinctiveness is also the cause of extinction," explains first author of the study Yuxin Chen, former post-doc at UZH and now associate professor at Xiamen University in China. "Species do not live independently, but participate in complex networks of species interactions. Losing species can change these interaction networks. The loss of species interactions contributed significantly to the observed results."

China responds with new laws

The ongoing impacts of a severe coronavirus epidemic this winter have prompted China to speed up biosecurity legislation and elevate it to a national security issue. The biosecurity law would cover various areas, including the conservation of biodiversity. "Our research is grounded in one of the diversity hotspots in China. The findings are timely for supporting the legislation of biosecurity," says Keping Ma, professor at the Chinese Academy of Sciences and co-founder of the BEF-China experiment. "Diversity loss from species-rich forests could also increase the risk of pest and disease outbreaks. Some other research teams are studying this issue."

Credit: 
University of Zurich

ITMO scientists develop new algorithm that can predict population's demographic history

image: Genetic algorithm for inferring demographic history of multiple populations from allele frequency spectrum data, ITMO University

Image: 
Dmitry Lisovskiy, ITMO.NEWS

Bioinformatics scientists from ITMO University have developed a programming tool that allows for quick and effective analysis of genome data and for using it as a basis for building the most probable models of the demographic history of populations of plants, animals and people. Operating with complex computational schemes, the software can, with a very high degree of likelihood, predict what history a particular group of living organisms has gone through over the past thousands of years, what periods of mass extinction or mass population growth a population has experienced, and how long it has been in contact with other populations of the same species. The scientists' article dedicated to this methodology has been published in GigaScience.

How can one find out when exactly the first ancestors of modern tigers appeared on Earth? When did two elephant populations split? Is there a difference between the Dama and the Moroccan gazelle? When did the division of African and Eurasian Homo sapiens occur? The answers to all these questions can be found in a population's demographic history - in other words, the scenario that shows what stages the population went through in the course of its history, and whether it underwent any mass extinctions, migrations, or sharp spikes in its numbers.

Apart from answering fundamental questions, this data can support applied research in the fields of ecology and environmental protection. For instance, if some region only has some 800 walruses left, scientists have to determine whether this constitutes a critical decrease or a natural population size that has remained constant for several thousand years, and answer the question of whether valuable resources have to be spent on protecting this species from extinction.

The reconstruction of a population's demographic history on the basis of genetic information is a complicated task which requires population geneticists to possess not only knowledge in the field of biology but also programming skills. Such scientists have to gather data and write code for computing possible models of a population's evolution which could have led to the vast multitude of genetic information we can witness in this population's representatives today. Up until recently, this was a long process whose end result relied very heavily on the researcher's initial hypothesis. If that hypothesis had any defects, or the researcher failed to take some aspect into consideration, the software couldn't correct the initial error and calculated the probability of particular demographic events only within the boundaries predefined by the researcher.

The software developed by a group of ITMO University scientists as part of the Project 5-100 grant programs and with support from JetBrains Research aims to solve this problem. The researchers proposed a software product which independently and automatically predicts the most probable model of a population's demographic history. Moreover, it is significantly less dependent on the initial research hypothesis, doesn't require advanced programming skills, and produces more accurate results. What is more, the software has the advantage of flexibility: if the obtained result somehow diverges from archaeological or historical data, you can easily introduce additional limitations into the underlying algorithm to update its hypothesis.

"Using genetic data, our software automatically computes the model it considers optimal," shares Vladimir Ulyantsev. "It looks at the entire range of available scenarios. As a scientist, I'll consider the scenarios I deem the most likely; there can be three, five, maybe ten of those. The software, on the other hand, will test all of the models it estimates as probable, which is a much larger number. That's why the solutions it comes up with are better than those proposed by people working on the basis of the earlier methods. The most beautiful thing here is the method - a genetic algorithm inspired by how evolution happens: species multiply and mutate, with those least able to adapt dying out. In place of the species we have demographic models and their parameters, and their adaptability is measured by their similarity to the studied data."
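A genetic algorithm of the kind Ulyantsev describes can be sketched in a few lines. This is a toy illustration only - the parameter vectors, the quadratic "fitness" and all names below are invented for the example and are not the actual tool's API:

```python
import random

def genetic_search(fitness, n_params, pop_size=30, generations=60, seed=0):
    """Toy genetic algorithm: candidate 'demographic models' are parameter
    vectors; fitness measures how well a model matches the observed data."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 10) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # best-adapted models first
        survivors = pop[: pop_size // 2]         # the least fit "die out"
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # crossover
            i = rng.randrange(n_params)
            child[i] += rng.gauss(0, 0.5)        # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy target model: (population size, split time, migration rate) = (5, 2, 7).
# Fitness rewards parameter vectors close to the target.
target = [5.0, 2.0, 7.0]
best = genetic_search(lambda p: -sum((x - t) ** 2 for x, t in zip(p, target)), 3)
```

After a few dozen generations the surviving "models" cluster around the target parameters, which is the same selection-and-mutation loop the real software runs over vastly larger model spaces.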

After obtaining this data, the scientists can present it on a map and compare the information indicating that during a particular period a population underwent a migration with archaeological findings and other evidence. These algorithms were used to check a large number of hypotheses and research by evolutionary geneticists. In many cases, the obtained result was much more accurate than that of the initial works.

Credit: 
ITMO University

Cloud data speeds set to soar with aid of laser mini-magnets

image: Model of a single-molecule magnet

Image: 
Dr Olof Johansson

Tiny, laser-activated magnets could enable cloud computing systems to process data up to 100 times faster than current technologies, a study suggests.

Chemists have studied a new magnetic material that could boost the storage capacity and processing speed of hard drives used in cloud-based servers.

This could enable people using cloud data systems to load large files in seconds instead of minutes, researchers say.

A team led by scientists from the University of Edinburgh created the material - known as a single-molecule magnet - in the lab.

They discovered that a chemical bond that gives the compound its magnetic properties can be controlled by shining rapid laser pulses on it. The compound is composed mainly of the element manganese, whose name derives from the Latin word magnes, meaning magnet.

Their findings suggest that data could be stored and accessed on the magnets using laser pulses lasting one millionth of a billionth of a second. They estimate this could enable hard drives fitted with the magnets to process data up to 100 times faster than current technologies.

The development could also improve the energy efficiency of cloud computing systems, the team says, which collectively emit as much carbon as the aviation industry.

Existing hard drives store data using a magnetic field generated by passing an electric current through a wire, which generates a lot of heat, researchers say. Replacing this with a laser-activated mechanism would be more energy efficient as it does not produce heat.

The study, published in the journal Nature Chemistry, also involved researchers from Newcastle University. It was funded by the Royal Society of Edinburgh, the Carnegie Trust and the Engineering and Physical Sciences Research Council.

Dr Olof Johansson, of the University of Edinburgh's School of Chemistry, who led the study, said: "There is an ever-increasing need to develop new ways of improving data storage devices. Our findings could increase the capacity and energy efficiency of hard drives used in cloud-based storage servers, which require tremendous amounts of power to operate and keep cool. This work could help scientists develop the next generation of data storage devices."

Credit: 
University of Edinburgh

KITE code could power new quantum developments

A research collaboration led by the University of York's Department of Physics has created open-source software to assist in the creation of quantum materials which could in turn vastly increase the world's computing power.

Throughout the world, the increased use of data centres and cloud computing is consuming growing amounts of energy - quantum materials could help tackle this problem, say the researchers.

Quantum materials - materials which exploit unconventional quantum effects arising from the collective behaviour of electrons - could perform tasks previously thought impossible, such as harvesting energy from the complete solar spectrum or processing vast amounts of data with low heat dissipation.

The design of quantum materials capable of delivering intense computing power is guided by sophisticated computer programmes capable of predicting how materials behave when 'excited' with currents and light signals.

Computational modelling has now taken a 'quantum leap' forward with the announcement of the Quantum KITE initiative, a suite of open-source computer codes developed by researchers in Brazil, the EU and the University of York. KITE is capable of simulating realistic materials with unprecedented numbers of atoms, making it ideally suited to create and optimise quantum materials for a variety of energy and computing applications.

Dr Aires Ferreira, a Royal Society University Research Fellow and Associate Professor of Physics, who leads the research group at the University of York, said:

"Our approach uses a new class of quantum simulation algorithms to help predict and tailor materials' properties for a wide range of applications ranging from solar cells to low-power transistors.

"The first version of the free, open source KITE code already demonstrates very encouraging capabilities in electronic structure and device-level simulation of materials.

"KITE's capability to deal with multi-billions of atomic orbitals, which to our knowledge is unprecedented in any area of quantum science, has the potential to unlock new frontiers in condensed matter physics and computational modelling of materials."

One of the key aspects of KITE is its flexibility to simulate realistic materials, with different kinds of inhomogeneities and imperfections.

Dr Tatiana Rappoport from the Federal University of Rio de Janeiro in Brazil, said:

"This open-source software is our commitment to helping remove barriers to realistic quantum simulations and to promoting an open science culture. Our code has several innovations, including a 'disorder cell' approach to simulate imperfections within periodic arrangements of atoms and an efficient scheme for dealing with RAM-intensive calculations that can be useful to other scientific communities and industry."

Read the research paper in Royal Society Open Science.

Credit: 
University of York

Scientists find functioning amyloid in healthy brain

image: Protein FXR1, extracted from the brain of healthy rats, is stained with the amyloid-specific dye Congo red and shows an apple-green glow in polarised light, which is recognised as the 'gold standard' for amyloid identification.

Image: 
SPbU

Scientists from St Petersburg University worked with their colleagues from the St Petersburg branch of the Vavilov Institute of General Genetics. They conducted experiments on laboratory rats and showed that the FXR1 protein in the brains of young and healthy animals functions in an amyloid form. Previously published reports indicate that this protein controls long-term memory and emotions: mice that have the FXR1 gene switched off quickly remember even complex mazes, and animals that have too much of this protein do not suffer from depression even after severe stress. In addition, in humans, a failure in the gene encoding FXR1 is linked to autism and schizophrenia.

'Our findings clearly show that developing a universal remedy that will destroy all amyloids in the brain is totally futile. Instead, we need to look for a cure for each specific pathology. The healthy brain was previously known to store only a few protein hormones in amyloid form. They are stored in secretory granules in the hypophysis, but when the time comes, the secretory granules burst and the proteins function in a normal, monomeric form,' said Alexey Galkin, Professor of the Department of Genetics, Doctor of Biology. 'We are the first to prove that a protein can actually function in the brain in amyloid form, both as oligomers and as insoluble aggregates. Also, the amyloid form of FXR1 can bind RNA molecules and protect them from degradation.'

The research was conducted by the Research Park of St Petersburg University with equipment provided by the resource centres "Chromas Core Facility" and "The Centre for Molecular and Cell Technologies". The amyloid form of FXR1 protein was discovered by scientists using the amyloid proteome screening method developed by a research team in 2016. Amyloids generally play an important role in many organisms: for example, one of these proteins is found in human pigment cells and affects skin tanning. However, today, scientists are interested in amyloids primarily due to the need to find a cure for neurodegenerative diseases, where these proteins play a key role.

Credit: 
St. Petersburg State University

Story Tips: Antidote chasing, traffic control and automatic modeling

image: A team of scientists may have discovered a new family of antidotes for certain poisons that can mitigate their effects more efficiently compared with existing remedies.

Image: 
Andrey Kovalevsky/Oak Ridge National Laboratory, US Dept. of Energy

Biochemistry - Chasing the antidote

In the most comprehensive, structure-based approach to date, a team of scientists may have discovered a new family of antidotes for certain poisons that can mitigate their effects more efficiently compared with existing remedies.

Poisons such as organophosphorus nerve agents and pesticides wreak havoc by blocking an enzyme essential for proper brain and nerve function. Fast-acting drugs, called reactivators, are required to reach the central nervous system and counteract damage that could lead to death.

"To enhance the antidote's effectiveness, we need to improve the reactivator's ability to cross the blood-brain barrier, bind loosely to the enzyme, chemically snatch the poison and then leave quickly," said ORNL's Andrey Kovalevsky, co-author of a study led by Zoran Radić of UC San Diego.

The team designed and tested reactivators on three different nerve agents and one pesticide with positive initial results. Their next step is to use neutron crystallography to better understand antidote designs.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/01a%20-%20Biochemistry-Antidote1_1.jpg

Caption: A team of scientists may have discovered a new family of antidotes for certain poisons that can mitigate their effects more efficiently compared with existing remedies. Credit: Andrey Kovalevsky/Oak Ridge National Laboratory, U.S. Dept. of Energy

Vehicles - Fuel savings green light

Large trucks lumbering through congested cities could become more fuel efficient simply by not having to stop at so many traffic lights.

A proof-of-concept study by Oak Ridge National Laboratory shows promise of a potential new system to direct traffic lights to keep less-efficient vehicles moving and reduce fuel consumption.

In collaboration with traffic-management services company GRIDSMART, researchers used smart cameras to collect real-world data from images of vehicles as they move through select intersections.

The team used artificial intelligence and machine learning techniques to "teach" these cameras how to quickly identify each vehicle type and its estimated gas mileage, sending the information to the next intersection's traffic light.

ORNL's Thomas Karnowski said early results from the computer simulation could lead to more comprehensive research.

Media Contact: Sara Shoemaker, 865.576.9219; shoemakerms@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/02%20-%20Truck-intersection_1.png

Caption: A preliminary study by ORNL and GRIDSMART shows promise of a new system to keep trucks moving through intersections and reduce fuel consumption. Credit: Thomas Karnowski/Oak Ridge National Laboratory, U.S. Dept. of Energy

Buildings - Automatic modeling

Oak Ridge National Laboratory researchers have developed a modeling tool that identifies cost-effective energy efficiency opportunities in existing buildings across the United States.

Using supercomputing, the energy modeling method assesses building types, systems, use patterns and prevailing weather conditions.

"Manually collecting and organizing data for energy modeling is a time-consuming process and is used in only a small percentage of retrofit performance projects," ORNL's Joshua New said.

The team's modeling approach applies automation to extract a building's floor area and orientation parameters from publicly available data sources such as satellite images. Researchers tested the tool on more than 175,000 buildings in the Chattanooga, Tennessee, area, demonstrating energy-saving opportunities.

"We can model a building in minutes from a desktop computer," New said. "This is the next level of intelligence for energy-saving technologies."

Future plans include making the tool openly available to help reduce energy demand, emissions and costs for America's homes and businesses.

Media Contact: Jennifer Burke, 865.576.3212; burkejj@ornl.gov

Image: https://www.ornl.gov/sites/default/files/2020-02/03%20-%20Building_energy_model_graphic_1.png

Caption: ORNL's modeling tool simulates the energy efficiency of buildings by automating data received from satellite images. The tool was tested on buildings in the Chattanooga area. Credit: Joshua New/Oak Ridge National Laboratory, U.S. Dept. of Energy

Credit: 
DOE/Oak Ridge National Laboratory

The GDP fudge: China edition

image: SMU Professor Cheng Qiang, Dean of the School of Accountancy, presenting his paper at the Review of Accounting Studies (RAST) Conference.

Image: 
Flora Teoh

SMU Office of Research & Tech Transfer - For all its shortcomings, the gross domestic product (GDP) of a country remains an important barometer of its economic health, strongly influencing both private and public spending. Though conceptually simple as the total dollar value of all goods and services produced within a specified time frame, calculating GDP is tricky in practice and can be manipulated by individual firms in a strategy known as earnings management.

In particular, China's economic reporting has been called into question, with the provincial governments reporting in 2016 a collective GDP that was 2.76 trillion yuan higher than the national GDP calculated by the National Bureau of Statistics, or about 3.7% of the national GDP. The central government has acknowledged the issue: in 2017, the National Audit Office singled out ten provinces which had inflated their fiscal revenue to the tune of 1.5 billion yuan.

According to a new analysis presented at the 2019 Review of Accounting Studies (RAST) conference, held from 13 to 14 December at the Singapore Management University (SMU), there is reason to believe that Chinese firms engage in earnings management to prop up provincial GDP figures. Titled "GDP Growth Incentives and Earnings Management: Evidence from China," the study presented by the Dean of the School of Accountancy, Professor Cheng Qiang, also won the "Best Paper Award" by popular vote.

The pressure to grow GDP

"If the government is making decisions based on an inaccurate GDP number, then its decision quality will be lower," said Professor Cheng, explaining the implications of his study findings. Although discrepancies in GDP calculation can simply be a result of poor infrastructure for the collection of statistical data, differences in calculation methods or simple human error, not all such problems are unintentional, he said.

In the case of China, there is a strong incentive for provincial officials to present a rosy economic picture as this is intrinsically linked to opportunities for political advancement. This desire may lead officials to pressure firms into behaviours that negatively affect the accuracy of their financial statements, Professor Cheng suggested.

"The central government controls the personnel: who should be promoted into the central government, who should move to a bigger province. The political careers of provincial officials are decided by the central government," he said. Because a province's GDP is a significant factor in deciding which officials to promote, the system creates competition among them to present the best economic picture to the central government, Professor Cheng explained.

To test their hypothesis, Professor Cheng and his colleagues examined various measures of financial reporting from 2002 to 2016, representing over 21,000 firm-years. Specifically, they looked at three figures as proxies for earnings management: discretionary revenues, overproduction and abnormal asset impairment losses, all of which can be manipulated to directly influence GDP numbers.

These measures were then examined in tandem with potential incentives for inflating GDP growth. One way in which the study calculated such incentives was to compare provinces' GDP growth with that of adjacent provinces (which are more likely to have similar economic situations) as well as the national GDP growth. A province with a lower GDP growth compared to the national average or that of adjacent provinces would hypothetically be under greater pressure to engage in earnings management to inflate future GDP growth.
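That comparison can be expressed as a simple screening rule. The growth figures below are hypothetical, and the paper's actual variable construction is more involved than this sketch:

```python
def under_pressure(province_growth, national_growth, neighbor_growths):
    """Flag a province as having a hypothetical incentive to inflate future
    GDP figures: its growth lags the national rate or the average growth
    of its adjacent provinces."""
    neighbor_avg = sum(neighbor_growths) / len(neighbor_growths)
    return province_growth < national_growth or province_growth < neighbor_avg

# Illustrative numbers: 4.5% growth vs 6.5% nationally, neighbors at 6% and 7%.
flagged = under_pressure(0.045, 0.065, [0.060, 0.070])
print(flagged)  # → True: this province lags both benchmarks
```

The study then tests whether firms in flagged provinces show higher values on the three earnings-management proxies in subsequent years.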

Additionally, the study also examined the issue of incentives from other perspectives, such as the age of provincial officials. Hypothetically, younger officials are more likely to compete for advancement compared to older officials nearing the retirement age of 65, therefore giving younger officials a stronger incentive to inflate a province's GDP.

Short-term gain, long-term pain

Indeed, Professor Cheng and colleagues found that firms in provinces with GDP growth lower than national or adjacent provinces' average GDP growth were more likely to engage in earnings management in the future compared to firms in other provinces. Specifically, these firms were more likely to inflate revenues, overproduce and delay asset impairment losses.

Lending strength to the study's hypothesis was that these results were more pronounced for firms in provinces with younger officials (60 years old and below), as well as firms which were local state-owned enterprises (SOEs) - over which provincial officials have greater control - compared to central SOEs or non-SOEs.

Besides studying the factors behind GDP inflation, the study also examined the potential consequences of earnings management to the firms (and in turn the province), revealing that there is a heavy price to pay for constructing an artificial image of a flourishing economy.

"When the province reports a high growth, the tax collected as well as other economic expectations will also be higher," Professor Cheng explained. "When you cannot fulfil these expectations, at some point, the situation will just blow up." This was indeed what played out in many provinces that admitted to inflating their GDP between 2017 and 2018, he pointed out.

In short, engaging in earnings management is costly to firms in the long run, Professor Cheng cautioned. "We find that firms that engage in earnings management for the incentive of GDP growth have a high bad debt expense that comes from inflating revenue; high inventory write-off that comes from overproduction; and high asset impairment losses that come from the delaying of asset impairment losses. All these result in a lower return on assets in the future."

"This is the first study that examines how incentives at the government level affect management at the firm level. The second contribution is that this paper provides evidence about one mechanism that government officials use to inflate GDP growth," Professor Cheng said.

"The third contribution is in articulating the dynamics between macroeconomic numbers and microeconomic numbers, and how the macroeconomic situation can affect the integrity of a firm's financial reporting."

Credit: 
Singapore Management University

Atomic vacancy as quantum bit

image: Atomic thin layer of boron nitride with a spin center formed by the boron vacancy. With the help of high frequency excitation (red arrow) it is possible to initialize and manipulate the qubit.

Image: 
Mehran Kianinia, University of Technology Sydney

Although boron nitride looks very similar to graphene in structure, it has completely different optoelectronic properties. Its constituents, the elements boron and nitrogen, arrange themselves - like carbon atoms in graphene - in a honeycomb-like hexagonal structure, forming two-dimensional sheets that are only one atomic layer thick. The individual layers are only weakly coupled to each other by so-called van der Waals forces and can therefore be easily separated from each other.

Publication in Nature Materials

Physicists from Julius-Maximilians-Universität Würzburg (JMU) in Bavaria, Germany, in cooperation with the University of Technology Sydney in Australia, have now succeeded for the first time in experimentally demonstrating so-called spin centers in a boron nitride crystal. Professor Vladimir Dyakonov, holder of the Chair of Experimental Physics VI at the Institute of Physics, and his team were responsible for the JMU side and carried out the crucial experiments. The results of the work have been published in the renowned scientific journal Nature Materials.

In the layered crystal lattice of boron nitride, the physicists found a special defect - a missing boron atom - which exhibits a magnetic dipole moment, also known as a spin. Furthermore, it can also absorb and emit light and is therefore also called a color center. To study the magneto-optical properties of the quantum emitter in detail, the JMU scientists developed a special experimental technique that uses a combination of a static and a high-frequency magnetic field.

A little luck is needed

"If you vary the frequency of the alternating magnetic field, at some point you hit exactly the frequency of the spin, and the photoluminescence changes dramatically," explains Dyakonov. A bit of luck is necessary, however, since it is difficult to predict at which frequencies one has to search for unknown spin states. In this way, Dyakonov and his team discovered spin centers in the 2D crystalline system that had previously only been predicted theoretically. Among other things, they were able to demonstrate spin polarization, i.e. the alignment of the magnetic moment of the defect under optical excitation - even at room temperature.
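
The frequency sweep Dyakonov describes can be pictured with a toy numerical model: the photoluminescence stays flat until the drive frequency hits the spin resonance, where it dips. The resonance frequency, linewidth and contrast below are arbitrary illustrative values, not measured parameters from the experiment.

```python
# Toy model of an optically detected magnetic resonance (ODMR) sweep:
# photoluminescence (PL) shows a Lorentzian dip when the alternating
# magnetic field hits the spin resonance. All parameters are invented
# for illustration.
def pl_signal(freq_ghz, resonance_ghz=3.5, linewidth_ghz=0.05, contrast=0.1):
    """Relative PL (1.0 = off-resonance baseline)."""
    detuning = freq_ghz - resonance_ghz
    dip = contrast * linewidth_ghz**2 / (detuning**2 + linewidth_ghz**2)
    return 1.0 - dip

# Sweep the drive frequency and locate the strongest PL change.
freqs = [3.0 + i * 0.001 for i in range(1001)]   # 3.0-4.0 GHz sweep
signals = [pl_signal(f) for f in freqs]
best = min(range(len(freqs)), key=lambda i: signals[i])
print(f"resonance found near {freqs[best]:.3f} GHz")  # -> near 3.500 GHz
```

In a real measurement the resonance frequency is unknown in advance, which is exactly why a wide sweep (and a little luck) is needed.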

This makes the experiments interesting for technical applications as well: Scientists around the world are currently searching for a solid-state system in which a spin state can be aligned, manipulated on demand and later read out optically or electrically. "The spin center we have identified in boron nitride meets these requirements," adds Dyakonov. Because it has a spin and additionally absorbs and emits light, it is a quantum bit that can be used in quantum sensing and quantum information. Such spin centers could also underpin new navigation technology, which is why space agencies such as DLR and NASA are conducting intensive research on this topic, too.

Material design by the Lego brick principle

For basic scientists, the 2D materials are also exciting from another point of view. Their very special layer structure, combined with the weak bonding of the layers to each other, offers the possibility of constructing different stacking sequences from different semiconductors. "If you then place a defect in one of these layers - we call it a spin probe - it can help us to understand the properties of the adjacent layers, but also to change the physical properties of the entire stack," says Dyakonov.

In a next step, Dyakonov and his colleagues therefore want to produce, among other things, heterostructures made of multilayer semiconductors with a boron nitride layer as an intermediate layer. They are convinced: "If the atomically thin layers of boron nitride, which are 'decorated' with individual spin centers, can be produced and incorporated into a heterostructure, it will be possible to design artificial two-dimensional crystals based on Lego brick principles and investigate their properties."

Credit: 
University of Würzburg

Paper: Disposal of wastewater from hydraulic fracturing poses dangers to drivers

image: A new paper co-written by Yilan Xu, a professor of agricultural and consumer economics at Illinois, shows that the growing traffic burden in shale energy boomtowns from trucks hauling wastewater to disposal sites resulted in a surge of road fatalities and severe accidents.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- Environmental concerns about hydraulic fracturing - aka "fracking," the process by which oil and gas are extracted from rock by injecting high-pressure mixtures of water and chemicals - are well documented, but according to a paper co-written by a University of Illinois at Urbana-Champaign environmental economics expert, the technique also poses a serious safety risk to local traffic.

New research from Yilan Xu ("E-Lan SHE"), a professor of agricultural and consumer economics at Illinois, shows that the growing traffic burden in fracking boomtowns from trucks hauling wastewater to disposal sites resulted in a surge of road fatalities and severe accidents.

"Fracking requires large amounts of water, and it subsequently generates a lot of wastewater," she said. "When trucks need to transport all that water within a narrow window of time to a disposal site, that poses a safety threat to other drivers on the road - especially since fracking occurs mostly in these boomtowns where the roadway infrastructure isn't built up enough to handle heavy truck traffic."

The study examined how fracking-related trucking affected the number of fatal crashes in the Bakken Formation in North Dakota from 2006 to 2014, using the timing of fracking operations near certain road segments.

The researchers identified a causal link between fracking-related trucking and fatal traffic crashes, finding that an additional post-fracking well within six miles of the road segments led to 8% more fatal crashes and 7.1% higher per-capita costs in accidents.

"Our back-of-the-envelope calculation suggests that an additional 17 fatal crashes took place per year across the sampled road segments, representing a 49% increase relative to the annual crash counts of the drilling counties in North Dakota in 2006," Xu said. "That's a significant number when you're talking about a sparsely populated area like North Dakota.

"And besides the fatality and injury costs in fatal crashes quantified in our study, other costs may occur as well, including injury costs in nonfatal crashes and indirect expenditures on emergency services, insurance administrative costs, and infrastructure maintenance and replacement."
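
The back-of-the-envelope figure in Xu's quote can be checked with the numbers reported above (simple arithmetic only):

```python
# Sanity-check the reported back-of-the-envelope numbers: 17 extra
# fatal crashes per year is described as a 49% increase relative to
# the 2006 baseline in North Dakota's drilling counties.
extra_crashes = 17
relative_increase = 0.49

implied_baseline = extra_crashes / relative_increase
print(f"implied 2006 baseline: ~{implied_baseline:.0f} fatal crashes/year")
# -> roughly 35 fatal crashes/year across the sampled road segments
```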

To lessen the negative impact on traffic fatalities as well as the severity of traffic accidents, the study proposes a tax that can be charged per well to internalize the costs of fracking-related trucking activities, similar to the impact fees implemented in energy-rich towns in Pennsylvania that yield hundreds of millions of dollars per year for the state.

"The tax could serve as an economic instrument that affects operators' drilling and fracking decisions and thus alleviate the hazard of the associated truck traffic indirectly," Xu said. "Likewise, a toll fee by miles driven by trucks could be collected on highways to absorb the negative impacts of fracking-related trucking."

The study also sheds light on more practical measures that local governments can undertake to curb the traffic risks associated with fracking.

"Since many fracking-induced fatal crashes take place in the daytime rush hours, local governments could adopt policies such as making a high occupancy vehicle lane for trucks carrying wastewater. An active traffic alert and warning system with live well-operations updates could also help drivers monitor traffic and avoid exposure to road hazards," she said.

Moreover, the paper calls for the active involvement of the oil and gas industry to seek ways to improve their workplace safety and mitigate the traffic hazard of fracking to road users.

"Our findings suggest that oil and gas operators could redistribute the traffic loads over time to avoid concentrated water hauling during peak hours," Xu said. "In the long run, since a well may need to be fracked multiple times over its productive life, operators may improve the water supply system by constructing water wells serving multiple well pads via a piping system. They could also develop the onsite wastewater treatment and disposal facilities as opposed to trucking wastewater over long distances. Such measures would reduce the long-term transport costs and the associated traffic effects."

The findings should give local and federal policymakers information when conducting due diligence and evaluating the regional costs and benefits of shale energy development, Xu said.

"Our study provides an estimate based on the North Dakota experience where population density and traffic volume is relatively low, but our findings have implications for other regions planning future shale development."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Engendering trust in an AI world

image: Above (left-right): SMU Professor David Llewelyn, Deputy Dean of SMU School of Law; SMU Associate Professor Warren Chik; Professor Ian Walden, Centre for Commercial Law Studies, Queen Mary University of London; SMU Associate Professor Alvin See (who presented SMU Associate Professor Yip Man's paper on her behalf); Mr KK Lim, Head of Cybersecurity, Privacy and Data Protection, Eversheds Harry Elias; and Mr Lanx Goh, Senior Legal Counsel (Privacy & Cybersecurity) & Global Data Protection Officer, Klook Travel Technology.

Image: 
Kareyst Lin

SMU Office of Research & Tech Transfer - Can you imagine a world without personalised Spotify playlists, curated social media feeds, or recommended cat videos on the sidebars of YouTube? These modern-day conveniences, which were made possible by artificial intelligence (AI), also present a scary proposition - that the machines could end up knowing more about us than we ourselves do.

According to Gartner's 2019 CIO Agenda survey, 37 percent of Chief Information Officers (CIOs) globally have already deployed AI technology in their organisations. The rapid adoption of AI solutions brings to focus the way data - which could consist of sensitive, confidential and personal information - are being managed and used by organisations.

Speaking at the conference panel on 'AI and Data Protection: New Regulatory Approaches', Singapore Management University (SMU) Associate Professor Warren Chik gave his perspective on how to conceptualise trust in a digital age. "When it comes to matters such as personal data, we don't treat AI as god. Therefore, we cannot rely on faith, which is what religion requires. We need something more substantial than that," he said.

In his talk titled 'Artificial Intelligence and Data Protection in Singapore: Consumers' Trust, Organisational Security and Government Regulation', Professor Chik explained that to engender trust in a digital solution, it is crucial that users are being engaged on the issues involved. "People tend to fear the unknown, and it is hard to have trust in something that you don't know."

Moderated by Professor David Llewelyn, Deputy Dean of the SMU School of Law, the roundtable featured speakers Professor Ian Walden, Centre for Commercial Law Studies, Queen Mary University of London; Associate Professor Yip Man (whose paper was presented by Associate Professor Alvin See on her behalf); as well as commentators Mr KK Lim, Head of Cybersecurity, Privacy and Data Protection, Eversheds Harry Elias; and Mr Lanx Goh, Senior Legal Counsel (Privacy & Cybersecurity) & Global Data Protection Officer, Klook Travel Technology.

AI as an influencer

The ability of an AI system to conduct personal profiling could fundamentally change a user's digital personality, said Professor Chik, highlighting a cause of worry for many.

"While an AI holds specific information such as your name and address, it also forms its own knowledge of your identity, and who you are as a person," Professor Chik said, citing algorithms used by social media feeds to collect data on a user's identity, interests and surfing habits. From that data, the system then creates a profile of who it thinks you are.

"These algorithms - which may be right or wrong - feed you information, articles and links, and as a result brings about an effect on your thinking. In other words, AI can mold human behaviour, and this is a risk that makes a lot of people uncomfortable," Professor Chik said. The threat is very real, he emphasised, noting that regulators have clearly identified a need to regulate the use of data in AI.

In Singapore, for instance, the Protection from Online Falsehoods and Manipulation Act (POFMA) carries criminal provisions on the creation, use and alteration of bots to spread false information.

Data protection legislation: a balancing act

There are always two competing objectives when regulating the use, collection and processing of personal data. "The first objective is to protect the data subject, and the second is to promote innovation," said Professor See, who presented Professor Yip's paper on her behalf.

Of the different types of protection for data subjects that exist today, the most commonly available option is the use of contracts. Professor Yip's paper points out that "[t]he problem with trying to regulate data use through terms and conditions is that in most cases, people don't read [the legal fine print]". The consent given is therefore not genuine.

Professor Llewelyn, who moderated the roundtable, added that the meaning of consent is an issue that needs to be explored in greater depth. "If a consumer were to accept an online contract in full without reading it, can it be realistically said that he or she has agreed to all the terms and conditions, and given full consent?" he asked. "Perhaps there should be legal acknowledgement given to the automatic nature of the commitment made in such contracts."

A more critical limitation of the contract as a form of protection for the data subject is that it only governs the information shared between the two parties bound by it. For instance, if Facebook were to transfer a user's personal data to a third party not bound by the contract, the third-party firm would not be obligated to protect the user's information.

Data protection by design

Singapore's Personal Data Protection Act (PDPA), which regulates personal data through legislation, is described as a light-touch regime that seeks to balance the need for privacy protection against the interest of business innovation.

Professor Yip's paper recognises that there is some level of tension between the two objectives mentioned above. The issue at hand, therefore, is how to strike a balance between individual rights and privacy, and the competing interest of economic growth and innovation, she noted.

At the end of the day, the focus is on preventing, rather than trying to remedy a breach of data privacy. "It is about recognising the rights of the individual and the privacy of their data, and at the same time, the need for organisations to collect, use and disclose personal data for legitimate and reasonable purposes," Professor Yip's paper added.

Another solution that Professor Yip explored in her paper was the use of technology, instead of law, to protect data subjects. In some cases, privacy can be built directly into the design and operation of systems, work processes, network infrastructure and even physical spaces. She nevertheless highlighted that this solution is not perfect, because businesses that leverage data to make profits have little interest in building robust privacy safeguards into their systems and business models.

Credit: 
Singapore Management University

Geologists determine early Earth was a 'water world' by studying exposed ocean crust

image: Benjamin Johnson of Iowa State University works at an outcrop in remote Western Australia where geologists are studying 3.2-billion-year-old ocean crust.

Image: 
Photo by Jana Meixnerova/provided by Benjamin Johnson

AMES, Iowa - The Earth of 3.2 billion years ago was a "water world" of submerged continents, geologists say after analyzing oxygen isotope data from ancient ocean crust that's now exposed on land in Australia.

And that could have major implications on the origin of life.

"An early Earth without emergent continents may have resembled a 'water world,' providing an important environmental constraint on the origin and evolution of life on Earth as well as its possible existence elsewhere," geologists Benjamin Johnson and Boswell Wing wrote in a paper just published online by the journal Nature Geoscience.

Johnson is an assistant professor of geological and atmospheric sciences at Iowa State University and a recent postdoctoral research associate at the University of Colorado Boulder. Wing is an associate professor of geological sciences at Colorado. Grants from the National Science Foundation supported their study and a Lewis and Clark Grant from the American Philosophical Society supported Johnson's fieldwork in Australia.

Johnson said his work on the project started when he talked with Wing at conferences and learned about the well-preserved, 3.2-billion-year-old ocean crust from the Archaean eon (4 billion to 2.5 billion years ago) in a remote part of the state of Western Australia. Thanks to previous studies, a large library of geochemical data from the site already existed.

Johnson joined Wing's research group and went to see the ocean crust for himself - a 2018 trip involving a flight to Perth and a 17-hour drive north to the coastal region near Port Hedland.

After taking his own rock samples and digging into the library of existing data, Johnson created a cross-section grid of the oxygen isotope and temperature values found in the rock.

(Isotopes are atoms of a chemical element with the same number of protons within the nucleus, but differing numbers of neutrons. In this case, differences in oxygen isotopes preserved with the ancient rock provide clues about the interaction of rock and water billions of years ago.)

Once he had two-dimensional grids based on whole-rock data, Johnson created an inverse model to come up with estimates of the oxygen isotopes within the ancient oceans. The result: Ancient seawater was enriched with about 4 parts per thousand more of a heavy isotope of oxygen (oxygen with eight protons and 10 neutrons, written as 18O) than an ice-free ocean of today.
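
The "parts per thousand" comparison uses the standard delta notation for isotope ratios. As a generic illustration of that bookkeeping (not the authors' inverse model, and with made-up ratio values):

```python
# Standard delta notation: delta-18O expresses a sample's 18O/16O ratio
# relative to a reference, in parts per thousand (per mil). The reference
# ratio and sample offsets below are invented for illustration.
def delta18O_permil(ratio_sample, ratio_reference):
    return (ratio_sample / ratio_reference - 1.0) * 1000.0

R_REF = 2005.2e-6  # hypothetical 18O/16O of a reference standard

modern_ocean = delta18O_permil(R_REF * 1.0000, R_REF)    # 0 per mil
archaean_ocean = delta18O_permil(R_REF * 1.0040, R_REF)  # ~+4 per mil

print(f"Archaean ocean is {archaean_ocean - modern_ocean:.1f} per mil heavier")
# -> 4.0 per mil, matching the enrichment described in the text
```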

How to explain that decrease in heavy isotopes over time?

Johnson and Wing suggest two possible explanations: Water cycling through the ancient ocean crust may have differed from today's, with many more high-temperature interactions that could have enriched the ocean in heavy isotopes of oxygen. Or, water cycling through continental rock could have reduced the percentage of heavy isotopes in ocean water.

"Our preferred hypothesis - and in some ways the simplest - is that continental weathering from land began sometime after 3.2 billion years ago and began to draw down the amount of heavy isotopes in the ocean," Johnson said.

The first idea - that water cycled through ocean crust in a way distinct from how it happens today, causing the difference in isotope composition - "is not supported by the rocks," Johnson said. "The 3.2-billion-year-old section of ocean crust we studied looks exactly like much, much younger ocean crust."

Johnson said the study demonstrates that geologists can build models and find new, quantitative ways to solve a problem - even when that problem involves seawater from 3.2 billion years ago that they'll never see or sample.

And, Johnson said these models inform us about the environment where life originated and evolved: "Without continents and land above sea level, the only place for the very first ecosystems to evolve would have been in the ocean."

Credit: 
Iowa State University

Egg stem cells do not exist, new study shows

Researchers at Karolinska Institutet in Sweden have analysed all cell types in the human ovary and found that the hotly debated so-called egg stem cells do not exist. The results, published in Nature Communications, open the way for research on improved methods of treating involuntary childlessness.

The researchers used single-cell analysis to study more than 24,000 cells collected from ovarian cortex samples of 21 patients. They also analysed cells collected from the ovarian medulla, allowing them to present a complete cell map of the human ovary.
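
A cell map of this kind is typically built by grouping cells with similar expression profiles. The following is a toy sketch of that clustering step on simulated data, using a bare-bones k-means; it is not the authors' single-cell pipeline, and all values are invented.

```python
import numpy as np

# Toy sketch of single-cell clustering: cells with similar expression
# profiles are grouped into putative cell types. Data are simulated;
# a real analysis would start from sequenced transcripts.
rng = np.random.default_rng(1)
n_cells, n_genes, n_types = 300, 20, 3

# Simulate three cell types, each with its own mean expression profile.
profiles = rng.normal(scale=4.0, size=(n_types, n_genes))
true_type = rng.integers(0, n_types, size=n_cells)
expression = profiles[true_type] + rng.normal(size=(n_cells, n_genes))

# A few rounds of Lloyd's k-means. For simplicity the centroids are
# seeded from one cell of each simulated type; real pipelines use
# data-driven initialisation.
centroids = np.stack([expression[true_type == k][0] for k in range(n_types)])
for _ in range(10):
    dists = np.linalg.norm(expression[:, None, :] - centroids[None], axis=2)
    assigned = dists.argmin(axis=1)
    centroids = np.stack([expression[assigned == k].mean(axis=0)
                          for k in range(n_types)])

print(f"recovered {len(set(assigned.tolist()))} clusters from {n_cells} cells")
```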

One of the aims of the study was to establish the existence or non-existence of egg stem cells.

"The question is controversial since some research has reported that such cells do exist, while other studies indicate the opposite," says Fredrik Lanner, researcher in obstetrics and gynaecology at the Department of Clinical Science, Intervention and Technology at Karolinska Institutet, and one of the study's authors.

The question of whether egg stem cells exist affects issues related to fertility treatment, since stem cells have properties that differ from other cells.

"Involuntary childlessness and female fertility are huge fields of research," says co-author Pauliina Damdimopoulou, researcher in obstetrics and gynaecology at the same department. "This has been a controversial issue involving the testing of experimental fertility treatments."

The new study substantiates previously reported findings from animal studies - that egg stem cells do not exist. Instead, the cells previously claimed to be egg stem cells are in fact so-called perivascular cells.

The new comprehensive map of ovarian cells can contribute to the development of improved methods of treating female infertility, says Damdimopoulou.

"The lack of knowledge about what a normal ovary looks like has held back developments," she says. "This study now lays the ground on which to produce new methods that focus on the egg cells that already exist in the ovary. This could involve letting egg cells mature in test tubes or perhaps developing artificial ovaries in a lab."

The results of the new study show that the main cell types in the ovary are egg cells, granulosa cells, immune cells, endothelial cells, perivascular cells and stromal cells.

Credit: 
Karolinska Institutet

Not a 'math person'? You may be better at learning to code than you think

image: Language skills are a stronger predictor of programming ability than math knowledge, according to a new University of Washington study. Here, study co-author Malayka Mottarella demonstrates coding in Python while wearing a specialized headset that measures electrical activity in the brain.

Image: 
Justin Abernethy/U. of Washington

Want to learn to code? Put down the math book. Practice those communication skills instead.

New research from the University of Washington finds that a natural aptitude for learning languages is a stronger predictor of learning to program than basic math knowledge, or numeracy. That's because writing code is itself like learning a second language: it requires learning that language's vocabulary and grammar, and how they work together to communicate ideas and intentions. Other cognitive functions tied to both areas, such as problem solving and the use of working memory, also play key roles.

"Many barriers to programming, from prerequisite courses to stereotypes of what a good programmer looks like, are centered around the idea that programming relies heavily on math abilities, and that idea is not borne out in our data," said lead author Chantel Prat, an associate professor of psychology at the UW and at the Institute for Learning & Brain Sciences. "Learning to program is hard, but is increasingly important for obtaining skilled positions in the workforce. Information about what it takes to be good at programming is critically missing in a field that has been notoriously slow in closing the gender gap."

Published online March 2 in Scientific Reports, an open-access journal from the Nature Publishing Group, the research examined the neurocognitive abilities of more than three dozen adults as they learned Python, a common programming language. Following a battery of tests to assess their executive function, language and math skills, participants completed a series of online lessons and quizzes in Python. Those who learned Python faster, and with greater accuracy, tended to have a mix of strong problem-solving and language abilities.

In today's STEM-focused world, learning to code opens up a variety of possibilities for jobs and extended education. Coding is associated with math and engineering; college-level programming courses tend to require advanced math to enroll and they tend to be taught in computer science and engineering departments. Other research, namely from UW psychology professor Sapna Cheryan, has shown that such requirements and perceptions of coding reinforce stereotypes about programming as a masculine field, potentially discouraging women from pursuing it.

But coding also has a foundation in human language: Programming involves creating meaning by stringing symbols together in rule-based ways.

Though a few studies have touched on the cognitive links between language learning and computer programming, some of the data is decades old, using languages such as Pascal that are now out of date, and none of them used natural language aptitude measures to predict individual differences in learning to program.

So Prat, who specializes in the neural and cognitive predictors of learning human languages, set out to explore the individual differences in how people learn Python. Python was a natural choice, Prat explained, because it resembles English: it relies on structures such as paragraph-style indentation and uses many real words, rather than symbols, for functions.

To evaluate the neural and cognitive characteristics of "programming aptitude," Prat studied a group of native English speakers between the ages of 18 and 35 who had never learned to code.

Before learning to code, participants took two completely different types of assessments. First, participants underwent a five-minute electroencephalography scan, which recorded the electrical activity of their brains as they relaxed with their eyes closed. In previous research, Prat showed that patterns of neural activity while the brain is at rest can predict up to 60% of the variability in the speed with which someone can learn a second language (in that case, French).

"Ultimately, these resting-state brain metrics might be used as culture-free measures of how someone learns," Prat said.

Then the participants took eight different tests: one that specifically covered numeracy; one that measured language aptitude; and others that assessed attention, problem-solving and memory.

To learn Python, the participants were assigned ten 45-minute online instruction sessions using the Codecademy educational tool. Each session focused on a coding concept, such as lists or if/then conditions, and concluded with a quiz that a user needed to pass in order to progress to the next session. For help, users could turn to a "hint" button, an informational blog from past users and a "solution" button, in that order.

From a shared mirror screen, a researcher followed along with each participant and was able to calculate their "learning rate," or speed with which they mastered each lesson, as well as their quiz accuracy and the number of times they asked for help.

After completing the sessions, participants took a multiple-choice test on the purpose of functions (the vocabulary of Python) and the structure of coding (the grammar of Python). For their final task, they programmed a game -- Rock, Paper, Scissors -- considered an introductory project for a new Python coder. This helped assess their ability to write code using the information they had learned.

Ultimately, researchers found that scores from the language aptitude test were the strongest predictors of participants' learning rate in Python. Scores from tests in numeracy and fluid reasoning were also associated with Python learning rate, but each of these factors explained less variance than language aptitude did.

Presented another way, across learning outcomes, participants' language aptitude, fluid reasoning and working memory, and resting-state brain activity were all greater predictors of Python learning than was numeracy, which explained an average of 2% of the differences between people. Importantly, Prat also found that the same characteristics of resting-state brain data that previously explained how quickly someone would learn to speak French, also explained how quickly they would learn to code in Python.
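
The idea of ranking predictors by the variance they explain can be sketched on synthetic data. The effect sizes below are invented for illustration and are not the study's estimates; the point is only that a strong predictor yields a much higher R-squared in a one-variable fit than a weak one.

```python
import numpy as np

# Synthetic illustration of comparing predictors by variance explained
# (R^2). The coefficients are invented; they are not the study's values.
rng = np.random.default_rng(0)
n = 200
language_aptitude = rng.normal(size=n)
numeracy = rng.normal(size=n)
# Simulated learning rate: strong language effect, weak numeracy effect.
learning_rate = (0.8 * language_aptitude + 0.15 * numeracy
                 + rng.normal(scale=0.5, size=n))

def r_squared(x, y):
    """R^2 of a one-predictor least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

r2_lang = r_squared(language_aptitude, learning_rate)
r2_num = r_squared(numeracy, learning_rate)
print(f"R^2 language aptitude: {r2_lang:.2f}, numeracy: {r2_num:.2f}")
```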

"This is the first study to link both the neural and cognitive predictors of natural language aptitude to individual differences in learning programming languages. We were able to explain over 70% of the variability in how quickly different people learn to program in Python, and only a small fraction of that amount was related to numeracy," Prat said. Further research could examine the connections between language aptitude and programming instruction in a classroom setting, or with more complex languages such as Java, or with more complicated tasks to demonstrate coding proficiency, Prat said.

Credit: 
University of Washington

Researchers develop app to determine risk of preterm birth

An improved mobile phone app will help identify women who need special treatments at the right time and reduce emotional and financial burden on families and the NHS.

A team of researchers from the Department of Women & Children's Health, King's College London, supported by Guy's and St Thomas' Charity, the National Institute for Health Research and Tommy's, has created a user-friendly mobile phone application, QUiPP v2, that will allow doctors to quickly calculate a woman's individual risk of preterm birth. This will help them to make sure women who need special treatments get them at the right time, while also reassuring women when their risk is low.

When babies are born early, before 37 weeks of pregnancy, they are more likely to die, or have physical, developmental and emotional problems. This can result in a huge emotional and financial burden for families and substantial cost for the NHS and care services.

Some women are known to be more likely to have their babies early, and some have symptoms of labour too early in pregnancy. If identified, these women can be given extra monitoring and/or special treatments that aim to prevent early delivery and ensure the infants have the best chance of surviving without long-term problems.

QUiPP v2 calculates the risk based on a woman's individual risk factors, such as previous preterm birth, late miscarriage or symptoms, along with clinical test results that help to predict preterm birth (i.e. fetal fibronectin tests and cervical length measurements). The app then produces a simple, individual percentage risk score.
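
A risk calculator of this general shape maps individual factors to a single percentage. The sketch below is a generic logistic-style model with entirely invented coefficients and inputs; it is not the QUiPP v2 algorithm, which is described in the published papers.

```python
import math

# Minimal logistic-style risk sketch: combine risk factors into one
# percentage risk of preterm birth. Every coefficient here is invented
# for illustration; this is NOT the QUiPP v2 model.
def preterm_risk_percent(prior_preterm_birth, fetal_fibronectin_ng_ml,
                         cervical_length_mm):
    score = (-3.0
             + 1.2 * (1 if prior_preterm_birth else 0)
             + 0.01 * fetal_fibronectin_ng_ml
             - 0.05 * cervical_length_mm)
    probability = 1.0 / (1.0 + math.exp(-score))
    return 100.0 * probability

# A hypothetical high-risk profile vs. a hypothetical low-risk one.
high = preterm_risk_percent(True, 200.0, 15.0)
low = preterm_risk_percent(False, 10.0, 40.0)
print(f"high-risk profile: {high:.0f}%, low-risk profile: {low:.0f}%")
```

The shape of the output matters here: a single percentage is easy for clinicians to act on, both to target treatment and to reassure women whose risk is low.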

In two papers, published in Ultrasound in Obstetrics and Gynecology, the authors show how they developed and tested the complicated algorithms (mathematical calculations) incorporated in the app, which calculate this simple percentage risk.

"We are delighted to be able to share the findings of our work which shows that the QUiPP app is very reliable in predicting preterm birth in women at risk. This should mean that women who need treatments are offered them appropriately, and also that doctors and women can be reassured when these treatments are not needed, which reduces the possibility of negative effects and unnecessary costs for the NHS," said lead author Dr Jenny Carter, Senior Research Midwife, Department of Women & Children's Health at King's College London.

The authors have recently completed the EQUIPTT trial, in which they evaluated whether QUiPP improves the appropriate targeting of care. Results of this trial are expected later this year.

Patient Safety Minister, Nadine Dorries said: "The joy a newborn brings can be cruelly contrasted alongside the fear when a baby is born too soon. Being able to identify mothers at risk of a pre-term birth as early as possible can help clinicians to intervene sooner, improve safety and ultimately save lives.

"We want the NHS to be the safest place in the world to give birth and the harnessing of promising digital innovations such as this is another stepping stone on this shared journey."

The team will continue to collect data (which will be used to update the algorithms in the future) through the ongoing UK-wide PETRA study, and through the Preterm Clinical Network Database, a global clinical registry of care given to women at risk of preterm birth.

Credit: 
King's College London