Tech

Computational imaging benefits from untrained neural network

Computational imaging (CI) techniques exploit optical hardware and computational algorithms to reconstruct object information. A key goal of CI is to develop more advanced algorithms that simplify hardware requirements and improve imaging quality.

Deep learning, one of the most powerful such algorithms, uses a deep neural network to learn the mapping between inputs and outputs from a large number of data pairs. It has been widely used for CI and has achieved state-of-the-art results in many imaging problems. However, most existing deep-learning-based CI methods face challenges in data collection and generalization.

In a recent study, investigators from the Chinese Academy of Sciences described how they combined an untrained neural network with physics knowledge to overcome the limitations of deep-learning-based CI methods.

"Our imaging method does not need a large amount of data to train a neural network. All it needs is the measurement of the object recorded by the detector and the physics model from the object to the measurement," said Prof. SITU Guohai, leader of the research team.

The researchers demonstrated their technique on a lensless quantitative phase imaging problem, which requires reconstructing the phase information lost in the detection stage.

The new approach is based on a deep neural network (DNN), a multi-layer computational model widely used to fit mapping functions from large sets of training data pairs, and on the free-space propagation principle, which has been established by hundreds of years of optics research.

The researchers turned to knowledge of the optical system to guide the optimization of the DNN's parameters. They fed the measured intensity diffraction pattern to a randomly initialized (untrained) DNN, took the output of the DNN as the estimate of the phase information, and computed the corresponding estimate of the intensity diffraction pattern from that phase using the free-space propagation principle.

Then, the parameters of the DNN were updated to minimize the error between the measured and estimated patterns. As this error is minimized, the output of the DNN converges to the real phase information.
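The physics-based half of this loop can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' code: it implements a standard free-space (angular spectrum) forward model and shows how a candidate phase estimate is turned into an estimated diffraction pattern and compared against the measurement. The grid size, pixel pitch, wavelength, propagation distance, and toy phase object are all placeholder values, and the DNN that would produce the phase estimate is replaced by a stand-in array.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# illustrative parameters: 64x64 grid, 10 um pixels, HeNe wavelength, 1 cm gap
n, dx, wavelength, z = 64, 10e-6, 633e-9, 0.01

# a toy object with unit amplitude (pure phase, as in the experiment)
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
true_phase = np.exp(-(x ** 2 + y ** 2) / 100.0)
measured = np.abs(angular_spectrum_propagate(np.exp(1j * true_phase),
                                             wavelength, dx, z)) ** 2

# one evaluation of the physics-based loss for a candidate phase map
estimated_phase = np.zeros((n, n))              # stand-in for the DNN's output
estimate = np.abs(angular_spectrum_propagate(np.exp(1j * estimated_phase),
                                             wavelength, dx, z)) ** 2
loss = np.mean((estimate - measured) ** 2)      # driven toward zero in training
```

In the actual method, this loss would be backpropagated through both the propagation model and the network to update the DNN's weights until the estimated pattern matches the measurement.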

"While most previous deep-learning-based CI methods used a lot of training data to optimize the parameters in DNN, our approach exploits the raw measurement and the physical model," said SITU. "It's a general method that can be used to reconstruct different kinds of objects."

The researchers tested their technique by using it to image a phase object. The new approach was able to resolve phase information using a single intensity diffraction pattern.

"The new approach for phase imaging is a single shot, non-interferometric method, which has great potential in microscopy and optical metrology. Also, a similar framework can be used in various CI methods provided that the physical model is known," said SITU.

This work was supported by the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences, the Sino-German Center, and the National Natural Science Foundation of China.

Credit: 
Chinese Academy of Sciences Headquarters

KIST develops large-scale stretchable and transparent electrodes

image: A Korean research team has developed a large-scale stretchable and transparent electrode for the stretchable display. The Korea Institute of Science and Technology (KIST) announced that a research team has developed a technology to fabricate a large-area wavy silver nanowire network electrode that is structurally stretchable with a high degree of conductivity and transparency.

Image: 
Korea Institute of Science and Technology (KIST)

A Korean research team has developed a large-scale stretchable and transparent electrode for the stretchable display. The Korea Institute of Science and Technology (KIST) announced that a research team, led by Dr. Sang-Soo Lee and Dr. Jeong Gon Son at KIST's Photo-Electronic Hybrids Research Center, has developed a technology to fabricate a large-area (larger than an A4 sized paper) wavy silver nanowire network electrode that is structurally stretchable with a high degree of conductivity and transparency.

Transparent electrodes, through which electricity flows, are essential for solar cells and touchscreen-based display devices. The indium tin oxide (ITO)-based transparent electrode is the one currently commercialized for use. It is made of a thin layer of metallic oxide that has very low stretchability and is brittle, which makes it hard to use in flexible and wearable devices, expected to quickly become mainstream products in the electronic device market. Therefore, it is necessary to develop a new transparent electrode with stretchability as one of its special features.

A silver nanowire is tens of nanometers in diameter, and the nanomaterial itself is long and thin, like a rod. The small diameter allows the nanowire to bend when an external force is applied. Since it is made of silver, a silver nanowire has excellent electrical conductivity, and a random network of straight nanowires can be used to fabricate a highly transparent and flexible electrode. However, although silver nanowire is bendable and flexible, it is not intrinsically stretchable.

Other research teams have studied stretchable electrodes using a method of placing silver nanowires on pre-stretched elastic substrates and then relaxing the substrates so that they return to their original size, creating wavy or wrinkled silver nanowire structures with small radii of curvature in the process. However, this method has one major problem: the nanowires are easily broken by repeated stretching-relaxing cycles. This problem has typically been addressed by increasing the number of nanowires to make a high-density nanowire network, so that enough electrical links can still be maintained even if the nanowires are partially broken. However, a high-density network considerably decreases transparency, making it very challenging to fabricate an electrode that remains both highly transparent and conductive while being stretched and transformed.

The KIST research team, led by Dr. Sang-Soo Lee and Dr. Jeong Gon Son, has developed a new process that forms a structurally stretchable nanowire network by bringing the nanowire network into contact with solvents, overcoming the problem of nanowire breakage and damage when relaxing the pre-stretched substrate. When a solvent is placed on the nanowire network, it wets the network and reduces the frictional resistance between the individual nanowires. In particular, each silver nanowire can slide in the liquid and rearrange into a curved structure with a large radius of curvature, realizing a structure capable of stretching stably. Since the nanowires do not experience any unstable conditions, there are no nanowire network fractures and no peeling of the nanowire layer.

By fabricating a silver nanowire network in this way, the research team was able to stretch the substrate and its nanowires by at least 50% of the initial length, stably maintaining transparency and conductivity for approximately 5,000 stretching-relaxing cycles. The team also found that this type of material could be produced using an inexpensive and environmentally-friendly process that uses ethanol and water as solvents.

The KIST research team used its newly developed process to form a wavy silver nanowire network film on a substrate the size of an A4 paper and succeeded in creating a stretchable and transparent display the size of an adult's hand. The created display maintained its constant luminescence efficiency despite the imposition of various mechanical deformations. Through testing, the team was able to prove the applicability of the new process to all displays that are transparent except for their electroluminescent layer.

Dr. Sang-Soo Lee at KIST said, "The stretchable and transparent electrodes made using wavy silver nanowire networks, developed through this research, have a high degree of electrical conductivity that is not changed by any deformation." KIST's Dr. Jeong Gon Son added, "Since the technology can be used for mass production, it is expected to have a great impact on markets related to wearable electronic devices, such as high-performance smart wear, and the medical equipment field."

Credit: 
National Research Council of Science & Technology

How do nitrogen dynamics affect carbon and water budgets in China?

image: This is a schematic diagram of the nitrogen cycle.

Image: 
Jingjing Liang

As an important part of biogeochemical cycling, the nitrogen cycle modulates terrestrial ecosystem carbon storage, water consumption, and environmental quality. It remains unclear how nitrogen dynamics affect carbon and water budgets in China. Incorporating the terrestrial nitrogen cycle into the Noah land surface model with multi-parameterization options (Noah-MP) helps address this question.

By comparing the simulations of the nitrogen-augmented Noah-MP-CN with those from the original Noah-MP in China, a recent study quantifies the impacts of nitrogen dynamics on the terrestrial carbon and water cycles, as reported in Advances in Atmospheric Sciences.

The lead author is Jingjing Liang, a PhD student from the Institute of Atmospheric Physics, Chinese Academy of Sciences. "Our study is the first regional application of Noah-MP-CN by explicitly accounting for spatially varying biogeochemical parameters based on the previous point-scale work," explains Liang.

The results show that incorporating nitrogen dynamics improves the simulations of gross primary productivity (GPP) and leaf area index (LAI) in most of the regions in terms of a slightly higher correlation coefficient, a much lower root-mean-square error (RMSE), and a better spatial pattern of multi-year climatology. The overestimation of GPP by Noah-MP with a dynamic vegetation option is greatly reduced by considering the limitation of nitrogen availability, especially in the southeastern regions of China. Moreover, Noah-MP-CN provides a more accurate LAI simulation in different land-cover types, with reduced RMSEs and increased correlations.
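The two skill metrics used in this comparison are straightforward to compute. Below is a small sketch (not the study's code) with hypothetical GPP values for a single grid cell, showing how a simulation that tracks observations more closely yields a lower RMSE while keeping a high correlation coefficient:

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

def pearson_r(sim, obs):
    """Pearson correlation coefficient between two series."""
    return float(np.corrcoef(sim, obs)[0, 1])

# hypothetical monthly GPP for one grid cell (units arbitrary)
obs = np.array([1.0, 2.5, 4.0, 6.5, 8.0, 7.0, 5.5, 3.0])
sim_no_n = obs * 1.3 + 0.5  # overestimates, like dynamic vegetation with no N limit
sim_cn = obs * 1.05 + 0.1   # closer tracking once nitrogen limitation is included

print(rmse(sim_no_n, obs), rmse(sim_cn, obs))  # the N-limited run has lower RMSE
```

In the study, such metrics are computed against gridded observational products across China rather than a single toy series, but the interpretation is the same: lower RMSE and higher correlation indicate the better simulation.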

The impacts of fertilizer application over cropland on carbon fixation, water consumption and nitrogen leaching were investigated through a trade-off analysis. Compared with halved fertilizer use, the actual application rate increases GPP and water consumption by only 1.97% and 0.43%, respectively; however, nitrogen leaching increases by 5.35%. This indicates that the current level of fertilizer use has only a negligible impact on water consumption but a damaging impact on the environment.

Despite the superior performance of Noah-MP-CN over Noah-MP, Noah-MP-CN continues to overestimate LAI and GPP. Future work needs to focus on more systematic calibration of model parameters and including more biogeochemical processes such as soil organic matter (SOM) and microbe dynamics.

"As the largest global reservoir of terrestrial organic carbon, SOM not only affects the storage of nutrients in the soil (especially for nitrogen) but also results in environmental pollution," says the corresponding author, Prof. Zong-Liang Yang from the Department of Geological Sciences, Jackson School of Geosciences, the University of Texas at Austin. "More concerted efforts are required to improve our understanding and modeling of SOM and microbe dynamics in land surface and Earth system models."

Credit: 
Institute of Atmospheric Physics, Chinese Academy of Sciences

Training GPs to identify domestic violence leads to dramatic increase in finding victims

A training programme that teaches GPs how to identify domestic violence and abuse (DVA) victims has led to a 30-fold increase in DVA referrals, according to a collaborative study of 205 general practices led by Queen Mary University of London, in partnership with the Centre for Academic Primary Care, Bristol Medical School.

Across the world, 'lockdowns' in response to the COVID-19 pandemic are putting women at increased risk of DVA. In England, the National Domestic Abuse Helpline, run by Refuge, has seen a 50 per cent increase in calls compared to pre-COVID-19 levels, along with a 400 per cent increase in web traffic.

In most settings, including high-income countries, healthcare is still not responding adequately to DVA. The World Health Organization, National Institute for Health and Care Excellence (NICE) and Department of Health and Social Care have called for greater health sector involvement in helping those affected.

IRIS (Identification and Referral to Improve Safety) is a training and support programme to help primary care teams identify and refer women affected by DVA. It involves training the whole primary care team at GP practices (GPs, nurses, practice managers and healthcare assistants and ancillary staff) in identifying DVA in their patients.

This includes adapting electronic medical records to prompt health workers to ask further questions about DVA, when presented with clinical conditions such as depression, anxiety or injury. The programme also includes a simple referral pathway to a named DVA advocate, ensuring direct access for women to specialist services.

The latest research, published in the journal BMC Medicine, looked at 205 general practices across London over four years. It compared practices in four London boroughs which had implemented the IRIS DVA training and referral programme, with general practices in a fifth borough which only had a stand-alone education session.

The study found that the benefits seen in the 144 practices receiving the full IRIS DVA programme were substantial, increasing DVA referrals 30-fold, with no increase in the 61 practices in a comparator borough. IRIS also led to a 27 per cent increase in new identification of women affected by DVA in the implementation borough, but not in the comparator borough.

Beth, who used the DVA service as a result of the IRIS programme, said: "IRIS were the first people to get right into my life and begin to make that difference. I had stressed repeatedly about what was happening to many professionals in the past but no-one could really help to make it stop. By the time IRIS became involved I was exhausted. My Advocate Educator was incredible, so knowledgeable, patient and intent on transforming my stuck situation. I will always feel beyond grateful to her and the team at IRIS for giving me the freedom I have now. The children and I are now safe and happy. It feels amazing."

Study lead Dr Alex Sohal from Queen Mary University of London said: "This new work shows that implementation of the IRIS programme surprisingly remains highly effective at scale in day to day general practice. It allows GPs to engage constructively with DVA rather than turning their back on this vulnerable group of patients."

A previous smaller randomised controlled trial in London and Bristol found IRIS to be very effective in identifying and referring women facing domestic abuse. The IRIS programme has been funded in 41 English and Welsh sites, but in one quarter of those, funding has since stopped despite the programme being effective.

Co-author Professor Chris Griffiths from Queen Mary University of London, said: "Health commissioners can now commission this programme with the confidence that it works in practice. IRIS can help GPs respond to the increased needs of women during COVID-19. Our work shows that the Mayor of London's recent investment in rolling out the IRIS DVA programme across a further seven London boroughs is an excellent use of resource. Boroughs and Violence Reduction Units across the UK should follow this lead."

Professor Rosalind Raine (Director, NIHR ARC North Thames) said: "We were delighted to be able to fund this research, which has profound implications for women and their families in great need. Our findings are timely given the new Domestic Abuse Bill in which the Government has committed to investing in domestic abuse training for responding agencies and professionals."

Co-author Professor Gene Feder of University of Bristol said: "This is a landmark study, showing that an evidence-based DVA programme commissioned within the NHS is effective and sustainable in general practice. Our findings strengthen the case for the implementation of IRIS across the whole NHS and further development of a global primary care based response to DVA."

The social enterprise IRISi, who work to improve the healthcare response to gender-based violence, have recently been helping GP teams to continue to respond to domestic abuse during the COVID-19 lockdowns, by releasing guidance on how to apply IRIS during telephone and video consultations with their patients.

Credit: 
Queen Mary University of London

Graphene sets sail in microgravity

image: A graphene light sail, 3 mm in diameter with a mass of 0.25 mg, 'sets sail' when illuminated with a 1 W laser. The prototype has a graphene micromembrane design that reduces the overall mass while keeping the complete area of the sail functional.

Image: 
Dr. Santiago Jose Cartamil-Bueno

Overseas exploration and trade during the Age of Discovery (15th-17th centuries) were made possible by sail technology, and deep-space exploration will require the same for the coming Age of NewSpace. This time, however, the new sails will move with light instead of wind, which means they need to be extremely large, thin, lightweight, reflective and strong.

In a light-hearted leap for humankind, ESA-backed researchers have demonstrated the laser propulsion of graphene sails in microgravity. In an article recently published in Acta Astronautica, they report a scalable design that minimizes the overall sail mass and hence increases the thrust upon light irradiation. In addition, they prove the new sail concept by accelerating prototypes in a free-fall facility with 1 W lasers, reaching accelerations of up to 1 m/s². This milestone paves the way for lightweight, ultralarge sails and may eventually help us reach other star systems within a human lifespan.

Let me play among the stars

Physical exploration of deep space became a reality when NASA's Voyager 1 left our Solar System in 2012, after a trip of 35 years and 121 AU (18,100,000,000 km; 11,250,000,000 mi). Were Voyager 1 traveling to Alpha Centauri Cb, the exoplanet of our closest neighboring star system at 260,000 AU, humanity would have to wait dozens of millennia and hope that the probe retained some power to reach us then.

As demonstrated first by JAXA's IKAROS mission (2010) and recently by The Planetary Society's LightSail 2 (2019), using light sails as a propulsion system is among the most promising ideas for enabling fast and affordable space trips. Not only do sails require no fuel to move, but they also save the corresponding costly weight of the fuel and its tanks. Unfortunately, light radiation pressure (the momentum transfer of photons) only confers relevant acceleration when the sails are sufficiently large (from a few to thousands of square meters) with minimal mass, and currently used materials are limited when scaling up their size.
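To see why mass matters so much, consider the photon-momentum contribution alone. The sketch below (an illustrative back-of-envelope calculation, not taken from the paper) computes the radiation-pressure force on a normally illuminated sail, F = (1 + R)·P/c, and the resulting acceleration for a given sail mass:

```python
C = 299_792_458.0  # speed of light, m/s

def radiation_pressure_accel(power_w, mass_kg, reflectivity=1.0):
    """Acceleration from photon momentum alone on a normally illuminated sail."""
    force = (1.0 + reflectivity) * power_w / C  # 2P/c for a perfect mirror
    return force / mass_kg

# a 1 W laser on a 0.25 mg sail (the prototype's mass scale)
a = radiation_pressure_accel(1.0, 0.25e-6)
print(f"{a:.3f} m/s^2")
```

Because the force is fixed by the laser power, halving the sail mass doubles the acceleration, which is why a perforated-film design that trims mass while keeping the full optical area is so attractive for scaling up.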

"Graphene is part of the solution", says Dr. Santiago J. Cartamil-Bueno, SCALE Nanotech's director and leader of GrapheneSail team. "We demonstrate a novel sail design that reduces the overall sail mass by using perforated films. By covering the holes with CVD graphene, the full area of the sail is again available for optical performance at minimal mass cost. The fabrication is relatively simple and could be easily scaled up to squared kilometers, although the in-space deployment of such a giant sail will be a serious challenge".

Völlig losgelöst, von der Erde

With the support of ESA, the researchers gained access to the ZARM Drop Tower in Bremen (Germany) in order to test the graphene sails in space-like conditions. Here, experiments are performed in a free-fall capsule that ensures a high-quality microgravity environment.

Dr. Thorben Könemann, Dep. Scientific Director, ZARM Drop Tower Operation and Service Company, remarked: "It is always a great pleasure for us to support visionary and promising experiment concepts. The success of the GrapheneSail team underlines again the capabilities of the Bremen Drop Tower - offering not only an excellent microgravity environment for fundamental research, but also being a first stepping stone and testbed for space technology without the complexity of in-orbit operations".

Accessing this type of facility is not trivial, even for such a breakthrough initiative. Luckily, Dr. Astrid Orr, ESA's Physical Sciences Coordinator at ESTEC, saw it differently: "this project is a wonderful example of scientific research that can be performed with the support of ESA on a ground-based space-analogue platform - in this case microgravity - and which also has high potential for ESA's future spaceflight and exploration programs".

"We want to set sails to Mars before SpaceX," jokes Dr. Santiago J. Cartamil-Bueno, "but for now we keep our feet on the ground. Currently, the graphene sails are being developed through the European Space Agency Business Incubator Center Hessen & Baden-Württemberg and we look for more strategic partners that allow us to scale the technology up for an eventual test in space". Maybe it's the final countdown for graphene to take off.

Credit: 
SCALE Nanotech

Scientists observe changes in Earth's surface movement months before big earthquakes

This is part of the results of a new investigation carried out by scientists from Germany, Chile and the United States, including the University of Concepción geologist Marcos Moreno Switt. The team investigated signals captured by GNSS (GPS) satellite navigation stations that recorded ground movement before the major Maule (magnitude 8.8) and Tohoku-oki (magnitude 9.0) earthquakes. The scientists report their work in the latest issue of the prestigious journal Nature.

Asked about the implications of this study, Dr. Marcos Moreno, researcher at Universidad de Concepción, says: "Thanks to satellite data, today we can identify with great precision how the Earth's surface deforms before major earthquakes. This allows us to identify the changes that may be related to the processes that trigger earthquakes. The vast majority of major earthquakes are likely to be accompanied by precursor activity, as had already been recorded before the Iquique earthquake in 2014, and now before the 2011 Japan earthquake and the 2010 Maule earthquake. Much remains to be understood about this precursor activity, but it is a great advance to be able to detect these movements. This is the focus of our new RING 2020 PRECURSOR project, financed by ANID, in which we will integrate an interdisciplinary team of Chilean and foreign researchers to obtain more and better information on these processes."

Credit: 
Universidad de Concepción

Fighting autoimmunity and cancer: The nutritional key

video: Video to illustrate the scientific discovery

Image: 
Luxembourg Institute of Health

Scientists at the Department of Infection and Immunity of the Luxembourg Institute of Health (LIH) revealed a novel mechanism through which the immune system can control autoimmunity and cancer. The researchers focused in particular on regulatory T cells - a specific type of white blood cell that in general acts as a brake on the immune system. The LIH research team led by Prof Dirk Brenner, FNR ATTRACT fellow and Head of Experimental & Molecular Immunology, revealed a mechanism that controls the function of regulatory T cells and determines the balance between autoimmunity and anti-cancer activity. In a preclinical model, the scientists further showed that elucidating the metabolic mechanism of a disease can lead to disease reduction through a rationally designed diet that specifically addresses these metabolic alterations. This sets a new direction for future treatment of metabolic diseases. These findings, which were published today in the leading international journal Cell Metabolism, hold important implications for the development of personalised treatment options for autoimmune disorders and cancer.

"Our immune system is needed for a healthy body function and protects us from all kinds of infections. Particularly important in this respect are T cells, and specifically regulatory T cells. Although these represent only a small fraction of all T cells, they are crucial to keep our immune system in check" explains Prof Brenner. "If regulatory T cells are not functional, the immune system gets out of control and turns against its own body. This can lead to detrimental autoimmune diseases like multiple sclerosis, type I diabetes or arthritis. However, a highly reactive immune system can kill cancer cells very efficiently. This has led to the development of 'checkpoint inhibitors', specific drugs that unleash an immune system attack on cancer cells and which won the Nobel Prize in Medicine in 2018". The Luxembourgish scientists took this angle and revealed a novel mechanism by which this balance between an extreme or subdued immune reaction can be controlled by modifying regulatory T cell metabolism.

Initially, the researchers focused on how regulatory T cells cope with stress. Cellular stress can originate from the cells themselves, for example when they get activated and divide, but also from their environment, especially from nearby tumour cells. Free radicals called reactive oxygen species (ROS) are the molecular mediators of cellular stress. These are harmful for the cells and therefore need to be inactivated. "Free oxygen radicals are 'neutralised' by antioxidants and the major antioxidant in T cells is a molecule known as glutathione. We were surprised when we realised that regulatory T cells had about three times as much glutathione as other T cells. This pointed to an important function", says Henry Kurniawan, first author of the study and PhD student in Prof Brenner's group. Through a sophisticated genetic approach, the scientists removed a gene named 'glutamate cysteine ligase' (Gclc) only in a small population of regulatory T cells in mice. The Gclc gene is instrumental for glutathione production. Prof Brenner's team discovered that free radicals accumulated in these genetically altered regulatory T cells and that these cells lost their ability to act as a brake on the immune system. Strikingly, this led to a massive immune activation and a fatal autoimmune disease.

The team also found that the absence of glutathione in regulatory T cells increased serine metabolism massively. Serine is one of the 22 different amino acids that constitute the building blocks of proteins, which are in turn important for the structure and function of cells. No previous research group had studied the connection between glutathione, free radicals, serine and regulatory T cell function before. Prof Brenner's team characterised the metabolic alteration that led to the observed autoimmune disease in their mutant mice. Based on their findings, they designed a specific nutritional plan with the aim of correcting these disease-causing metabolic shifts. This dietary plan lacked both the amino acids serine and the closely related glycine. Interestingly, this engineered precision diet suppressed the severe autoimmunity and no disease developed. "Importantly, our study shows that the absence of only 2 out of 22 amino acids can cure a complex autoimmune disease. Therefore, elucidating the exact metabolic and molecular basis of a disease offers the possibility to correct these metabolic abnormalities through a special diet that is precisely adapted to the delineated disease mechanism. Our study might be a first step in the direction of the personalised treatment of metabolic diseases and autoimmunity", explains Prof Brenner.

"The relationship between glutathione, free radicals and serine can be used as a 'switch' to modulate immune cell activation. A higher immune cell activity is beneficial for cancer patients. We were intrigued by the idea of using our findings also to boost anti-tumor responses" he adds. Indeed, the team further showed that lower glutathione levels in regulatory T cells and the resulting rise in immune cell activation led to a significant tumour rejection, which might open up new therapeutic avenues for cancer treatment. "These astonishing results show the enormous potential of tweaking metabolism to prevent autoimmunity and target cancer. This could pave the way for the development of a new generation of immunotherapies," explains Prof Markus Ollert, Director of LIH's Department of Infection and Immunity. "The publication of these results in such a competitive and prestigious international journal is a momentous accomplishment not just for our department and institute, but for the entire Luxembourgish biomedical research community", he concludes.

In future projects, the researchers will use their findings to elaborate new approaches for therapeutic intervention. In that respect, the scientists have already proven that their delineated disease-controlling mechanism is also relevant in human regulatory T cells.

Due to its significance, the publication was selected by Cell Metabolism to be featured as the cover story of the May issue of the journal.

Involved research teams

Prof Dirk Brenner is the Deputy Head of Research & Strategy at LIH's Department of Infection and Immunity. He is Professor for Immunology & Genetics at the Luxembourg Center for Systems Biomedicine (LCSB) of the University of Luxembourg and Professor of Allergology at the University of Southern Denmark. He received a prestigious ATTRACT Consolidator grant from the Luxembourg National Research Fund (FNR) in 2015 to set up the Experimental & Molecular Immunology research group at LIH. The FNR-ATTRACT programme supports national research institutions in expanding their competences in strategic research areas by attracting outstanding young researchers with high potential to Luxembourg.

The present study was performed in close collaboration with a national and international team and involved partners from LIH's Department of Infection and Immunity, LIH's Department of Oncology, the Braunschweig Integrated Center of Systems Biology (BRICS) of the Technische Universität Braunschweig (Germany), the Helmholtz Centre for Infection Research (Germany), the Campbell Family Institute for Breast Cancer Research at the University of Toronto (Canada), the Institute for Medical Microbiology and Hospital Hygiene at the University of Marburg (Germany), the Department of Environmental Health Sciences at the Yale School of Public Health (USA), the Odense Research Center for Anaphylaxis (ORCA) of the Odense University Hospital (Denmark), the Department of Biomedical Genetics and Wilmot Cancer Institute of the University of Rochester Medical Center (USA), the Departments of Medical Biophysics and Immunology at the University of Toronto (Canada) and the University of Hong Kong (China).

Credit: 
Luxembourg Institute of Health

Considering how many firms can meet pollutant standards can spur green tech development

When a government agency considers tightening a standard on a pollutant, it often considers the proportion of firms that can meet the new standard, because a higher proportion suggests a more feasible standard. A new study developed a model of regulation in which the probability of a stricter standard being enacted increased with the proportion of firms in an industry that could meet the standard. The study found that regulations that consider the proportion of firms that can meet the new standard can motivate the development of a new green technology more effectively than regulations that do not consider this factor.

The study, by researchers at Carnegie Mellon University and The Hong Kong University of Science and Technology, appears in Management Science.

"Our analysis highlights the importance of considering the interplay of industry capability and uncertainty about a new green technology's payoff in a firm's decision about development," says Alan Scheller-Wolf, Professor of Operations Management at Carnegie Mellon University's Tepper School of Business, who coauthored the study.

A government agency's potential regulatory action is an important driving force for firms to develop and adopt new green technologies. Although regulation often takes industry capability into account, most prior research has assumed that government agencies move to stricter standards with fixed probabilities, regardless of industry capability. In this study, the researchers sought to determine how the uncertainty of a new technology's payoff and the strategic effects induced by capability-based regulation jointly affect firms' incentives to develop or adopt a new green technology.

To do this, the researchers developed a model based on game theory to study firms' decisions on developing, or adopting, a new green technology when there was a possibility of new regulation. The model considered factors that may affect a firm's decision to innovate or adopt a green technology: the potential benefits from the technology; the anticipated costs of developing, adopting, and using it; other firms' decisions; and regulation.
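The core mechanism can be illustrated with a toy calculation (the functions and all parameter values below are illustrative assumptions, not the authors' actual model): the probability of a stricter standard rises with the fraction of firms that have adopted, so each firm's expected gain from adopting depends on what the rest of the industry does.

```python
# Toy sketch of capability-based regulation (illustrative only): the
# regulator enacts a stricter standard with probability that rises with
# the fraction of firms already meeting it; a firm weighs its adoption
# cost against the expected penalty of falling short of a new standard.

def enactment_probability(adoption_fraction: float, aggressiveness: float = 1.0) -> float:
    """Probability of a stricter standard, increasing in industry capability."""
    return min(aggressiveness * adoption_fraction, 1.0)

def adopt_is_worthwhile(adoption_fraction: float, adoption_cost: float,
                        noncompliance_penalty: float, aggressiveness: float = 1.0) -> bool:
    """A firm adopts if the expected penalty avoided exceeds the adoption cost."""
    p = enactment_probability(adoption_fraction, aggressiveness)
    return p * noncompliance_penalty > adoption_cost

# When many rivals have already adopted, regulation is more likely, so adopting pays:
print(adopt_is_worthwhile(0.8, adoption_cost=5.0, noncompliance_penalty=10.0))  # True
print(adopt_is_worthwhile(0.2, adoption_cost=5.0, noncompliance_penalty=10.0))  # False
```

The strategic feedback in the paper comes from exactly this coupling: each adoption raises the enactment probability, which in turn changes the calculus for the remaining firms.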

The study found that regulations that consider the proportion of firms that can meet the new standard (often indicated by the industry's voluntary adoption level) are more effective at spurring development of a new green technology than regulations that ignore the voluntary adoption level.

"What this means is that in an industry in which firms can easily catch up with a new technology, a government agency may want to use a regulation that explicitly considers industry capability to encourage innovation," suggests Xin Wang, Assistant Professor in the Department of Industrial Engineering and Decision Analytics at the Hong Kong University of Science and Technology, who led the study (Wang received his Ph.D. in operations management from the Tepper School of Business).

The study also found that regulation that is more aggressive (for which there is a higher probability of enforcing a stricter standard for a given voluntary adoption level) encourages more firms to adopt a green technology once the technology becomes available, but may discourage firms from developing it in the first place in the face of intense competition.

"This suggests that in an industry with intense competition, a government agency should exercise caution about being too aggressive with regulation, which could stifle innovation," explains Soo-Haeng Cho, Associate Professor of Operations Management at Carnegie Mellon University's Tepper School of Business, who coauthored the study.

Credit: 
Carnegie Mellon University

Innovative drunk driving program lowers risk of rearrest

An innovative statewide alcohol-monitoring program that requires drunk drivers to be tested frequently for alcohol use significantly lowers the likelihood that participants will be rearrested or have probation revoked, according to a new RAND Corporation study.

The study analyzed results from South Dakota's 24/7 Sobriety Program, a public safety effort that imposes very frequent alcohol testing along with swift but modest sanctions for those testing positive or missing a test -- typically a night or two in jail.

Researchers found that among those who had been arrested for a second or third drunk driving offense, 24/7 participants were almost 50 percent less likely to be rearrested or have their probation revoked in the following year compared to those who were not in the program.

Evidence suggests the trend continued into the second and third years following arrest.

The results are published online by the Journal of Policy Analysis and Management.

"The 24/7 Sobriety program is targeted at individuals whose alcohol use leads them to repeatedly threaten public health and safety," said Beau Kilmer, the study's lead author and director of the RAND Drug Policy Research Center. "Our research shows that 24/7 is making a difference in people's lives and reducing the risk of rearrest."

The 24/7 Sobriety program started in 2005 as a pilot project in a small number of South Dakota counties. As a condition of bond, individuals arrested for DUI were ordered to abstain from alcohol and show up at a law enforcement office to undergo twice-daily breathalyzer tests.

Individuals who test positive for any alcohol or who skip the tests are immediately subject to a short jail term.

Over time, the program expanded throughout the state and other alcohol-testing technologies were incorporated. More than 30,000 South Dakotans participated in the program through early 2017. The program also expanded in terms of who was eligible to participate.

"This study focuses on DUI participants, but the program's impact may be much broader," said co-author Greg Midgette, an assistant professor of criminology and criminal justice at the University of Maryland and an adjunct researcher at RAND. "There are many other kinds of crime, and aspects of life, that may be positively affected by less heavy drinking."

Today, the program is used throughout South Dakota, and similar efforts have been adopted in North Dakota and other jurisdictions, including the Greater London Authority in England. The program also has been designated as "promising" in CrimeSolutions.gov, the U.S. Department of Justice's evidence-based practices portal.

The RAND analysis is the first peer-reviewed study of the 24/7 program using individual-level data.

While previous RAND research focused on the program's impact at the county level, the new study allows policymakers to make comparisons with other well-studied programs that target the same population.

RAND based its analysis on information from 16,513 individuals who were arrested for a second or third drunk driving offense in South Dakota between 2004 and early 2012. The researchers controlled for several factors to help rule out alternative explanations and took advantage of the fact that the program was implemented in counties at different times.

"[W]e find strong evidence that 24/7 participation reduced criminal activity at 12 months after the initial arrest, and perhaps longer," the study concludes. "These findings provide support for 'swift-certain-fair' approaches to applying sanctions in community supervision."

Credit: 
RAND Corporation

Decoding the skies: The impact of water vapor on afternoon rainfall

The role that atmospheric water vapor plays in weather is complex and not clearly understood. However, University of Arizona researchers have started to tease out the relationship between morning soil moisture and afternoon rainfall under different atmospheric conditions in a new study in the journal Geophysical Research Letters.

"The prevailing wisdom on the relationship between soil moisture and rainfall is that if you have wetter soil in the morning, you'll have a greater occurrence of rainfall in afternoon, but it's more complicated than that," said lead author Josh Welty, a UArizona doctoral student in the Department of Hydrology and Atmospheric Sciences. "On a global scale, we see evidence that you can have greater chances of afternoon rainfall over both wet and dry soil conditions, depending on atmospheric moisture."

The team, which also included researchers from the Desert Research Institute in Nevada and NASA's Goddard Space Flight Center, used satellite-based observations of soil moisture and afternoon rainfall in the northern hemisphere from the last five years. The work was supported by NASA and is based on NASA satellite data from the Global Precipitation Measurement mission and the Soil Moisture Active Passive satellite, as well as atmospheric moisture and movement data from the Modern-Era Retrospective analysis for Research and Applications Version 2, or MERRA-2, model, which incorporates satellite observations.

The researchers found that on days when wind blows in little atmospheric moisture, afternoon rainfall is more likely to occur over wetter soils or higher relative humidity. On days when wind introduces lots of atmospheric moisture, afternoon rainfall is more likely over drier soils or lower relative humidity. The team also found that for both conditions, afternoon rainfall occurrence is more likely with warmer morning soil or air temperature.

The researchers focused on days in which afternoon rainfall occurred and noted the difference between the number of rainfall days that occurred over wetter-than-average soil and the number of rainfall days that occurred over drier-than-average soil. They then grouped their results into three categories: high, mid and low atmospheric moisture transport by wind.
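That bookkeeping can be sketched with synthetic data (the variable names, random inputs, and tercile split below are assumptions for illustration, not the study's actual code or data): count rain days over wetter- versus drier-than-average soil within each moisture-transport regime.

```python
# Illustrative sketch of the diagnostic described above, on synthetic data:
# for afternoon-rain days, tally how many fell over wetter- vs drier-than-
# average morning soil, stratified by wind-driven moisture transport.
import numpy as np

rng = np.random.default_rng(0)
n_days = 1000
soil_anomaly = rng.normal(size=n_days)        # morning soil-moisture anomaly
moisture_transport = rng.normal(size=n_days)  # wind-driven moisture transport
rained = rng.random(n_days) < 0.3             # afternoon rainfall occurrence

# Three transport regimes: low, mid, high (terciles)
lo, hi = np.percentile(moisture_transport, [33, 67])
regimes = [("low", moisture_transport < lo),
           ("mid", (moisture_transport >= lo) & (moisture_transport <= hi)),
           ("high", moisture_transport > hi)]

for name, mask in regimes:
    rain_days = mask & rained
    wet = np.sum(rain_days & (soil_anomaly > 0))   # rain over wetter-than-average soil
    dry = np.sum(rain_days & (soil_anomaly <= 0))  # rain over drier-than-average soil
    print(f"{name}-transport regime: wet-minus-dry rain-day difference = {wet - dry}")
```

In the study's real version of this analysis, the sign of that wet-minus-dry difference is what flips between the low- and high-transport regimes.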

This research builds on a 2018 study that identified soil moisture's role in afternoon rainfall amount in the Southern Great Plains of Oklahoma. The new findings show that the relationship between soil moisture, afternoon rainfall and atmospheric moisture in Oklahoma doesn't apply across the entire northern hemisphere.

"Over the Southern Great Plains, we found that when the wind brings less moisture, dry soils are associated with increases in rainfall amount; and when the wind brings greater moisture, wet soils are associated with increases in rainfall amount. In the current study, we find that, actually, in many regions, the opposite is true for the likelihood of afternoon rainfall," Welty said.

Understanding the role of water vapor in weather is important because its effects are felt everywhere, according to Welty's thesis adviser and paper co-author Xubin Zeng, Agnese N. Haury Endowed Chair in Environment and director of the Climate Dynamics and Hydrometeorology Center and Land-Atmosphere-Ocean Interaction Group.

"For example, for the Southern Great Plains there are many tornado activities because there is water vapor moving in from the Gulf of Mexico. Also, on the California coast you talk about severe flooding from atmospheric rivers," which are corridors of concentrated water vapor that can quickly precipitate once hitting a mountain range, causing mass flooding," Zeng said.

"Water vapor brought in by the winds is an important source to understand. In the past, people didn't pay enough attention to it in studying how land conditions affect rainfall, potentially making their results misleading. Once we consider the wind's movement of water vapor, the results become more robust," Zeng said.

Understanding this relationship is even more important as global warming changes patterns of atmospheric moisture, soil moisture and more. Such changes will not only have effects on weather and natural disasters, but also on agriculture, Zeng said.

"The results really show the complexity of the land's influence on weather and climate," said physical scientist and paper co-author Joe Santanello from NASA's Goddard Space Flight Center, who chairs the NASA-supported Local Land-Atmosphere Coupling working group to improve weather and climate models. "When you add in the human factor of irrigation or land use that changes the dryness or wetness of the soils, which we currently don't represent well in the models, we potentially have additional downstream effects on weather and climate that we haven't foreseen."

The next step is to assess how these relationships play out in global climate and weather forecasting models.

"Our findings are observational, but now, we want to use computer modeling to help us understand why drier or wetter soil could enhance rainfall likelihood," Zeng said. "We know it's true, but we don't quantitatively know why."

Credit: 
University of Arizona

Spending time in the garden linked to better health and wellbeing

Spending time in the garden is linked to health and wellbeing benefits similar to those of living in wealthy areas, according to a new large-scale study.

Research conducted by the University of Exeter and the Royal Horticultural Society charity, published in Elsevier's Landscape and Urban Planning, analysed data from nearly 8,000 people collected by Natural England between 2009 and 2016. The research, conducted with funding from Innovate UK and NIHR, found that people who spend time in the garden are significantly more likely to report general good health, higher psychological wellbeing and greater physical activity levels than those who do not spend time in the garden.

The study found that the health and wellbeing benefits of gardening were similar in size to the difference in health between people living in the wealthiest and the poorest parts of the country. The benefits applied whether people spent their time gardening or simply relaxing. People who regularly spent time in their garden were also more likely to visit nature elsewhere once a week.

The study also found that people with access to a private garden had higher psychological wellbeing and those with an outdoor space such as a yard were more likely to meet physical activity guidelines. These benefits were in comparison to people who did not have a garden or outdoor space.

Dr Sian de Bell, of the University of Exeter Medical School, lead author of the study, said: "A growing body of evidence points to the health and wellbeing benefits of access to green or coastal spaces. Our study is one of the largest to date to look at the benefits of gardens and gardening specifically. Our findings suggest that whilst being able to access an outdoor space such as a garden or yard is important, using that space is what really leads to benefits for health and wellbeing."

Dr Becca Lovell, also of the University of Exeter Medical School and project lead, said: "Gardens are a crucial way for people to access and experience the natural environment. Our new evidence highlights that gardens may have a role as a public health resource and that we need to ensure that their benefit is available equally."

Prof Alistair Griffiths, Director of Science and Collections at the Royal Horticultural Society and co-author on the paper, said: "This work adds to the increasing body of scientific evidence on the health benefits of gardens and gardening. As the current COVID crisis has demonstrated, there's an urgent need to include the provision of private gardens in the planning process to better support the UK's preventative health agenda and the wellbeing of our nation."

There is growing evidence that living in a greener neighbourhood can be good for health and wellbeing, but most research has focused on public green spaces such as parks and playing fields. The current research used data collected by Natural England's Monitor of Engagement with the Natural Environment Survey, the world's largest survey collecting data on people's weekly contact with the natural world.

Marian Spain, Interim Chief Executive of Natural England, said:

“In these unprecedented times, the government’s priority continues to be making sure people stay at home to help protect the NHS and save lives. The benefits of spending time around nature during this time, be that in our back gardens or in local green spaces as part of our daily exercise, cannot be underestimated – and this research shines a light on the impact this has on people’s health and wellbeing.

“We know that not everyone has easy access to nature or green spaces, and that’s why we’ve launched our #BetterWithNature campaign to inspire more people to connect with nature safely during this period. Longer term, this campaign aims to bring the benefits of nature to as many people as possible through initiatives like our Nature Recovery Network, which will see more green spaces created near where people live and work.”

Credit: 
University of Exeter

Identifying light sources using artificial intelligence

image: A detector (the eye) measures identical photons from natural sunlight and laser light. The fast identification of light sources is performed by an artificial neuron that is trained to efficiently extract patterns in the quantum fluctuations of photons.

Image: 
Elsa Hahne

WASHINGTON, May 5, 2020 -- Identifying sources of light plays an important role in the development of many photonic technologies, such as lidar, remote sensing, and microscopy. Traditionally, identifying light sources as diverse as sunlight, laser radiation, or molecular fluorescence has required millions of measurements, particularly in low-light environments, which limits the realistic implementation of quantum photonic technologies.

In Applied Physics Reviews, from AIP Publishing, researchers demonstrated a smart quantum technology that enables a dramatic reduction in the number of measurements required to identify light sources.

"We trained an artificial neuron with the statistical fluctuations that characterize coherent and thermal light," said Omar Magana-Loaiza, an author of the paper.

After researchers trained the artificial neuron with light sources, the neuron could identify underlying features associated with specific types of light.

"A single neuron is enough to dramatically reduce the number of measurements needed to identify a light source from millions to less than hundred," said Chenglong You, a fellow researcher and co-author on the paper.

With fewer measurements, researchers can identify light sources much more quickly, and in certain applications, such as microscopy, they can limit light damage since they don't have to illuminate the sample nearly as many times when taking measurements.

"If you were doing an imaging experiment with delicate fluorescent molecular complexes, for example, you could reduce the time the sample is exposed to light and minimize any photodamage," said Roberto de J. León-Montiel, another co-author.

Cryptography is another application where these findings could prove valuable. Typically to generate a key to encrypt an email or message, researchers need to take millions of measurements. "We could speed up the generation of quantum keys for encryption using a similar neuron," said Magana-Loaiza.

As laser light plays an important role in remote sensing, this work could also enable development of a new family of smart lidar systems with the capability to identify intercepted or modified information reflected from a remote object. Lidar is a remote sensing method that measures distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor.

"The probability of jamming a smart quantum lidar system will be dramatically reduced with our technology," he said. In addition, the possibility to discriminate lidar photons from environmental light such as sunlight will have important implications for remote sensing at low-light levels.

Credit: 
American Institute of Physics

From expressions to mind wandering: Using computers to illuminate human emotions

May 5, 2020 - A common view of human emotions is that they are too idiosyncratic and subjective to be studied scientifically. But as presented at the Cognitive Neuroscience Society (CNS) virtual meeting today, cognitive neuroscientists are using contemporary, data-driven computational methods to overturn old ideas about the structure of emotions across humanity.

Researchers are applying computing power to understanding everything from how we generate spontaneous emotions during mind wandering to how we decode facial expressions across cultures. Their findings are important in characterizing how emotions contribute to well-being, the neurobiology of psychiatric disorders, and even how to make more effective, social robots.

"Artificial intelligence (AI) enables scientists to study emotions in ways that were previously thought to be impossible, which is leading to discoveries that transform how we think emotions are generated from biological signals," says Kevin LaBar of Duke University who is chairing the symposium on this topic at the CNS virtual meeting.

Decoding facial expressions across cultures

Six core human emotions -- fear, anger, disgust, sadness, happiness and surprise -- have been considered universal in human psychology for decades and popularized in such places as the Pixar film "Inside Out." Yet despite the societal prevalence of this idea, which dates back to the work of Paul Ekman, the scientific consensus shows that these emotions are far from universal, with a significant gap in facial recognition of these emotions across cultures, particularly for people from East Asia, says Rachael Jack, a researcher at the University of Glasgow.

Jack has been working to understand what she calls the "language of the face" -- how individual face movements combine in different ways to create meaningful facial expressions (like how letters combine to create words). "I think of this a bit like trying to crack hieroglyphics or an unknown ancient language," Jack says. "We know so much about spoken and written language, even hundreds of ancient languages, but we have comparatively little formal knowledge of the non-verbal communication systems we use every day and which are so critical to all human societies."

In new work she is presenting at the CNS annual meeting, Jack will showcase the novel data-driven methods her team has used to develop dynamic models of these face movements, like a recipe book of facial expressions of emotions. Her team is now transferring these models to digital agents, such as social robots and virtual humans, so that they can generate facial expressions that are socially nuanced and culturally sensitive.

In their work, a novel face movement generator, created at the Institute of Neuroscience and Psychology at the University of Glasgow, randomly selects a subset of individual face movements, such as eyebrow raiser, nose wrinkler, or lip stretcher, and randomly activates the intensity and timing of each. These randomly activated face movements then combine to create a facial animation. Study participants from different cultures then categorize the facial animation according to the six classic emotions, or they can select "other" if they do not perceive any of these emotions. After many such trials, the researchers build a statistical relationship between the face movements presented on each trial and the participants' responses, which produces a mathematical model.

"In contrast to traditional theory-driven approaches where experimenters took a hypothesized set of facial expressions and showed them to participants across the world, we have added a psychophysical approach," she explains. "It is more data-driven and more agnostic in sampling and testing facial expressions and, critically, uses the subjective perceptions of cultural participants to understand what face movements drive their perception of a given emotion, for example, 'he is happy.'"

These studies have whittled the six commonly assumed universal facial expressions of emotion down to only four cross-cultural expressions. "There are substantial cultural differences in facial expressions that can hinder cross-cultural communication," Jack says. "We often, but not always, find that East Asian facial expressions have more expressive eyes than Western facial expressions, which tend to have more expressive mouths, just like Eastern vs. Western emoticons!"

She adds that there are also cultural commonalities that can be used to support accurate cross-cultural communication of specific messages -- for example, facial expressions of happiness, interest, and boredom are similar across Eastern and Western cultures and can easily be recognized across them.

Jack and her team are now using their models to enhance the social signalling capabilities of robots and other digital agents that can be used globally. "We're very excited to transfer our facial expression models to a range of digital agents and to see the dramatic improvement in performance," she says.

Understanding spontaneous emotions at rest

Understanding how the subjective experience of emotion is mediated in the brain is the holy grail of affective neuroscience, says LaBar of Duke. "It is a hard problem, and there has been little progress to date."

In his lab, LaBar and colleagues are working to understand the emotions that emerge while the brain is mind-wandering at rest. "Whether triggered by internal thoughts or memories, these 'stream-of-consciousness' emotions are the targets of rumination and worry that can lead to prolonged mood states, and can bias memory and decision-making," he explains.

Until recently, researchers have been unable to decode these emotions from resting-state signals of brain function. As he is presenting today, LaBar's team has been able to apply machine learning tools to derive neuroimaging markers of a small set of emotions like fear, anger, and surprise. And the researchers have modeled how these emotions spontaneously emerge in the brain while subjects are resting in an MRI scanner.

The core of the work has been training a machine learning algorithm to differentiate patterns of brain activity that separate emotions from one another. The researchers present a pattern classifier algorithm with a training data set from a group of participants who were shown music and movie clips that induced specific emotions. Using feedback, the algorithm learns to weigh the inputs coming from different regions of the brain to optimize the signaling of each emotion. The researchers then test how well the classifier can predict the elicited emotions in a new sample of participants, using the set of brain weights it generated from the training sample.
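A toy version of that cross-participant scheme, using synthetic "brain" patterns and a simple nearest-centroid classifier standing in for the lab's actual algorithm (the region count, emotion set, and noise levels are assumptions for illustration):

```python
# Sketch of train-on-one-group, test-on-another pattern classification
# with synthetic data: each emotion has a characteristic activity pattern
# across brain regions, corrupted by per-participant noise.
import numpy as np

rng = np.random.default_rng(3)
emotions = ["fear", "anger", "surprise"]
n_regions, n_train, n_test = 20, 60, 30

# Characteristic (synthetic) pattern per emotion across brain regions.
prototypes = rng.normal(size=(len(emotions), n_regions))

def simulate(n):
    labels = rng.integers(len(emotions), size=n)
    patterns = prototypes[labels] + rng.normal(0, 0.8, (n, n_regions))
    return patterns, labels

X_train, y_train = simulate(n_train)   # emotion-labeled training participants
X_test, y_test = simulate(n_test)      # held-out participants

# Nearest-centroid classifier: the per-emotion mean training pattern plays
# the role of the learned brain weights.
centroids = np.array([X_train[y_train == k].mean(axis=0)
                      for k in range(len(emotions))])
pred = np.argmin(((X_test[:, None, :] - centroids[None]) ** 2).sum(axis=2), axis=1)
print(f"cross-participant accuracy: {(pred == y_test).mean():.2f}")
```

Only once a classifier generalizes to new participants in this way does it make sense to look for the same patterns arising spontaneously in resting-state scans, as the next step describes.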

"Once the emotion-specific brain patterns are validated across subjects in this way, we look for evidence that these patterns emerge spontaneously in participants who are merely lying at rest in the scanner," he says. "We can then determine whether the pattern classifier accurately predicts the emotions that people spontaneously report in the scanner, and identify individual differences."

With this method, they have shown, for example, how individual differences in anxiety and depression bias the expression of spontaneous emotions, how emotions change across the lifespan as people age, and how the temporal dynamics of emotions differentiate healthy subjects from individuals with psychopathology.

"Our hope is that this endeavor will lead to more reliable, objective indices of specific emotional experiences that do not rely on self-report, as these verbal reports are not reliable in certain populations, like children or even some adults who have little self-insight into their emotions," he says. "If so, then we might gain traction into understanding the emotional life of these individuals better."

Credit: 
Cognitive Neuroscience Society

Recent Australian wildfires made worse by logging

image: Bushfire

Image: 
The University of Queensland

Logging of native forests increases the risk and severity of fire and likely had a profound effect on the recent, catastrophic Australian bushfires, according to new research.

In the wake of the country's worst forest fires in recorded history, University of Queensland researchers have been part of an international collaboration, investigating Australia's historical and contemporary land-use.

UQ Professor and Wildlife Conservation Society Director James Watson said logging regimes have made many forests more fire prone for a host of reasons.

"Logging causes a rise in fuel loads, increases potential drying of wet forests and causes a decrease in forest height," Professor Watson said.

"It can leave up to 450 tonnes of combustible fuel per hectare close to the ground - by any measure, that's an incredibly dangerous level of combustible material in seasonally dry landscapes.

"By allowing these practices to increase fire severity and flammability, we undermine the safety of some of our rural communities.

"It affects wildlife too by creating habitat loss, fragmentation and disturbance for many species, with major negative effects on forest wildlife."

Lead author, Australian National University's Professor David Lindenmayer, said there are land management actions we can take to stop these fires from occurring in the future.

"The first is to prevent logging of moist forests, particularly those close to urban areas," Professor Lindenmayer said.

"We must also reduce forest fragmentation by proactively restoring some previously logged forests.

"In the event of wildfires, land managers must avoid practices such as 'salvage' logging - or logging of burnt forests - which severely reduces recovery of a forest."

The Federal Government has launched a Royal Commission to find ways to improve Australia's preparedness, resilience, and response to natural disasters.

Researcher Michelle Ward, from UQ's School of Earth and Environmental Sciences, said it was time for government to act.

"We urge policy makers to recognise and account for the critical values of intact, undisturbed native forests, not only for the protection of biodiversity, but for human safety," Ms Ward said.

"Let's act strongly and swiftly for the sake of our communities, the species they house, our climate and Australia's wild heritage."

Credit: 
University of Queensland

Fossil fuel-free jet propulsion with air plasmas

image: A schematic diagram of a prototype microwave air plasma thruster and the images of the bright plasma jet at different microwave powers. This device consists of a microwave power supply, an air compressor, a compressed microwave waveguide and a flame ignitor.

Image: 
Jau Tang and Jun Li

WASHINGTON, May 5, 2020 -- Humans depend on fossil fuels as their primary energy source, especially in transportation. However, fossil fuels are both unsustainable and unsafe, serving as the largest source of greenhouse gas emissions and leading to adverse respiratory effects and devastation due to global warming.

A team of researchers at the Institute of Technological Sciences at Wuhan University has demonstrated a prototype device that uses microwave air plasmas for jet propulsion. They describe the engine in the journal AIP Advances, from AIP Publishing.

"The motivation of our work is to help solve the global warming problems owing to humans' use of fossil fuel combustion engines to power machinery, such as cars and airplanes," said author Jau Tang, a professor at Wuhan University. "There is no need for fossil fuel with our design, and therefore, there is no carbon emission to cause greenhouse effects and global warming."

Beyond solid, liquid and gas, plasma is the fourth state of matter, consisting of an aggregate of ions and free electrons. It exists naturally in places like the sun's surface and Earth's lightning, but it can also be generated. The researchers created a plasma jet by compressing air to high pressure and using microwaves to ionize the pressurized air stream.

This method differs from previous attempts to create plasma jet thrusters in one key way. Other plasma jet thrusters, like those on NASA's Dawn space probe, use xenon plasma, which cannot overcome the friction of Earth's atmosphere and is therefore not powerful enough for use in air transportation. Instead, the authors' plasma jet thruster generates the high-temperature, high-pressure plasma in situ using only injected air and electricity.

The prototype plasma jet device can lift a 1-kilogram steel ball over a 24-millimeter-diameter quartz tube, in which the high-pressure air is converted into a plasma jet by passing through a microwave ionization chamber. At that scale, the corresponding thrust pressure is comparable to that of a commercial airplane jet engine.
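A quick back-of-the-envelope check of that figure, assuming the jet acts over the full cross-section of the tube:

```python
# Pressure needed to hold a 1 kg ball atop a 24 mm diameter tube.
import math

mass_kg = 1.0
g = 9.81                                   # gravitational acceleration, m/s^2
diameter_m = 0.024
area_m2 = math.pi * (diameter_m / 2) ** 2  # tube cross-section, ~4.5e-4 m^2
pressure_pa = mass_kg * g / area_m2        # force / area
print(f"thrust pressure: {pressure_pa / 1000:.0f} kPa")  # roughly 22 kPa
```

Because the tube's cross-section is so small, even a 1-kilogram load corresponds to tens of kilopascals of thrust pressure, which is the basis of the comparison to full-sized jet engines.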

By building a large array of these thrusters with high-power microwave sources, the prototype design can be scaled up to a full-sized jet. The authors are working on improving the efficiency of the device toward this goal.

"Our results demonstrated that such a jet engine based on microwave air plasma can be a potentially viable alternative to the conventional fossil fuel jet engine," Tang said.

Credit: 
American Institute of Physics