
A convex-optimization-based quantum process tomography method for reconstructing quantum channels

image: Reconstructing seawater quantum channels via convex-optimization-based quantum process tomography.

Image: 
©Science China Press

Quantum process tomography is often used to completely characterize an unknown quantum process. However, it can yield an unphysical process matrix, causing a loss of information in the tomography result. Professor Xian-Min Jin and his group from the Center for Quantum Information Technologies at Shanghai Jiao Tong University (SJTU) reported a new quantum process tomography method based on convex optimization. They demonstrated that the new method effectively minimizes the errors between the process matrix and the experimental results and works well for both unitary and non-unitary quantum channels. This work was published in Science Bulletin.

A general quantum channel may include a series of non-unitary operations. However, previous research has focused mainly on unitary and near-unitary operations, and there have been few studies of non-unitary operations.

Professor Jin chose the seawater channel as a benchmark to test the method. The seawater channel is one of the key channels for global quantum communication and can serve as a typical representation of a general quantum channel. The new method revealed the true action of the seawater quantum channel more accurately while preserving its physical properties. The team further tested it on the seven fundamental gates, showing that the method reveals the quantum channel more precisely and robustly without preliminary parameter adjustments. In addition, they prepared a series of non-unitary quantum channels; compared with previous tomography methods, theirs still reached up to 99.5% accuracy in this scenario.

Because each element of the process matrix represents a different operation, each value must be determined precisely to reveal the true action of the quantum channel. The work from Professor Xian-Min Jin and his group offers a more universal tool for further analyses of general quantum channels.
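The physicality constraint at the heart of such methods can be illustrated with a minimal sketch. The snippet below is illustrative only, not the paper's algorithm: the actual convex program fits the measured data under completely-positive, trace-preserving constraints, whereas this toy version simply projects a noisy, unphysical process-matrix estimate onto the nearest positive-semidefinite, unit-trace matrix.

```python
import numpy as np

def project_to_physical(chi):
    """Map a raw process-matrix estimate to a physical one by
    clipping negative eigenvalues and renormalizing the trace
    (a simple convex projection; the full method in the paper
    additionally fits the experimental data)."""
    chi = (chi + chi.conj().T) / 2        # enforce Hermiticity
    w, v = np.linalg.eigh(chi)
    w = np.clip(w, 0, None)               # remove unphysical negative eigenvalues
    if w.sum() > 0:
        w = w / w.sum()                   # renormalize trace to 1
    return (v * w) @ v.conj().T

# Toy example: the chi matrix of a single-qubit identity channel,
# corrupted by symmetric noise so it may become unphysical.
rng = np.random.default_rng(0)
ideal = np.zeros((4, 4), dtype=complex)
ideal[0, 0] = 1.0
noise = 0.05 * rng.standard_normal((4, 4))
raw = ideal + (noise + noise.T) / 2

phys = project_to_physical(raw)
print("PSD:", np.linalg.eigvalsh(phys).min() >= -1e-12)
print("unit trace:", np.isclose(np.trace(phys).real, 1.0))
```

The projection guarantees a valid density-matrix-like object, which is exactly the property naive least-squares tomography can lose.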

Credit: 
Science China Press

Vitamin D deficiency during pregnancy connected to elevated risk of ADHD

- Alongside genotype, prenatal factors such as vitamin D deficiency during pregnancy can influence the development of ADHD, says MD Minna Sucksdorff from the University of Turku, Finland.

The study is the first population-level research to demonstrate an association between low maternal vitamin D levels in early to mid-pregnancy and an elevated risk of diagnosed attention-deficit/hyperactivity disorder (ADHD) in the offspring.

The study included 1,067 children born in Finland in 1998-1999 and diagnosed with ADHD, and the same number of matched controls. The data were collected before the current Finnish national recommendation for vitamin D intake during pregnancy, which is 10 micrograms per day throughout the year.

Vitamin D deficiency still a problem

The primary investigator, Professor Andre Sourander says that, despite the recommendations, vitamin D deficiency is still a global problem. In Finland, for example, mothers' vitamin D intake among several immigrant groups is not at a sufficient level.

- This research offers strong evidence that a low level of vitamin D during pregnancy is related to attention deficiency in offspring. As ADHD is one of the most common chronic diseases in children, the research results have a great significance for public health, says Professor Sourander.

The study is part of a larger research project that aims to discover the connections between the mother's health during pregnancy and ADHD in offspring. The goal is to produce information for developing preventative treatments and measures for identifying children with ADHD risk.

The study was conducted in collaboration between researchers from the University of Turku, Finland, and Columbia University, New York. It was funded by the National Institute of Mental Health (NIMH, USA) and the Academy of Finland, and is part of the INVEST flagship programme of the University of Turku.

In the study, the researchers used the exceptionally comprehensive Finnish Maternity Cohort (FMC) consisting of approximately 2 million serum specimens collected during the first and early second trimester of pregnancy.

Credit: 
University of Turku

Statins: Researchers uncover how cholesterol-lowering drugs cause muscle pain

image: Left: Microscope image of cultivated normal muscle cells: The structures stained in green show that muscle fibres have differentiated from precursor cells; the cell nuclei are shown in blue.
Centre: Cultivated muscle cells after treatment with statins: Far fewer muscle fibres have formed.
Right: The muscle cell culture has been treated with statins but the GILZ protein has been genetically deactivated: Muscle fibre formation is similar to the normal undamaged state shown in the image on the left. This indicates that the GILZ protein is responsible for the muscle cell damage observed when statins are taken.

Image: 
Microscope imaging: Jenny Vanessa Valbuena Perez

Patients who take statins to lower their blood cholesterol levels often complain about muscle problems, typically muscle pain. Why this occurs is still largely unresolved. In a recent study, the pharmaceutical scientists Professor Alexandra K. Kiemer and Jessica Hoppstädter from Saarland University have identified a potential causal relationship. According to their results, statins enhance the production of a protein called 'GILZ' that impairs muscle cell function.

The study has been published in The FASEB Journal under the title 'The Glucocorticoid-Induced Leucine Zipper Mediates Statin-Induced Muscle Damage', DOI: 10.1096/fj.201902557RRR

Cholesterol-lowering drugs, which are commonly referred to as statins, are some of the most frequently prescribed drugs around the world. Generally speaking, statins are well tolerated by patients. However, it is not uncommon for patients on statins to complain of muscle pain or muscle weakness. 'According to figures from observational studies, muscle problems have been found to occur in 5% to 29% of cases. Older patients and female patients appear to be at greater risk of developing these symptoms, but so too are patients that are very physically active,' explains Alexandra K. Kiemer, Professor of Pharmaceutical Biology at Saarland University. In 2018, more than 6 million patients in Germany were treated with statins. This would suggest that muscle problems may be affecting several hundreds of thousands of patients, potentially as many as 1.8 million, in Germany alone. The precise nature of the bodily processes that induce symptoms of muscle impairment has not yet been fully characterized.
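The patient estimate above follows from simple arithmetic rather than from the study itself; applying the observational incidence range to the German prescription figures gives:

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
patients = 6_000_000      # statin patients in Germany in 2018
low, high = 0.05, 0.29    # incidence of muscle problems in observational studies

# prints: 300,000 to 1,740,000 patients potentially affected
print(f"{patients * low:,.0f} to {patients * high:,.0f} patients potentially affected")
```

The upper bound rounds to roughly 1.8 million, matching the figure in the text.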

Alexandra K. Kiemer and her research group may now have identified the actual cause of the muscle pain affecting patients receiving statins. They believe that a protein known as GILZ is responsible. 'The acronym GILZ stands for glucocorticoid-induced leucine zipper,' explains Professor Kiemer. Over the years, her research group has conducted numerous experimental studies into this particular protein. 'The main function of GILZ is actually to suppress inflammatory processes in the body. Statins prevent cardiovascular disease not only by lowering blood cholesterol levels, but also by reducing vascular inflammation. That's why we thought there might be a connection between statins and GILZ. Our data indicate that the presence of GILZ in the body can have both positive and negative effects,' says Kiemer. Building on this initial conjecture that there might be a link between the protein GILZ and statins and their side effects, the pharmaceutical researchers began analysing numerous datasets drawn from research databases available around the world.

They assessed the data in terms of whether statins influenced the production of GILZ in the body. After confirming their original suspicions, the researchers were able to corroborate their hypothesis by carrying out a series of experiments on living cells. 'Statins cause an increase in the cellular production of the GILZ protein. This, however, leads to impaired muscle function, because increased GILZ production results in an increased rate of muscle cell death. In addition, the formation of muscle fibres is inhibited,' says Alexandra K. Kiemer. The research team then tried switching off the GILZ protein in living cells and observing what effect the statins then had. 'When we look at what happens when statins are administered to muscle cells or entire muscle fibres in which GILZ has been genetically deactivated, the damage that was previously observed is now almost completely absent,' says Kiemer.

There are also indications that people who engage in a significant amount of physical activity are more likely to suffer from muscle problems when prescribed statins. Furthermore, statins appear to impair the success of physical training programmes. The pharmaceutical researchers led by Alexandra K. Kiemer are therefore planning a new study to be conducted in collaboration with the sports medicine physician Anne Hecksteden from the research group headed by Professor Tim Meyer at Saarland University. 'We have some evidence that there is a link between statins, physical activity and the GILZ protein, and our plan is to shed more light on how these factors interact with each other,' says Professor Kiemer.

Credit: 
Saarland University

New repair mechanism for DNA breaks

Chromosomal breaks are the most harmful type of damage a cell can suffer. If they are not repaired, they block the duplication and segregation of chromosomes, halt the growth cycle and cause cell death. These breaks appear frequently in tumour cells and arise spontaneously during the replication of the genetic material. To repair this damage, the cell transfers information from the intact daughter copy to the broken copy, a process known as sister chromatid recombination.

In a project recently published in the journal Nature Communications, researchers from the University of Seville and the Andalusian Centre of Molecular Biology and Regenerative Medicine (CABIMER) have identified new factors that are necessary for the repair of these breaks. In contrast with those already known, these factors only affect the repair between sister chromatids of breaks that arise during chromosome duplication. Specifically, they are proteins that modify histones, the basic proteins that form the structure of the chromosomes.

The research group has shown that the inability to repair breaks in cells lacking these proteins stems from deficient cohesin loading. Cohesins are the proteins that keep the sister chromatids paired until their segregation during cell division. When cohesion between the chromatids is reduced, repair becomes defective, leaving many breaks unrepaired and increasing chromosomal rearrangement.

The project, carried out in the model organism Saccharomyces, has identified new factors involved in the maintenance of genome integrity and a new mechanism for regulating cohesin loading on chromosomes, which could be of great value for deciphering the multiple mechanisms responsible for genome instability in tumour cells and in various neurodegenerative pathologies.

This study corresponds to the doctoral thesis of Pedro Ortega, supervised by Belén Gómez-González and Andrés Aguilera. It was funded by the Asociación Española Contra el Cáncer (AECC), the European Research Council and the Ministry of Economy and Competitiveness (Ministerio de Economía y Competitividad).

Credit: 
University of Seville

Epigenetics: Inheritance of epigenetic marks

A study undertaken by an international team led by Ludwig-Maximilians-Universitaet (LMU) in Munich molecular biologist Axel Imhof sheds new light on the mechanisms that control the establishment of epigenetic modifications on newly synthesized histones following cell division.

The classical genetic code is not the only code involved in the regulation of cell differentiation and behavior in multicellular organisms. The instructions encoded in the nucleotide sequence of the genomic DNA determine which sets of genes are expressed within a given cell type. Their selective expression thus defines the differences between a muscle cell and a nerve cell, for example. However, there is a second level of control that contributes to the regulation of patterns of gene expression. This is based on chemical modifications of DNA and of the histone proteins in which it is packed. This epigenetic code is now recognized as a vital part of the process responsible for the differentiation - and maintenance - of different cell types in higher organisms, although virtually all cells in an individual carry the same complement of genetic information. However, unlike the replication of the DNA sequence itself, the transmission of epigenetic information during cell division is not well understood. Now, a team led by Axel Imhof at LMU's Biomedical Center, in collaboration with research groups based at the Helmholtz Zentrum München and in Denmark, has used a combination of theoretical modeling and experimentation to elucidate the mechanisms that mediate the establishment of epigenetic marks following cell division. The findings, which appear in the journal Cell Reports, provide deeper insights into the inheritance of epigenetic histone modifications.

In higher organisms, most of the DNA in cells is found in a condensed form known as chromatin, in which the DNA is wrapped around particles made of proteins known as histones. In chromatin, the functional state of any given gene is largely dependent on exactly how it is packaged. More specifically, chemical modification of histones modulates the accessibility of the DNA in chromatin, and thus controls whether the proteins required for gene expression can actually bind to the DNA. In order to ensure the stable transmission to daughter cells of the gene expression patterns that define the identities of the different cell types, it is crucial that chromatin states are maintained during cell division. 

In the new study, Imhof and his colleagues focused on two specific modifications of histone H3 - methylation of the lysines at positions 27 and 36 (K27me and K36me). The attachment of a methyl group (CH3) to the histone alters its binding affinity for regulatory proteins and changes the degree of chromatin compaction. K27me is usually found on H3 in regions where genes are inactive, while K36me serves as a marker for active genes.

The crucial question addressed in the study was: What happens to these modifications during the course of cell division? Cell division is preceded by DNA replication, which doubles the amount of DNA that has to be packed - and thus requires the synthesis of new histones. However, freshly synthesized histones carry no epigenetic modifications. How then do cells ensure that the new histones acquire the correct pattern of modifications within the newly formed chromatin?

The problem is a tricky one, and the experimental approach adopted to solve it was technically challenging. The team first labelled newly synthesized histones with (non-radioactive) heavy isotopes. The new (heavy) histones could therefore be distinguished from the old (light) histones by means of high-resolution mass spectrometry. They then followed the fate of these two 'generations' of histones in the daughter cells after cell division.

The patterns of modification that they observed were extremely complex. In order to make sense of them, they devised two models for the inheritance of epigenetic histone modifications and used a computer-based procedure to compare the theoretical modification patterns with the dynamic changes detected in their labeling experiments. In theory, each of the lysines at positions 27 and 36 in histone H3 can be modified with one, two or three methyl groups. This meant that 16 possible isoforms had to be taken into consideration. "Based on our modeling studies, we were able to demonstrate that the methylation patterns of the two functionally antagonistic residues K27me and K36me in cells reciprocally influence each other," says Axel Imhof. "The patterns that we actually observed can best be accounted for by the assumption that certain regions of the genome - which we refer to as domains - exhibit defined patterns of methylation." A further surprising finding was that, in rapidly dividing embryonic stem cells, the levels of demethylation observed during cell division were insignificant. The team now plans to investigate in greater detail what precisely is happening in these cells.
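The count of 16 isoforms follows directly from the combinatorics: each of the two sites can carry zero, one, two or three methyl groups, so there are four states per site, combined independently. A short enumeration makes this explicit (the label format below is my own, for illustration):

```python
from itertools import product

# Each of K27 and K36 on histone H3 can carry 0-3 methyl groups:
# four methylation states per site, combined independently.
states = ["me0", "me1", "me2", "me3"]
isoforms = [f"K27{a} K36{b}" for a, b in product(states, states)]

print(len(isoforms))   # 4 * 4 = 16 combined methylation isoforms
```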

In the longer term, the researchers hope that their work will allow them to swiftly identify pathological alterations in the epigenetic states of cells. It is known that tumor cells often contain mutant forms of the enzymes responsible for the de novo modifications that occur during cell division, and this seems to be associated with the increased proliferation rates seen in such cells. "Consequently, a lot of work is currently going into the development of 'epidrugs' that could modulate the activity of these enzymes," says Imhof.

Credit: 
Ludwig-Maximilians-Universität München

Study finds innate protein that restricts HIV replication by targeting lipid rafts

WASHINGTON (Feb. 10, 2020) - The human protein apolipoprotein A-I binding protein (AIBP) inhibits HIV replication by targeting lipid rafts and reducing virus-cell fusion, according to a new study published in the premier American Society for Microbiology journal mBio by researchers from the George Washington University. These results provide the first evidence suggesting that AIBP is an innate immunity factor that restricts HIV replication by modifying lipid rafts on cells targeted by HIV.

AIBP is involved in the regulation of lipid rafts and cholesterol efflux. It has been suggested to function as a protective factor under several conditions associated with an increased abundance of lipid rafts, including atherosclerosis and acute lung injury.

"Previous studies have suggested a protective and possibly therapeutic role of AIBP in human diseases associated with inflammation and impairment of cholesterol metabolism, particularly atherosclerosis," said Michael Bukrinsky, MD, PhD, professor of microbiology, immunology, and tropical medicine at the GW School of Medicine and Health Sciences and senior author on the study. "What we found in our study is that AIBP also exerts anti-HIV activity."

Host cell lipid rafts -- subdomains of the plasma membrane that contain high concentrations of cholesterol and glycosphingolipids -- are critically important for the biology of HIV and are involved in HIV-1 assembly and budding and the infection of target cells. Given the dependence HIV has on lipid rafts, and AIBP's ability to reduce them, the researchers hypothesized that AIBP could inhibit HIV replication.

The results of the study show that exogenously added AIBP reduced the abundance of lipid rafts and inhibited HIV replication in vitro and in vivo, while knockdown of AIBP native to the cells increased HIV replication. With these findings, the authors suggest that new therapeutic approaches aimed at inhibition of HIV infection and HIV-associated comorbidities via stimulation of AIBP production can be envisioned.

"Through this study, we identified a novel innate immunity factor that inhibits HIV infection by targeting lipid rafts," Bukrinsky said. "Further studies could possibly show AIBP may also protect against infection by other viruses and microbes."

Credit: 
George Washington University

Inner 'clockwork' sets the time for cell division in bacteria

image: The signaling molecule c-di-GMP controls cell division in Caulobacter crescentus.

Image: 
University of Basel, Swiss Nanoscience Institute/Biozentrum

Researchers at the Biozentrum of the University of Basel have discovered a "clockwork" mechanism that controls cell division in bacteria. In two publications, in "Nature Communications" and "PNAS", they report how a small signaling molecule starts the "clock", which informs the cell about the right time to reproduce.

The ability of pathogens to multiply in the host is crucial for the spread of infections. The speed of bacterial division greatly depends on the environmental conditions. Under unfavorable conditions, such as nutrient deficiency, bacteria tend to pause after division and reproduce more slowly. But how do bacteria know when it is time to enter the next round of cell division?

A team at the Biozentrum of the University of Basel, led by Prof. Urs Jenal, has now identified a central switch for reproduction in the model bacterium Caulobacter crescentus: the signaling molecule c-di-GMP. In their current study, published in the journal Nature Communications, they report that this molecule initiates a "clock-like" mechanism, which determines whether individual bacteria reproduce.

A signaling molecule regulates "clockwork" in bacteria

How long a cell pauses after division and how it then decides to engage in the next round of division is still poorly understood. The signaling molecule c-di-GMP plays a key role in this process. "The rise in the c-di-GMP level sets the individual cogwheels of the cell's clock into action, one after the other," explains Jenal. "These cogwheels are enzymes called kinases. They prepare the transition of the cell from the resting to the division phase."

Enzymes respond to c-di-GMP levels

Under favorable living conditions, newborn bacteria begin to produce the signaling molecule - this starts the clock ticking. The initially low c-di-GMP level activates a first kinase. This activates the expression of over 100 genes, which drive the cell towards division and boost the production of c-di-GMP.

The resulting peak levels of c-di-GMP finally stimulate the last wheel of the machinery, also a kinase. "With this step, the cell ultimately decides to replicate its DNA and to trigger cell division," explains Jenal. "Simultaneously the over 100 genes are switched off again, as these are only important for the transition phase but obstruct later stages of proliferation."

Insights into c-di-GMP mediated enzyme activation

In a parallel study, recently published in PNAS, a team led by Prof. Tilman Schirmer, also at the Biozentrum of the University of Basel, describes at the atomic level how c-di-GMP activates the first cogwheel of the newly discovered clock.

The researchers have revealed that the mobile domains of the kinase are initially locked in a fixed position. The binding of c-di-GMP liberates the domains, thereby activating the kinase for gene expression. "In our study, we have discovered a new mode of c-di-GMP mediated activation," says Schirmer. "Once again, we are fascinated by the diverse 'strategies' this small molecule uses to regulate biochemical processes."

Universal principle in bacterial reproduction

The timing of the bacterial cell cycle by c-di-GMP seems to be a universal mechanism. The researchers assume that it enables bacteria to precisely coordinate growth and development. The elucidation of this novel mechanism also contributes to a better understanding of the growth of bacterial pathogens.

Credit: 
University of Basel

Supercharged light pulverises asteroids, study finds

Radiation from dying stars is luminous enough to easily spin up orbiting asteroids to break-up speed

New study led by University of Warwick astronomer computes the cascade of destruction down to boulder-size fragments

Break-ups form a vast asteroid debris field, much like the classic arcade game 'Asteroids', including fragments which revolve around each other as "double asteroids"

Scientists predict that the Solar System's asteroid belt will be pulverised by the Sun's light in 6 billion years

The majority of stars in the universe will become luminous enough to blast surrounding asteroids into successively smaller fragments using their light alone, according to a University of Warwick astronomer.

Electromagnetic radiation from stars at the end of their 'giant branch' phase - lasting just a few million years before they collapse into white dwarfs - would be strong enough to spin even distant asteroids at high speed until they tear themselves apart again and again. As a result, even our own asteroid belt will be easily pulverized by our Sun billions of years from now.

The new study from the University of Warwick's Department of Physics, published in Monthly Notices of the Royal Astronomical Society, analyses the number of successive break-up events and how quickly this cascade occurs.

The authors have concluded that all but the most distant or smallest asteroids in a system would be disintegrated in a relatively short one million years, leaving behind debris that scientists can find and analyse around dead white dwarf stars. Some of this debris may take the form of 'double asteroids', which revolve around each other while they orbit the star.

After main sequence stars like our Sun have burnt all their hydrogen fuel, they then become hundreds of times larger during a 'giant branch' phase and increase their luminosity ten-thousand-fold, giving out intense electromagnetic radiation. When that expansion stops, a star sheds its outer layers, leaving behind a dense core known as a white dwarf.

The radiation from the star will be absorbed by orbiting asteroids, redistributed internally and then emitted from a different location, creating an imbalance. This imbalance creates a torque effect that very gradually spins up the asteroid, eventually to break-up speed at one full rotation every 2 hours (the Earth takes almost 24 hours to complete a full rotation). This effect is known as the YORP effect, named after four scientists (Yarkovsky, O'Keefe, Radzievskii, Paddack) who contributed ideas to the concept.
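The roughly 2-hour figure is consistent with the classical break-up limit for a self-gravitating 'rubble pile': a body spinning faster than the critical rate sheds material from its equator. A quick estimate, where the density value is an assumed typical figure for illustration rather than a number from the study:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
rho = 2000.0     # assumed bulk density of a rubble-pile asteroid, kg/m^3

# Critical spin rate for a strengthless, self-gravitating sphere:
# omega_crit = sqrt(4*pi*G*rho/3); spinning any faster flings material off.
omega_crit = math.sqrt(4 * math.pi * G * rho / 3)
period_hours = 2 * math.pi / omega_crit / 3600

# ≈ 2.3 hours, on the scale of the ~2-hour break-up period quoted above
print(f"break-up rotation period ≈ {period_hours:.1f} hours")
```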

Eventually, this torque will pull the asteroid apart into smaller pieces. The process then repeats in several stages, much like how asteroids in the classic arcade game 'Asteroids' break down into smaller and smaller pieces after each destruction event. The scientists have calculated that in most cases there will be more than ten fission events - or break-ups - before the pieces become too small to be affected.

Lead author Dr Dimitri Veras, from the University of Warwick's Astronomy and Astrophysics Group, said: "When a typical star reaches the giant branch stage, its luminosity reaches a maximum of between 1,000 and 10,000 times the luminosity of our Sun. Then the star contracts down into an Earth-sized white dwarf very quickly, where its luminosity drops to levels below our Sun's. Hence, the YORP effect is very important during the giant branch phase, but almost non-existent after the star has become a white dwarf.

"For one solar-mass giant branch stars - like what our Sun will become - even exo-asteroid belt analogues will be effectively destroyed. The YORP effect in these systems is very violent and acts quickly, on the order of a million years. Not only will our own asteroid belt be destroyed, but it will be done quickly and violently. And due solely to the light from our Sun."

The remains of these asteroids will eventually form a debris disc around the white dwarf, and the disc will be drawn into the star, 'polluting it'. This pollution can be detected from Earth by astronomers and analysed to determine its composition.

Dr Veras adds: "These results help locate debris fields in giant branch and white dwarf planetary systems, which is crucial to determining how white dwarfs are polluted. We need to know where the debris is by the time the star becomes a white dwarf to understand how discs are formed. So the YORP effect provides important context for determining where that debris would originate."

When our Sun runs out of fuel in about 6 billion years, it too will shed its outer layers and collapse into a white dwarf. As its luminosity grows during the giant branch phase, it will bombard our asteroid belt with increasingly intense radiation, subjecting the asteroids to the YORP effect and breaking them into smaller and smaller pieces, just like in a game of 'Asteroids'.

Most asteroids are what are known as 'rubble piles' - a collection of rocks loosely held together - which means they have little internal strength. However, smaller asteroids have greater internal strength, and while this effect will break down larger objects quite quickly, the debris will plateau at objects around 1-100 metres in diameter. Once the 'giant branch' phase starts, the process will continue unabated until reaching this plateau.

The effect lessens with increasing distance from the star and with increasing internal strength of the asteroid. The YORP effect can break up asteroids at hundreds of AU (Astronomical Units), much further away than where Neptune or Pluto resides.

However, the YORP effect will only influence asteroids. Objects larger than Pluto will likely escape this fate due to their size and internal strength - unless they are broken up by another process, such as a collision with another planet.

Credit: 
University of Warwick

Financial pressure makes CFOs less likely to blow the whistle

A recent study finds that corporate financial managers do a great job of detecting signs of potential fraud, but are less likely to voice these concerns externally when their company is under pressure to meet a financial target.

"One of the take-away messages here is that auditors, investors, regulators and other stakeholders should be prepared to identify red flags on their own, rather than expecting management to raise the issue," says Joe Brazel, corresponding author of a paper on the work and Jenkins Distinguished Professor of Accounting at North Carolina State University. "That could be challenging, since research suggests many of these stakeholders aren't as skilled as financial managers at detecting fraud red flags."

For this study, researchers recruited 204 financial managers - such as chief financial officers (CFOs) and controllers - who worked for private or publicly held companies based in Italy. Study participants were given a suite of financial and nonfinancial information, similar to the materials that CFOs are asked to review at the end of a fiscal year, and asked to respond to a series of questions as if they were acting in the role of CFO.

The study participants were split into four groups. One group was told that the company was under significant pressure to meet a financial target and was also given data that included inconsistencies that could be viewed as red flags, or indicators of potential fraud. One group was under pressure but received no red flags. One group received the red flags but was not under pressure to meet the target. And one group had no red flags and no pressure to meet a target.

The researchers found that the financial managers were adept at identifying the red flags, and that the presence of red flags made it more likely that participants would report internally to their chief executive officer (CEO) about any potential departures from accepted accounting practices. Participants who discovered red flags and were not under financial pressure were also more likely to take their concerns to external parties, like their auditor, if the company didn't address the potential fraud.

However, if under pressure, financial managers became significantly less willing to approach external parties.

"In other words, in really important scenarios - when the pressure is on - executives don't blow the whistle," Brazel says. "They shut down."

The researchers also found that two other variables played a significant role. Executives who had been with their company for a longer time were more likely to keep quiet about their concerns. And CFOs who came from accounting backgrounds were much more likely to go public with their concerns than CFOs from a finance or banking background.

"Broadly speaking, when a company was under pressure to hit a financial target, managers felt that the short-term harm of blowing the whistle on red flags was too high to risk - even though it could lead to professional ruin if any fraud ever came to light," Brazel says. "That's likely because, in the scenarios we presented, there was the possibility that reporting red flags to external parties could result in a failure to meet the financial target - and that could lead to the company's bankruptcy.

"In short, while financial managers are very good at identifying red flags, and can be relied on to report internally, they're reluctant to report potential fraud publicly when the pressure is on.

"It's also worth noting that this study was done with participants in Italy, but it was inspired by issues that are of global concern," Brazel says. "And the findings are consistent with what we would expect to see in other large markets - including the United States."

Credit: 
North Carolina State University

Model shows how to make on-farm sustainable energy projects profitable

Researchers have developed a model that could boost investment in farm-based sustainable energy projects by allowing investors to more accurately predict whether a project will turn a profit. The model improves on earlier efforts by using advanced computational techniques to address uncertainty.

"Converting animal waste into electricity can be profitable for farmers while also producing environmental benefits, such as reducing greenhouse gas emissions," says Mahmoud Sharara, lead author of a paper on the work and an assistant professor of biological and agricultural engineering at North Carolina State University. "However, farmers cannot always finance these projects, and projects aren't always a profitable enterprise for a single farm.

"One way to address this is to develop cooperative anaerobic digestion systems that make use of waste from multiple farms," Sharara says. "Two of the big questions surrounding this sort of project are: Where do you build the cooperative system? And how can you tell whether it will be profitable?"

To that end, the researchers developed a computational model that tells users how to maximize the economic return on anaerobic digestion systems. Specifically, it tells users where a system should be located, what its capacity should be and how large a geographic area it should serve.

The model accounts for a variety of known factors, such as which species a farm is raising, the size of each farm and where each farm is located. But what sets the model apart is the way it accounts for uncertainty.

For this work, the researchers identified 13 key sources of uncertainty that can affect the profitability of an anaerobic digester system.

For example, one way these systems make money is by converting animal waste into biogas, using that gas to produce electricity, and then selling the electricity. Therefore, one key variable in predicting the profitability of a system is the future sale price of electricity. And while the future price of electricity is uncertain, you can draw on historical data or market forecasts to estimate a price range.

The same is true for other sources of uncertainty. For instance, the efficiency of an anaerobic digester is uncertain, but you can predict that the digester's performance will fall within a certain range.

This is where the model comes in.

The researchers designed the model to run repeated simulations that account for variation in each area of uncertainty. For example, what does profitability look like when electricity prices and digester efficiency are both high? What if they're both low? And so on. By running all of these simulations for different site locations, capacities and service areas, the model can tell users which combination of factors would generate the most profit.
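The repeated-simulation approach described above is essentially Monte Carlo analysis: draw each uncertain input from an assumed range, simulate many times, and compare the resulting profit distributions across candidate configurations. The sketch below illustrates the idea for a single candidate site; all prices, yields and costs are illustrative assumptions, not figures from the study.

```python
import random

def simulate_profit(rng):
    """One Monte Carlo draw of annual profit for a hypothetical digester site."""
    elec_price = rng.uniform(0.06, 0.12)         # $/kWh sale price, assumed range
    digester_eff = rng.uniform(0.55, 0.75)       # fraction of biogas potential realized
    manure_supply = rng.uniform(40_000, 60_000)  # tonnes/year from member farms
    kwh_per_tonne = 55 * digester_eff            # electricity yield per tonne (assumed)
    revenue = manure_supply * kwh_per_tonne * elec_price
    costs = 60_000 + 1.5 * manure_supply         # fixed + per-tonne operating cost (assumed)
    return revenue - costs

def expected_profit(n_runs=10_000, seed=42):
    """Average profit over many draws; individual draws can still be losses."""
    rng = random.Random(seed)
    return sum(simulate_profit(rng) for _ in range(n_runs)) / n_runs

print(f"mean annual profit: ${expected_profit():,.0f}")
```

Running the same loop for each combination of location, capacity and service area, then ranking the profit distributions, gives the kind of recommendation the model produces.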

The research team demonstrated the model with case studies of anaerobic digester systems for dairy farms in two regions of Wisconsin.

"The case studies were a good sanity check for us, and highlighted the viability and utility of the model," Sharara says.

"Ultimately, we think this will spur investment in these projects, which will be good for both farmers and the environment."

The model is available now, and the researchers are interested in packaging it in a format that would be easier to use and distribute.

"We're also hoping to work more closely with anaerobic digester system developers to fine-tune our assessment of the costs related to these systems," Sharara says. "And, ultimately, we'd like to expand our work to account for efforts to use the solids left behind after anaerobic digestion - such as projects that convert these solids into marketable fertilizer."

Credit: 
North Carolina State University

Casting light on the brain's inner workings

image: Recording the activity of individual neurons is made possible by this tiny, wireless, battery-free device.

Image: 
University of Arizona Gutruf Lab

The mammalian brain is the most complex organ in the body, capable of processing thousands of stimuli simultaneously to analyze patterns, predict changes and generate highly measured action. How the brain does all this - within fractions of a second - is still largely unknown.

Implants that can probe the brain at the individual neuron level are not widely available to researchers. Studying neuron activity while the body is in motion in an everyday setting is even more difficult, because monitoring devices typically involve wires connecting a study participant to a control station.

Researchers at the University of Arizona, George Washington University and Northwestern University have created an ultra-small, wireless, battery-free device that uses light to record individual neurons so neuroscientists can see how the brain is working. The technology is detailed in a study in the Proceedings of the National Academy of Sciences.

"As biomedical engineers, we are working with collaborators in neuroscience to improve tools to better understand the brain, specifically how these individual neurons - the building blocks of the brain - interact with each other while we move through the world around us," said lead study author Alex Burton, a University of Arizona biomedical engineering doctoral student and member of the Gutruf Lab.

The process first involves tinting select neurons with a dye that changes in brightness depending on activity. Then, the device shines a light on the dye, making the neurons' biochemical processes visible. The device captures the changes using a probe only slightly wider than a human hair, then processes a direct readout of the neuron's activity and transmits the information wirelessly to researchers.

"The device is smaller than a single M&M and only one-twentieth of the weight," Burton said.

The device can be tiny, and even flexible like a sheet of paper, because it does not need a battery. It harvests energy from external oscillating magnetic fields gathered by a miniature antenna on the device. This allows researchers to study brain activity without the use of restrictive equipment and gives neuroscientists a platform to gain insight into the underpinning mechanisms of the brain.

"When creating the device, we used materials and methods that are readily available and cheap enough to enable large-scale adoption of the tool by the scientific community," said study senior author Philipp Gutruf, who leads the Gutruf Lab and is an assistant professor of biomedical engineering and member of the university's BIO5 Institute. "We hope that the technology can make a difference in fighting neurodegenerative diseases such as Alzheimer's and Parkinson's, and cast light on the biological mechanisms underlying pain, addiction and depression."

Credit: 
University of Arizona College of Engineering

'Rule breaking' plants may be climate change survivors

image: A plantain (Plantago lanceolata) growing in its native habitat, Ireland.

Image: 
Dr Annabel Smith

Plants that break some of the 'rules' of ecology by adapting in unconventional ways may have a higher chance of surviving climate change, according to researchers from the University of Queensland and Trinity College Dublin.

Dr Annabel Smith, from UQ's School of Agriculture and Food Sciences, and Professor Yvonne Buckley, from UQ's School of Biological Sciences and Trinity College Dublin, Ireland, studied the humble plantain (Plantago lanceolata) to see how it became one of the world's most successfully distributed plant species.

"The plantain, a small plant native to Europe, has spread wildly across the globe - we needed to know why it's been so incredibly successful, even in hot, dry climates," Dr Smith said.

The global team of 48 ecologists set up 53 monitoring sites in 21 countries, tagged thousands of individual plants, tracked plant deaths and new seedlings, counted flowers and seeds and looked at DNA to see how many individual plants have historically been introduced outside Europe.

What they discovered went against existing tenets of ecological science.

"We were a bit shocked to find that some of the 'rules of ecology' simply didn't apply to this species," Dr Smith said.

"Ecologists use different theories to understand how nature works - developed and tested over decades with field research - these are the so-called 'rules'.

"One of these theories describes how genetic diversity - variation in the genes embedded in DNA - is produced by changes in population size.

"Small populations tend to have little genetic diversity, while large populations with many offspring, such as those with lots of seeds, have more genetic diversity.

"Genetic diversity sounds boring, but actually it's the raw material on which evolution acts; more genetic diversity means plants are better able to adapt to environmental changes, like climate change.

"We discovered that, in their native range, the environment determined their levels of genetic diversity.

"But, in new environments, these rule breakers were adapting better than most other plants."

The team found the plantain's success was due to multiple introductions around the world.

Professor Buckley, who coordinates the global project from Trinity College Dublin, Ireland, said the DNA analysis revealed that ongoing introductions into Australia, New Zealand, North America, Japan and South Africa quickly boosted genetic diversity.

"It gave these 'expats' a higher capacity for adaptation," Professor Buckley said.

"In Europe, plantains played by the rules, but outside Europe they broke them: no matter what kind of environment they were living in, the plantains almost always had high genetic diversity and high adaptability."

Dr Smith said the finding was both fascinating and critical, for two reasons.

"It's important we now know that multiple introductions will mix genetic stock and make invasive plants more successful quite quickly - an important finding given invasive species cause extinction and cost governments billions of dollars," she said.

"And secondly, research on invasive plants gives us clues about how our native plants might adapt to climate change."

Credit: 
University of Queensland

Place-based tax incentives stimulate employment in remote regions

A place-based payroll tax incentive can be effective in stimulating employment in remote and underdeveloped regions, helping to address regional inequalities, according to a new UCL and University of Oslo study.

The study, published in the Journal of Public Economics, examined the effect of a tax reform in Norway that harmonised payroll tax rates across regions.

Prior to this, to promote economic activity in less developed and remote areas, the government of Norway applied geographically differentiated payroll tax rates (ranging from 0% in the northernmost regions to 14.1% in the central areas) to stimulate employment and business activity, and avoid depopulation of sparsely populated areas. The geographically differentiated tax system was abolished in 2004 in compliance with EU trade regulation.

The researchers found that after the place-based tax scheme was abolished, regions more heavily exposed to the reform-induced tax hike experienced a substantial decline in employment and a modest decrease in worker wages.

First author on the study, Dr Hyejin Ku (UCL Economics) said: "Our findings suggest that in countries or states where wages cannot adjust so easily, due for instance to centralised wage bargaining, place-based payroll tax incentives can indeed be an effective tool in stimulating local employment in underdeveloped regions."

"Ultimately, the effectiveness of place-based payroll tax incentives in stimulating local employment depends on how flexibly wages can adjust to a given tax change. In settings where rising labour costs for firms are easily shifted on to worker wages, we would expect no changes in employment levels in response to payroll tax hikes. However, in Norway, where trade unions have strong influence over wage bargaining, we see that it is employment levels that are most affected."

Payroll taxes - taxes imposed on employers or employees, usually calculated as a percentage of the salaries that employers pay their staff - are a major part of labour cost for businesses. They are the backbone of financing the social insurance system, and payroll taxes levied on firms constitute about 15% of the total tax revenue in OECD countries.

While place-based payroll taxes have not received a great deal of attention, they are popular in Finland, Norway and Sweden, countries that have noticeably lower levels of income inequality.

The researchers compared changes in employment and wages before (2000-2003) and after (2004-2006) the abolition of geographically differentiated payroll taxes between commuting zones (or local labour markets).

The researchers found that a one percentage point increase in the payroll tax rate leads to a 0.32% decline in wages in the local labour market.

The researchers also found a significant decrease in local employment in response to the payroll tax hike: a one percentage point increase in the payroll tax rate reduced employment in the local labour market by 1.37%. This is a strong response, especially considering that only large firms - which employ about 70% of workers in the local labour market - are subject to the payroll tax increase (as the government provided a subsidy for small firms).
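As a back-of-envelope illustration, the two point estimates (a 0.32% wage decline and a 1.37% employment decline per percentage point of payroll tax) can be applied linearly to a baseline labour market. The commuting-zone size, baseline wage and size of the hike below are illustrative assumptions, not values from the paper.

```python
# Point estimates reported in the study, expressed per percentage point of tax.
WAGE_EFFECT_PER_PP = -0.0032        # -0.32% wage change per pp
EMPLOYMENT_EFFECT_PER_PP = -0.0137  # -1.37% employment change per pp

def predicted_changes(baseline_employment, baseline_wage, tax_hike_pp):
    """Apply the linear point estimates to a hypothetical local labour market."""
    new_employment = baseline_employment * (1 + EMPLOYMENT_EFFECT_PER_PP * tax_hike_pp)
    new_wage = baseline_wage * (1 + WAGE_EFFECT_PER_PP * tax_hike_pp)
    return new_employment, new_wage

# Hypothetical example: a zone of 20,000 workers facing a 7-point hike
# (roughly the jump from a low regional rate toward the 14.1% central rate).
emp, wage = predicted_changes(20_000, 450_000, 7)
print(f"employment: {emp:,.0f}  average wage: {wage:,.0f}")
```

The asymmetry between the two numbers is the paper's central point: where wage bargaining is rigid, the tax change shows up mostly in jobs rather than in pay.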

The employment decline was primarily driven by workers transitioning from employment to unemployment or non-employment, rather than by workers moving to a different local labour market.

According to the latest Labour Force Survey (Sep-Nov 2019), there are substantial regional disparities in the labour market status of individuals in the UK. For instance, the employment rate among males aged 16-64 varies from 82% in the South East to 74% in the North East. The unemployment rate among economically active males aged 16-64 was 3.65% in the South East and nearly twice as high in the North East (6.78%).

Professor Uta Schoenberg (UCL Economics) said: "Most countries have large and persistent geographical differences in employment and income, and a growing number of place-based policies attempt to reduce these differences through targeting underdeveloped or economically stressed regions. In the UK, for example, the Conservative Government has said it wants to reduce regional divisions, so this could be among the types of policies they consider for a post-Brexit Britain."

Credit: 
University College London

Middle-aged adults worried about health insurance costs now, uncertain for future

image: Key findings of the new study, based on a poll of people in their 50s and early 60s.

Image: 
University of Michigan

Health insurance costs weigh heavily on the minds of many middle-aged adults, and many are worried for what they'll face in retirement or if federal health policies change, according to a new study just published in JAMA Network Open.

More than a quarter of people in their 50s and early 60s lack confidence that they'll be able to afford health insurance in the next year, and the number goes up to nearly half when they look ahead to retirement. Two-thirds are concerned about how potential changes in health insurance policies at the national level could affect them.

Nearly one in five survey respondents who are working say they've kept a job in the past year in order to keep their employer-sponsored health insurance. And 15% of those who are working say they've delayed retirement, or thought about doing so, because of their insurance.

The study uses data from the National Poll on Healthy Aging, conducted in late 2018, during the open enrollment period for many employers' insurance plans, and near the start of open enrollment for Medicare and plans available to individuals on federal and state marketplace sites.

"Seeking regular medical care is critically important for adults in their 50s and 60s, to prevent and treat health conditions," says lead author Renuka Tipirneni, M.D., M.Sc. "We found that many adults in this age group are unfortunately worried about affording health insurance and avoiding care because of costs." Tipirneni is an assistant professor of internal medicine at the University of Michigan and a member of the U-M Institute for Healthcare Policy and Innovation, which runs the poll. She sees patients in the General Medicine clinics at Michigan Medicine, U-M's academic medical center.

The poll was conducted at a time when the Affordable Care Act had survived challenges in Congress but was facing possible changes or invalidation in a federal court case. That case is now pending before the Supreme Court.

"It is clear from our poll that health care remains a top issue for middle-aged adults and that many of them find the recent uncertainty surrounding federal healthcare policies troubling," says senior author Aaron Scherer, Ph.D., an associate in internal medicine at the University of Iowa and former postdoctoral fellow at U-M. "Policymakers should work to ensure the stability and affordability of health insurance for vulnerable adults on the verge of retirement."

The worries about cost already affect how people in this pre-Medicare age group use health care, the study finds. More than 18 percent had avoided seeking care, or had not filled a prescription, because of cost in the past year.

Those who were in fair or poor health were four times more likely to have avoided care. Those with an insurance plan purchased on the individual market, such as through the federal Marketplace, were three times more likely to have avoided seeking care or filling a prescription.

The poll of 1,024 adults in their pre-Medicare years was conducted with support from AARP and Michigan Medicine, U-M's academic medical center.

The poll focuses on those approaching the "magic" age of 65, when most Americans qualify for Medicare health insurance. The researchers say their findings hold implications for policy proposals that would offer Medicare availability at younger ages, or offer a publicly-funded plan on the federal Marketplace.

Credit: 
Michigan Medicine - University of Michigan

Understanding gut microbiota, one cell at a time

image: Homology modeling of proteins encoded by each gene included in the specific gene cluster from the draft genomes of the newly found Bacteroides species for breaking down inulin. *This is a modified version of an image found in the supplementary information section of the article published in Microbiome.

Image: 
Waseda University

A population of microorganisms living in our intestine, known as the gut microbiota, plays a crucial role in controlling our metabolism and reducing the risk of conditions such as obesity and diabetes.

Studies have shown that one way to promote the growth of such beneficial microorganisms and modulate their composition for a healthy balance is to add certain forms of fiber, such as inulin, to our diet. However, out of the tens of trillions of microorganisms in the gut microbiota, it has been difficult to determine which microorganisms respond to dietary fiber, and how. This is because current techniques rely on the availability of reference genomes in DNA sequence databases for precise taxonomic classification and accurate functional assignment of specific organisms, but in actuality, an estimated half of the human gut species lack a reference genome. In addition, existing techniques require hours or even days to complete the task.

To address this problem, Waseda University scientists devised a novel technique called the single-cell amplified genomes in gel beads sequencing (SAG-gel) platform, which can provide multiple draft genomes of the gut microbiota at once and identify bacteria that respond to dietary fiber at the species level without the need for existing reference genomes. What’s more, this technique takes only 10 minutes to obtain draft genomes from raw whole-genome sequencing data, since each dataset derives purely from an individual microbe. This dramatically speeds up the process.

“Our new, single-cell genome sequencing technique can obtain each bacterial genome separately and characterize uncultured bacteria with specific functions in the microbiota, and this can help us estimate metabolic lineages involved in the bacterial fermentation of fiber and metabolic outcomes in the intestine based on the fibers ingested,” says Masahito Hosokawa, an assistant professor at Waseda University’s Faculty of Science and Engineering and corresponding author of this study. “It introduces an enhanced and efficient functional analysis of uncultured bacteria in the intestine.”

The scientists fed mice an inulin-based diet for two weeks and used the technique to randomly capture individual bacterial cells from the mice’s fecal samples into tiny gel beads. The bacterial cells were then individually processed in the gel beads floating in a test tube, and more than 300 single-cell amplified genomes (SAGs) - genomes from single-cell organisms such as bacteria - were obtained by massively parallel sequencing. Because each SAG is composed of tens of thousands of reads on average, this enables extremely cost-efficient whole-genome sequencing of target cells. After quality control and classification of the SAGs, the scientists determined which bacteria were responsible for breaking down inulin and extracting energy from it.
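Single-cell amplified genomes are commonly screened by estimated completeness and contamination before taxonomic classification. The sketch below shows what such a quality-control filter might look like; the thresholds, field names and example values are illustrative assumptions, not the study's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class SAG:
    """A single-cell amplified genome with marker-gene-based quality estimates."""
    cell_id: str
    completeness: float   # % of expected single-copy marker genes recovered
    contamination: float  # % of marker genes duplicated (possible cross-cell signal)

def passes_qc(sag, min_completeness=50.0, max_contamination=10.0):
    """Keep medium-quality-or-better drafts (thresholds are assumptions)."""
    return sag.completeness >= min_completeness and sag.contamination <= max_contamination

# Hypothetical screening of three captured cells.
sags = [
    SAG("cell_001", completeness=92.4, contamination=1.3),
    SAG("cell_002", completeness=34.0, contamination=0.8),   # too incomplete
    SAG("cell_003", completeness=76.5, contamination=14.2),  # likely cross-contaminated
]
drafts = [s for s in sags if passes_qc(s)]
print([s.cell_id for s in drafts])  # only cell_001 survives screening
```

Only the drafts that survive this kind of filter would then be classified and searched for fiber-degrading gene clusters.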

“According to our results, the inulin-rich diet increased the activity of Bacteroides species inside the mouse intestine,” Hosokawa explains. “Also, from the draft genomes of newly found Bacteroides species, we discovered the specific gene cluster for breaking down inulin and the specific metabolic pathway for producing particular short-chain fatty acids, metabolites produced by the gut microbiota. Findings like these will help scientists in the future to predict the metabolic fermentation of dietary fibers based on the presence and ability of the specific responders.”

This technique could be applied to bacteria living anywhere, whether inside the human gut, in the ocean, or in soil. Though its accuracy still needs improvement, since some gene regions remain difficult to sequence, Hosokawa hopes the technique will be applied in medicine and industry to improve human and animal health.

Credit: 
Waseda University