
Place-based tax incentives stimulate employment in remote regions

A place-based payroll tax incentive can be effective in stimulating employment in remote and underdeveloped regions, helping to address regional inequalities, according to a new UCL and University of Oslo study.

The study, published in the Journal of Public Economics, examined the effect of a tax reform in Norway that harmonised payroll tax rates across regions.
Prior to this, the government of Norway applied geographically differentiated payroll tax rates (ranging from 0% in the northernmost regions to 14.1% in central areas) to stimulate employment and business activity in less developed and remote areas and to avoid depopulation of sparsely populated regions. The geographically differentiated tax system was abolished in 2004 to comply with EU trade regulations.

The researchers found that after the place-based tax scheme was abolished, regions more heavily exposed to the reform-induced tax hike experienced a substantial decline in employment and a modest decrease in worker wages.

First author on the study, Dr Hyejin Ku (UCL Economics) said: "Our findings suggest that in countries or states where wages cannot adjust so easily, due for instance to centralised wage bargaining, place-based payroll tax incentives can indeed be an effective tool in stimulating local employment in underdeveloped regions."

"Ultimately, the effectiveness of place-based payroll tax incentives in stimulating local employment depends on how flexibly wages can adjust to a given tax change. In settings where rising labour costs for firms are easily shifted on to worker wages, we would expect no changes in employment levels in response to payroll tax hikes. However, in Norway, where trade unions have strong influence over wage bargaining, we see that it is employment levels that are most affected."

Payroll taxes - taxes imposed on employers or employees, usually calculated as a percentage of the salaries that employers pay their staff - are a major part of labour cost for businesses. They are the backbone of financing the social insurance system, and payroll taxes levied on firms constitute about 15% of the total tax revenue in OECD countries.

While place-based payroll taxes have not received a great deal of attention, they are popular in Finland, Norway and Sweden, countries that have noticeably lower levels of income inequality.

The researchers compared changes in employment and wages before (2000-2003) and after (2004-2006) the abolition of geographically differentiated payroll taxes between commuting zones (or local labour markets).

The researchers found that a one percentage point increase in the payroll tax rate leads to a 0.32% decline in wages in the local labour market.

The researchers also found a significant decrease in local employment in response to the payroll tax hike: a one percentage point increase in the payroll tax rate reduced employment in the local labour market by 1.37%. This is a strong response, especially considering that only large firms--which employ about 70% of workers in the local labour market--were subject to the payroll tax increase (the government provided a subsidy for small firms).

The employment decline was primarily driven by workers transitioning from employment to unemployment or non-employment rather than by workers moving to a different local labour market.
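
To make the before/after comparison concrete, here is a minimal difference-in-differences sketch on synthetic commuting-zone data. It is not the authors' specification, code or data: the zone count, noise level and the assumed effect size are invented purely for illustration.

```python
# Minimal difference-in-differences sketch on synthetic commuting-zone data.
# All numbers below are invented for illustration; they are not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_zones = 80

rows = []
for z in range(n_zones):
    exposure = rng.uniform(0, 14.1)            # reform-induced payroll tax hike (pp)
    for year in range(2000, 2007):
        post = int(year >= 2004)               # differentiated scheme abolished in 2004
        # assumed "true" effect: log employment falls ~1.4% per pp after the reform
        log_emp = 10 + 0.1 * rng.standard_normal() - 0.014 * exposure * post
        rows.append((log_emp, exposure * post, post))

y = np.array([r[0] for r in rows])
X = sm.add_constant(np.array([[r[1], r[2]] for r in rows]))
print(sm.OLS(y, X).fit().params)  # coefficient on exposure*post recovers ~ -0.014
```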

According to the latest Labour Force Survey (Sep-Nov 2019), there are considerable regional disparities in the labour market status of individuals in the UK. For instance, the employment rate among males aged 16-64 varies from 82% in the South East to 74% in the North East. The unemployment rate among economically active males aged 16-64 was 3.65% in the South East and nearly twice as high in the North East (6.78%).

Professor Uta Schoenberg (UCL Economics) said: "Most countries have large and persistent geographical differences in employment and income, and a growing number of place-based policies attempt to reduce these differences through targeting underdeveloped or economically stressed regions. In the UK, for example, the Conservative Government has said it wants to reduce regional divisions, so this could be among the types of policies they consider for a post-Brexit Britain."

Credit: 
University College London

Middle-aged adults worried about health insurance costs now, uncertain for future

image: Key findings of the new study, based on a poll of people in their 50s and early 60s.

Image: 
University of Michigan

Health insurance costs weigh heavily on the minds of many middle-aged adults, and many are worried about what they'll face in retirement or if federal health policies change, according to a new study just published in JAMA Network Open.

More than a quarter of people in their 50s and early 60s lack confidence that they'll be able to afford health insurance in the next year, and the number goes up to nearly half when they look ahead to retirement. Two-thirds are concerned about how potential changes in health insurance policies at the national level could affect them.

Nearly one in five of survey respondents who are working say they've kept a job in the past year in order to keep their employer-sponsored health insurance. And 15% of those who are working say they've delayed retirement, or thought about it, because of their insurance.

The study uses data from the National Poll on Healthy Aging, conducted in late 2018, during the open enrollment period for many employers' insurance plans, and near the start of open enrollment for Medicare and plans available to individuals on federal and state marketplace sites.

"Seeking regular medical care is critically important for adults in their 50s and 60s, to prevent and treat health conditions," says lead author Renuka Tipirneni, M.D., M.Sc. "We found that many adults in this age group are unfortunately worried about affording health insurance and avoiding care because of costs." Tipirneni is an assistant professor of internal medicine at the University of Michigan and a member of the U-M Institute for Healthcare Policy and Innovation, which runs the poll. She sees patients in the General Medicine clinics at Michigan Medicine, U-M's academic medical center.

The poll was conducted at a time when the Affordable Care Act had survived challenges in Congress but was facing possible changes or invalidation in a federal court case. That case is now pending before the Supreme Court.

"It is clear from our poll that health care remains a top issue for middle-aged adults and that many of them find the recent uncertainty surrounding federal healthcare policies troubling," says senior author Aaron Scherer, Ph.D., an associate in internal medicine at the University of Iowa and former postdoctoral fellow at U-M. "Policymakers should work to ensure the stability and affordability of health insurance for vulnerable adults on the verge of retirement."

The worries about cost already affect how people in this pre-Medicare age group use health care, the study finds. More than 18 percent had avoided seeking care, or had not filled a prescription, because of cost in the past year.

Those who were in fair or poor health were four times more likely to have avoided care. Those with an insurance plan purchased on the individual market, such as through the federal Marketplace, were three times more likely to have avoided seeking care or filling a prescription.

The poll of 1,024 adults in their pre-Medicare years was conducted with support from AARP and Michigan Medicine, U-M's academic medical center.

The poll focuses on those approaching the "magic" age of 65, when most Americans qualify for Medicare health insurance. The researchers say their findings hold implications for policy proposals that would offer Medicare availability at younger ages, or offer a publicly-funded plan on the federal Marketplace.

Credit: 
Michigan Medicine - University of Michigan

Understanding gut microbiota, one cell at a time

image: Homology modeling of proteins encoded by each gene in the inulin-degrading gene cluster found in the draft genomes of the newly identified Bacteroides species. *This is a modified version of an image found in the supplementary information section of the article published in Microbiome.

Image: 
Waseda University

A population of microorganisms living in our intestine, known as the gut microbiota, plays a crucial role in controlling our metabolism and reducing the risk of conditions such as obesity and diabetes.

Studies have shown that one way to promote the growth of such beneficial microorganisms and modulate their composition towards a healthy balance is to add certain forms of fiber, such as inulin, to our diet. However, out of the tens of trillions of microorganisms in the gut microbiota, it has been difficult to determine which microorganisms respond to dietary fiber, and how. This is because current techniques rely on the availability of reference genomes in DNA sequence databases for precise taxonomic classification and accurate functional assignment of specific organisms, yet an estimated half of human gut species lack a reference genome. In addition, existing techniques require hours or even days to complete the task.

To address this problem, Waseda University scientists devised a novel technique called the single-cell amplified genomes in gel beads sequencing (SAG-gel) platform, which can provide multiple draft genomes of the gut microbiota at once and identify bacteria that respond to dietary fiber at the species level without the need for existing reference genomes. What’s more, the technique takes only 10 minutes to obtain draft genomes from raw whole-genome sequencing data, since each data set is derived purely from an individual microbe. This dramatically speeds up the process.

“Our new, single-cell genome sequencing technique can obtain each bacterial genome separately and characterize uncultured bacteria with specific functions in the microbiota, and this can help us estimate metabolic lineages involved in the bacterial fermentation of fiber and metabolic outcomes in the intestine based on the fibers ingested,” says Masahito Hosokawa, an assistant professor at Waseda University’s Faculty of Science and Engineering and corresponding author of this study. “It introduces an enhanced and efficient functional analysis of uncultured bacteria in the intestine.”

The scientists fed mice an inulin-based diet for two weeks and used the technique to randomly capture individual bacterial cells from the mice’s fecal samples into tiny gel beads. The bacterial cells were then individually processed in the gel beads floating in a test tube, and more than 300 single-cell amplified genomes (SAGs), or genomes from single-cell organisms such as bacteria, were obtained by massively parallel sequencing. Because each SAG is composed of tens of thousands of reads on average, this enables extremely cost-efficient whole-genome sequencing of target cells. After quality control and classification of the SAGs, the scientists determined which bacteria were responsible for breaking down inulin and extracting energy from it.

“According to our results, the inulin-rich diet increased activities by the Bacteroides species inside the mouse intestine,” Hosokawa explains. “Also, from the draft genomes of newly found Bacteroides species, we discovered the specific gene cluster for breaking down inulin and the specific metabolic pathway for production of specific short-chain fatty acids, metabolites produced by the gut microbiota. Findings like these will help scientists in the future to predict the metabolic fermentation of dietary fibers based on the presence and ability of the specific responders.”

This technique could be applied to bacteria living anywhere, whether it is inside the human gut, in the ocean, or even in soil. Though there is a need to improve its accuracy since reading the DNA sequence for some gene regions is deemed difficult, Hosokawa hopes this technique will be applied in medicine and industry and be exploited to improve human and animal health.

Credit: 
Waseda University

Fly model offers new approach to unraveling 'difficult' pathogen

image: A normal fruit fly gut revealing the regular structure of actin staining microvilli (green) and cell nuclei (blue). This organization in the fruit fly is very similar to that of the human intestine. A study published in iScience makes use of these close parallels in structure and function to identify new activities of a toxin produced by the problematic hospital pathogen C. difficile.

Image: 
Bier Lab, UC San Diego

The Clostridium difficile pathogen takes its name from the Latin for "difficult." A bacterium known to cause symptoms ranging from diarrhea to life-threatening colon damage, C. difficile is of growing epidemic concern for the elderly and for patients on antibiotics.

Outbreaks of C. difficile infection have progressively increased in Western countries, with 29,000 reported deaths per year in the United States alone.

Now, biologists at the University of California San Diego are drawing parallels from newly developed models of the common fruit fly to help lay the foundation for novel therapies to fight the pathogen's spread. Their report is published in the journal iScience.

"C. difficile infections pose a serious risk to hospitalized patients" said Ethan Bier, a distinguished professor in the Division of Biological Sciences and science director of the UC San Diego unit of the Tata Institute for Genetics and Society (TIGS). "This research opens a new avenue for understanding how this pathogen gains an advantage over other beneficial bacteria in the human microbiome through its production of toxic factors. Such knowledge could aid in devising strategies to contain this pathogen and reduce the great suffering it causes."

As with most bacterial pathogens, C. difficile secretes toxins that enter host cells, disrupt key signaling pathways and weaken the host's normal defense mechanisms. The most potent strains of C. difficile unleash a two-component toxin that triggers a string of complex cellular responses, culminating in the formation of long membrane protrusions that allow the bacteria to attach more effectively to host cells.

UC San Diego scientists in Bier's lab created strains of fruit flies that are capable of expressing the active component of this toxin, known as "CDTa." The strains allowed them to study the elaborate mechanisms underlying CDTa toxicity in a live model system focused on the gut, which is key since the digestive system of these small flies is surprisingly similar to that of humans.

"The fly gut provides a rapid and surprisingly accurate model for the human intestine, which is the site of infection by C. difficile," said Bier. "The vast array of sophisticated genetic tools in flies can identify new mechanisms for how toxic factors produced by bacteria disrupt cellular processes and molecular pathways. Such discoveries, once validated in a mammalian system or human cells, can lead to novel treatments for preventing or reducing the severity of C. difficile infections."

The fruit fly model gave the researchers a clear path to examine genetic interactions disrupted at the hands of CDTa. They ultimately found that the toxin induces a collapse of networks that are essential for nutrient absorption. As a result, the model flies' body weight, fecal output and overall lifespan were severely reduced, mimicking symptoms in human C. difficile-infected patients.

Credit: 
University of California - San Diego

Scientists identify new biochemical 'warning sign' of early-stage depression

image: Major depressive disorder affects over 300 million people worldwide, but so far there have been no established biomarkers that clinicians can rely on to detect early-stage depression symptoms. Now, in a new study published in Scientific Reports, scientists at Fujita Health University led by Professor Yasuko Yamamoto have shown that the levels of anthranilic acid in blood may provide a basis for identifying patients at risk of major depressive disorder.

Image: 
Fujita Health University

Chronic pain and inflammation are believed to be among the major factors in the onset of major depressive disorder. Therefore, to better understand what happens physiologically during depression, scientists have long studied several metabolic processes or "pathways" related to inflammation. One of these pathways, called the kynurenine pathway, is the principal pathway involved in metabolizing the amino acid tryptophan. Now, a new study by a team of scientists, led by Professor Kuniaki Saito and Associate Professor Yasuko Yamamoto of Japan's Fujita Health University, shows that elevated blood levels of anthranilic acid--an important metabolite (product/intermediate) of the kynurenine pathway--may serve as a marker for identifying individuals who are experiencing depression-like symptoms and are at risk of developing major depressive disorder. The study is published in Scientific Reports.

"Various lines of scientific evidence suggest that tryptophan metabolism is involved in the symptoms of major depressive disorder," notes Dr Yamamoto. For example, past studies have reported that patients with depression and other conditions involving depression-like symptoms show increased blood levels of various tryptophan metabolites produced by the kynurenine pathway. These findings led Dr Saito's team to speculate that metabolites of the kynurenine pathway may serve as "biomarkers" that could allow early detection of patients at risk of developing depression.

To test this idea, Dr Saito's team analyzed serum (the fractionated, clear part of blood) samples from 61 patients whose clinical test scores indicated a high risk of developing major depressive disorder. For comparison, they also analyzed serum samples from a "control" group of 51 healthy individuals. The scientists measured the serum levels of various kynurenine pathway metabolites with a technique called high-performance liquid chromatography, which allows precise measurement of concentrations. Compared to the healthy "controls," the patients at risk of depression had increased serum levels of anthranilic acid. Furthermore, the women at risk of depression had reduced serum levels of tryptophan. Given that the kynurenine pathway consumes tryptophan and produces anthranilic acid, these findings are consistent with previous reports of increased kynurenine pathway activity in patients at risk of developing major depressive disorder.

The scientists also wanted to find out whether tryptophan metabolite profiles can predict the progression of depression-related symptoms. For that, they did further analyses on samples and data from 33 patients at risk of depression whose scores on a clinical depression scale at different timepoints indicated progression from a healthy state to a depressed state. The analyses showed that increases in serum anthranilic acid levels over time correlated with worsening of the clinical test scores. Prof Saito states, "This finding confirms that there is indeed a strong, direct correlation between anthranilic acid levels in blood and the severity of depression on the clinical depression scale."

Because chronic pain can cause depression and related symptoms, the scientists also scrutinized tryptophan metabolite profiles in patients with chronic pain disorders affecting the mouth, jaw, and face. By testing serum samples from 48 patients with chronic pain disorders and 42 healthy individuals, the research team found that the patients with chronic pain had elevated serum levels of anthranilic acid and lower serum levels of tryptophan, just like those who were at risk of major depressive disorder.

So, what is the takeaway of this study? According to Prof Saito and team, these results show that clinicians can monitor serum levels of anthranilic acid to find out if patients are at risk of developing major depressive disorder. As Prof Saito notes, "monitoring the levels of tryptophan metabolites may be useful for the realization of pre-emptive medicine for depressive symptoms." Pre-emptive medicine in this case involves specific treatments that can prevent a patient from developing depression. Of course, more research is necessary to validate the clinical relevance of serum anthranilic acid levels and to understand exactly how tryptophan metabolism influences outward aspects such as mood. That said, this study has the potential to pinpoint the physiological processes that contribute to depression and thus improve the standard of care for preventing depression.

Credit: 
Fujita Health University

The power of going small: Copper oxide subnanoparticle catalysts prove most superior

image: This is a research concept of copper oxide subnanoparticles.

Image: 
Makoto Tanabe, Kimihisa Yamamoto

Scientists at Tokyo Institute of Technology have shown that copper oxide particles on the sub-nanoscale are more powerful catalysts than those on the nanoscale. These subnanoparticles can also catalyze the oxidation reactions of aromatic hydrocarbons far more effectively than catalysts currently used in industry. This study paves the way to better and more efficient utilization of aromatic hydrocarbons, which are important materials for both research and industry.

The selective oxidation of hydrocarbons is important in many chemical reactions and industrial processes, and as such, scientists have been on the lookout for more efficient ways to carry out this oxidation. Copper oxide (CunOx) nanoparticles have been found useful as catalysts for processing aromatic hydrocarbons, but the quest for even more effective compounds has continued.

In the recent past, scientists have applied noble metal-based catalysts comprising particles at the sub-nano level. At this level, particles measure less than a nanometer, and when placed on appropriate substrates, they can offer even higher surface areas than nanoparticle catalysts to promote reactivity (Fig. 1).

Following this trend, a team of scientists including Prof. Kimihisa Yamamoto and Dr. Makoto Tanabe from Tokyo Institute of Technology (Tokyo Tech) investigated chemical reactions catalyzed by CunOx subnanoparticles (SNPs) to evaluate their performance in the oxidation of aromatic hydrocarbons. CunOx SNPs of three specific sizes (with 12, 28, and 60 copper atoms) were produced within tree-like frameworks called dendrimers (Fig. 2). Supported on a zirconia substrate, they were applied to the aerobic oxidation of an organic compound with an aromatic benzene ring.

X-ray photoelectron spectroscopy (XPS) and infrared spectroscopy (IR) were used to analyze the synthesized SNPs' structures, and the results were supported by density functional theory (DFT) calculations.

The XPS analysis and DFT calculations revealed increasing ionicity of the copper-oxygen (Cu-O) bonds as SNP size decreased. This bond polarization was greater than that seen in bulk Cu-O bonds, and the greater polarization was the cause of the enhanced catalytic activity of the CunOx SNPs.

Tanabe and the team members observed that the CunOx SNPs sped up the oxidation of the CH3 groups attached to the aromatic ring, thereby leading to the formation of products. When the CunOx SNP catalyst was not used, no products were formed. The catalyst with the smallest CunOx SNPs, Cu12Ox, had the best catalytic performance and proved to be the longest lasting.

As Tanabe explains, "the enhancement of the ionicity of the Cu-O bonds with decrease in size of the CunOx SNPs enables their better catalytic activity for aromatic hydrocarbon oxidations."

Their research supports the contention that there is great potential for using copper oxide SNPs as catalysts in industrial applications. "The catalytic performance and mechanism of these size-controlled synthesized CunOx SNPs would be better than those of noble metal catalysts, which are most commonly used in industry at present," Yamamoto says, hinting at what CunOx SNPs can achieve in the future.

Credit: 
Tokyo Institute of Technology

Bovine embryo completely regenerates placenta-forming cells

image: An early bovine embryo regenerating its TE cells which will later form a large part of the placenta. (Left: intact, Middle: after removal of TE, Right: regenerated) (Kohri N. et al., Journal of Biological Chemistry. November 8, 2019)

Image: 
Kohri N. et al., Journal of Biological Chemistry. November 8, 2019

A calf was born from an embryo lacking cells which form a large part of the placenta, providing new insight into the regenerative capacity of mammalian embryos.

Mammalian development starts from a single cell -- a fertilized egg. The egg goes through multiple cell divisions to increase its cell numbers and then starts forming a sphere-like structure with a cavity inside, called the blastocyst. The blastocyst consists of two types of cells, the inner cell mass (ICM) and the trophectoderm (TE), which develop into an embryo proper and a large part of the placenta, respectively.

Scientists led by Manabu Kawahara at Hokkaido University have shown that bovine ICM cells can regenerate TE and are thus capable of forming both the embryo and the placenta. The study was published in the Journal of Biological Chemistry and became one of the top 50 most viewed papers on the Journal's website from November through December 2019.

To examine the ICM's capacity to regenerate TE, the researchers cultivated mouse and bovine blastocysts and removed the entire TE from both. They found that both blastocysts regained their sphere-like shapes within 24 hours. However, the regeneration rate to reform the blastocyst was remarkably higher in bovine cells (97%) than in mouse cells (57%). The more complete recovery of bovine blastocysts in cell numbers compared to mouse blastocysts suggests that the bovine cells have a higher regenerative capacity.

Further experiments revealed abnormal protein expression in the TE of regenerated mouse blastocysts, whereas regenerated bovine blastocysts showed normal gene expression overall.

To test their developmental ability, the researchers then transferred the regenerated blastocysts to recipient females. To their surprise, after the embryo transfer one of the four cows became pregnant and a female calf was born naturally with an apparently normal placenta. In contrast, none of the more than 100 mouse embryos transferred to recipients developed to term.

"We will continue to monitor the health of the calf born from the regenerated blastocyst," says Manabu Kawahara. "Our study suggests that we can remove and use a large part of TE for genetic testing to breed cattle with improved qualities. Also, further studies could reveal the mechanism of cell fate decision in mammals and its differences between species."

Credit: 
Hokkaido University

How long coronaviruses persist on surfaces and how to inactivate them

The novel coronavirus 2019-nCoV is making headlines worldwide. Since there is no specific therapy against it, the prevention of infection is of particular importance in order to stem the epidemic. Like all droplet infections, the virus can spread via hands and surfaces that are frequently touched. "In hospitals, these can be door handles, for example, but also call buttons, bedside tables, bed frames and other objects in the direct vicinity of patients, which are often made of metal or plastic," explains Professor Günter Kampf from the Institute of Hygiene and Environmental Medicine at the Greifswald University Hospital.

Together with Professor Eike Steinmann, head of the Department for Molecular and Medical Virology at Ruhr-Universität Bochum (RUB), he has compiled comprehensive findings from 22 studies on coronaviruses and their inactivation for a future textbook. "Under the circumstances, the best approach was to publish these verified scientific facts in advance, in order to make all information available at a glance," says Eike Steinmann.

Infectious on surfaces for up to nine days

The evaluated studies, which focus on the SARS and MERS coronaviruses, showed, for example, that the viruses can persist on surfaces and remain infectious at room temperature for up to nine days. On average, they survive between four and five days. "Low temperature and high air humidity further increase their lifespan," points out Kampf.

Tests with various disinfection solutions showed that agents based on ethanol, hydrogen peroxide or sodium hypochlorite are effective against coronaviruses. If these agents are applied in appropriate concentrations, they reduce the number of infectious coronaviruses by four so-called log steps within one minute: this means, for example, from one million to only 100 pathogenic particles. If preparations based on other active ingredients are used, the product should be proven to be at least effective against enveloped viruses ("limited virucidal activity"). "As a rule, this is sufficient to significantly reduce the risk of infection," explains Günter Kampf.
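
For the arithmetic behind the "four log steps", a minimal sketch in Python (the particle counts are the example figures from the text; nothing else is implied about specific products):

```python
# Each "log step" is a tenfold reduction, so four log steps = a factor of 10**4.
initial_particles = 1_000_000
log_reduction = 4
remaining = initial_particles / 10**log_reduction
print(remaining)  # 100.0, matching the example in the text
```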

Findings should be transferable to 2019-nCoV

The experts assume that the results from the analyses of other coronaviruses are transferable to the novel virus. "Different coronaviruses were analysed, and the results were all similar," concludes Eike Steinmann.

Credit: 
Ruhr-University Bochum

Plugging into a 6G future with users at the center

video: 6G communications will need to be more secure and reliable, involving a decentralized network architecture

Image: 
2020 KAUST

With the deployment of 5G networks throughout 2020, scientists are now focusing their research attention on 6G communications. This research will need to be human-centric, according to KAUST postdoctoral fellow Shuping Dang.

Dang and his colleagues examined the potential applications and challenges of 6G communications in a study published in Nature Electronics. They found that 6G communications will need to be more secure, protect people's privacy, be ubiquitously accessible and affordable, and safeguard users' mental and physical well-being.

Achieving these criteria is no small feat. 5G communications have many advantages, supporting internet protocol television, high-definition video streaming, basic virtual and augmented reality services, and faster transmission; however, they do not involve the use of ground-breaking technologies. Their main focus has been on enhancing performance rather than introducing new technology.

Communications systems are updated roughly every decade; each update is known as a generation, or simply a "G."

Meanwhile, 6G is expected to revolutionize the way we communicate. Envision a day, somewhere around 2030, when recreational scuba divers, for example, use their phones to transmit holographic images of themselves in their underwater surroundings to colleagues at work.

Other applications will include more accurate indoor positioning, allowing an application to identify exactly where you are in a 10-story building; a more tactile internet that allows remote machine operation or cooperative automated driving; improved in-flight and on-the-move connectivity; and the transmission of biological information extracted from exhaled breath, allowing the detection of developing contagions and the diagnosis of disease.

"These 6G communications will need to be more secure and reliable, involving a decentralized network architecture," says KAUST research scientist, Osama Amin.

This could involve using blockchain technology, famous for its use in Bitcoin mining, to make data anonymous and untraceable. This technology would prevent private data leakages, such as those that have recently captured public attention. Currently, governments and corporations control internet connections through the use of centralized servers. A decentralized blockchain network would involve storing encoded data on thousands of nodes that cannot all be accessed by a single person or entity.

Researchers will also need to investigate the use of physical-layer security technologies, which exploit the physical characteristics of wireless communication, such as noise and fading, to improve user security and privacy.

"6G communications will also require the employment of a 3D network architecture, where terrestrial base stations, unmanned aerial vehicles and space satellites are jointly used to provide seamless, high-quality and affordable communication services to people living in remote and underdeveloped areas," adds advisor Mohamed-Slim Alouini. This could even involve the deployment of underwater communication nodes in the form of autonomous vehicles and sensors that are connected to underwater base stations.

"Artificial intelligence will play a pivotal role in the 6G communication revolution," explains Dang. Machine learning algorithms could be used, for example, to efficiently allocate base station resources and achieve close-to-optimum performance. Intelligent materials placed on surfaces in the environment, such as on buildings or on streetlights, could be used to sense the wireless environment and apply customized changes to radio waves. And deep learning techniques could be used to improve the accuracy of indoor positioning.

All of this will require systems that offer an extremely large bandwidth for signal transmission and that are robust in the face of adverse weather conditions. Also required are devices that consume less energy and that have longer battery lives. This will need further research into technologies that can harvest energy from ambient radio frequency signals, microvibrations and sunlight. Finally, researchers will need to investigate the impacts of these evolving technologies on mental and physical health.

"Our current study aims to provide a vision of 6G and to serve as a research guideline in the post-5G era," says Alouini. He and his team are investigating the integration of satellite, airborne and terrestrial networks for forming a decentralized 6G communication system.

They are also studying the use of artificial intelligence and deep learning techniques to optimize 6G communications and the use of "smart radio environments" with reconfigurable reflecting surfaces to optimize signal transmission. Finally, the team is working on expanding the wireless spectrum to the terahertz and optical bands to unlock a much larger bandwidth for 6G communication systems.

Credit: 
King Abdullah University of Science & Technology (KAUST)

More people and fewer wild fish lead to an omega-3 supply gap

Everyone knows that eating fish is good for you, in part because of the healthy omega-3 fatty acids that it contains.

Several of these fatty acids are essential in human diets, especially when it comes to infant development and reducing cognitive decline in adults.

But dwindling fish stocks worldwide, combined with a growing population, mean that a substantial number of people on the planet don't get enough of these essential nutrients, a new study shows.

The researchers focused on two particular omega-3 fatty acids, abbreviated EPA and DHA, because they are the two fatty acids that are both essential and limited in supply. Other fatty acids are readily available through plants.

"When we looked at how EPA and DHA are produced and consumed, in humans and in the ocean, we found that 70 per cent of the world's population doesn't get what they really need. That can have far-reaching health consequences," said Helen Hamilton, first author of the paper.

Hamilton recently completed a postdoc at the Norwegian University of Science and Technology's (NTNU) Industrial Ecology Programme and is now a sustainability specialist at Biomar Global.

Hamilton and her colleagues documented the reasons behind the supply gap and suggested ways to increase supplies through improved recycling and tapping new primary sources, and to reduce demand through alternative diets. Their findings have been published in the academic journal Nature Food.

The world's fisheries are under pressure, with an estimated 63 per cent of all fish stocks considered exploited and in need of rebuilding, Hamilton and her colleagues wrote. That makes it unlikely that people can catch enough fish to provide their dietary needs for EPA and DHA.

"We can't take any more fish out of the ocean," Hamilton said. "That means we really need to optimize what we do have or find new, novel sources. We need to look at how EPA and DHA are produced and consumed by humans and in the ocean."

To arrive at their results, the researchers collected data from the UN's Food and Agriculture Organization and the International Marine Ingredients Organization, along with published research articles and reports. The data was fed into a model called a multi-layer material flow analysis framework. This allowed Hamilton and her colleagues to estimate the amount of available omega-3 fatty acids, and how and where they are consumed.
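
As a purely illustrative sketch of the kind of supply-versus-demand balance such a material flow analysis produces, the toy calculation below compares an assumed global EPA+DHA requirement with assumed supply sources. Every number is an invented placeholder, not a figure from the study or from FAO statistics.

```python
# Toy EPA+DHA supply-vs-demand balance. Every figure is an invented placeholder.
population = 7.8e9                        # people
need_kg_per_person_year = 0.25e-3 * 365   # assuming ~0.25 g per person per day

demand_tonnes = population * need_kg_per_person_year / 1000
supply_tonnes = {
    "reduction fisheries (fish oil)": 2.0e5,
    "fish by-products": 0.5e5,
    "krill and algae": 0.5e5,
}

total_supply = sum(supply_tonnes.values())
gap = demand_tonnes - total_supply
print(f"demand ~{demand_tonnes:,.0f} t/yr, supply ~{total_supply:,.0f} t/yr, "
      f"gap ~{gap:,.0f} t/yr ({gap / demand_tonnes:.0%} of demand unmet)")
```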

The researchers suggest better fisheries management, such as limiting catches and modifying fishing gear to cut the catch of unwanted fish, as a way to boost fish stocks. However, allowing fish stocks to recover is a long-term solution that will result in short-term decreases in supplies, they said.

Another marine source of EPA and DHA is krill, currently harvested from Antarctic waters.

"Increasing krill catch for use as feed could substantially increase the EPA/DHA supply," Hamilton and her colleagues wrote. Annual harvest rates of roughly 300 000 tons are well below recommend catch limits of 5.6 million tons, the researchers wrote.

But krill isn't necessarily a quick fix either, they said: harvesting it in the Antarctic is both costly and challenging because of the sheer distance from Antarctic waters to markets.

Fish farming can help, but many farmed fish, including salmon, need fish feed that includes fish meal and fish oil. The strong demand for fish oil and meal has led the aquaculture industry to develop fish feed based on plant products, like soy. But too little EPA and DHA in fish feed can cause health problems in farmed fish and also reduce the amount of omega-3 fatty acids they contain.

Hamilton and her colleagues suggest that aquaculture can make strategic use of fish oils in fish feed by feeding these essential compounds to farmed fish at key life stages, especially right before the fish will be slaughtered for consumption.

The researchers' analysis also showed that aquaculture, while a major consumer of EPA and DHA, is also a major producer when it comes to species that don't depend on fish oils in their diet. These species include molluscs and carp. Freshwater fish like carp can also synthesize the two substances, the researchers noted.

People rarely eat all of a fish, yet these leftover by-products, such as innards and heads, also contain omega-3 fatty acids. Fish feed and fish oil can be made from fish wastes, the researchers wrote, with the trick being to collect and process the wastes.

"In Europe and North America, fish are gutted and processed by industry, which makes it really easy to collect and reuse by-products," Hamilton said. "But in China, specifically, the culture is to filet and gut the fish at home, making it very difficult to use the waste for anything useful."

Asia, far more than elsewhere in the world, is where there's most to be gained by collecting fish by-products for use, she said.

As a result, better use of by-products will require both cultural changes and central processing facilities, they said.

Changing diets can help

The researchers observed that EPA and DHA can be produced by both natural and genetically modified microalgae, as well as microbacteria and plants.

But that will also require a scale-up in production and changes in cultural acceptance, particularly in Europe, where current regulations limit use of genetically modified organisms.

"There is no silver bullet for closing the supply gap and none of the strategies we have suggested are easy. But we have to find a way to balance healthy human nutrition, a growing population and protecting our environment," Hamilton said."To do this, we will need a combination of strategies that target different parts of the supply chain. However, before we go forward, it is essential we understand potential trade-offs, such as the repercussions that can come from the widespread use of genetically modified organisms," she said.

Credit: 
Norwegian University of Science and Technology

Jackiw-Rebbi zero-mode: Realizing non-Abelian braiding in non-Majorana system

image: (a) Nanowire-based cross-shaped junction supporting the non-Abelian braiding of Jackiw-Rebbi zero-modes. (b) Numerical results for the evolution of wavefunction that demonstrates the non-Abelian braiding properties of Jackiw-Rebbi zero-modes.

Image: 
©Science China Press

As an important branch of quantum computation, topological quantum computation has been drawing extensive attention for its advantages, such as fault tolerance. Topological quantum computation is based on the non-Abelian braiding of quantum states, where non-Abelian braiding, in the language of quantum statistics, is closely related to the non-locality of the quantum states. Exploration of topological quantum computation in the last two decades has mainly focused on the Majorana fermion (or its zero-energy incarnation known as the Majorana zero-mode), an exotic particle possessing non-Abelian statistics and well known for being its own anti-particle.

The Jackiw-Rebbi zero-mode was first proposed in high-energy physics in the 1970s. With the growing importance of topology in condensed matter physics, the concept was adopted to refer to the topologically protected zero-mode at the boundary of a topological insulator. In contrast with the Majorana zero-mode, which appears only with a non-vanishing superconducting order parameter, the Jackiw-Rebbi zero-mode is not self-conjugate and can therefore exist even in the absence of particle-hole symmetry.

Recently, in a research article entitled "Double-frequency Aharonov-Bohm effect and non-Abelian braiding properties of Jackiw-Rebbi zero-mode" published in National Science Review, researchers from four universities, including Peking University and Xi'an Jiaotong University, proposed a new way of realizing non-Abelian braiding. Co-authors Yijia Wu, Haiwen Liu, Jie Liu, Hua Jiang, and X. C. Xie demonstrated that the Jackiw-Rebbi zero-modes widely present in topological insulators also support non-Abelian braiding.

In this work, the authors constructed Jackiw-Rebbi zero-modes in a quantum spin Hall insulator. By showing that the Aharonov-Bohm oscillation frequency of transport mediated by Jackiw-Rebbi zero-modes is doubled, they argued that the Majorana zero-mode can be viewed as a special case of the Jackiw-Rebbi zero-mode with particle-hole symmetry. Through numerical simulation, they also demonstrated that Jackiw-Rebbi zero-modes exhibit non-Abelian braiding properties in the absence of superconductivity. The authors believe that these results not only make theoretical progress in exhibiting the intriguing properties of the Jackiw-Rebbi zero-mode, but also point to the possibility of realizing topological quantum computation in a non-Majorana (non-superconducting) system.

The research also puts forward a generalized and continuously tunable fusion rule for topological quantum computation when the degeneracy of the Jackiw-Rebbi zero-modes is lifted. The authors concluded that the Jackiw-Rebbi zero-mode could be a new candidate for topological quantum computation, with additional advantages over its Majorana cousin: (1) superconductivity is no longer required; (2) it possesses a generalized fusion rule; and (3) the energy gap is generally larger.

Credit: 
Science China Press

Scientists resurrect mammoth's broken genes

image: New research builds on evidence that the last mammoths on Wrangel Island suffered from a variety of genetic defects.

Image: 
Rebecca Farnham / University at Buffalo

BUFFALO, N.Y. -- Some 4,000 years ago, a tiny population of woolly mammoths died out on Wrangel Island, a remote Arctic refuge off the coast of Siberia.

They may have been the last of their kind anywhere on Earth.

To learn about the plight of these giant creatures and the forces that contributed to their extinction, scientists have resurrected a Wrangel Island mammoth's mutated genes. The goal of the project was to study whether the genes functioned normally. They did not.

The research builds on evidence suggesting that in their final days, the animals suffered from a medley of genetic defects that may have hindered their development, reproduction and their ability to smell.

The problems may have stemmed from rapid population decline, which can lead to interbreeding among distant relatives and low genetic diversity -- trends that may damage a species' ability to purge or limit harmful genetic mutations.

"The key innovation of our paper is that we actually resurrect Wrangel Island mammoth genes to test whether their mutations actually were damaging (most mutations don't actually do anything)," says lead author Vincent Lynch, PhD, an evolutionary biologist at the University at Buffalo. "Beyond suggesting that the last mammoths were probably an unhealthy population, it's a cautionary tale for living species threatened with extinction: If their populations stay small, they too may accumulate deleterious mutations that can contribute to their extinction."

The study was published on Feb. 7 in the journal Genome Biology and Evolution.

Lynch, an assistant professor of biological sciences in the UB College of Arts and Sciences, joined UB in 2019 and led the project while he was at the University of Chicago. The research was a collaboration between Lynch and scientists at the University of Chicago, Northwestern University, University of Virginia, University of Vienna and Penn State. The first authors were Erin Fry from the University of Chicago and Sun K. Kim from Northwestern University.

To conduct the study, Lynch's team first compared the DNA of a Wrangel Island mammoth to that of three Asian elephants and two more ancient mammoths that lived when mammoth populations were much larger.

The researchers identified a number of genetic mutations unique to the Wrangel Island mammoth. Then, they synthesized the altered genes, inserted that DNA into cells in petri dishes, and tested whether proteins expressed by the genes interacted normally with other genes or molecules.

The scientists did this for genes that are thought or known to be involved in a range of important functions, including neurological development, male fertility, insulin signaling and sense of smell.

In the case of detecting odors, for example, "We know how the genes responsible for our ability to detect scents work," Lynch says. "So we can resurrect the mammoth version, make cells in culture produce the mammoth gene, and then test whether the protein functions normally in cells. If it doesn't -- and it didn't -- we can infer that it probably means that Wrangel Island mammoths were unable to smell the flowers that they ate."

The research builds on prior work by other scientists, such as a 2017 paper in which a different research team identified potentially detrimental genetic mutations in the Wrangel Island mammoth, estimated to have belonged to a population of only a few hundred individuals.

"The results are very complementary," Lynch says. "The 2017 study predicts that Wrangel Island mammoths were accumulating damaging mutations. We found something similar and tested those predictions by resurrecting mutated genes in the lab. The take-home message is that the last mammoths may have been pretty sick and unable to smell flowers, so that's just sad."

Credit: 
University at Buffalo

Galaxy formation simulated without dark matter

image: 1.5 billion years after the start of the simulation. The lighter the color, the higher the density of the gas. The light blue dots show young stars.

Image: 
© AG Kroupa/Uni Bonn

For the first time, researchers from the Universities of Bonn and Strasbourg have simulated the formation of galaxies in a universe without dark matter. To replicate this process on the computer, they have instead modified Newton's laws of gravity. The galaxies that were created in the computer calculations are similar to those we actually see today. According to the scientists, their assumptions could solve many mysteries of modern cosmology. The results are published in the Astrophysical Journal.

Cosmologists nowadays assume that matter was not distributed entirely evenly after the Big Bang. The denser places attracted more and more matter from their surroundings due to their stronger gravitational forces. Over the course of several billion years, these accumulations of gas eventually formed the galaxies we see today.

An important ingredient of this theory is the so-called dark matter. On the one hand, it is said to be responsible for the initial uneven distribution that led to the agglomeration of the gas clouds. It also explains some puzzling observations. For instance, stars in rotating galaxies often move so fast that they should actually be ejected. It appears that there is an additional source of gravity in the galaxies that prevents this - a kind of "star putty" that cannot be seen with telescopes: dark matter.

However, there is still no direct proof of its existence. "Perhaps the gravitational forces themselves simply behave differently than previously thought," explains Prof. Dr. Pavel Kroupa from the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn and the Astronomical Institute of Charles University in Prague. This theory bears the abbreviation MOND (MOdified Newtonian Dynamics); it was proposed by the Israeli physicist Prof. Dr. Mordehai Milgrom. According to the theory, the attraction between two masses obeys Newton's laws only up to a certain point. Under very low accelerations, as is the case in galaxies, it becomes considerably stronger. This is why galaxies do not break apart as a result of their rotational speed.
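
As a rough sketch of this idea (not the group's simulation code), the snippet below compares Newtonian and MOND circular velocities around a point mass, using Milgrom's acceleration constant and the commonly used "simple" interpolating function; the mass and radii are arbitrary illustrative values.

```python
# Circular velocity around a point mass: Newtonian vs. a simple MOND prescription.
# Illustrative only; not the simulation code used in the study.
import numpy as np

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10               # Milgrom's acceleration constant, m s^-2
M = 1e41                   # point mass, roughly 5e10 solar masses (illustrative)
r = np.logspace(19, 21, 5) # radii from about 0.3 kpc to about 30 kpc

a_newton = G * M / r**2
# "Simple" interpolating function mu(x) = x/(1+x): a * a / (a + a0) = a_newton,
# solved for the MOND acceleration a.
a_mond = 0.5 * (a_newton + np.sqrt(a_newton**2 + 4 * a_newton * a0))

v_newton = np.sqrt(a_newton * r)   # falls off as 1/sqrt(r) at large radii
v_mond = np.sqrt(a_mond * r)       # flattens out at large radii
print(np.round(v_newton / 1e3, 1))  # km/s
print(np.round(v_mond / 1e3, 1))    # km/s
```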

Results close to reality

"In cooperation with Dr. Benoit Famaey in Strasbourg, we have now simulated for the first time whether galaxies would form in a MOND universe and if so, which ones," says Kroupa's doctoral student Nils Wittenburg. To do this he used a computer program for complex gravitational calculations which was developed in Kroupa's group. Because with MOND, the attraction of a body depends not only on its own mass, but also on whether other objects are in its vicinity.

The scientists then used this software to simulate the formation of stars and galaxies, starting from a gas cloud several hundred thousand years after the Big Bang. "In many aspects, our results are remarkably close to what we actually observe with telescopes," explains Kroupa. For instance, the distribution and velocity of the stars in the computer-generated galaxies follow the same pattern that can be seen in the night sky. "Furthermore, our simulation resulted mostly in the formation of rotating disk galaxies like the Milky Way and almost all other large galaxies we know," says the scientist. "Dark matter simulations, on the other hand, predominantly create galaxies without distinct matter disks - a discrepancy to the observations that is difficult to explain."

Calculations based on the existence of dark matter are also very sensitive to changes in certain parameters, such as the frequency of supernovae and their effect on the distribution of matter in galaxies. In the MOND simulation, however, these factors hardly played a role.

Yet the recently published results from Bonn, Prague and Strasbourg do not correspond to reality in all points. "Our simulation is only a first step," emphasizes Kroupa. For example, the scientists have so far only made very simple assumptions about the original distribution of matter and the conditions in the young universe. "We now have to repeat the calculations and include more complex influencing factors. Then we will see if the MOND theory actually explains reality."

Credit: 
University of Bonn

Statistical method developed at TUD allows the detection of higher order dependencies

image: Full dependence structure.

Image: 
Copyright: Björn Böttcher

Distance multivariance is a multivariate dependence measure that can detect dependencies between an arbitrary number of random vectors, each of which can have a distinct dimension. In his new article, Böttcher presents the concept as a unifying theory that combines several classical dependence measures. Connections between two or more high-dimensional variables can be captured, and even complicated non-linear dependencies as well as dependencies of higher order can be detected. For numerous scientific disciplines, this method opens up new approaches to detect and evaluate dependencies.

Can the number of missed school days be linked to the age, gender or origin of school students? In a survey of 146 school students, social scientists analysed various influencing variables on missed school days and examined them for dependencies in order to derive a prediction model. This classic question has already been widely discussed and analysed with various statistical approaches.

The statistical measure "distance multivariance" offers a novel approach to this question: Dr. Björn Böttcher from the Institute of Mathematical Stochastics was able to use distance multivariance to identify cultural background and a higher-order dependence including age and gender as influencing factors for the missed school days, and thus to suggest a minimal model. "This is an elementary example of an application of the developed method. I cannot judge whether this is also a substantiated finding with regard to the investigated question. Working with real data and especially the subject-specific interpretation of the results always requires expertise in the respective subject," Dr. Böttcher emphasizes, and provides numerous other illustrative examples of the application of his method: "In the paper, I refer to more than 350 freely available data sets from all scientific disciplines in which statistically significant higher-order dependencies occur. Again, whether these dependencies are meaningful in terms of the underlying surveys requires further investigation as well as expertise in the respective fields," and he adds, "of course, requests for cooperation are always welcome."

Statistical analysis usually considers dependencies between individual variables. Especially with many variables, it is desirable to remove independent variables prior to studying any specific types of dependence. Dr. Björn Böttcher presents a method for this purpose called "dependence structure detection", which can also be used to detect higher-order dependencies. Variables are called "higher-order dependent" if they are pairwise independent but more than two of them still influence each other jointly. Dependencies of this kind have not been the focus of applications so far.

Some scientists suspect that higher-order dependencies occur in genetics in particular: the basic idea here is that several genes together determine a property, but individually these genes show no dependence on each other nor on the property - thus they would indeed be higher-order dependent. The framework of "distance multivariance" and the "dependence structure detection" method are now promising tools for such investigations.
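
A minimal, self-contained illustration of what "higher-order dependent" means is the classic XOR construction below: three binary variables that are pairwise independent yet jointly dependent. This toy example uses plain NumPy; it is not the distance multivariance statistic itself nor the 'multivariance' R package.

```python
# Classic example of higher-order dependence: X and Y are fair coin flips, Z = X XOR Y.
# Any pair of the three variables is independent, yet jointly they are fully dependent.
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 100_000)
y = rng.integers(0, 2, 100_000)
z = x ^ y  # exclusive or

# Pairwise check: for binary variables, zero correlation means pairwise independence.
print(np.round(np.corrcoef([x, y, z]), 3))   # off-diagonal entries ~0

# Joint check: conditioning on two variables pins down the third exactly.
print(z.mean())                              # ~0.5, the unconditional mean
print(z[(x == 1) & (y == 1)].mean())         # 0.0 exactly
```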

Implementations of the new methods are provided for direct applications in the package 'multivariance' for the free statistical computing environment 'R'.

Credit: 
Technische Universität Dresden

New progress in turbulent combustion modeling: Filtered flamelet model

image: Distribution of H2O in the Sydney bluff-body turbulent jet flame at different axial sections.

Image: 
©Science China Press

In turbulent combustion, the interaction between the strongly nonlinear reaction source and turbulence leads to a broad spectrum of spatial and temporal scales. From the modeling point of view, it is especially challenging to predict the field statistics satisfactorily. Although there are different turbulent combustion models, e.g. flamelet-like models, probability density function-like models, the conditional moment closure model and the eddy dissipation concept model, the bases of their model closures have not been reasonably justified. Recently, a new modeling idea for turbulent diffusion flames has been proposed by Lipo Wang's group from Shanghai Jiao Tong University and Jian Zhang from the Institute of Mechanics, CAS. The article, titled "Non-premixed turbulent combustion modeling based on the filtered turbulent flamelet equation", was published in SCIENCE CHINA Physics, Mechanics & Astronomy.

In the framework of large eddy simulation (LES), a new filtered flamelet equation was first derived, based on which a filtered flamelet model could be constructed directly from the filtered quantities. For instance, the scalar dissipation rate of the filtered progress variable, instead of the unfiltered one, is involved in the model construction. Therefore, the model uncertainty can be largely reduced. Figures 1 and 2 compare simulation results for the Sydney bluff-body turbulent jet flame using different models: the newly proposed filtered flamelet model with a simplified mechanism (solid red lines), the flamelet/progress variable approach with a detailed mechanism (dotted pink lines), the flamelet/progress variable approach with a simplified mechanism (dotted blue lines), the laminar flamelet model with a detailed mechanism (dotted green lines), and the experimental results (solid triangle marks). Overall, the new model results agree satisfactorily with the experimental data.

In summary, the promising performance of the present filtered flamelet model can be attributed to the new approach of constructing the model from the filtered flamelet equation. Further improvements and various case tests will be implemented in future work.

Credit: 
Science China Press