Tech

The easy route the easy way: New chip calculates the shortest distance in an instant

image: Scientists have developed the world's first fully coupled AI chip that can solve the traveling salesman problem for 22 cities instantly, something that would take about 1,200 years for a high-performance von Neumann CPU.

Image: 
Tokyo University of Science

How would you go about returning books to the correct shelves in a large library with the least amount of walking? How would you determine the shortest route for a truck that has to deliver many packages to multiple cities? These are some examples of the "traveling salesman problem", a type of "combinatorial optimization" problem, which frequently arises in everyday situations. Solving the traveling salesman problem involves searching for the most efficient of all possible routes. To do this easily, we require the help of low-power, high-performance artificial intelligence.

To solve this conundrum, scientists are actively exploring the use of integrated circuits. In this method, each state in a traveling salesman problem (for example, each possible route for the delivery truck) is represented by "spin cells", each having one of two states. Using a circuit that can store the strength of one spin cell state over another, the relationship between these states (in our analogy, the distance between two cities) can be obtained. With a system containing as many spin cells and circuits as there are components in the problem (the cities and routes for the delivery truck), we can identify the state requiring the least energy, that is, the route covering the least distance, thus solving the traveling salesman problem or any other combinatorial optimization problem.
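
The press release does not spell out the chip's exact formulation, but the underlying idea, that the lowest-energy state of a set of coupled two-state cells corresponds to the shortest route, can be sketched in software. The following Python snippet is only an illustrative analogy (simulated annealing over candidate tours with randomly generated city coordinates), not the chip's circuit or algorithm.

```python
# Minimal software analogy of "lowest energy = shortest route": simulated
# annealing over candidate tours. City coordinates are randomly generated
# placeholders; this is not the chip's circuit or algorithm.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(22)]  # 22 hypothetical cities

def tour_length(tour):
    # "Energy" of a state = total length of the closed route.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
random.shuffle(tour)
temperature = 1.0
while temperature > 1e-3:
    i, j = sorted(random.sample(range(len(tour)), 2))
    candidate = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse a segment (2-opt move)
    delta = tour_length(candidate) - tour_length(tour)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        tour = candidate            # accept moves toward lower "energy"
    temperature *= 0.999            # gradually cool the system

print("approximate shortest closed-route length:", round(tour_length(tour), 3))
```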

However, a major drawback of the conventional way of using integrated circuits is that it requires pre-processing, and the number of components and time required to input the data increase as the scale of the problem increases. For this reason, this technology has only been able to solve the traveling salesman problem involving a maximum of 16 states, or cities.

A group of researchers led by Professor Takayuki Kawahara of the Department of Electrical Engineering at Tokyo University of Science aimed to overcome this issue. They observed that the interactions between the spin cells were linear, meaning each spin cell could interact only with the cells adjacent to it, which prolonged the processing time. "We decided to arrange the cells slightly differently to ensure that all spin cells could be connected," Prof Kawahara explains.

To do this, they first arranged the circuits in a two-dimensional array and the spin cells separately in a one-dimensional arrangement. The circuits then read the data, and an aggregate of this data was used to switch the states of the spin cells. As a result, both the number of spin cells required and the processing time were drastically reduced.

The authors have presented their findings at the IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI 2020). "Our new technique thus represents a fully coupled method," remarks Prof Kawahara, "and has the potential to solve a traveling salesman problem involving up to 22 cities." The authors are hopeful that this technology will find future applications as a high-performance, low-power system in office equipment and tablet terminals, making it easy to find optimal solutions from large numbers of combinations.

Credit: 
Tokyo University of Science

Chinese tariff rate quota policy severely impacted US wheat exports

image: Bowen Chen, post-doctoral research associate in agricultural and consumer economics at the University of Illinois, concluded that China's tariff rate quota policies significantly affected wheat imports from the US.

Image: 
Marianne Stein

URBANA, Ill. - The U.S. and China recently agreed to a phase one trade deal that aims to resolve the current trade war between the two countries. But that is just the latest development in longstanding and complicated U.S.-Chinese trade disputes.

China has consistently used tariff rate quotas to restrict grain imports, and in 2016 the U.S. filed a complaint with the World Trade Organization (WTO) over China's implementation of tariff rate quotas on wheat, corn and rice. In its report, issued in April 2019, the WTO sided with the U.S., but did not provide an assessment of the effect on U.S. exports.

A new study from the University of Illinois, published in Agricultural Economics, quantifies those effects and shows that China's tariff rate quota administration significantly affected U.S. grain exports, particularly for wheat.

"Our analysis shows that if China hadn't used trade policies to restrict trade, wheat imports from the U.S. could have been more than 80% higher in 2017. That's a value of around $300 million," says Bowen Chen, a postdoctoral research associate in the Department of Agricultural and Consumer Economics at U of I. Chen is lead author on the study, which was conducted as part of his doctoral dissertation.

The dispute concerns China's administration of tariff rate quotas (TRQ), a policy instrument intended to regulate imports. Tariff rate quotas establish two tiers of tariffs, with a lower tariff for in-quota imports and a much higher tariff for out-of-quota imports. Chinese tariffs for grain commodities were 1% for in-quota and 65% for out-of-quota imports.

The system is intended to allow some access for imports at a low tariff rate, while the second-tier tariffs provide protection for domestic commodities. Under the TRQ agreement, China is obligated to import certain quantities of grain at the low tariff level. However, the U.S. contended that these obligations were not fulfilled, and that China's imports of corn, wheat and rice were far below in-quota quantities.
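
To make the two-tier structure concrete, the short calculation below applies the 1% in-quota and 65% out-of-quota rates cited above to a hypothetical shipment value; the dollar figure is purely illustrative and not from the study.

```python
# Illustrative duty under a two-tier tariff rate quota (TRQ), using the
# 1% in-quota and 65% out-of-quota rates cited above. The shipment value
# is a hypothetical placeholder.
in_quota_rate, out_of_quota_rate = 0.01, 0.65
shipment_value = 1_000_000  # USD, hypothetical wheat shipment

print("in-quota duty:", shipment_value * in_quota_rate)          # 10,000 USD
print("out-of-quota duty:", shipment_value * out_of_quota_rate)  # 650,000 USD
```

The gap between the two tiers is what makes in-quota access so valuable and why imports beyond the quota are effectively priced out.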

Chen and his colleagues analyzed trade and price data to assess the impact of Chinese TRQ policies on U.S. grain exports. They also sought to explore the rationale behind the grain quota administration in order to better inform policy initiatives and trade negotiations.

The researchers obtained monthly trade data for grain commodities from 2013 to 2017, using information from a United Nations database and the Ministry of Commerce in China. They also looked at domestic price data published by the Chinese Ministry of Agriculture. Using the trade and price data, they estimated the import demand elasticities for corn, wheat and rice.

"We estimate how the prices would have been reduced if China was not imposing the tariffs. Then we simulate how the quantities would change based on the price and elasticity," Chen says.

Overall, the researchers concluded that China's 2017 grain imports could have been $1.4 billion or 40% higher. Wheat imports from the U.S. could have been $324 million or 83% higher without the restrictive policies. Corn and rice imports were affected to a lesser extent.

Chen cautions that those results are contingent on Chinese domestic prices being equal to world prices, assuming that China would not maintain high prices to support domestic production.

"If China liberated their import policy and reduced domestic price support, such market policy reforms would alleviate pressure from trading partners," Chen says. "However, they may not be interested in full trade liberalization at this time."

Chen explains that China has used TRQs as a trade policy instrument to stabilize domestic prices and restrict imports, and his research can help explain why the country engages in this practice.

"These restrictions will make foreign commodities more expensive and give more incentive for domestic producers, so China can eat more domestically produced food," he says.

"China wants to feed itself and be less dependent on other suppliers. Furthermore, China has huge grain stocks and want to use them. Finally, international prices are volatile, so for food security reasons they don't want prices to fluctuate too much. They want to have stable food prices so people can feel safe, buying the same food with the same budget."

Chen says the study can have implications for trade negotiators and policy makers, both in the U.S. and China, by showing the effect the TRQ policy has on trade.

The new phase one trade deal stipulates that tariff rate quota administration not be used to prevent the full utilization of agricultural tariff rate quotas. The implementation of the trade deal will likely benefit U.S. grain exports to China, Chen notes.

Soybean trade is an important part of the trade negotiations between the U.S. and China, and that will be the topic for Chen's next research project.

"We will quantify the impact on U.S. soybean exports to China, calculating how exports have been reduced by the trade war in the last year. That's what I'm currently working on," he says.

Credit: 
University of Illinois College of Agricultural, Consumer and Environmental Sciences

Coating helps electronics stay cool by sweating

video: This video depicts how a small amount of MIL-101(Cr)--a MOF coating that helps an electronic chip sweat--can help keep it cool.

Image: 
Chenxi Wang

Mammals sweat to regulate body temperature, and researchers from Shanghai Jiao Tong University in China are exploring whether our phones could do the same. In a study published January 22 in the journal Joule, the authors present a coating for electronics that releases water vapor to dissipate heat from running devices--a new thermal management method that could prevent electronics from overheating and keep them cooler compared to existing strategies.

"The development of microelectronics puts great demands on efficient thermal management techniques, because all the components are tightly packed and chips can get really hot," says senior author Ruzhu Wang, who studies refrigeration engineering at Shanghai Jiao Tong University. "For example, without an effective cooling system, our phones could have a system breakdown and burn our hands if we run them for a long time or load a big application."

Larger devices such as computers use fans to regulate temperature. However, fans are bulky, noisy, and energy-consuming, and thus unsuitable for smaller devices like mobile phones. Manufacturers have been using phase change materials (PCMs), such as waxes and fatty acids, for cooling in phones. These materials absorb heat produced by devices when they melt. However, the total amount of energy exchanged during the solid-liquid transition is relatively low.

In contrast, the liquid-vapor transition of water can exchange 10 times the energy compared to that of PCM solid-liquid transition. Inspired by mammals' sweating mechanism, Wang and his team studied a group of porous materials that could absorb moisture from the air and release water vapor when heated. Among them, metal organic frameworks (MOFs) were the most promising because they could store a large amount of water and thus take away more heat when heated.
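
That rough factor of 10 is easy to check against textbook latent heats: vaporizing water absorbs about 2,260 kJ/kg, while melting a typical paraffin-type PCM absorbs roughly 200 kJ/kg (the exact figure varies by material; neither number is taken from the paper).

```python
# Back-of-the-envelope check of the ~10x claim using textbook latent heats.
# The paraffin value is a typical figure for wax-type PCMs, not a number
# reported in the paper.
latent_heat_water_vaporization = 2260.0  # kJ/kg, liquid -> vapor
latent_heat_paraffin_fusion = 200.0      # kJ/kg, solid -> liquid (typical paraffin PCM)

print(latent_heat_water_vaporization / latent_heat_paraffin_fusion)  # ~11
```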

"Previously, researchers have tried to use MOFs to extract water from the desert air," Wang says. "But MOFs are still really expensive, so large-scale application isn't really practical. Our study shows electronics cooling is a good real-life application of MOFs. We used less than 0.3 grams of material in our experiment, and the cooling effect it produced was significant."

The team selected a type of MOF called MIL-101(Cr) for the experiment because of its good water-absorbing capacity and high sensitivity to temperature changes. They coated three 16-square-centimeter aluminum sheets with MIL-101(Cr) of different thicknesses--198, 313, and 516 micrometers, respectively--and heated them on a hot plate.

The team found that the MIL-101(Cr) coating was able to delay the temperature rise of the sheets, and the effect increased with coating thickness. While an uncoated sheet reached 60°C after 5.2 minutes, the thinnest coating more than doubled that time, not reaching the same temperature until 11.7 minutes. The sheet with the thickest coating reached 60°C after 19.35 minutes of heating.

"In addition to effective cooling, MIL-101(Cr) can quickly recover by absorbing moisture again once the heat source is removed, just like how mammals rehydrate and ready to sweat again," Wang says. "So, this method is really suitable for devices that aren't running all the time, like phones, charging batteries and telecommunications base stations, which can get overloaded sometimes."

To investigate the cooling effect of MIL-101(Cr) on actual devices, Wang and his team tested a coated heat sink on a microcomputing device. Compared to an uncoated heat sink, the coated one reduced the chip temperature by up to 7°C when the device was run at heavy workloads for 15 minutes.

Looking forward, the team plans to improve the material's thermal conductivity. "Once all the water is gone, the dried coating will become a resistance that affects devices' heat dissipation," says first author Chenxi Wang. Incorporating thermally conductive additives such as graphene into the material may help address the problem, he says.

Before manufacturers can install this cooling system onto our phones, cost is a major issue, Ruzhu Wang says. "By finding MOFs a practical application, we hope to increase the market demand for them and encourage more research on MOFs to bring down the costs."

Credit: 
Cell Press

Urine reuse as fertilizer is not likely to transfer antibiotic resistance

Urine is a goldmine of useful substances that can be captured and converted into products such as fertilizer. However, going "green" with urine carries some potential risks. For instance, DNA released from antibiotic-resistant bacteria in urine could transfer resistance to other organisms at the site where the fertilizer is used. Now, research published in ACS' Environmental Science & Technology (ES&T) shows this risk is likely to be minimal.

Upcycling urine isn't new: Farmers have collected and fertilized crops with it for millennia. The Rich Earth Institute in Brattleboro, Vermont, operates the only contemporary community-scale system in the U.S. for capturing urine and processing it into fertilizer, according to the authors of this ES&T study. In addition, some researchers have installed urine collection toilets and waterless urinals at various locations, but these are mainly for research or demonstration purposes. If the practice took hold with 10% of the U.S. population, it could save millions of gallons of flushing water and recover about 300 tons of nitrogen and 18 tons of phosphorus per day, the authors calculate. However, urine contains bacteria, including strains resistant to antibiotics. Previous studies have reported finding antibiotic-resistant DNA in urine, but it has been unclear whether that DNA could move into microbes in the environment if the urine is applied to soil. So, Krista Wigginton and colleagues conducted experiments to see if upcycling urine could spread that resistance.

In their study, the researchers used "aged" urine that had been stored in a sealed container for several months. This traditional practice increases ammonia concentration, raises pH and alters the microbial makeup of the liquid. After incubating DNA containing resistance genes for tetracycline and ampicillin in the urine, the team found that the genetic material rapidly lost 99% of its ability to confer resistance on a soil bacterium. In sum, urine-derived fertilizer poses a low risk of spreading resistance from extracellular DNA in the environment, the team says.

Credit: 
American Chemical Society

Locomotor engine in the spinal cord revealed

image: Abdel El Manira, Professor at the Department of Neuroscience, Karolinska Institutet, Sweden.

Image: 
Stefan Zimmerman

Researchers at Karolinska Institutet in Sweden have revealed a new principle of organisation that explains how locomotion is coordinated in vertebrates, akin to an engine with three gears. The results are published in the scientific journal Neuron.

A remarkable feature of locomotion is its capacity for rapid starts and for changes of speed to match our intentions. However, there is still uncertainty as to how the rhythm-generating circuit - the locomotor engine - in the spinal cord is capable of instantaneously translating brain commands into rhythmic and appropriately paced locomotion.

Using zebrafish as a model organism, researchers at Karolinska Institutet present a detailed, full reconstruction of the rhythm-generating engine driving locomotion in vertebrates.

"We have uncovered a novel principle of organisation that is crucial to perform an intuitively simple, yet poorly understood function: the initiation of locomotion and the changing of speed," says Abdel El Manira, Professor at the Department of Neuroscience at Karolinska Institutet, who led the study.

The researchers performed a comprehensive and quantitative mapping of connections (synapses) between neurons combined with behavioural analyses in zebrafish. The results revealed that the excitatory neurons in the spinal cord which drive locomotion form three recurrent, rhythm-generating circuit modules acting as gears which can be engaged at slow, intermediate or fast locomotor speeds. These circuits convert signals from the brain into coordinated locomotor movements, with a speed that is aligned to the initial intention.

"The insights gained in our study can be directly applicable to mammals, including humans, given that the organising principle of the brainstem and spinal circuits is shared across vertebrate species," says Abdel El Manira. "Understanding how circuits in the brainstem and spinal cord initiate movements and how speed is controlled will open up for new research avenues aimed at developing therapeutic strategies for human neurological disorders, including traumatic spinal cord injury, and motoneuron degenerative diseases such as amyotrophic lateral sclerosis (ALS)."

Credit: 
Karolinska Institutet

Urine fertilizer: 'Aging' effectively protects against transfer of antibiotic resistance

ANN ARBOR--Recycled and aged human urine can be used as a fertilizer with low risks of transferring antibiotic resistant DNA to the environment, according to new research from the University of Michigan.

It's a key finding in efforts to identify more sustainable alternatives to widely used fertilizers that contribute to water pollution. Their high levels of nitrogen and phosphorus can spur the growth of algae, which can threaten our sources of drinking water.

Urine contains nitrogen, phosphorus and potassium--key nutrients that plants need to grow. Today, municipal treatment systems don't totally remove these nutrients from wastewater before it's released into rivers and streams. At the same time, manufacturing synthetic fertilizer is expensive and energy intensive.

U-M leads the nation's largest consortium of researchers exploring the technology, systems requirements and social attitudes associated with urine-derived fertilizers.

Over the last several years, the group has studied the removal of bacteria, viruses, and pharmaceuticals in urine to improve the safety of urine-derived fertilizers.

In this new study, researchers have shown that the practice of "aging" collected urine in sealed containers over several months effectively deactivates 99% of the antibiotic resistance genes that were present in bacteria in the urine.

"Based on our results, we think that microorganisms in the urine break down the extracellular DNA in the urine very quickly," said Krista Wigginton, U-M associate professor of civil and environmental engineering, and corresponding author on a study published today in Environmental Science and Technology.

"That means that if bacteria in the collected urine are resistant to antibiotics and the bacteria die, as they do when they are stored in urine, the released DNA won't pose a risk of transferring resistance to bacteria in the environment when the fertilizer is applied."

Previous research has shown that antibiotic-resistant DNA can be found in urine, raising the question of whether fertilizers derived from it might carry over that resistance.

The researchers collected urine from more than 100 men and women and stored it for 12 to 16 months. During that period, ammonia levels in the urine increase, lowering acidity levels and killing most of the bacteria that the donors shed. Bacteria from urinary tract infections often harbor antibiotic resistance.

When the ammonia kills the bacteria, they dump their DNA into the solution. It's these extracellular snippets of DNA that the researchers studied to see how quickly they would break down.

Urine has been utilized as a crop fertilizer for thousands of years, but has been getting a closer look in recent years as a way to create a circular nutrient economy. It could enable manufacturing of fertilizers in a more environmentally friendly way, reduce the energy required to manage nutrients at wastewater treatment plants, and create localized fertilizer sources.

"There are two main reasons we think urine fertilizer is the way of the future," Wigginton said. "Our current agricultural system is not sustainable, and the way we address nutrients in our wastewater can be much more efficient."

In their ongoing work, the U-M-led team is moving towards agricultural settings.

"We are doing field experiments to assess technologies that process urine into a safe and sustainable fertilizer for food crops and other plants, like flowers. So far, our experimental results are quite promising," said Nancy Love, the Borchardt and Glysson Collegiate Professor and professor of civil and environmental engineering at U-M.

Credit: 
University of Michigan

Study reveals pre-Hispanic history, genetic changes among indigenous Mexican populations

image: To better understand the broad demographic history of pre-Hispanic Mexico and to search for signatures of adaptive evolution, an international team led by Mexican scientists has sequenced the complete protein-coding regions of the genome, or exomes, of 78 individuals from different indigenous groups from Mexico. The genomic study is the largest of its kind for indigenous populations from the Americas.

Image: 
Ruben Mendoza, National Laboratory of Genomics for Biodiversity (LANGEBIO) - UGA, CINVESTAV

As more and more large-scale human genome sequencing projects get completed, scientists have been able to trace with increasing confidence both the geographical movements and underlying genetic variation of human populations.

Most of these projects have favored the study of European populations and have thus fallen short of representing the true ethnic diversity across the globe.

To better understand the broad demographic history of pre-Hispanic Mexico and to search for signatures of adaptive evolution, an international team led by Mexican scientists has sequenced the complete protein-coding regions of the genome, or exomes, of 78 individuals from five different indigenous groups from Northern (Rarámuri or Tarahumara, and Huichol), Central (Nahua), South (Triqui, or TRQ) and Southeast (Maya, or MYA) Mexico. The genomic study, the largest of its kind for indigenous populations from the Americas, appeared recently in the advanced online edition of Molecular Biology and Evolution.

"We modeled the demographic history of indigenous populations from Mexico with northern and southern ethnic groups (Tarahumara and Huichol) splitting 7.2 kya and subsequently diverging locally 6.5 kya (Huichol groups) and 5.7 kya (Triqui and Maya), respectively," said lead author Maria A?vila-Arcos, of the National Autonomous University of Mexico. The Nahua were excluded from the final analysis due to the noise it brought to the overall analysis.

Overall, they identified 120,735 single nucleotide variants (SNV) among the individuals studied, which were used to trace back the population history. Furthermore, they were able to reconcile their data with the demographic history and fossil records of ancestral Native Americans.

"The split times we found are also coherent with previous estimates of ancestral Native Americans diverging ~17.5-14.6 KYA into Southern Native Americans or "Ancestral A," comprising Central and Southern Native Americans) and Northern Native Americans or "Ancestral B," and with an initial settlement of Mexico occurring at least 12,000 years ago, as suggested by the earliest skeletal remains dated to approximately this age found in Central Mexico and the Yucata?n peninsula," said Avila-Arcos. "Studies on genome-wide data from ancient remains from Central and South America reveal genetic continuity between ancient and modern populations in some parts of the Americas over the last 8,500 years."

"This suggests that, by that time, the ancestral population of MYA was not yet genetically differentiated from others, so our estimates of northern/southern split at 7.2 KYA and Mayan/Triqui divergence at 5.7 KYA fit with this scenario."

Next, they scanned the data to identify candidate genes most important for adaptation.

"Interestingly, some of these genes had previously been identified as targets of selection in other populations," said co-corresponding author Andrés Moreno Estrada, principal investigator at National Laboratory of Genomics for Biodiversity (LANGEBIO) - UGA, CINVESTAV.

These genes include SLC24A5, involved in skin pigmentation, and FAP, which was previously suggested to be under adaptive archaic introgression in Peruvians and Melanesians. Three genes were involved in the immune response. These include SYT5, implicated in innate immune response, and interleukins IL17A and IL13. The remaining candidate genes were involved in signal transduction (MPZL1), protein localization and transport (GRASP and ARFRP1), cell differentiation and spermatogenesis (GMCL), Golgi apparatus organization (UBXN2B), neuron differentiation (MANF), signaling and cardiac muscle contraction (ADRBK1), cell cycle (CDK5), microtubule organization and stabilization (NCKAP5L), and stress fiber formation (NCKIPSD).

A couple of genes stood out for the team. These included BCL2L13, which is highly expressed in skeletal muscle and could be related to physical endurance, including high-endurance long-distance running, a well-known trait of the Rarámuri of northern Mexico. The KBTBD8 gene has been associated with idiopathic short stature (also found in Koreans), and the team found it to be highly differentiated in the Triqui, a southern indigenous group from Oaxaca whose height is extremely low compared to other Native populations.

"We carried out the most comprehensive characterization of potentially adaptive functional variation in Indigenous peoples from the Americas to date," said Moreno Estrada. "We identified in these populations over four thousand new variants, most of them singletons, with neutral, regulatory, as well as protein-truncating and missense annotations. The average number of singletons per individual was higher in Nahua (NAH) and Maya (MYA), which is expected given these two Indigenous groups embody the descendants of the largest civilizations in Mesoamerica, and that today Nahua and Maya languages are the most spoken Indigenous languages in Mexico. Furthermore, the generated data also allowed us to propose a demographic model inferred from genomic data in Native Mexicans and to identify possible events of adaptive evolution in pre-Columbian Mexico."

Credit: 
SMBE Journals (Molecular Biology and Evolution and Genome Biology and Evolution)

Male fertility after chemotherapy: New questions raised

image: Human sperm under the microscope

Image: 
Géraldine Delbès, INRS

A pilot study conducted by INRS researchers highlights the effect of chemotherapy on male fertility before and after puberty.

"It is often thought that cancer treatments for prepubescent boys will have no effect on their fertility because their testicles would be "dormant". But in fact, the prepubertal testis are not immune to chemotherapy that affects dividing cells and it is now well recognized that there can be long-term effects," explains Géraldine Delbès, a professor at the Institut National de la Recherche Scientifique (INRS) in Laval.

Professor Delbès, who specializes in reproductive toxicology, conducted a pilot study in collaboration with oncologists and fertility specialists from the McGill University Health Centre (MUHC) on a cohort of 13 patients, all survivors of pediatric leukemia and lymphoma. Their results, recently published in the journal PLOS ONE, raise important questions about male fertility and the long-term quality of life of cancer survivors.

"The originality of our study lies in the fact that we dissociated the effects before and after puberty and found that the effect of chemotherapy does not appear to be age-dependent. All patients were at higher risk of infertility due to the absence or low quantity of sperm produced. So it appears that there is no "period" when boys are insensitive to the toxicity of these types of chemotherapy," says Professor Delbès, the study's lead author, who has been studying oncofertility issues since 2005.

Factors still poorly understood

Large epidemiological studies have already shown that pediatric chemotherapy treatments can affect male fertility in the long term, but the effect of age at diagnosis and the type of treatment on sperm quality or production is still poorly understood.

The researchers analyzed spermograms and DNA damage in sperm from adult survivors of pediatric leukemia or lymphoma. They compared these parameters between patients diagnosed before puberty, patients diagnosed after puberty, and men without a history of cancer.

Patients in the cohort had received cocktails of several types of chemotherapy agents, including alkylating agents, which are linked to long-term decreases in sperm production.

Impaired sperm quality

In her laboratory, Géraldine Delbès is particularly interested in anthracyclines, which are used in the treatment of several cancers. The researchers show that these anthracyclines, which are thought to have no long-term effect on the quantity of sperm, could potentially affect its quality.

According to Professor Delbès, the use of anthracyclines is thought to be correlated with abnormalities in chromatin and sperm DNA over the long term. These abnormalities are often associated with infertility problems and poor embryonic development.

"We are aware of the limitations of our study due to the small number of participants, but these data are rare and particularly relevant to boys for whom sperm banking may not be feasible," adds Dr. Peter Chan, study's co-author and Director of Male Reproductive Medicine in the Department of Urology at the MUHC. "While further large-scale research is needed to confirm our preliminary results, they are already important for reproductive specialists and oncologists who counsel these young adult cancer survivors about their fertility care."

"When children with cancer are treated in pediatric oncology, the medical staff don't necessarily think about preserving their fertility for years to come as adults. The priority is to cure them," adds Professor Delbès. "However, thanks to the success of cancer treatments, long-term quality of life is becoming a major concern, and the consequences for the fertility of these individuals are still poorly understood."

This study highlights the need for further research on the fertility of men who have had pediatric cancer and the importance of educating them about the potential long-term effect of chemotherapy on male fertility, regardless of age at diagnosis.

Credit: 
Institut national de la recherche scientifique - INRS

Physicists trap light in nanoresonators for record time

image: Conversion (doubling) of light frequency using a nanoresonator

Image: 
(left) Anastasia Shalaeva; (right) Koshelev et al. Science

An international team of researchers from ITMO University, the Australian National University, and Korea University have experimentally trapped an electromagnetic wave in a gallium arsenide nanoresonator a few hundred nanometers in size for a record-breaking time. Earlier attempts to trap light for such a long time have only been successful with much larger resonators. In addition, the researchers have provided experimental proof that this resonator may be used as a basis for an efficient light-frequency nanoconverter. The results of this research have attracted great interest in the scientific community and were published in Science, one of the world's leading academic journals. The scientists suggest that this opens up drastically new opportunities for subwavelength optics and nanophotonics, including the development of compact sensors, night vision devices, and optical data transmission technologies.

The problem of manipulating the properties of electromagnetic waves at the nanoscale is of paramount importance in modern physics. Using light, we can transfer data over long distances, record and read out data, and perform other operations critical to data processing. To do this, light needs to be trapped in a small space and held there for a long period of time, which is something that physicists have only succeeded in doing with objects of a significant size, larger than the wavelength of light. This limits the use of optical signals in optoelectronics.

Two years ago, an international research team from ITMO University, the Australian National University, and the Ioffe Institute had theoretically predicted a new mechanism that allows scientists to trap light in miniature resonators much smaller than the wavelength of light and measured in hundreds of nanometers. However, until recently, no one had implemented the mechanism in practice.

An international team of researchers from ITMO University, the Australian National University, and Korea University was assembled to prove this hypothesis. First, they developed the concept: gallium arsenide was chosen as the key material, being a semiconductor with a high refractive index and a strong nonlinear response in the near-infrared range. The researchers also decided on the optimal shape for a resonator that would efficiently trap electromagnetic radiation.

In order to trap light efficiently, the ray must be reflected from the object's inner boundaries as many times as possible without escaping the resonator. One might assume that the best solution would be to make the object as complex as possible. As a matter of fact, it is just the opposite: the more faces a body has, the more likely light is to escape it. The near-ideal shape in this case was a cylinder, which possesses the minimal number of boundaries. One question remained: which ratio of diameter to height would be most effective for trapping light? After the mathematical calculations, the hypothesis had to be confirmed experimentally.

"We used gallium arsenide to create cylinders around 700 nanometers in height and with varying diameters close to 900 nanometers. They're almost invisible to the naked eye. As our experiments have shown, the reference particle had captured light for a time exceeding 200 times the period of one wave oscillation. Usually, for particles of that size the ratio is five to ten periods of wave oscillations. And we obtained 200! " says Kirill Koshelev, the the first co-author of the paper.

The scientists divided their study into two parts: one is an experimental confirmation of the theory expressed earlier, and the other is an example of how such resonators could be used. For instance, the trap has been utilized for a nanodevice capable of changing the frequency, and therefore color, of a light wave. Upon passing through this resonator, the infrared beam turned red, becoming visible to the human eye.

The frequency conversion of electromagnetic oscillations is not the only application of this technology. It also has potential applications in various sensing devices and even in special glass coatings that would make colorful night vision possible.

"If the resonator is able to efficiently trap light, then placing, say, a molecule next to it will increase the efficiency of the molecule's interaction with light by an order of magnitude, and the presence of even a singular molecule can easily be detected experimentally. This principle can be used in the development of highly-sensitive biosensors. Due to the resonators' ability to modify the wavelength of light, they can be used in night vision devices. After all, even in the darkness, there are electromagnetic infrared waves that are unseen to the human eye. By transforming their wavelength, we could see in the dark. All you'd need to do is to apply these cylinders to glasses or the windshield of a car. They'd be invisible to the eye but still allow us to see much better in the dark than we can on our own," explains Kirill Koshelev.

Besides gallium arsenide, such traps can be made using other dielectrics or semiconductors, such as, for instance, silicon, which is the most common material in modern microelectronics. Also, the optimal form for light trapping, namely the ratio of a cylinder's diameter to its height, can be scaled up to create larger traps.

Credit: 
ITMO University

Rising global temperatures turn northern permafrost region into significant carbon source

image: David Cook, a recently retired Argonne meteorologist, performs maintenance on an eddy correlation flux measurement tower, operated by the DOE-funded Atmospheric Radiation Measurement (ARM) program, in Utqiaġvik, Alaska. The tower exemplifies one of several types of instrumentation used to generate the data in this study.

Image: 
Argonne National Laboratory/Ryan Sullivan

Permafrost, the perennially frozen subsoil in Earth’s northernmost regions, has been collecting and storing plant and animal matter since long before the last Ice Age. The decomposition of some of this organic matter naturally releases carbon dioxide (CO2) into the atmosphere year-round, where it is absorbed by plant growth during the warmer months.

This region, called the northern permafrost region, is difficult to study, and experiments there are few and far between compared with those in warmer and less remote locations. However, a new synthesis that incorporates datasets gathered from more than 100 Arctic study sites by dozens of institutions, including the U.S. Department of Energy’s (DOE) Argonne National Laboratory, suggests that as global temperatures rise, the decomposition of organic matter in permafrost soil during the winter months can be substantially greater than previously thought. The new numbers indicate a release of CO2 that far exceeds the corresponding summer uptake.

“Arctic warming is driven by a combination of natural and anthropogenic greenhouse gas emissions, and these new findings indicate that natural emissions from permafrost thaw during winter may be accelerating in response to Arctic warming.” — Roser Matamala, Environmental Science division

Even more importantly, when modeling the carbon balance using the large collection of data, the scientists found that the CO2 released by permafrost soil in the winter could increase 41 percent by 2100 if human-caused greenhouse gas emissions continue at their current rate. 

The study, published in Nature Climate Change this past October, is the most comprehensive study on this phenomenon to date. It highlights the need for more research on the permafrost region’s net CO2 emissions, and it demonstrates the significant impact these emissions could have on the greenhouse effect and global warming.

The study brings together a combination of in-field measurements and laboratory-based studies — or soil incubations — like those performed at Argonne. To better understand how future warming might affect CO2 emissions in permafrost regions, the Argonne scientists sampled a variety of permafrost soils and monitored CO2 release at a range of laboratory-controlled temperatures above and below freezing that mimic typical Arctic conditions. The researchers wanted to identify how different soil properties or other factors influence the rate of decomposition and CO2 release from frozen and thawing soils — information that could help improve climate and Earth system models. 

“Climate and Earth system models often treat these winter permafrost CO2 emissions as insignificant or even non-existent,” said Roser Matamala, a scientist in Argonne’s Environmental Science division and a contributor to the study. “But this study, with its large volume of data extending over multiple seasons, shows that winter respiration is substantial and significant. The study should convince modelers that this flux of winter-time carbon to the atmosphere can no longer be overlooked. It is not small, and it needs to be taken into account.”

The northern permafrost region covers approximately 15 percent of the Earth’s land area, extending from the Arctic Ocean’s coastline through much of Alaska, northern Canada and northern Eurasia. The ever-frozen soil in these regions contains more carbon than humans have ever released, and roughly a third of the carbon stored in all of Earth’s soil exists in this region.

During the summer, plants whose roots grow in thawed soil above the perennially frozen subsoil absorb CO2 as they photosynthesize. At the same time, microbes release CO2 into the atmosphere as they actively decompose soil organic matter. In the winter, when the surface soil and underlying permafrost are both frozen, the rate of decay and the amount of CO2 returned to the atmosphere drop significantly. Yet, a small amount of microbial activity continues to decompose some of the organic matter contained in thin, unfrozen water films surrounding soil particles, releasing smaller amounts of CO2. For years, this balance was tipped toward greater absorption rather than release of CO2, but this study indicates that the loss of CO2 from permafrost soils to the atmosphere over the entire year is now greater than the uptake during the summer.

“Arctic soils have retained disproportionately large amounts of organic matter because frozen conditions greatly slow microbial decay of dead plant roots and leaves,” said Argonne soil scientist and study contributor Julie Jastrow. “But just as food in the freezer compartment of a refrigerator will spoil faster than it would in a chest freezer, the temperature of seasonally frozen soils and permafrost affects the amount of microbial activity and decomposition.”

According to the Argonne scientists, microbial activity can increase exponentially as rising global temperatures warm the permafrost to levels just below freezing. Even before permafrost thaws, the acceleration in microbial activity in permafrost soil causes acceleration of its CO2 emissions.

Based on these results and upscaling across the Arctic, the authors estimate that about 1.7 billion metric tons of CO2 are released during current winter seasons, but that only 1 billion metric tons would be taken up by photosynthesizers in the summer months.
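
The net imbalance implied by those two estimates is straightforward arithmetic, about 0.7 billion metric tons of CO2 lost to the atmosphere over a current annual cycle:

```python
# Net annual imbalance implied by the figures quoted above, in billions of
# metric tons of CO2 per year.
winter_release = 1.7   # estimated CO2 released during current winter seasons
summer_uptake = 1.0    # estimated CO2 taken up by photosynthesis in summer
print(winter_release - summer_uptake)  # ~0.7 net loss to the atmosphere
```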

Computer models also showed that if humans were to mitigate their own emissions even minimally, winter CO2 emissions in the permafrost region would still increase 17 percent by 2100.

Credit: 
DOE/Argonne National Laboratory

New survey results reveal experts' and the public's attitudes towards gene-edited crops

image: A Japanese farmer working in a field in northern Japan. The country has long worked on crop breeding, including rice. Rice originally grew in the tropics, but plant breeding techniques have enabled it to grow in colder climates. Plant breeding in early times relied on farmers' intuition, but today the three breeding technologies shown below are used.

Image: 
Photo by Hisashi Urashima

Experts' interest in utilizing gene editing for the breeding of crops has grown rapidly. Meanwhile, public awareness of food safety has also been increasing. To understand the attitudinal differences between experts and the public towards gene-edited crops, a team of Japanese researchers, led by Dr. Naoko Kato-Nitta, a research scientist at the Joint Support-Center for Data Science Research and The Institute of Statistical Mathematics, Tokyo, Japan, surveyed how Japanese experts and the public perceive gene editing versus other emerging or conventional breeding techniques in Japan, where the production of genetically modified crops is strictly regulated and not readily accepted.

Their findings were published on November 5th, 2019 in Palgrave Communications.
https://www.nature.com/articles/s41599-019-0328-4

The authors conducted experimental web-based surveys to obtain a clearer understanding of both expert and public perceptions of the benefits, risks and value of utilizing gene editing technology for the breeding of crops compared to other technologies. The participants consisted of 3,197 volunteers drawn from the lay public and from scientists with and without expertise in molecular biology.

According to the study, participants with expert knowledge of molecular biology perceived emerging technologies as offering the lowest risk and the highest benefit or value for food applications, while the lay public perceived the highest risk and the lowest benefit. Experts from other disciplines had perceptions similar to the lay public in terms of risk, but similar to the molecular biology experts in terms of value. The lay public tended to perceive gene-edited crops as being both more beneficial and more valuable than other genetically modified crops, while also posing less risk. Even though the differences in perception between gene editing and genetic modification were very small compared to the differences between genetic modification and conventional plant breeding techniques, obtaining such results from participants living in Japan may hold great promise for this emerging technology.

Additionally, "the results enabled us to elucidate the deficit model's boundary conditions in science communication by proposing two new hypotheses," said Kato-Nitta. The model's assumption is that as scientific knowledge increases, so too does public acceptance of new technologies. "Firstly, this assumption was valid only for conventional science, knowledge of which can be acquired through classroom education, but not valid for emerging science, such as gene editing, knowledge on which may be acquired mainly through informal learning," Kato-Nitta said. "Secondly, the model's assumption on emerging science is valid only for increasing benefit perceptions but not for reducing risk perceptions."

Food scarcity is becoming a worldwide problem, and famine occurs frequently in many regions of the world. One major reason is the rapid increase of the global population, which has reached 7.7 billion and is still growing, while the area of farmable land keeps shrinking for reasons such as extensive industrial or urban development, extended droughts and other extreme weather conditions. To keep up with the increasing need for crops, enhancing production through breeding has been a powerful tool for farmers to produce more from their limited resources, alongside the extensive use of fertilizers and pesticides. The recent development of recombinant DNA technology, commonly called genetic engineering, and its application to crops are believed to speed up time-consuming breeding processes and to widen the range of genetic features that can be introduced into plants, such as enhanced nutritional value and resistance to drought, frost, or pests.

Gene editing technology has been tantalizing for molecular biologists and, in theory, offers significant potential to improve global food security. However, many people are not convinced and remain somewhat skeptical, preferring a more cautious approach to how we produce the food that ultimately goes into our bodies and significantly contributes to our overall health and well-being.

There are two viewpoints concerning gene editing. The first, known as product-based policy, views gene editing as a technology through which no foreign genetic material is added, making it more similar to conventional plant breeding than to genetic modification. The second, known as process-based policy, views gene editing as similar to genetic modification, since both require genetic manipulation to achieve the desired results; gene editing can just do so more quickly. In the study, experts in molecular biology tended to adopt the product-based view, while non-specialists tended to take the process-based view.

For many, the potential benefits associated with utilizing these unconventional plant breeding methods are not worth the potential risks. But are these attitudes and beliefs influenced by a lack of sufficient scientific knowledge of the subject, and can they be changed if information is passed on from experts in an effort to improve public scientific literacy? This concept, known as the deficit model of science communication, attributes public skepticism of science and technology to a lack of understanding, arising from a lack of scientific literacy or knowledge of the subject. The present research statistically elucidates where such explanations are valid and where they are not.

According to Kato-Nitta, their findings suggest that people perceive gene editing to have greater potential than genetic modification, especially in terms of the benefits of utilizing this technology. "We still have to be cautious in terms of people's attitude toward the risk and value aspects associated with this technology," she noted. "In the survey, the experts in other fields perceived even more risk in gene editing than in genetic modification in terms of the 'possibility of misusing this technology.'"

"I hope my research will help to narrow the gap between scientific experts and the public." said Kato-Nitta. "The scientific experts need to understand the diverse range of people outside their domain-specific community. I am currently working on developing a new model on public communication of science and technology that can explain the key factors that affect various facets of people's attitudes toward emerging science more comprehensively than the previous studies have done."

Credit: 
Research Organization of Information and Systems

PET/MRI identifies notable breast cancer imaging biomarkers

image: A 50-y-old postmenopausal woman with fibroadenoma (arrows) in left breast. (A) Unenhanced fat-saturated T1-weighted MRI shows extreme amount of FGT (ACR d). (B) Moderate BPE is seen on dynamic contrast-enhanced MRI at 90 s. (C) Mean ADC of breast parenchyma of contralateral breast on diffusion-weighted imaging with ADC mapping is 1.5 × 10⁻³ mm²/s. (D) On 18F-FDG PET/CT, lesion is not 18F-FDG-avid, and BPU of normal breast parenchyma is relatively high, with SUVmax of 3.2.

Image: 
K Pinker, et al., Medical University of Vienna, Vienna, Austria

Reston, VA--Researchers have identified several potentially useful breast cancer biomarkers that indicate the presence and risk of malignancy, according to new research published in the January issue of The Journal of Nuclear Medicine. By comparing healthy contralateral breast tissue of patients with malignant breast tumors and benign breast tumors, researchers found that multiple differences in biomarkers can be assessed with PET/MRI imaging, which could impact risk-adapted screening and risk-reduction strategies in clinical practice.

In breast cancer, early detection remains key to improved prognosis and survival. While screening mammography has decreased mortality for breast cancer patients by 30 percent, its sensitivity is limited and is decreased in women with dense breast tissue. "Such shortcomings warrant further refinements in breast cancer screening modalities and the identification of imaging biomarkers to guide follow-up care for breast cancer patients," said Doris Leithner, MD, research fellow at Memorial Sloan Kettering Cancer Center in New York, New York. "Our study aimed to assess the differences in 18F-FDG PET/MRI biomarkers in healthy contralateral breast tissue among patients with malignant or benign breast tumors."

The study included 141 patients with imaging abnormalities on mammography or sonography and a tumor-free contralateral breast. The patients underwent combined PET/MRI of the breast with dynamic contrast-enhanced MRI, diffusion-weighted imaging (DWI) and the radiotracer 18F-FDG. In all patients, several imaging biomarkers were recorded in the tumor-free breast: background parenchymal enhancement and fibroglandular tissue (from MRI), mean apparent diffusion coefficient (from DWI) and breast parenchymal uptake (from 18F-FDG PET). Differences among the biomarkers were analyzed by two independent readers.

A total of 100 malignant and 41 benign lesions were assessed. In the contralateral breast tissue, background parenchymal enhancement and breast parenchymal uptake were decreased and differed significantly between patients with benign and malignant lesions. The difference in fibroglandular tissue approached but did not reach significance, and the mean apparent diffusion coefficient did not differ between the groups.

"Based on these results, tracer uptake of normal breast parenchyma in 18F-FDG PET might serve as another important, easily quantifiable imaging biomarker in breast cancer, similar to breast density in mammography and background parenchymal enhancement in MRI," Leithner explained. "As hybrid PET/MRI scanners are increasingly being used in clinical practice, they can simultaneously assess and monitor multiple imaging biomarkers--including breast parenchymal uptake--which could consequently contribute to risk-adapted screening and guide risk-reduction strategies."

Credit: 
Society of Nuclear Medicine and Molecular Imaging

Are BMD and CT-FEA effective surrogate markers of femoral bone strength?

video: Virtual stress testing at the hip by finite element analysis for a simulated sideways fall. Grey regions depict bone density (white is more dense tissue) and colored regions depict failed bone tissue.

Image: 
Video courtesy of Dr. David Lee, O.N. Diagnostics, Berkeley, CA.

A new International Osteoporosis Foundation (IOF) position paper reviews experimental and clinical evidence showing that hip bone strength estimated by bone mineral density (BMD) and/or finite element analysis (FEA) reflects the actual strength of the proximal femur. The paper, 'Perspectives on the non-invasive evaluation of femoral strength in the assessment of hip fracture risk', published in Osteoporosis International, is authored by experts from the IOF Working Group on Hip Bone Strength as a Therapeutic Target.

Professor Serge Ferrari, corresponding author and co-chair of the IOF Working Group, noted: "With the number of debilitating hip fractures increasing worldwide, there is a pressing need to prioritize the development of accurate methods for estimating bone strength in vivo and predicting hip fracture risks. Validation of surrogate endpoints for fracture could potentially lead to shorter and less expensive clinical trials, possibly spurring innovations of new drugs and procedures which might otherwise not be investigated due to the high cost of conducting a clinical trial with fracture outcomes."

The authors extensively reviewed relevant experimental and clinical studies, examining associations between experimentally measured femoral strength and areal BMD or FEA estimated strength; surrogates of hip strength (densitometric and structural variables, and FEA); predictive power for hip fracture of computed-tomography (CT)-based and DXA-based FE approaches; effects of osteoporosis treatment on bone mass, FEA and bone strength in pre-clinical studies; and effects of osteoporosis treatment on FEA estimates of bone strength in clinical trials.

Professor Mary L. Bouxsein, first author of the paper and Working Group co-chair, stated: "The findings of this extensive review confirm that femoral areal BMD and bone strength estimates by CT-FEA are good predictors of fracture risk and are excellent candidates to replace fracture endpoints in clinical trials."

The authors also conclude that further improvements of FEA may be achieved by incorporating trabecular orientations, enhanced cortical modelling, the effects of aging on bone tissue ductility, and multiple sideways-fall loading conditions.

Credit: 
International Osteoporosis Foundation

Self-moisturising smart contact lenses

image: Illustration of a self-moisturizing soft contact lens that supplies tears via electroosmotic flow from the temporary tear reservoir behind the lower eyelid.

Image: 
Tohoku University

Researchers at Tohoku University have developed a new type of smart contact lenses that can prevent dry eyes. The self-moisturising system, which is described in the journal Advanced Materials Technologies, maintains a layer of fluid between the contact lens and the eye using a novel mechanism.

Smart contact lenses are wearable devices that could extend vision beyond natural human capabilities. They are being developed for a wide range of applications, from non-invasive monitoring to vision correction to augmented reality displays.

"Although there have been many recent advancements in new functions for smart contact lenses, there has been little progress in solving the drawbacks associated with wearing contact lenses day to day," says Professor Matsuhiko Nishizawa, an engineer at Tohoku University.

One of the biggest problems with contact lenses is they can cause "dry eye syndrome" due to reduced blinking and increased moisture evaporation. Dry eye syndrome can lead to corneal wounds and inflammation as well as a feeling of discomfort.

In order to tackle this important problem, the researchers developed a new mechanism that keeps the lens moist. The system uses electroosmotic flow (EOF), which causes liquid to flow when a voltage is applied across a charged surface. In this case, a current applied to a hydrogel causes fluid to flow upwards from the patient's temporary tear reservoir behind the lower eyelid to the surface of the eye.
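
For a rough sense of the flow speeds involved, electroosmotic flow is commonly estimated with the Helmholtz-Smoluchowski relation, v = εζE/μ. The sketch below plugs in generic order-of-magnitude values (the permittivity and viscosity of water, a zeta potential of a few tens of millivolts, a modest applied field); none of these numbers come from the paper.

```python
# Order-of-magnitude electroosmotic flow (EOF) velocity from the
# Helmholtz-Smoluchowski relation v = eps * zeta * E / mu.
# All values are generic assumptions, not parameters from the study.
eps_r = 80.0         # relative permittivity of water
eps_0 = 8.854e-12    # vacuum permittivity, F/m
zeta = 0.03          # zeta potential magnitude, V (~30 mV, a typical surface value)
mu = 1.0e-3          # dynamic viscosity of water, Pa*s
E = 1.0e3            # applied electric field, V/m (about 1 V over 1 mm)

v = eps_r * eps_0 * zeta * E / mu
print(v)             # ~2e-5 m/s, i.e. tens of micrometers per second
```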

"This is the first demonstration that EOF in a soft contact lens can keep the lens moist," says Nishizawa.

The researchers also explored the possibility of using a wireless power supply for the contact lenses. They tested two types of battery, a magnesium-oxygen battery and an enzymatic fructose-oxygen fuel cell, both of which are known to be safe and non-toxic for living cells. They showed that the system can be successfully powered by these biobatteries, which can be mounted directly on the charged contact lens.

Further research is needed to develop improved self-moisturizing contact lenses that are tougher and capable of operating at smaller currents.

"In the future, there is scope to expand this technology for other applications, such as drug delivery," says Nishizawa.

Credit: 
Tohoku University

Quo vadis Antarctic bottom water?

Ocean currents are essential for the global distribution of heat and thus also for climate on earth. For example, oxygen is transferred into the deep sea through the formation of new deep water around Antarctica. Weddell Sea sourced Antarctic Bottom Water (AABW) normally spreads northwards into the South Atlantic and Indian Oceans. However, during the peak of the last two ice ages, the supply of deep water from the Weddell Sea to the South Atlantic Ocean was apparently interrupted, as shown by a new study led by scientists of the GEOMAR Helmholtz Centre for Ocean Research Kiel.

"Up to now, it has been widely assumed that Antarctic Bottom Water was also formed during the ice ages and exported to large parts of the Southern Ocean", explains Dr. Marcus Gutjahr, co-author of the study from GEOMAR. "It may still be possible that deep water formation indeed took place, but unlike today, it was not circulating into the South Atlantic Ocean further north", Gutjahr continues. Most likely, generally weaker circulation within the Southern Ocean during cold periods was responsible for this interruption of AABW export.

The authors of the study, which has now been published in the scientific journal Nature Communications, have analyzed various sediment cores from this region. The scientists were able to determine the origin of Southern Ocean deep water over the last two major glaciation phases of the last 140,000 years via the chemical extraction of seawater-derived neodymium and lead isotope signatures from sediments. "While neodymium isotopes dissolved from the sediments provide information about the origin of the bottom water, lead isotope signatures provide information about the average composition of the entire water column", explains the first author of the study, Dr. Huang Huang from GEOMAR.

Some of the results of the study were surprising in that similar disturbances of AABW export were also seen during the climatic optimum of the last interglacial (warm period) about 128,000 years ago. These could have been caused by strong melting, especially in the area of the West Antarctic Ice Sheet, an effect that will most likely occur in a warmer climate in the future. "As a result of such a disturbance in the overturning circulation, the total heat budget of the Southern Ocean and its ability to absorb heat from the atmosphere may be significantly altered in the long term," says Dr. Gutjahr.

The properties of the AABW formed today have already changed significantly over the past decades. It is now warmer, less saline and less voluminous, in turn indicating lower rates of formation. The bottom line is that the new results suggest the conditions required for the export of AABW from the Weddell Sea are not present in either extremely cold or extremely warm phases. These very clear insights into the circulation state of the Southern Ocean during extreme past climates would not have been achievable without a set of unique sediments from the Alfred Wegener Institute (AWI) in Bremerhaven and the active collaboration with co-author Dr. Gerhard Kuhn there.

As a next step, Dr. Gutjahr wants to investigate the export of AABW from the Weddell Sea closer to Antarctica, in its outflow area into the Scotia Sea, over climatically unstable periods of the last million years. For this purpose, new samples are available, which were obtained during an international expedition within the International Ocean Discovery Program (IODP Expedition 382) in early 2019. In the medium term, samples from other regions of the Southern Ocean are to be added in order to investigate the dispersal paths into other ocean basins in more detail. It is a rather lengthy piece of detective work through which the researchers want to learn more about the control mechanisms of the climate of the Southern Hemisphere. "Ultimately, our goal is to be able to predict under which climatic conditions which parts of the Antarctic ice sheet will substantially melt, and what direct effects this will have on the circulation of the Southern Ocean," Gutjahr concluded.

Credit: 
Helmholtz Centre for Ocean Research Kiel (GEOMAR)