Tech

The future of solar technology: New technology makes foldable cells a practical reality

image: Foldable Perovskite Solar Cells Using Carbon Nanotube-Embedded Ultrathin Polyimide Conductor

Image: 
Pusan National University

With the recent development of foldable mobile phone screens, research on foldable electronics has never been more intensive. One particularly promising application of foldable technology is in solar panels.

Current solar cells are restricted to rigid, flat panels, which are difficult to store in large numbers and to integrate into everyday objects such as phones, windows, vehicles, or indoor devices. But one problem prevents this formidable technology from breaking through: to be integrated into these items, solar cells need to be foldable, able to bend at will repeatedly without breaking. Traditional conducting materials used in solar cells lack flexibility, creating a huge obstacle to developing fully foldable cells.

A key requirement for an efficient foldable conductor is the ability to withstand the pressure of bending within a very small radius while maintaining its integrity and other desirable properties. In short, a thin, flexible, transparent, and resilient conductor material is needed. Professor Il Jeon of Pusan National University, Korea, elaborates, "Unlike merely flexible electronics, foldable devices are subject to much harsher deformations, with folding radii as small as 0.5 mm. This is not possible with conventional ultra-thin glass substrates and metal oxide transparent conductors, which can be made flexible but never fully foldable."

Fortunately, an international team of researchers, including Prof. Jeon, has found a solution, in a study published in Advanced Science. They identified a promising candidate that answers all of these requirements: single-walled carbon nanotube (SWNT) films, owing to their high transparency and mechanical resilience. The only problem is that SWNTs struggle to adhere to the substrate surface when force is applied (such as bending) and require chemical doping. To address this problem, the scientists embedded the conducting layer into a polyimide (PI) substrate, filling the void spaces in the nanotubes.

To ensure maximum performance, they also "doped" the resulting material to increase its conductivity. By introducing small amounts of an electron-withdrawing impurity (in this case, molybdenum oxide) into the SWNT-PI nanocomposite layer, the energy needed for electrons to move across the structure is much smaller, so charge flows through the material more readily.

Their resulting prototype far exceeded the team's expectations. Only 7 micrometers thick, the composite film exhibited exceptional resistance to bending, almost 80% transparency, and a power conversion efficiency of 15.2%, the highest ever achieved in solar cells using carbon nanotube conductors! In fact, as pointed out by Prof. Jeon, "The obtained results are some of the best among those reported thus far for flexible solar cells, both in terms of efficiency and mechanical stability."

With this novel breakthrough in solar harvesting technology, one can only imagine what next-generation solar panels will look like.

Credit: 
Pusan National University

Physicists have optimized the method of smelting the MAX phase

image: Working

Image: 
IKBFU

MAX-phases are a new and promising class of artificially created compounds that have been studied extensively over the last two decades. They are a family of ternary layered compounds with the general formula Mn+1AXn (n = 1, 2, 3, ...), where M is an early transition metal (Sc, Ti, V, Cr, and so on; elements from the left side of the d-block of the periodic table, from group III to group VII); A is an element from group IIIA or IVA (most commonly Al, Ga, Si, or Ge); and X is carbon or nitrogen, making the MAX phase a carbide or a nitride, respectively.

Due to their structure and composition, ternary layered carbides and nitrides of d- and p-elements have a unique combination of physical properties. These compounds have high electrical and heat conductivity, as metals do. At the same time, they are resistant to damage, chemical attack, oxidation and thermal shock, which is usually characteristic of ceramics. In addition, these materials are easily machinable, have a high melting point, and remain quite stable at temperatures up to 1000°C and above. Even though little is known about the magnetic properties of MAX-phases, these materials are considered suitable for creating spintronic and microelectronic devices, magnetic cooling systems and other revolutionary technologies.

The Cr2AlC compound, both in pure form and doped with manganese according to the formula (Cr1-xMnx)2AlC, is considered a promising candidate for discovering various unusual magnetic effects. In order to understand the changes in its properties caused by the addition of manganese, it was necessary to obtain a highly pure (impurity-free) initial phase.

Scientists from the Immanuel Kant Baltic Federal University have optimized the method of MAX-phase synthesis via arc melting. In this technique, mixtures of pure elements are melted by a plasma arc discharge at temperatures above 3000°C in an inert atmosphere.

"Arc melting can be extremely effective in terms of increasing the content of manganese in the substituted MAX-phase (Cr1-xMnx)2AlC MAX-phase. When conventional synthesis methods, such as hot sintering and self-propagating high-temperature synthesis, are used manganese atoms poorly incorporate into the MAX-phase. The additional energy, provided to them by the high-energy plasma arc contributes to the qualitative enhancement of doping," says Kirill Sobolev, an engineer of the laboratory of new magnetic materials of the IKBFU.

The physicists studied how the initial ratio of the components influences the structure that forms and determined which ratio gave the purest phase. To do this, they varied the amount of aluminum relative to chromium and carbon, optimizing the 2Cr : xAl : 1C ratio, where x ranged from 1 to 1.5. They found that an excess of aluminum at the start of the process increased the purity of Cr2AlC, and that a 2Cr : 1.3Al : 1C starting composition yielded, after annealing, samples that were predominantly MAX-phase (nearly single-phase material), because the increased concentration of aluminum in the initial mixture compensates for its greater tendency to evaporate during melting. With other ratios, additional phases appeared in the samples, Cr7C3 and Cr5Al8, resulting from an excess of one element or another in the mixture.
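As a rough illustration of what such a stoichiometry means in practice, the snippet below converts the 2Cr : 1.3Al : 1C molar ratio into weighed masses for an arc-melting charge. This is not from the study; the 5-gram batch size is a hypothetical example.

```python
# Illustrative only: convert the 2Cr : 1.3Al : 1C molar ratio into charge masses
# for an arc-melting batch. The 5 g total is a hypothetical batch size, not a
# value from the study.

MOLAR_MASS = {"Cr": 51.996, "Al": 26.982, "C": 12.011}  # g/mol
RATIO = {"Cr": 2.0, "Al": 1.3, "C": 1.0}                # moles per formula unit

def charge_masses(total_mass_g):
    """Return the mass of each element (in grams) for a given total charge mass."""
    mass_per_unit = {el: RATIO[el] * MOLAR_MASS[el] for el in RATIO}
    unit_mass = sum(mass_per_unit.values())
    return {el: total_mass_g * m / unit_mass for el, m in mass_per_unit.items()}

for element, grams in charge_masses(5.0).items():
    print(f"{element}: {grams:.3f} g")
```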

Next, while maintaining the optimal ratio of Cr, Al, and C, the scientists varied the arc melting parameters to further improve the phase purity of the samples. They controlled the pressure in the furnace chamber and the duration of the subsequent annealing and studied their effects on the quality of the samples. Increased pressure in the melting chamber acted much like an increased concentration of aluminum, preventing the formation of the by-product phases Cr7C3 and Cr5Al8. Pressure control proved even more effective, tuning phase purity more precisely than varying the stoichiometry.

The scientists also tested whether the Cr2AlC MAX-phase could be obtained from a Cr3C2 precursor instead of the mixture of Cr and C used in the original protocol. Given the relatively high melting point of carbon (which is difficult to reach in a melting chamber), using Cr3C2, which melts at a much lower temperature, was expected to improve the quality of the synthesized MAX-phase by distributing carbon faster and more uniformly through the samples. However, it turned out that the use of Cr3C2 had little effect on the amount of side phases.

"In the future, we will use these results for the synthesis of the doped (Cr1-xMnx)2AlC MAX-phase. Some preliminary experiments indicate that we are able to overcome already published values of the manganese incorporation inside this structure. If at the same time it is possible to obtain samples that do not have side phases in their volume and to analyze their magnetic properties, this will be extremely important, first of all, for the fundamental understanding of the magnetism of MAX-phases," explained Kirill Sobolev.

Credit: 
Immanuel Kant Baltic Federal University

Origami powered by light

image: Researchers at Pitt and CMU found that by forming light-reactive polymer into a curved shape, as shown here, the bending action happened much more quickly and generated more torque.

Image: 
Mahnoush Babaei

If you watch the leaves of a plant long enough, you may see them shift and turn toward the sunlight through the day. It happens slowly, but surely.

Some man-made materials can mimic this slow but steady reaction to light energy, usually triggered by lasers or focused ambient light. New research from the University of Pittsburgh and Carnegie Mellon University has discovered a way to speed up this effect enough that its performance can compete against electrical and pneumatic systems.

"We wanted to create machines where light is the only source of energy and direction," explained M. Ravi Shankar, professor of industrial engineering and senior author of the paper. "The challenge is that while we could get some movement and actuation with light-driven polymers, it was too slow of a response to be practical."

When the polymer sheet is flat, the light animates it slowly, curving or curling over time. The researchers found that by forming the polymer into a curved shape, like a shell, the bending action happened much more quickly and generated more torque.

"If you want to move something, like flip a switch or move a lever, you need something that will react quickly and with enough power," said Shankar, who holds a secondary appointment in mechanical engineering and materials science. "We found that by applying a mechanical constraint to the material by confining it along on the edges, and embedding judiciously thought-out arrangements of molecules, we can upconvert a slow response into something that is more impulsive."

The researchers used a photoresponsive azobenzene-functionalized liquid crystalline polymer (ALCP) film that is 50 micrometers thick and several millimeters in width and length. A shell-like geometry was created by confining this material along its edges to create a curve. Shining light on this geometry folds the shell at a crease that spontaneously nucleates. This folding occurs within tens of milliseconds and generates torque densities of up to 10 newton-meters per kilogram (10 Nm/kg). The light-driven response is magnified by about three orders of magnitude compared with the flat material.

"The outcomes of the project are very exciting because it means that we can create light powered actuators that are competitive with electrical actuators," said Kaushik Dayal, coauthor and professor of civil and environmental engineering at CMU.

"Our approach towards scaling up the performance of light-driven polymers could reinvent the design of fully untethered soft robots with numerous technological applications," added lead author and post-doctoral researcher at CMU Mahnoush Babaei.

Credit: 
University of Pittsburgh

Flooding in the Columbia River basin expected to increase under climate change

image: A 2019 flood along the Willamette River.

Image: 
David Baker, Oregon State University

CORVALLIS, Ore. - The Columbia River basin will see an increase in flooding over the next 50 years as a result of climate change, new modeling from Oregon State University indicates.

The magnitude of flooding - the term used to describe flooding severity - is expected to increase throughout the basin, which includes the Columbia, Willamette and Snake rivers and hundreds of tributaries. In some areas, the flooding season will expand, as well.

"The flood you're used to seeing out your window once every 10 years will likely be larger than it has been in the past," said the study's lead author, Laura Queen, a research assistant at OSU's Oregon Climate Change Research Institute.

The findings are based on natural river conditions and do not take into account potential flood control measures, including dams, but the increases are significant nonetheless, said study co-author Philip Mote, a professor in the College of Earth, Ocean, and Atmospheric Sciences and dean of the Graduate School at OSU.

"We don't know how much of this increased flood risk can be managed through mitigation measures until we study the issue further," Mote said. "But managing a 30% to 40% increase, as is predicted for many areas, is clearly beyond our management capabilities."

The findings were published recently in the journal Hydrology and Earth System Sciences. Co-authors are David Rupp of the Oregon Climate Change Research Institute and Oriana Chegwidden and Bart Nijssen of the University of Washington.

The study emerged out of Queen's work on her honors thesis as an undergraduate in the University of Oregon's Robert D. Clark Honors College. Queen, a Corvallis native, continued the work at OCCRI and is now enrolled in a doctoral program at Victoria University of Wellington in New Zealand.

The goal of Queen's research was to better understand how flooding in the Columbia River basin might change as the planet warms. The Columbia River drains much of the Pacific Northwest, including portions of seven states and British Columbia. It has the fourth-largest streamflow volume in the United States.

The Pacific Northwest has a history of costly and disruptive flooding. The largest flood in modern history occurred in late spring 1948 when flooding from the Columbia River destroyed the city of Vanport, Oregon, displacing more than 18,500 people. Floods on the Chehalis River in 2007 and 2009 closed Interstate 5 in Washington and floods along the Willamette River in 1996 and 2019 caused hundreds of millions of dollars in damage.

Queen ran simulations using hydrology models and a previously collected set of streamflow data for 396 sites throughout the Columbia River basin and other watersheds in western Washington. The data included a 50-year window from the past, 1950-1999, as well as a 50-year window of expected streamflows in the future, 2050 to 2099, that was developed using several different climate models.
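For context, the 10- and 100-year flood magnitudes discussed below are return-period estimates. A common way to obtain them, sketched here with made-up numbers and not necessarily the method used in the study, is to fit an extreme-value distribution to a series of annual peak flows:

```python
# Illustrative sketch: estimate 10-year and 100-year flood magnitudes by fitting
# a Gumbel (extreme-value) distribution to annual peak flows. The sample data
# are invented; this is a generic approach, not the study's exact method.
import numpy as np
from scipy import stats

annual_peaks = np.array([820, 910, 760, 1040, 880, 1200, 950, 870, 1010, 930,
                         1100, 840, 980, 1150, 900])  # hypothetical flows, m^3/s

loc, scale = stats.gumbel_r.fit(annual_peaks)
for return_period in (10, 100):
    # The N-year flood is exceeded, on average, once every N years
    non_exceedance = 1.0 - 1.0 / return_period
    magnitude = stats.gumbel_r.ppf(non_exceedance, loc, scale)
    print(f"{return_period}-year flood: {magnitude:.0f} m^3/s")
```

Fitting the same distribution to simulated future peak flows and comparing the two estimates gives the kind of percentage changes reported below.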

Previous studies predicting future streamflows showed mixed results, but the results of this new analysis were clear and surprising, Mote said.

"This was the best and most complete set of data," he said. "It shows that the magnitude of one-, 10- and 100-year floods is likely go up nearly everywhere in the region. These are profound shifts."

The Willamette River and its tributaries are expected to see the biggest increase in flooding magnitude, with 50% to 60% increases in 100-year floods. The increases are expected to be smaller downstream and larger upstream.

On the Snake River, streamflows will grow larger as they move downstream until they reach the confluence of the Salmon River tributary and then will drop abruptly. Parts of the Snake River will see a 40% increase in 10-year floods and a 60% increase in 100-year floods. But below the confluence with the Salmon River on the Oregon-Idaho border, the increase drops to 20% for 10-year floods and 30% for 100-year floods.

The model also suggests a significant increase in the flood season on the Snake River, which is largely concentrated in late spring now but could start as early as December or January in the future, Mote said.

One of the drivers of the change is warmer winters that will see precipitation fall more as rain instead of snow. Lower spring snowpack will lead to earlier spring streamflows in many rivers. The cold upper Columbia River basin in Canada is projected to experience little change in snowpack volume, but the snow will melt faster.

The study's findings could have implications for flood management policy in the coming decades, Mote said. A logical next step in the research is to run the models again and include existing dams to see the role they may play in mitigating flooding.

"This work provides information and impetus for the people who manage flood risk," he said. "We'll need to know how much of this can be mitigated by existing flood control."

Credit: 
Oregon State University

Google Scholar renders documents not in English invisible

The visibility of scientific articles and conference papers depends on their being easily found in academic search engines, especially Google Scholar. To enhance this visibility, search engine optimization (SEO) has in recent years been applied to academic search engines in order to optimize documents and thereby ensure they are ranked higher on search pages (i.e., academic search engine optimization, or ASEO).

Recent research, published in Future Internet, has examined whether the language of a document is a factor in the algorithm that sorts search results on Google Scholar. The study authors are Cristòfol Rovira, Lluís Codina and Carlos Lopezosa, members of the Department of Communication at UPF.

"To implement this optimization we need to further our understanding of Google Scholar's relevance ranking algorithm, so that, based on this knowledge, we can highlight or improve those characteristics that academic documents already present and which are taken into account by the algorithm", says Rovira, first author of the study. To prevent fraudulent practices, Google Scholar does not explain this algorithm and, therefore, this kind of research becomes necessary.

For the study, the authors applied a reverse-engineering research methodology based on statistical analysis using Spearman's correlation coefficient. Three different types of search were conducted, yielding a sample of 45 searches each with 1,000 results (45,000 documents): by author, by year, and by keyword.
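Spearman's coefficient measures how well one ranking tracks another. As a minimal sketch of the kind of check involved (with invented numbers, not the study's data), one can correlate each document's citation count with its position in the results page:

```python
# Minimal sketch: rank correlation between citation counts and result positions.
# The numbers are invented for illustration; they are not the study's data.
from scipy.stats import spearmanr

citations        = [512, 240, 1300, 87, 45, 980, 23, 410, 5, 150]  # per document
position_in_serp = [3, 12, 1, 40, 55, 2, 70, 9, 93, 25]            # 1 = top result

rho, p_value = spearmanr(citations, position_in_serp)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A strongly negative rho means highly cited documents tend to appear near the top.
```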

Quality articles with hundreds of citations are treated in a discriminatory manner

The results show that when a search is performed on Google Scholar with results in various languages, the vast majority (90%) of documents in languages other than English are systematically relegated to positions that render them totally invisible. These documents are almost always placed beyond position 900, even though they are quality articles with hundreds of citations. Thus, it can be stated that Google Scholar discriminates against documents not written in English in searches with multilingual results.

A lack of awareness of this factor could be detrimental to researchers from all over the non-English-speaking world, making them believe that there is no literature in their national language when they conduct searches with multilingual results.

"This is particularly the case in the most frequent searches, that is, those conducted by year. Nevertheless, it can also occur in searches using certain keywords that are the same in languages around the world, including trademarks, chemical compounds, industrial products, acronyms, drugs, and diseases, with Covid-19 being the most recent example", the study authors reveal.

And they add "moreover, if we consider the results of this study from the perspective of ASEO, it is more than evident that until this bias is addressed, the chances of being ranked in a multilingual Google Scholar search increase remarkably if the researchers opt for publication in English".

Graph of the results of the study

The scatter plot above summarizes the research results. There are 45,000 dots, one per document. Grey dots represent documents written in English, red dots represent documents in other languages, and blue marks the median positions.

The graph shows how articles written in languages other than English appear beyond the 900th position in the Google Scholar ranking. This is so even for quality documents that have hundreds of citations and are well placed in the ranking by number of citations.

The most striking cases are the red dots located in the bottom-right corner. They correspond to documents written in languages other than English that sit within the top 100 when ranked by number of citations, yet beyond position 900 in the Google Scholar ranking. This means that all of them receive over a thousand citations and yet appear in Google Scholar in the same positions as documents in English cited just a few dozen times.

Credit: 
Universitat Pompeu Fabra - Barcelona

Infant and toddler food product names may not accurately reflect ingredient amounts

UNIVERSITY PARK, Pa. -- The descriptions on the fronts of infant and toddler food packages may not accurately reflect the actual ingredient amounts, according to new research. The team found that vegetables in the U.S. Department of Agriculture's "dark green" category were very likely to appear in the product name, but their average order in the ingredient list was close to fourth. In contrast, juice and juice concentrates that came earlier on the ingredient list were less likely to appear in product names.

"Early experiences with food can mold children's preferences and contribute to building healthful, or unhealthful, eating habits that last a lifetime," said author Alyssa Bakke, staff sensory scientist, Penn State. "Our previous work found combining vegetables with fruits reduced the amount of vegetable flavor adults perceived, as the fruit flavors were more pronounced. Other research has also indicated that parents predominantly use front-of-package information to make purchasing decisions. This means that when children are given commercial foods, they may be receiving less exposure to vegetable flavors than their parents assume based on the way the products are labeled and marketed."

The team created a database of over 500 commercial infant and toddler foods containing vegetables and documented the inclusion of each vegetable and fruit in the product name; the form of the vegetable or fruit, such as whole, puree, juice or juice concentrate; and the position of the vegetable or fruit in the ingredient list. The researchers classified the vegetables based on the Department of Agriculture's categories--Dark Green, Red/Orange, Legumes, Starchy and Other. They classified the fruits into two categories: Common Fruits, comprising pears, apples and grapes, and Other Fruits, including mangos, pineapples and cherries.

The team conducted statistical analyses to examine associations between: (1) vegetable and fruit category and inclusion in front-of-package product name; (2) vegetable and fruit form and inclusion in front-of-package product name; and (3) vegetable and fruit form and inclusion in front-of-package product name, by vegetable and fruit category.
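The specific tests used by the team are not named here, but an association of this kind is often checked with a chi-square test of independence on a contingency table, as in the sketch below (the counts are hypothetical):

```python
# Illustrative sketch: test whether an ingredient category is associated with
# appearing in the front-of-package product name. Counts are hypothetical and
# the study's actual statistical test may differ.
from scipy.stats import chi2_contingency

#                      in name   not in name
table = [[42, 18],   # dark green vegetables
         [35, 65]]   # juice / juice concentrates

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
# A small p-value indicates that category and name inclusion are associated.
```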

"There was never an instance in which a vegetable or a fruit listed in the product name did not appear in the ingredient list," said John Hayes, professor of food science, Penn State. "However, we still observed a disconnect between product names and ingredient lists. The front-of-pack labels did not always accurately represent the amount of various ingredients in the product, which are listed in descending order. This means parents may not be buying what they are hoping to buy if they only look at the name."

Specifically, the team found that dark green vegetables were more likely than expected to appear in product names; yet, their average order in the ingredient list was close to fourth. Interestingly, the team found that common fruits were less likely than expected to be included in the product names when found as juice/juice concentrates, but more likely than expected to be included in product names when no form was listed.

"Fruit juice and fruit juice concentrates were found in almost all of the food products we examined, but were often excluded from product names, presumably to avoid drawing attention to the use of juice concentrates as sweeteners," said Mackenzie Ferrante, graduate student, Colorado State University, and lead author on the paper.

Ferrante added that in the ingredient lists, fruits tended to be positioned close to the beginning of the ingredient list on the back or side of the package, indicating that the products were composed more of fruits than the vegetables suggested by front-of-package labeling.

"Companies producing infant and toddler foods sometimes use nutrition-related statements that can confuse the consumer and are intended to sway consumers to purchase their product," said Susan Johnson, professor of pediatrics at the University of Colorado Anschutz Medical Campus. "Parents believe vegetables are important for their children's health and that presumably they are purchasing food they believe contain significant amounts of vegetables because of front-of-package labeling. The discrepancies between which foods are included in the product name, where these foods fall on the ingredient lists, and whether these products actually taste like vegetables are key concerns to communicate to parents."

Credit: 
Penn State

Nanoparticle gel unites oil and water in manufacturing-friendly approach

image: Unlike other gel-creation approaches, where nanoparticles remain at the interface between the gel's two constituent solvents (top left), the new approach concentrates nanoparticles in the interior of one of the solvents (top right), giving the resulting "SeedGel" unusual mechanical strength. The method could lead to gels that could be manufactured at industrial scales for a wide variety of potential applications.

Image: 
N. Hanacek / NIST

Oil and water may not mix, but adding the right nanoparticles to the recipe can convert these two immiscible fluids into an exotic gel with uses ranging from batteries to water filters to tint-changing smart windows. A new approach to creating this unusual class of soft materials could carry them out of the laboratory and into the marketplace.

Scientists at the National Institute of Standards and Technology (NIST) and the University of Delaware have found what appears to be a better way to create these gels, which have been an area of intense research focus for more than a decade. Part of their potentially broad utility is the complex set of interconnected microscopic channels that form within them, creating a spongelike structure. These channels not only offer passageways for other materials to travel through, making them useful for filtration, but also give the gel a high amount of internal surface area, a characteristic valuable for speeding up chemical reactions or as scaffolding on which living tissue can grow.

While these and other advantages make it sound like gel innovators have struck oil, their creations have not yet mixed well with the marketplace. The gels are commonly formed of two liquid solvents mingled together. As with oil and water, these solvents do not mix well, but to prevent them from completely separating, researchers add custom-designed nanoparticles that can stay at the interface between them. Carefully cooking these ingredients allows a cohesive gel to form. However, the process is demanding because custom-designing nanoparticles for each application has been difficult, and forming the gels has required carefully controlled rapid temperature change. These constraints have made it hard to create this type of gel in any more than small quantities suitable for lab experiments rather than on an industrial scale.

As described in a new Nature Communications paper, the NIST/Delaware team has found ways to sidestep many of these problems. Its novel approach forms what the researchers refer to as a "SeedGel," an abbreviation for "solvent segregation driven gel." Instead of designing nanoparticles to remain at the interface between the two solvents, their chosen particles concentrate within one of them. While these particles tend to repel one another, the particles' affinity toward one of the solvents is stronger and keeps them together in the channel. Using neutron scattering tools at the NIST Center for Neutron Research (NCNR), the team unambiguously proved that it had succeeded at concentrating the nanoparticles where it wanted.

The resulting gel could be far easier to create, as its two solvents are essentially oil and water, and its nanoparticles are silicon dioxide -- essentially tiny spheres of common quartz. It also could have a variety of industrial uses.

"Our SeedGel has great mechanical strength, it's much easier to make, and the process is scalable to what manufacturers would need," said Yun Liu, who is both an NCNR scientist and an affiliated full professor at the University of Delaware. "Plus it's thermo-reversible."

This reversibility refers to an optical property that the finished SeedGel possesses: It can switch from transparent to opaque and back again, just by changing its temperature. This property could be harnessed in smart windows that sandwich a thin layer of the gel between two panes of glass.

"This optical property could make the SeedGel useful in other light-sensitive applications as well," said Yuyin Xi, a researcher from the University of Delaware also working at the NCNR. "They could be useful in sensors."

Because the team's gel-creation approach could be used with other solvent-and-nanoparticle combinations, it could become useful in filters for water purification and possibly other filtration processes depending on what type of nanoparticles are used.

Liu also said that the creation approach allows for the size of the channels within the gel to be tuned by changing the rate at which the temperature changes during the formation process, offering application designers another degree of freedom to explore.

"Ours is a generic approach working for many different nanoparticles and solvents," he said. "It greatly extends the applications of these sorts of gels."

Credit: 
National Institute of Standards and Technology (NIST)

Response to cancer immunotherapy may be affected by genes we carry from birth

For all their importance as a breakthrough treatment, the cancer immunotherapies known as checkpoint inhibitors still only benefit a small minority of patients, perhaps 15 percent across different types of cancer. Moreover, doctors cannot accurately predict which of their patients will respond.

A new study finds that inherited genetic variation plays a role in who is likely to benefit from checkpoint inhibitors, which release the immune system's brakes so it can attack cancer. The study also points to potential new targets that could help even more patients unleash their immune system's natural power to fight off malignant cells.

People who respond best to immunotherapy tend to have "inflamed" tumors that have been infiltrated by immune cells that are capable of killing both viruses and cancer. This inflammation is also driven by the immune signaling molecule interferon.

"There are some factors that are already associated with how well the immune system responds to tumors," said Elad Ziv, MD, professor of medicine at UCSF and co-senior author of the paper, published Feb. 9, 2021, by an international team in Immunity. "But what's been less studied is how well your genetic background predicts your immune system's response to the cancer. That's what is being filled in by this work: How much is the immune response to cancer affected by your inherited genetic variation?"

The study suggests that, for a range of important immune functions, as much as 20 percent of the variation in how different people's immune systems are able to attack cancer is due to the kind of genes they were born with, which are known as germline genetic variations.

That is a significant effect, similar to the size of the genetic contribution to traits like high blood sugar levels or obesity.

"Rather than testing selected genes, we analyzed all the genetic variants we could detect across the entire genome. Among all of them, the ones with the greatest effect on the immune system's response to the tumor were related to interferon signaling. Some of these variants are known to affect our response to viruses and our risk of autoimmune disorders," said Davide Bedognetti, MD, PhD, director of the Cancer Program at the Sidra Medicine Research Branch in Doha, Qatar, and co-senior author of the paper. "As observed with other diseases, we demonstrated that specific genes can also predispose someone to have a more effective anti-cancer immunity."

The team identified variants in 22 regions in the genome, or in individual genes, with significant effects - including one gene, IFIH1, that is already well known for the role its variants play in autoimmune diseases as varied as type 1 diabetes, psoriasis, vitiligo, systemic lupus erythematosus, ulcerative colitis and Crohn's disease.

The IFIH1 variants act on cancer immunity in different ways. For instance, people with the variant that confers risk of type 1 diabetes had a more inflamed tumor, which suggests they would respond better to cancer immunotherapy. But the researchers saw the opposite effect for patients with the variant associated with Crohn's, indicating they might not benefit.

Another gene, STING1, was already thought to play a role in how patients respond to immunotherapy, and drug companies are looking for ways to boost its effects. But the team discovered that some people carry a variant that makes them less likely to respond, which may require further stratification of patients to know who could benefit most from those efforts.

The study required a huge amount of data that could only be found in a dataset as large as The Cancer Genome Atlas (TCGA), from which the team analyzed the genes and immune responses of 9,000 patients with 30 different kinds of cancer.

All told, the scientific team, which includes members from the United States, Qatar, Canada, and Europe, examined nearly 11 million gene variants to see how they matched with 139 immune parameters measured in patient tumor samples.

But the 22 regions or genes identified in the new study are just the tip of the iceberg, the researchers said, and they suspect many more germline genes likely play a role in how the immune system responds to cancer.

The next step, Ziv said, is to use the data to formulate "polygenic" approaches - taking a large number of genes into account to predict which cancer patients will benefit from current therapies, and developing new drugs for those who will not.

"It's further off," he said, "but it's a big part of what we hope will come out of this work."

Credit: 
University of California - San Francisco

White contours induce red hue

image: While a red line can be seen in (A) due to a visual illusion, the line is actually gray. See the enlarged diagram. The difference is a contour formed from white lines. Once the contour is removed, the line appears gray (B).

Image: 
COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.

Overview:

A color illusion that strongly induces a color contrast effect has been found by a research team at the Toyohashi University of Technology Department of Computer Science and Engineering, and the Electronics-Inspired Interdisciplinary Research Institute (EIIRIS). The powerful visual illusion resolves a century-old contradiction in the theory of simultaneous color contrast. Through a human psychophysical experiment, the team demonstrated that the presence or absence of flanking contours formed from extremely thin white lines could be used to switch between the contradictory visual phenomena (Figure 1), enabling a consistent explanation of both discrepant theories. This finding alters theories of visual computation relating to color appearance and is expected to contribute to industrial design and high-definition imaging.

Details:

Under certain circumstances, the colors and shapes of physically identical objects appear to be different. This is known as a visual illusion. Visual illusions involving color have long been documented. In the 19th century, the French chemist Michel-Eugène Chevreul showed that the cause of a complaint relating to textile dyes was not chemical reactions but a visual illusion. These types of optical illusions can have a strong impact on the appearance of a product, and designers have avoided them through trial and error. However, researchers believe that visual illusions are not a failure of human visual function, but occur as side effects of intrinsically important functions. Functions that allow us to see the outside world effectively are often responsible for producing a variety of visual illusions. Thus, discovering a new visual illusion offers a clue to an unknown visual function, and many vision researchers are engaged in such projects.

Among the visual illusions that alter color appearance, the most famous is simultaneous color contrast. Simultaneous color contrast is a phenomenon in which the apparent color of gray lines changes depending on the background color, shifting toward the opposite of the background color. Simultaneous color contrast is considered an important factor in color constancy, which stabilizes the apparent colors of objects under variously colored illumination. While it is known that the strength of simultaneous color contrast varies with the luminance of the gray lines being affected, there are two discrepant theories about what level of luminance produces the strongest simultaneous color contrast. Kirschmann's Third Law says that the effect is strongest when the luminance is the same as that of the background, while the Helson-Judd Effect says that the darker the line, the stronger the effect. These phenomena have been verified separately, in the research fields of psychology and lighting technology, respectively.

Upon discovering a new color illusion, the research team realized its potential to reconcile the above contradiction. In the discovered illusion, a thin gray line atop a cyan background appears red when bordered by a thin white line (Figure 1). The illusion is gaining attention and was reported in the international illusion contest, winning a place as a Top 10 Finalist in the Best Illusion of the Year Contest 2018 (Figure 2). Through a human psychophysical experiment, the team demonstrated that a strong color contrast was produced regardless of the luminance of the gray line, and that the illusory effect became stronger the darker the line was relative to the background. This result is concordant with the Helson-Judd Effect. On the other hand, with the white line removed, the illusion was strongest at identical luminance, reproducing Kirschmann's Third Law. In other words, the presence or absence of the flanking white line makes it possible to switch between the two contradictory phenomena. A consistent explanation becomes possible by assuming that the adjacent white line creates a separate phenomenon in the gray line known as color assimilation. In conclusion, the team succeeded in resolving a contradiction that had existed among visual illusion researchers for some 100 years.

"This new color illusion produced extremely strong illusory effects regardless of the luminance of the gray line," says lead author Tama Kanematsu (PhD student in the Department of Computer Science and Engineering, and DC2 research fellow at the Japan Society for the Promotion of Science). This contradicts accepted color illusion research dating back to 1891. By incorporating additional knowledge about color constancy into the illusion, I believe we succeeded in illuminating the true nature of simultaneous contrast in the visual system. Moreover, our newly devised color appearance model allows for a consistent explanation that includes past research," explains Kanematsu.

Development Background:

Lead author Tama Kanematsu says the visual illusion was found accidentally. "Kowa Koida (Associate Professor, Electronics-Inspired Interdisciplinary Research Institute), one of the co-authors, reported seeing purple (which he had not been using) when creating a diagram using light blue and blue lines. Koida first thought it was an optical phenomenon known as chromatic aberration. Upon precisely analyzing the characteristics of the area which appeared purple, we were able to prove that it was a new illusion. In addition, it is clear that this illusion requires an extremely thin line. Without an extremely detailed modern high-performance monitor, it may not have been discovered. This illusion showcased an instance in which a development in engineering contributed to the further development of basic science."

Future Outlook:

The research team believes it will be necessary to construct a computational model of the illusion that reproduces the color appearance. Such visual illusions also help in understanding the theoretical framework of visual processing by neurons in the brain. This model is expected to help improve user impressions of high-definition digital device screen designs and to contribute to industrial design fields such as textiles.

Credit: 
Toyohashi University of Technology (TUT)

Time perception and sense of touch: a new connection

image: The percept of time relates to the sense of touch.

Image: 
chenspec da Pixabay

The percept of time relates to the sense of touch. A new SISSA study "A sensory integration account for time perception" published in PLOS Computational Biology uncovers this connection. "The challenge to neuroscience posed by the sense of time lies, first and foremost, in the fact there do not exist dedicated receptors - the passage of time is a sensory experience constructed without sensors," notes Mathew Diamond, director of the Tactile Perception and Learning Lab. "One might imagine a precise clock in the brain, a sort of stopwatch that registers the start and stop and computes the elapsed time between those two instants. But decades of research have not found any brain mechanism resembling a stopwatch. We thought that understanding sensory systems might be the key to understanding sense of time."

The lead author of the study, SISSA PhD student Alessandro Toso, explains how the team (including also Arash Fassihi, Luciano Paz and Francesca Pulecchi as co-authors) approached the problem: "We trained both humans and rats to compare the durations of two tactile vibrations. The main clue leading to the new theory is that the perceived duration of a vibration increases not only in relation to actual elapsed time but also in relation to the intensity of the vibration. In other words, subjects (of both species) feel that a stronger vibration lasts longer."

The team then proposed a model where the experience of the elapsed time accompanying a stimulus is generated when the neuronal representation of the stimulus itself is collected and summated by a downstream accumulator. This model would explain both characteristics of sense of time: a stimulus is judged as longer when it is in fact longer, but also when its higher intensity evokes a larger sensory response. They tested the plausibility of the model by simulating the time percept that would emerge through integration of the neuronal firing of real spike trains recorded from the sensory cortex of rats receiving the vibratory stimulus. The close match of the model's prediction of perceived time to actual perceived time for the same stimuli supports the model. Now the research will continue with the identification and analysis of the accumulator.
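A minimal sketch of such an accumulator account, written as an illustration rather than the authors' actual model or parameters, is shown below: the sensory response rate is assumed to scale with vibration intensity, so a stronger vibration of the same physical duration yields a longer duration estimate.

```python
# Minimal sketch of a sensory-accumulator account of perceived duration.
# Illustration only (not the authors' model or parameters): the sensory response
# rate is assumed proportional to vibration intensity, and perceived duration is
# read out from the accumulated response using a fixed reference rate.
import numpy as np

def perceived_duration(duration_s, intensity, reference_intensity=100.0,
                       gain=1.0, dt=0.001, seed=0):
    """Accumulate a noisy sensory response over the stimulus, then convert the
    total into a duration estimate assuming a reference response rate."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(duration_s / dt))
    rate = gain * intensity                     # sensory firing rate ~ intensity
    spikes = rng.poisson(rate * dt, n_steps)    # spike counts per time bin
    accumulated = spikes.sum()
    expected_rate = gain * reference_intensity  # readout calibrated to a standard
    return accumulated / expected_rate          # estimated duration in seconds

weak = perceived_duration(0.5, intensity=80)
strong = perceived_duration(0.5, intensity=120)
print(f"A 0.5 s vibration is judged as {weak:.2f} s (weak) vs {strong:.2f} s (strong)")
```

In this toy readout, the same half-second stimulus is judged longer when the vibration is stronger, which is the behavioral signature described above.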

"For many years, this research group has been interested in touch perception and memory." Diamond says. "Following unexpected findings, our curiosity has led to a new research line, time perception. This brings us in synergy with Domenica Bueti, SISSA neuroscientist with an outstanding track record in time perception. We are looking forward to collaborating."

Credit: 
Scuola Internazionale Superiore di Studi Avanzati

Rabies treatment demonstrated as safe and effective for use in children in first pediatric trial

A treatment, known as KEDRAB (Rabies Immune Globulin [Human]), currently used in the prevention of rabies has been demonstrated to be safe and effective for patients age 17 and under.

Results published today in Human Vaccines & Immunotherapeutics report the first and only pediatric trial of any human rabies immunoglobulin (HRIG) currently available in the US. Findings have been submitted to the US Food and Drug Administration for review.

In the United States, someone is treated for possible exposure to rabies every 10 minutes. Globally, the World Health Organization (WHO) estimates that rabies causes 59,000 human deaths annually in over 150 countries, with 95% of cases occurring in Africa and Asia - however they concede it is likely a gross underestimate of the true burden of disease.

The WHO also estimates that 40% of the global rabies disease burden occurs in children under 15 years of age, and that most encounters of the disease follow a dog bite.

Once clinical symptoms appear, rabies is virtually 100% fatal.

The current treatment for previously unvaccinated people potentially exposed to rabies is called rabies post-exposure prophylaxis (PEP), which includes thorough wound washing, passive neutralization of the virus with infiltration of human rabies immune globulin (HRIG) into and around the wound site, and a series of 4 doses of rabies vaccine given over a 2-week timeframe.

And in this latest study carried out by a team of international experts from the US and Israel, KEDRAB® (HRIG150) has become the first HRIG shown to be safe and effective in children when administered promptly and properly as part of the rabies PEP process.

"Despite the large proportion of pediatric cases, limited safety and efficacy data had previously existed for use in pediatric patients," says senior author Dr James Linakis, from the Warren Alpert Medical School of Brown University.

"Evidence from this KEDRAB US Pediatric Trial confirms that this product addresses an unmet need in children who may have been exposed to rabies, and gives healthcare providers confidence when preventing this deadly condition in countless numbers of young patients across the US," says lead author Dr Nicholas Hobart-Porter, Pediatric Emergency Physician at University of Arkansas for Medical Sciences and Arkansas Children's Hospital.

The study looked at a group of 30 trial participants, with suspected or confirmed rabies exposure, over an 84-day period (so as to include a 3-month follow-up). Each participant received PEP. KEDRAB was infiltrated into and around detectable wound site(s) and/or given intramuscularly along with the first of a 4-dose series of rabies vaccine. Although the study did not include a placebo control group, placebo treatment of exposed patients is ethically unacceptable given the near 100% fatality rate of rabies.

No participant showed an active rabies infection at any point, and there were no deaths and no serious adverse events.

While 70% of participants experienced some form of side effect, whether related to the treatment or not, all of these were mild.

"The study not only confirms the safety and efficacy of KEDRAB, but also that KEDRAB could be well tolerated by all patients who participated in this trial," says Novinyo Serge Amega, M.D., head of US Medical Affairs at Kedrion Biopharma. "It was the first and only pediatric study of any HRIG available in the United States and, as such, may provide a healthy degree of reassurance for physicians and others who treat children exposed to rabies."

In the US, rabies in humans is extremely rare, with around two deaths per year on average. The low incidence of human rabies in the US can be attributed to successful pet vaccination and animal control programs, public health surveillance and testing, and the availability of post-exposure prophylaxis (PEP) for rabies.

It is important to note that KEDRAB is not licensed outside the United States, so the authors cannot draw any conclusions about its use in other nations. The availability, accessibility and affordability of PEP in developing nations remain a major component of the global burden of rabies.

Credit: 
Taylor & Francis Group

Cataloguing genetic information about yams

image: The edible yam tuber has a starchy, white flesh. Yams are a great source of fiber and potassium.

Image: 
S. Yamanaka

Yams are a staple food in West Africa, which produces over 90% of the world's yams each year. Yams play a key role in food security, economic income, and traditional culture in the region.

While they are commonly assumed to be the same as sweet potatoes in the U.S., yams are a completely different plant. The yam tubers are much starchier and drier compared to sweet potatoes. Yams are native to Africa and Asia, and most Americans have never had a true yam.

Even though yam is a staple crop for West Africa, there has been limited research to improve its genetic diversity or productivity.

Researcher Shinsuke Yamanaka focuses on improving crop breeding resources for yams. His research was recently published in Crop Science, a journal from the Crop Science Society of America.

The goal of Yamanaka's research was to increase the knowledge about the genetic information within yams - to help with future endeavors of breeding more varieties. Presently, there is little information for breeders to rely on - so Yamanaka is creating a type of "library" of information for future yam breeders.

There are more than 600 species of yams. The research team focused on the white Guinea yam because of its economic importance.

Farmers in tropical and sub-tropical Africa rely on yams to make a living. But yams are not an easy crop to grow.

Yams can take up to 11 months to grow before harvest. Also, the male and female flowers grow on different plants, so it is hard to time pollination correctly for successful breeding.

"The long growth cycle, inconsistency in flowering between plants, and polyploidy are major limitations of the yam breeding study," explains Yamanaka.

In addition to improving those characteristics, breeding new varieties can increase the crop yield, improve cooking properties, and decrease harvesting time. These would be beneficial for farmers.

When scientists breed crops, it can take several years to identify which plants have the best traits to be used as parents. Once the parent plants are chosen, the breeding process can continue as the plants are cross pollinated to create new, improved offspring.

Previous research has focused on collecting and characterizing genetic material from countries in the "yam belt" of West Africa. But maintaining this large collection is a challenge. Researchers wanted to better understand the physical and genetic variations of plants in the collection.

To do this, researchers used molecular markers. Molecular markers are segments of DNA that correspond with certain plant characteristics. This helps researchers predict what the plant will be like based on DNA instead of growing and observing the plant.

The team used plant material from over 400 yam plants, and DNA was extracted for analysis.

Researchers were able to reduce the size of the collection by eliminating plants that were genetically similar or not unique. A total of 100 yam plants were found to be unique, which will make up the new smaller collection.
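One generic way to carry out such a reduction, sketched below with invented marker data rather than the study's actual pipeline, is to keep an accession only if its marker profile differs sufficiently from every accession already kept:

```python
# Generic sketch: select a reduced "core" set by dropping accessions whose marker
# profiles are (nearly) identical to one already kept. Marker data are invented;
# this is not the study's actual selection procedure.
import numpy as np

rng = np.random.default_rng(1)
distinct = rng.integers(0, 3, size=(120, 50))           # 120 distinct profiles
duplicates = distinct[rng.integers(0, 120, size=280)]   # 280 repeats of them
genotypes = np.vstack([distinct, duplicates])           # 400 accessions in total

def select_core(genotypes, min_difference=0.05):
    """Greedy selection: keep a plant only if it differs from every kept plant
    at more than `min_difference` of the marker loci."""
    kept = []
    for i, profile in enumerate(genotypes):
        if all(np.mean(profile != genotypes[j]) > min_difference for j in kept):
            kept.append(i)
    return kept

core = select_core(genotypes)
print(f"Kept {len(core)} of {len(genotypes)} accessions")
```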

Important agronomic traits were recorded about each plant in the new collection. These traits included the number of stems per plant, the growth period, number of tubers per plant, yield, and tuber weight.

In breeding terms, this is a "mini-core collection." Similar collections are available for rice, millet and palm, among other crops.

This smaller collection will be much easier to maintain and gather information from. And creating new, preferable types of yams will help African farmers.

Breeding better crops takes time. This collection of genetic resources will help scientists save time as they evaluate and select which plants to use for breeding.

"Although our research is just the beginning of better utilization of the wide genetic diversity in yam, we hope our research will pave the way to improve yam breeding for farmers," says Yamanaka.

Credit: 
American Society of Agronomy

Brazil: Air conditioning equipment days of use will double without climate action

image: Annual wet-bulb Cooling Degree Days CDDwb (°C-days) in the baseline and the specific warming level scenarios.

Image: 
See the paper

Space cooling already accounts for 14% of residential electricity demand in Brazil, and it is expected to increase further because of climate change.

Very few studies investigate the relationship between climate change, cooling needs, and electricity demand. In a new study in Energy and Buildings, a team of researchers from Universidade Federal do Rio de Janeiro and CMCC@Ca'Foscari - a joint program of Ca'Foscari University of Venice and the CMCC Foundation - investigates how climate and income shaped cooling services in Brazil during the period 1970-2010. This historical relationship allows the researchers to project the resulting energy demand for cooling services across three warming scenarios: +1.5°C, +2°C, +4°C.

The study shows that the average air conditioning equipment days of use would increase by more than 100% in Brazil, in a 4°C warming scenario. This would substantially impact the need for space cooling and consequently, the associated energy consumption. But even in the case of more optimistic future warming scenarios, energy consumption - and consequent emissions - will increase. Because of cooling needs, average CO2 emissions, today of 0.62 Mt per year, are projected to increase in the three warming scenarios respectively by 70% (+1.5°C), 99% (+2°C) and 190% (+4°C).

"To define past and future ambient thermal comfort needs we use the wet-bulb Cooling Degree Days (CDDwb), a measure of temperature which accounts for air's humidity" explains Enrica De Cian from CMCC@Ca'Foscari, co-author of the study and principal investigator of the ERC Starting Grant project ENERGYA - Energy use for Adaptation. "Brazil is a very peculiar country as it varies widely in climatic conditions and population density. Our study shows that the highest temperature growth will happen in the northern region, characterised by low population density. Therefore, it will not translate into relevant energy consumption, excepting for the city of Manaus, the seventh largest city of Brazil, which is in the North region of the country, at the center of the Amazon rainforest".

The North is already saturated, with an average of 328 annual days of air conditioning use. In contrast, in the South region, a temperature increase of 4°C would inflate energy consumption almost fivefold.

Total energy demand for space cooling in the country may rise substantially because of the income effect alone, as observed in the first decade of this century. "In addition to temperature and population density, the income of a region is crucial in shaping energy demand," explains Malcolm Mistry, a researcher at CMCC@Ca'Foscari. "Socio-economic drivers are also important to assess trends in ownership rates and types of air conditioning units in use, as typically there is a deficit in achieving thermal comfort in many Brazilian households because of budget constraints."

Considering population and income increase alone, the ownership rate of space cooling appliances in Brazil can reach 96 air conditioning units per 100 households in 2035, compared to a current average of 40 units, boosting energy demand by 125%.

Energy efficiency can potentially reduce this growth in energy consumption, which is observed for all warming scenarios. The potential carbon emissions avoided by energy savings from efficiency measures depend on the fuel mix of the power sector. In Brazil, the authors conclude, a 59% improvement in efficiency is feasible, but it would require much more aggressive energy efficiency policies than those currently in place.

Credit: 
CMCC Foundation - Euro-Mediterranean Center on Climate Change

Emerging robotics technology may lead to better buildings in less time

image: Purdue University innovators developed and are testing a novel construction robotic system that uses an innovative mechanical design with advances in computer vision sensing technology to work in a construction setting.

Image: 
Jiansong Zhang/Purdue University

WEST LAFAYETTE, Ind. - Emerging robotics technology may soon help construction companies and contractors create buildings in less time at higher quality and at lower costs.

Purdue University innovators developed and are testing a novel construction robotic system that uses an innovative mechanical design with advances in computer vision sensing technology to work in a construction setting.

The technology was developed with support from the National Science Foundation.

"Our work helps to address workforce shortages in the construction industry by automating key construction operations," said Jiansong Zhang, an assistant professor of construction management technology in the Purdue Polytechnic Institute. "On a construction site, there are many unknown factors that a construction robot must be able to account for effectively. This requires much more advanced sensing and reasoning technologies than those commonly used in a manufacturing environment."

The Purdue team's custom end-effector design allows material to be both placed and fastened in a single operation with the same arm, limiting the amount of equipment required to complete a given task.

Computer vision algorithms developed for the project allow the robotic system to sense building elements and match them to building information modeling (BIM) data in a variety of environments, and keep track of obstacles or safety hazards in the system's operational context.
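
The article does not describe the matching algorithm itself. Purely as an illustration of the idea, the hypothetical sketch below pairs elements detected by a vision system with BIM records by nearest 3D centroid within a tolerance; all names and values are invented, and this is not the Purdue team's implementation.

import math

# Hypothetical illustration, not the Purdue team's algorithm: associate each
# detected element with the closest BIM element centroid, if within tolerance.
def match_to_bim(detected, bim_elements, tol_m=0.15):
    matches = {}
    for det_id, det_xyz in detected.items():
        best_id, best_dist = None, float("inf")
        for bim_id, bim_xyz in bim_elements.items():
            d = math.dist(det_xyz, bim_xyz)
            if d < best_dist:
                best_id, best_dist = bim_id, d
        matches[det_id] = best_id if best_dist <= tol_m else None
    return matches

detected = {"panel_A": (2.03, 0.98, 1.21)}
bim = {"wall_panel_17": (2.00, 1.00, 1.20), "stud_04": (3.50, 1.00, 1.20)}
print(match_to_bim(detected, bim))  # {'panel_A': 'wall_panel_17'}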

"By basing the sensing for our robotic arm around computer vision technology, rather than more limited-scope and expensive sensing systems, we have the capability to complete many sensing tasks with a single affordable sensor," Zhang said. "This allows us to implement a more robust and versatile system at a lower cost."

Undergraduate researchers in Zhang's Automation and Intelligent Construction (AutoIC) Lab helped create this robotic technology.

The innovators worked with the Purdue Research Foundation Office of Technology Commercialization to patent the technology.

This work will be featured at OTC's 2021 Technology Showcase: The State of Innovation. The annual showcase, being held virtually this year Feb. 10-11, will feature novel innovations from inventors at Purdue and across the state of Indiana. More information is available by emailing showcase@prf.org.

The innovators are looking for partners to continue developing and commercializing their technology. For more information on licensing and other opportunities, contact Matt Halladay of OTC at mrhalladay@prf.org and mention track code 2018-ZHAN-68094.

Credit: 
Purdue University

International research team begins uncovering Arctic mystery

image: This artistic diagram of the subsea and coastal permafrost ecosystems emphasizes greenhouse gas production and release. Sandia National Laboratories geosciences engineer Jennifer Frederick is one of the authors in a recent study regarding the release of such gases from submarine permafrost.

Image: 
Artwork by Victor O. Leshyk, Center for Ecosystem Science and Society, Northern Arizona University

ALBUQUERQUE, N.M. -- Something lurks beneath the Arctic Ocean. While it's not a monster, it has largely remained a mystery.

According to 25 international researchers who collaborated on a first-of-its-kind study, frozen land beneath rising sea levels currently traps 60 billion tons of methane and 560 billion tons of organic carbon. Little is known about the frozen sediment and soil -- called submarine permafrost -- even as it slowly thaws and releases methane and carbon that could have significant impacts on climate.

To put into perspective the amount of greenhouse gases in submarine permafrost, humans have released about 500 billion tons of carbon into the atmosphere since the Industrial Revolution, said Sandia National Laboratories geosciences engineer Jennifer Frederick, one of the authors on the study published in IOP Publishing journal Environmental Research Letters.

While researchers predict that submarine permafrost is not a ticking time bomb and could take hundreds of years to emit its greenhouse gases, Frederick said the submarine permafrost carbon stock represents a potentially enormous ecosystem feedback to climate change that is not yet included in climate projections and agreements.

"It's expected to be released over a long period of time, but it's still a significant amount," she said. "This expert assessment is bringing to light that we can't just ignore it because it's underwater, and we can't see it. It's lurking there, and it's a potentially large source of carbon, particularly methane."

Researchers combine expert analysis on known data

The team of researchers led by Brigham Young University graduate student Sara Sayedi and senior researcher Ben Abbott compiled available articles and reports on the subject to create a base analysis of submarine permafrost's potential to affect climate change. The study was coordinated through the Permafrost Carbon Network, which has more than 400 members from 130 research institutions in 21 countries.

The study was conducted through an expert assessment that sought answers to several central questions: What is the current extent of submarine permafrost? How much carbon is locked in submarine permafrost? How much has been and will be released? What is the rate of release into the atmosphere?

The participating experts answered questions using their scientific skills, which could include modeling, data analysis or literature synthesis. Frederick, one of the original advocates of the study, has been modeling submarine permafrost for almost 10 years and answered the questions through the lens of her research, which is primarily in numerical modeling. She said she uses published material for model inputs or works directly with researchers who visit the Arctic and provide datasets.

Her work on the study was funded by the Laboratory Directed Research and Development program that enables Sandia scientists and engineers to explore innovative solutions to national security issues.

Frederick's work aligned with Sandia's Arctic Science and Security Initiative. For more than 20 years, the Labs have had a presence in northern Alaska, said Sandia atmospheric sciences manager Lori Parrott.

Working for the Department of Energy Office of Biological and Environmental Research, Sandia manages the Atmospheric Radiation Measurement user facility that collects atmospheric data continuously. Researchers measure and predict the speed of de-icing at the North Slope to help federal leaders make decisions on climate change and national security. In addition, Sandia creates accurate models for both sea and land ice and develops technologies for greenhouse gas monitoring. With more than 20 years of data, researchers can begin to decipher trends, Parrott said.

Permafrost study a reason to unite

"I hope this study begins to unite the research community in submarine permafrost," said Frederick. "Historically, it's not only been a challenging location to do field work and make observations, but language barriers and other obstacles in accessibility to the existing observations and literature has challenged international scientific progress in this area."

The team estimates that submarine permafrost has been thawing since the end of the last glacial period 14,000 years ago, and currently releases about 140 million tons of carbon dioxide and 5.3 million tons of methane into the atmosphere each year. This represents a small fraction of total human-caused greenhouse gas emissions per year, about the same yearly footprint as Spain, Sayedi said.
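
To see how those two figures combine into a single footprint, the methane can be converted to CO2-equivalent. The sketch below assumes a 100-year global warming potential of 28 for methane (an IPCC AR5 value, not one stated in the article); the resulting order of magnitude is consistent with the article's comparison to Spain's annual footprint.

# Back-of-envelope CO2-equivalent of the reported annual release.
co2_mt = 140.0   # Mt CO2 per year (reported)
ch4_mt = 5.3     # Mt CH4 per year (reported)
gwp100_ch4 = 28  # assumed 100-year GWP for methane (IPCC AR5)

total_co2e = co2_mt + ch4_mt * gwp100_ch4
print(f"{total_co2e:.0f} Mt CO2e per year")  # about 288 Mt CO2e per year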

However, modern greenhouse gas releases are predominantly a natural response to deglaciation, according to the study. Expert estimates from this study suggest that human-caused global warming may accelerate greenhouse gas release, but because of limited research and large uncertainties in this area, the causes and rates of release will remain poorly constrained until better empirical and modeling estimates are available.

"I'm optimistic that this study will shed light on the fact that submarine permafrost exists, and that people are studying its role in climate," Frederick said. "The size of the research community doesn't necessarily reflect its importance in the climate system."

Almost every expert involved in the study mentioned the permafrost knowledge gap, which makes it harder for scientists to anticipate changes and reduces the reliability of estimates of carbon pools and fluxes, as well as the thermal and hydrological conditions of permafrost. Frederick said that while there is a wealth of ongoing research on terrestrial permafrost, submarine permafrost hasn't been taken on like this before, and hasn't been the subject of nearly as much international collaboration.

The amount of carbon stored in or associated with submarine permafrost is significant when compared with the amount of carbon in terrestrial permafrost and in the atmosphere today, Frederick said.

"This is an example of a very large source of carbon that hasn't been considered in climate predictions or agreements," she said. "While it's not a ticking time bomb, what is certain is that submarine permafrost carbon stocks cannot continue to be ignored, and we need to know more about how they will affect the Earth's future."

Credit: 
DOE/Sandia National Laboratories