Tech

More efficient risk assessment for nanomaterials

From dyes to construction materials, and from cosmetic products to electronics and medicine, nanomaterials are found in many different applications. But what are these materials? "Nanomaterials are defined purely by their size," explains Dr Kristin Schubert from the Department of Molecular Systems Biology at UFZ. "Materials between one and 100 nanometres in size are referred to as nanomaterials." To help envisage their diminutive size: one nanometre is just one millionth of a millimetre. Since nanomaterials are so small, they can easily enter the body, for example through the lungs, skin or gastrointestinal tract, where they can cause adverse effects. Just like conventional chemicals, nanomaterials must therefore be tested for potential health risks before they can be industrially manufactured, used and marketed. Currently, testing is carried out for each nanomaterial individually. And since even the smallest changes - for example in size or surface characteristics - can affect toxicity, separate tests are also needed for each variant of a nanomaterial. "Risk assessment for nanomaterials is sometimes difficult and very time-consuming," says Dr Andrea Haase from BfR. "And the list of substances to be tested is getting longer every day, because nanotechnology is growing to become a key technology with wide-ranging applications. We therefore urgently need to find solutions for more efficient risk assessment."

How can nanomaterials be appropriately classified into groups? Are there similarities in their effects? And what material properties are associated with these effects? In their recent study, researchers at UFZ and BfR - together with industry representatives - set about answering these questions. "We focused on the biological effects and examined which molecules and signalling pathways in the cell are influenced by which types of nanomaterials," says Schubert. In in vitro experiments, the researchers exposed epithelial cells from rats' lungs to different nanomaterials and looked for changes within the cells. To do this, they used what are known as multi-omics methods: they identified several thousand cell proteins, various lipids and amino acids, and studied important signalling pathways within the cell. Using a novel bioinformatic analysis technique, they evaluated huge volumes of data and arrived at some interesting results.
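
The grouping step described here can be pictured with a minimal sketch: assuming a small, invented matrix of log2 fold-changes for a handful of stress-related proteins per nanomaterial, materials with similar molecular response profiles can be clustered together. This is an illustration of the general idea, not the authors' actual pipeline, data or markers.

```python
# Minimal sketch (not the published analysis): group nanomaterials by the
# similarity of their molecular response profiles. The materials, proteins
# and log2 fold-changes below are all invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

materials = ["NM_A_small", "NM_A_large", "NM_B", "NM_C"]     # hypothetical materials
proteins = ["HMOX1", "NQO1", "SOD2", "IL6", "CASP3"]         # hypothetical markers
log2_fc = np.array([
    [2.1, 1.8, 1.2, 0.4, 0.1],   # strong oxidative-stress signature
    [1.9, 1.6, 1.0, 0.3, 0.2],   # similar signature, slightly weaker
    [0.2, 0.1, 0.0, 0.1, 0.0],   # little cellular response
    [2.5, 2.0, 1.5, 1.8, 1.1],   # oxidative stress plus inflammation markers
])

# Cluster materials by the correlation distance between their response profiles.
tree = linkage(pdist(log2_fc, metric="correlation"), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
for name, group in zip(materials, groups):
    print(f"{name}: group {group}")
```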

"We were able to show that nanomaterials with toxic effects initially trigger oxidative stress and that in the process certain proteins are up- or down-regulated in the cell," explains Schubert. "In future, these key molecules could serve as biomarkers to detect and provide evidence of potential toxic effects of nanomaterials quickly and effectively." If the toxicity of the nanomaterial is high, oxidative stress increases, inflammatory processes develop and after a certain point the cell dies. "We now have a better understanding of how nanomaterials affect the cell," says Haase. "And with the help of biomarkers we can now also detect much lower toxic effects than previously possible." The researchers also identified clear links between certain properties of nanomaterials and changes in the cellular metabolism. "For example, we were able to show that nanomaterials with a large surface area affect the cell quite differently from those with a small surface area," says Schubert. Knowing which parameters play a key role in toxic effects is very useful. It means that nanomaterials can be optimised during the manufacturing process, for example through small modifications, and hence toxic effects reduced.

"Our study has taken us several large steps forward," says Schubert. "For the first time, we have extensively analysed the biological mechanisms underlying the toxic effects, classified nanomaterials into groups based on their biological effects and identified key biomarkers for novel test methods." Andrea Haase from BfR is more than satisfied: "The results are important for future work. They will contribute to new concepts for the efficient, reliable risk assessment of nanomaterials and set the direction in which we need to go."

Credit: 
Helmholtz Centre for Environmental Research - UFZ

Carbon cocoons surround growing galaxies

video: an animation file of the ALMA and NASA/ESA Hubble Space Telescope (HST) images of a young galaxy surrounded by a gaseous carbon cocoon.
The red color shows the distribution of carbon gas imaged by combining the ALMA data for 18 galaxies. The stellar distribution photographed by HST is shown in blue. The gaseous carbon clouds are almost five times larger than the distribution of stars in the galaxies, as observed with the Hubble Space Telescope.

Image: 
ALMA (ESO/NAOJ/NRAO), NASA/ESA Hubble Space Telescope, Fujimoto et al.

Researchers have discovered gigantic clouds of gaseous carbon spanning a radius of more than 30,000 light-years around young galaxies using the Atacama Large Millimeter/submillimeter Array (ALMA). This is the first confirmation that carbon atoms produced inside stars in the early Universe have spread beyond galaxies. No theoretical studies have predicted such huge carbon cocoons around growing galaxies, which raises questions about our current understanding of cosmic evolution.

"We examined the ALMA Science Archive thoroughly and collected all the data that contain radio signals from carbon ions in galaxies in the early Universe, only one billion years after the Big Bang," says Seiji Fujimoto, the lead author of the research paper who is an astronomer at the University of Copenhagen, and a former Ph.D. student at the University of Tokyo. "By combining all the data, we achieved unprecedented sensitivity. To obtain a dataset of the same quality with one observation would take 20 times longer than typical ALMA observations, which is almost impossible to achieve."

Heavy elements such as carbon and oxygen did not exist in the Universe at the time of the Big Bang. They were formed later by nuclear fusion in stars. However, it is not yet understood how these elements spread throughout the Universe. Astronomers have found heavy elements inside baby galaxies but not beyond those galaxies, due to the limited sensitivity of their telescopes. This research team summed the faint signals stored in the data archive and pushed the limits.
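
Why summing archival signals "pushes the limits" can be illustrated with a toy calculation: for roughly independent noise, averaging N observations lowers the noise by about a factor of sqrt(N), so a faint emission line that is invisible in any single data set can emerge in the stack. The spectra and numbers below are simulated, not ALMA data.

```python
# Toy illustration of signal stacking: noise averages down roughly as 1/sqrt(N),
# while a common faint signal does not. All values are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_channels = 18, 200                   # e.g. 18 co-added data sets (assumed)
channels = np.arange(n_channels)
true_line = 0.3 * np.exp(-0.5 * ((channels - 100) / 5.0) ** 2)   # faint emission line

single = true_line + rng.normal(0.0, 1.0, n_channels)            # one noisy spectrum
stack = np.mean(true_line + rng.normal(0.0, 1.0, (n_obs, n_channels)), axis=0)

# Estimate the noise from a line-free region (first 50 channels).
print("noise in a single spectrum :", round(single[:50].std(), 3))
print("noise in the 18-item stack :", round(stack[:50].std(), 3), "(about 1/sqrt(18) lower)")
```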

"The gaseous carbon clouds are almost five times larger than the distribution of stars in the galaxies, as observed with the Hubble Space Telescope," explains Masami Ouchi, a professor at the National Astronomical Observatory of Japan and the University of Tokyo. "We spotted diffuse but huge clouds floating in the coal-black Universe."

Then, how were the carbon cocoons formed? "Supernova explosions at the final stage of stellar life expel heavy elements formed in the stars," says Professor Rob Ivison, the Director for Science at the European Southern Observatory. "Energetic jets and radiation from supermassive black holes in the centers of the galaxies could also help transport carbon outside of the galaxies and finally throughout the Universe. We are witnessing this ongoing diffusion process, the earliest environmental pollution in the Universe."

The research team notes that at present theoretical models are unable to explain such large carbon clouds around young galaxies, probably indicating that some new physical process must be incorporated into cosmological simulations. "Young galaxies seem to eject an amount of carbon-rich gas far exceeding our expectation," says Andrea Ferrara, a professor at Scuola Normale Superiore di Pisa.

The team is now using ALMA and other telescopes around the world to further explore the implications of the discovery for galactic outflows and carbon-rich halos around galaxies.

Credit: 
National Institutes of Natural Sciences

New heat model may help electronic devices last longer

image: Electrical and computer engineering professor Can Bayram, left, and graduate student Kihoon Park led a study that redefines the thermal properties of gallium nitride semiconductors.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- A University of Illinois-based team of engineers has found that the model currently used to predict heat loss in a common semiconductor material does not apply in all situations. By testing the thermal properties of gallium nitride semiconductors fabricated using four popular methods, the team discovered that some techniques produce materials that perform better than others. This new understanding can help chip manufacturers find ways to better diffuse the heat that leads to device damage and decreased device lifespans.

Silicon chips are being pushed to their limit to meet the demands of today's electronic devices. Gallium nitride, another semiconductor material, is better suited for use in high-voltage and high-current applications like those needed for 5G phones, "internet of things" devices, robotics and autonomous vehicles. Gallium nitride chips are already in use, but there are no systematic studies that examine the thermal properties of the various forms of the material, the researchers said. Their findings are published in the Journal of Applied Physics.

Gallium nitride chips are produced by depositing gallium nitride vapor onto a surface where it crystallizes into a solid, the researchers said.

"The composition and atomic structure of the surface used to grow the crystals influences the number of defects in the final product," said Can Bayram, an electrical and computer engineering professor and lead author of the study. "For example, crystals grown on silicon surfaces produce a semiconductor with many defects - resulting in lower thermal conductivity and hotter hotspots - because the atomic structures of silicon and gallium nitride are very different."

The team tested the thermal conductivity of gallium nitride grown using the four most technologically important fabrication techniques: hydride vapor phase epitaxy, high nitrogen pressure, vapor deposition on sapphire and vapor deposition on silicon.

To figure out how the different fabrication techniques influence the thermal properties of gallium nitride, the team measured thermal conductivity, defect density and the concentration of impurities of each material.

"Using our new data, we were able to develop a model that describes how defects affect the thermal properties of gallium nitride semiconductors," Bayram said. "This model provides a means to estimate the thermal conductivity of samples indirectly using defect data, which is easier than directly measuring the thermal conductivity."

The team found that silicon - the most economical of all of the surfaces used to grow gallium nitride - produces crystals with the highest defect density of the four popular fabrication methods. Deposition on sapphire makes a better crystal with higher thermal conductivity and lower defect density, but this method is not nearly as economical. The hydride vapor phase epitaxy and high nitrogen pressure techniques produce superior products in terms of thermal properties and defect density, but the processes are very expensive, Bayram said.

Gallium nitride-based chips that use crystals grown on silicon are probably adequate for the consumer electronics market, where cost and affordability are key, he said. However, military-grade devices that require better reliability will benefit from chips made using the more expensive processes.

"We are trying to create a higher efficiency system so that we can get more out of our devices - maybe one that can last 50 years instead of five," Bayram said. "Understanding how heat dissipates will allow us to reengineer systems to be more resilient to hotspots. This work, performed entirely at the U. of I., lays the foundation in thermal management of the technologically important gallium nitride-based semiconductor devices."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

Smaller class sizes not always better for pupils, multinational study shows

A new statistical analysis of data from a long-term study on the teaching of mathematics and science has found that smaller class sizes are not always associated with better pupil performance and achievement.

The precise effect of smaller class sizes can vary between countries, academic subjects, years, and different cognitive and non-cognitive skills, with many other factors likely playing a role. These findings are reported in a paper in Research Papers in Education.

Smaller class sizes in schools are generally seen as highly desirable, especially by parents. With smaller class sizes, teachers can more easily maintain control and give more attention to each pupil. As such, many countries limit the maximum size of a class, often at around 30 pupils.

But research into the effects of class size has generally proved inconclusive, with some studies finding benefits and some not. Furthermore, these studies have often been rather small scale, have tended to focus purely on reading and mathematics, and have not considered the effect of class size on non-cognitive skills such as interest and attentiveness.

To try to get a clearer picture, Professor Spyros Konstantopoulos and Ting Shen at Michigan State University, US, decided to analyze data produced by the Trends in International Mathematics and Science Study (TIMSS). Every four years since 1995, TIMSS has monitored the performance and achievement of fourth grade (age 9-10) and eighth grade (age 13-14) pupils from around 50 countries in mathematics and science. It records pupils' academic ability in these subjects and their self-reported attitude and interest in them, and also contains information on class sizes.

To make the analysis more manageable, the researchers limited it to data from eighth grade pupils in four European countries - Hungary, Lithuania, Romania and Slovenia - collected in 2003, 2007 and 2011. They chose these four countries because they all mandate maximum class sizes, which would help to make the statistical analysis more reliable. Despite these limitations, the data still encompassed 4,277 pupils from 231 classes in 151 schools, making it much larger than most previous studies on class size. It was also the first study to investigate the effects of class size on both specific science subjects, comprising biology, chemistry, physics and earth science, and non-cognitive skills.
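
Data with this nested structure (pupils within classes within schools) are typically analyzed with multilevel models. The sketch below uses simulated data rather than TIMSS records and a deliberately simplified specification, but it shows the general form of such an analysis with class size as the predictor of interest.

```python
# Illustrative multilevel sketch (not the authors' exact specification):
# regress achievement on class size with pupils grouped by school.
# The scores and class sizes below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, pupils_per_school = 30, 40
class_size = np.repeat(rng.integers(20, 31, n_schools), pupils_per_school)
school = np.repeat(np.arange(n_schools), pupils_per_school)
school_effect = np.repeat(rng.normal(0, 10, n_schools), pupils_per_school)
# Simulated scores with essentially no class-size effect, echoing the main finding.
score = 500 + 0.0 * class_size + school_effect + rng.normal(0, 30, len(school))

df = pd.DataFrame({"score": score, "class_size": class_size, "school": school})
result = smf.mixedlm("score ~ class_size", df, groups=df["school"]).fit()
print(result.params["class_size"], result.pvalues["class_size"])
```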

The analysis revealed that smaller class sizes were associated with benefits in Romania and Lithuania, but not in Hungary and Slovenia. The beneficial effects were most marked in Romania, where smaller classes were associated with greater academic achievement in mathematics, physics, chemistry and earth science, as well as greater enjoyment of learning mathematics. In Lithuania, however, smaller class sizes were mainly associated with improvements in non-cognitive skills such as greater enjoyment in learning biology and chemistry, rather than higher academic achievement in these subjects. The beneficial effects were also only seen in certain years.

"Most class size effects were not different than zero, which suggests that reducing class size does not automatically guarantee improvements in student performance," said Professor Konstantopoulos. "Many other classroom processes and dynamics factor in and have to work well together to achieve successful outcomes in student learning."

The researchers think smaller class sizes may have had greater beneficial effects on pupils in Romania and Lithuania than in Hungary and Slovenia because schools in Romania and Lithuania have fewer resources. "This finding is perhaps due to the fact that class size effects are more likely to be detected in countries with limited school resources where teacher quality is lower on average," said Professor Konstantopoulos.

Credit: 
Taylor & Francis Group

Nanoscience breakthrough: Probing particles smaller than a billionth of a meter

image: Tin oxide SNCs finely prepared by a dendrimer template method are loaded on the thin silica shell layers of plasmonic amplifiers, such that the Raman signals of the SNCs are substantially enhanced to a detectable level. The strength of the electromagnetic fields generated due to the surface plasmon resonance properties of the Au or Ag nanoparticles decays exponentially with distance from the surface. Therefore, a rational interfacial design between the amplifiers and SNCs is the key to acquiring strong Raman signals.

Image: 
Science Advances

Scientists at Tokyo Institute of Technology (Tokyo Tech) developed a new methodology that allows researchers to assess the chemical composition and structure of metallic particles with a diameter of only 0.5 to 2 nm. This breakthrough in analytical techniques will enable the development and application of minuscule materials in the fields of electronics, biomedicine, chemistry, and more.

The study and development of novel materials have enabled countless technological breakthroughs and are essential across most fields of science, from medicine and bioengineering to cutting-edge electronics. The rational design and analysis of innovative materials at nanoscopic scales allows us to push through the limits of previous devices and methodologies to reach unprecedented levels of efficiency and new capabilities. Such is the case for metal nanoparticles, which are currently in the spotlight of modern research because of their myriad potential applications. A recently developed synthesis method using dendrimer molecules as a template allows researchers to create metallic nanocrystals with diameters of 0.5 to 2 nm (billionths of a meter). These incredibly small particles, called "subnano clusters" (SNCs), have very distinctive properties, such as being excellent catalysts for (electro)chemical reactions and exhibiting peculiar quantum phenomena that are very sensitive to changes in the number of constituent atoms of the clusters.

Unfortunately, the existing analytic methods for studying the structure of nanoscale materials and particles are not suitable for SNC detection. One such method, called Raman spectroscopy, consists of irradiating a sample with a laser and analyzing the resulting scattered spectra to obtain a molecular fingerprint or profile of the possible components of the material. Although traditional Raman spectroscopy and its variants have been invaluable tools for researchers, they still cannot be used for SNCs because of their low sensitivity. Therefore, a research team from Tokyo Tech, including Dr. Akiyoshi Kuzume, Prof. Kimihisa Yamamoto and colleagues, studied a way to enhance Raman spectroscopy measurements and make them sensitive enough for SNC analysis (Figure).

One particular type of Raman spectroscopy approach is called surface-enhanced Raman spectroscopy. In its more refined variant, gold and/or silver nanoparticles enclosed in an inert thin silica shell are added to the sample to amplify optical signals and thus increase the sensitivity of the technique. The research team first focused on theoretically determining the optimal size and composition of these amplifiers, finding that 100-nm silver particles (almost twice the size commonly used) can greatly amplify the signals of the SNCs adhering to the porous silica shell. "This spectroscopic technique selectively generates Raman signals of substances that are in close proximity to the surface of the optical amplifiers," explains Prof. Yamamoto.

To put these findings to the test, they measured the Raman spectra of tin oxide SNCs to see if they could find an explanation in their structural or chemical composition for their inexplicably high catalytic activity in certain chemical reactions. By comparing their Raman measurements with structural simulations and theoretical analyses, they gained new insights into the structure of the tin oxide SNCs, explaining the origin of the atomicity-dependent catalytic activity of these clusters.
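
The importance of the thin silica shell can be seen in a back-of-the-envelope sketch of how plasmonic enhancement falls off with distance from the amplifier surface. The decay length below is an assumed, representative value, and the |E|^4 scaling is the usual rule of thumb for surface-enhanced Raman signals rather than a calculation from the paper.

```python
# Back-of-the-envelope illustration: the plasmonic near-field decays rapidly
# with distance, so only clusters very close to the shell are strongly enhanced.
import numpy as np

decay_length_nm = 5.0                                  # assumed characteristic decay length
distance_nm = np.array([0.0, 2.0, 5.0, 10.0, 20.0])    # distance from amplifier surface

field = np.exp(-distance_nm / decay_length_nm)         # relative near-field amplitude
enhancement = field ** 4                               # common |E|^4 approximation for SERS

for d, g in zip(distance_nm, enhancement):
    print(f"{d:5.1f} nm from the surface -> relative enhancement {g:.3f}")
```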

The methodology employed in this research could have great impact on the development of better analytic techniques and subnanoscale science. "Detailed understanding of the physical and chemical nature of substances facilitates the rational design of subnanomaterials for practical applications. Highly sensitive spectroscopic methods will accelerate material innovation and promote subnanoscience as an interdisciplinary research field," concludes Prof. Yamamoto. Breakthroughs such as the one presented by this research team will be essential for broadening the scope for the application of subnanomaterials in various fields including biosensors, electronics, and catalysts.

Credit: 
Tokyo Institute of Technology

The wild relatives of major vegetables, needed for climate resilience, are in danger

image: These maps show the distribution of wild chile pepper taxa across the Americas. Green dots on the left map show where wild species have been collected and stored in gene banks. The map on the right shows where species occur in comparison to protected areas.

Image: 
Khoury et al.

Growing up in the wild makes plants tough. Wild plants evolve to survive the whims of nature and thrive in difficult conditions, including extreme climate conditions, poor soils, and pests and disease. Their better-known descendants - the domesticated plants that are critical to a healthy diet - are often not nearly as hardy. The genes that make crop wild relatives robust have the potential to make their cultivated cousins - our food plants - better prepared for a harsh climate future. But a series of new research papers show these critical plants are imperiled.

"The wild relatives of crops are one of the key tools used to breed crops adapted to hotter, colder, drier, wetter, saltier and other difficult conditions," said Colin Khoury, a scientist at the International Center for Tropical Agriculture, or CIAT. "But they are impacted by habitat destruction, over-harvesting, climate change, pollution, invasive species, and more. Some of them are sure to disappear from their natural habitats without urgent action."

Khoury and colleagues' latest focus has been on the wild relatives of vegetables, including chile peppers, lettuce, and carrots. Their most recent publication was on the distribution, conservation status and stress tolerance of wild cucurbits, or the gourd family, which includes zucchini, pumpkins, and squash. The findings were published online Dec. 10 in Plants People Planet.

Even with protection in the wild, the researchers found that many crop wild relatives require urgent safeguarding in gene banks to assure long-term survival. They determined that more than 65 percent of wild pumpkins and more than 95 percent of wild chile peppers are not well represented in gene banks.

Gene banks are repositories for seeds and other plant material that assure continued propagation of new plants and allow scientists to study their often complex genetic traits.

The studies include the first highly detailed maps of the distributions of the wild relatives of these crops. Mapping their ranges, and especially areas with a great density, endemism, and diversity, can help policymakers and conservationists prioritize areas in need of protection. The findings will help crop breeders more efficiently find wild relatives with traits needed for crop development. The results will be used to guide rescue missions aimed at collecting vulnerable species before they disappear.
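
A much-simplified version of such a gap analysis can be sketched as follows: for each wild species, compare where it has been recorded in the wild with where genebank accessions have already been collected, and flag species whose ranges are poorly sampled. The species names, coordinates and distance threshold below are invented; this is an illustration of the idea, not the published methodology.

```python
# Simplified collection-gap sketch (not the study's methodology): score each
# species by the fraction of its occurrence records that lie near an existing
# genebank accession. All coordinates are made up.
occurrences = {   # species -> (lat, lon) occurrence records
    "wild_pepper_A": [(19.2, -99.1), (18.9, -98.7), (20.1, -100.3), (17.5, -97.9)],
    "wild_pepper_B": [(23.4, -105.2), (22.8, -104.9)],
}
genebank = {      # species -> (lat, lon) of georeferenced accessions
    "wild_pepper_A": [(19.1, -99.0)],
    "wild_pepper_B": [],
}

def sampling_ratio(species: str, radius_deg: float = 0.5) -> float:
    """Fraction of occurrence points within radius_deg of any genebank accession."""
    occ, acc = occurrences[species], genebank[species]
    if not acc:
        return 0.0
    covered = sum(
        any(abs(lat - a) <= radius_deg and abs(lon - b) <= radius_deg for a, b in acc)
        for lat, lon in occ
    )
    return covered / len(occ)

for species in occurrences:
    print(species, "sampling ratio:", round(sampling_ratio(species), 2))
```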

"If they disappear, they are gone," said Khoury. "Extinction is forever, which is a loss not only in terms of their evolution and persistence on the planet, but also a loss to the future of our food."

"Our main finding is that more conservation work needs to be done to ensure that these wild species are well represented in gene banks, and are also adequately protected in their natural habitats," said Khoury, who is also a researcher at the United States Department of Agriculture and Saint Louis University. "We were able to produce maps that can help indicate to plant collectors and to land managers where the most significant gaps are in terms of current conservation, including where you might go to find and protect many species in hotspots of diversity".

A global effort for a global concern

The work also highlights the extent to which the wild relatives of vegetables have not been a priority for conservation when compared to other crops.

"Since they aren't cereal commodities, vegetables get less attention, especially when it comes to their wild relatives. But for health and sustainability reasons, these are the kind of crops that researchers should be devoting more of their time to," said Khoury.

The collection of studies is a big step toward providing foundational information about the wild relatives of these four globally important vegetable crops.

Contributors included botanists, geographers, crop breeders, and conservationists from international and national agricultural research organizations and leading universities. They drew upon their expertise, combined with vast amounts of publicly available research data, for the studies. They also used global climate information to assess which species might have the most useful adaptations to heat, cold, drought, and other crop production challenges.

Finally, they assessed how well the species are represented in current international and national gene banks, as well as how well safeguarded the species are within officially designated protected areas.

Chile peppers, pumpkins, carrots, and lettuce are among the most widely consumed vegetables in the world, with the first three crops providing essential nutrients such as vitamins A and C. Research on such crops has been limited compared to that on cereals and starchy tubers such as wheat, maize, rice, and potatoes, despite the widely acknowledged need for essentially all people worldwide to consume more vegetables. Because of the lack of research, these crops are often much less productive than grains and tubers. At the same time, they require more resources, including water and land, and are generally more sensitive to climate change and pests and diseases.

"Filling the gaps in information about the wild relatives of vegetable crops such as chile and bell peppers will help these crops fulfill the nutritional roles they will need to in the future," said Derek Barchenger, a plant breeder at the World Vegetable Center, located in Taiwan, who was involved in the chile research.

"The results reveal at high resolution the geography of the wild relatives of these important crops. This is of interest not only to conservation, but also to better understand the origins and diversification of these species over millions of years, and even possibly to shed further light on where the crops may have been domesticated," said Heather Rose Kates, a postdoctoral associate at Florida Museum of Natural History.

"Our research outlines some of the major breeding challenges that the crops face, in terms of climatic stresses, for example, heat and drought for carrots," said Najla Mezghani, the curator of vegetable crops in the National Genebank of Tunisia who was involved in the wild carrot research. "We determined which populations of wild relatives might have adaptations to these stresses that can make them particularly useful in plant breeding".

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

Perinatal exposure to flame retardant alters epigenome, predisposing to metabolic disease

image: Alexander Suvorov is an associate professor of environmental sciences in the UMass Amherst School of Public Health and Health Sciences.

Image: 
UMass Amherst

Studies have shown that perinatal exposure of rats and mice to common flame retardants found in household items permanently reprograms liver metabolism, often leading later in life to insulin resistance and non-alcoholic fatty liver disease.

Now, research led by University of Massachusetts Amherst environmental toxicologist Alexander Suvorov, with co-authors in Moscow, Russia, has identified the likely mechanism responsible for the pollutant's effect: an altered liver epigenome. The epigenome refers to heritable changes in gene expression without changes in the DNA sequence. "Changes in the liver epigenome can explain those functional changes in the liver," Suvorov explains. "We looked at two different epigenetic mechanisms and there were changes in both."

Published Dec. 13 in the medical journal Epigenomics, the study showed that environmentally relevant exposure to polybrominated diphenyl ether (PBDE) through the umbilical cord and breast milk permanently changed liver metabolism in rats. The mother rats were fed enough PBDEs to cause concentrations in their fat similar to those found in humans living in big cities in the U.S.

"The pups never got exposed directly, yet it altered the way their liver works forever," says Suvorov, associate professor of environmental health sciences in the School of Public Health and Health Sciences. "Normally when you remove the stressor, the organ will recover. But in this case, it's not recovering. Epigenetic changes can persist in a row of cellular divisions and can even propagate through generations."

The findings are potentially applicable to humans, a hypothesis Suvorov and colleagues will explore in a new study funded by a $230,000 grant from the National Institute of Environmental Health Sciences.

Suvorov says the new research in humans could begin to tie prenatal exposure to flame retardants - present in everything from baby pajamas to plastics and furniture - to an increased risk in adulthood of diabetes and other metabolic disorders, as well as heart disease. "Our research may have a tremendous impact on public health and public health spending," he says.

In the U.S., concentrations of PBDEs in human tissues are still increasing, even though the industry stopped using the flame retardants in 2013, five years after Europe phased out their use due to health concerns. "These chemicals are extremely stable, and they bioaccumulate and bioconcentrate," Suvorov says. "Likely we will be exposed for another 50 years or so. Even more important, we have never deeply analyzed the long-term effects of exposure."

Suvorov and colleagues will use data and samples from the GEStation, Thyroid and Environment (GESTE) prospective birth cohort study in Quebec, Canada, which was designed to investigate flame retardant toxicity in children. By following the same individuals over time, prospective cohort studies allow researchers to establish exposure level before outcome is known, providing stronger evidence than other study designs.

Between 2007 and 2009, the GESTE study began following 269 women who were less than 20 weeks pregnant. Suvorov will look for associations between PBDE levels in maternal blood and the activity of a protein known as mTOR in the baby's placenta. mTOR is thought to mediate the changes in liver metabolism caused by PBDE exposure. Researchers also will evaluate the effects of PBDE exposure on childhood lipid levels by examining the lipid profiles and markers of liver injury in the children at age 8-9.

"We hypothesize that high PBDE levels are associated with higher triglycerides in childhood," Suvorov says. "And the diseases come later. What will happen to them at age 50? That is my major research question."

Credit: 
University of Massachusetts Amherst

Salmon lose diversity in managed rivers, reducing resilience to environmental change

image: Chinook salmon smolts are facing increasingly warm waterways in order to reach the ocean.

Image: 
Photo: Rachel Johnson, NOAA Fisheries/University of California, Davis

The manipulation of rivers in California is jeopardizing the resilience of native Chinook salmon: it compresses their migration timing to the point that the fish crowd their habitats and may miss the best window for entering the ocean and growing into adults, new research shows.

The good news is that even small steps to improve their access to habitat and restore natural flows could boost their survival.

The curtailment of high winter river flows by dams means that they no longer provide the cue for the smallest fish to begin their migration to the ocean. The loss of wetlands in the Sacramento-San Joaquin Delta leaves little of the refuge habitat they need to grow along the way. Meanwhile later-migrating fish suffer from rising summer temperatures that reduce their survival even though they migrate at a larger size.

Fish that begin their migration in mid-spring are the ones that survive best and dominate adult salmon returns to rivers such as the Stanislaus. These results come from a study published this week in Global Change Biology. Flow alteration and habitat loss have in effect homogenized the survival opportunities of salmon in this highly managed river system, researchers wrote.

That diminishes what is called the "portfolio effect," where a diversity of salmon migration strategies helps the fish cope with changing environmental conditions. This is similar to how diversified investments help buffer your financial portfolio against jolts in the stock market. Chinook salmon in California evolved diverse migration timing to handle the wide variation in climate, ocean, and river conditions in the Central Valley region. This is also important as climate change and rapidly changing, "whiplash weather" patterns further alter the picture.

"You never know what's going to be a winning strategy in the future," said Anna Sturrock of the University of California, Davis, and lead author of the research that also included scientists from several other agencies and universities. "Keeping options on the table is the best strategy, but that is not what we see happening."

The research also found that the lower flows released from dams tend to reduce fish production. This is likely due to reduced access to floodplain habitats and lower food production in rivers.

Biologists analyzed two decades of salmon migration data and tracked seven generations of Chinook salmon in the Stanislaus River using chemical signals in their ear bones, called otoliths. Otoliths grow in proportion with the salmon and reflect the chemistry of the surrounding water. Researchers can use them to trace the way fish travel to the sea and gauge their size when they move among habitats.

The use of otoliths made it possible to track very young juvenile salmon called fry that are too small to carry the electronic tags typically used for such research. This alternative approach revealed that large numbers of migrating fry can survive to adulthood--if they can find freshwater rearing habitat where they can grow along the way.
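
The back-calculation idea behind this can be sketched simply, with invented numbers rather than the study's reconstruction method: if the otolith grows roughly in proportion to the fish, the otolith radius at which the water-chemistry signal shifts gives an estimate of the fish's size when it left fresh water.

```python
# Simplified otolith back-calculation sketch (illustration only, invented values):
# estimate fish length at the point where the otolith chemistry records the
# freshwater-to-saltwater transition, assuming proportional growth.
otolith_radius_at_capture_um = 900.0    # total otolith radius at capture
fork_length_at_capture_mm = 80.0        # fish length at capture
radius_at_chemistry_shift_um = 340.0    # radius where the chemical signal shifts

length_at_exit_mm = fork_length_at_capture_mm * (
    radius_at_chemistry_shift_um / otolith_radius_at_capture_um
)
print(f"estimated size at freshwater exit: {length_at_exit_mm:.0f} mm")
```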

"That tells us there is this other life history strategy that may be really important," said Rachel Johnson, a research fisheries biologist at NOAA Fisheries' Southwest Fisheries Science Center and senior author of the research. "Tracking the smaller fish through their otoliths provides important new insights into Chinook salmon dynamics that have otherwise been missing from the picture."

The trouble is, less than 3 percent of wetland habitat remains in the Sacramento-San Joaquin Delta. This leaves the small, early migrating fry without the feeding and rearing refuge they need to grow and thrive on their seaward journey.

The authors say that even minor steps to restore some of the natural fluctuations in river flow could benefit salmon by helping maintain some of their valuable diversity. Fry migrate early in such great numbers that even small improvements in their survival rates through the Delta could yield many more fish to help boost adult returns.

"As the climate gets more unpredictable, we need to think about incorporating bet-hedging into river management rather than manipulating the environment in ways that limit options for fish," Sturrock said. "The more options that are left on--or added to--the table the better chance that some fish will be in the right place at the right time."

Credit: 
NOAA Fisheries West Coast Region

Freestanding microwire-array enables flexible solar window

image: a, Optical images of transparency-controlled microwire-embedded polymer composite films in freestanding form; visible transparencies of I = ~50%, II = ~40%, III = ~25% and IV = ~10%. b, Colour coordinates of the transparent solar cells in this study, verifying the neutral-colour transparency. c, Electric field intensity in the microwires, which can manipulate the light absorption. d, Light J-V characteristics of the transparent solar cells based on the freestanding film with the advanced light-absorption technique; the visible transparency of each device is I = ~50%, II = ~40%, III = ~25% and IV = ~10%. e, Schematic of the transparent solar cell based on the freestanding film; the p-type polymer is at the top portion and the transparent conductive film is at the bottom. f, Light J-V curves of the transparent solar cells in the bending state at different bending radii.

Image: 
by Sung Bum Kang, Ji-Hwan Kim, Myeong Hoon Jeong, Amit Sanger, Chan Ul Kim, Chil-Min Kim and Kyoung Jin Choi

Transparent solar cells (TSCs) are emerging devices that combine the advantages of visible transparency and light-to-electricity conversion. One of the most valuable prospective applications of such devices is their integration into buildings, vehicles, or portable electronics. Colour perception and flexibility are therefore as important as efficiency. Existing transparent solar cells are based predominantly on organics, dyes, perovskites and amorphous Si; however, the colour-tinted transparency or rigidity of those devices strongly limits their utility in real-world applications.

In a new paper published in Light: Science & Applications, scientists from the Department of Materials Science and Engineering at the Ulsan National Institute of Science and Technology, Republic of Korea, and co-workers developed flexible, efficient and colour-neutral transparent solar cells. Based on silicon microwires embedded in a transparent polymer matrix, they demonstrated transparent, flexible and even stretchable solar cells. This freestanding film was successfully used as the building block of transparent solar cells: a heterojunction between n-type Si and a p-type polymeric semiconductor is formed at the top portion of the device, and a transparent conductive film sits at the bottom. The transparency of the devices can be tuned from ~10% to 55% by adjusting the spacing between the microwires. Moreover, the performance was maintained without a significant decrease after cyclic bending tests and in the bent state, indicating that the transparent solar cells have excellent flexibility. This stretchable, transparent, freestanding microwire-array/polymer composite film is promising for future transparent solar cells.
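
The link between microwire spacing and transparency can be made concrete with a toy geometric model (an assumption-laden illustration, not the paper's optics): the fraction of area left uncovered between the wires sets an upper bound on visible transparency, while the covered fraction bounds how much light can be absorbed and converted, which is the trade-off discussed below.

```python
# Toy geometric model of the transparency/absorption trade-off for a periodic
# microwire array of width w and pitch p. The dimensions are assumed values.
wire_width_um = 2.0

for pitch_um in (4.0, 5.0, 8.0, 20.0):
    coverage = wire_width_um / pitch_um      # fraction of area occupied by absorbing wires
    transparency = 1.0 - coverage            # light passing between the wires
    print(f"pitch {pitch_um:4.1f} um: wire coverage {coverage:.2f}, "
          f"geometric transparency {transparency:.2f}")
```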

Transparent solar cells face an unavoidable trade-off between energy generation and light admission: some efficiency must inevitably be sacrificed to achieve transparency, which makes high performance difficult to reach. To overcome this issue, the scientists describe their strategy for obtaining high-performance transparent solar cells:

"Basically, the solar cells can generate the electricity from the absorbed light. Therefore, attaining the high performance as maintaining the transparency is very challenging. We revealed absorption mechanism inside Si microwire-array and developed the new morphology of microwire which can manipulate the path of light. As a result, we successfully enhanced light absorption which can contribute to the light-generated-current in the transparent solar cells maintaining the transparency"

"The transparent solar cell based on the freestanding film with advanced light-absorption technique shows the power conversion efficiency over 8 % at 10 % of visible transparency, which are comparable to state-of-the-art neutral-colour transparent solar cells based on organics and perovskites." they added.

"Moreover, the devices are based on the Si wafer already verified and widely used in the Si solar cell market. Therefore, this robust, ultra-light and flexible platform is feasible and promising as a commercialized transparent solar cell for practical application in the future" the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Finding a non-invasive way to predict effectiveness of cancer therapy

image: (A) Reversible nonspecific uptake due to antibody in vascular tissue compartment and antibody entering tissue interstitium through paracellular pores, and through endothelial cells mediated by neonatal Fc-receptor, leaving tissue by convective transport through lymph flow. (B) Irreversible nonspecific uptake due to mAb degradation in lysosome, followed by residualization of Zr-89. (C) Specific uptake due to target engagement (target binding and internalization of mAb-target antigen).

Image: 
Adapted from Lobo et al., J Pharm Sci. (2004;93), and Chen et al., AAPSJ (2012;14).

Researchers have taken a critical step toward developing a non-invasive nuclear medicine technique that can predict the effectiveness of therapy for cancerous tumors, allowing for personalized, precision treatment. The study is featured in the December issue of The Journal of Nuclear Medicine.

89Zr-immuno-PET is a noninvasive, whole-body imaging technique with the potential to predict the effectiveness of therapeutic antibodies (or their conjugates) in treating tumors. This is a significant advance: currently, the only ways to measure this are through tissue sampling, which is invasive and noncomprehensive, or by measuring concentrations of monoclonal antibodies in blood samples.

"This study provides proof of concept that PET imaging with 89Zr-labeled antibodies can be used to assess physiological components of antibody biodistribution," explains Yvonne Jauw, MD, at the Cancer Center Amsterdam, Amsterdam UMC in The Netherlands. "This research enables us to apply molecular imaging as a noninvasive clinical tool to measure antibody concentrations in normal tissues."

In this retrospective analysis of clinical 89Zr-immuno-PET studies, data from 128 PET scans were collected from Amsterdam UMC; CHU Lille in Lille, France; and Memorial Sloan Kettering Cancer Center in New York, New York. The scans were of 36 patients and were done one to seven days after injection with the appropriate 89Zr-labeled antibodies for imaging their tumors. Nonspecific uptake was defined as uptake measured in tissues without known target expression (normal tissue).

The results show that imaging with 89Zr-immuno-PET can be used to optimize detection of tumors throughout a patient's body. Nonspecific uptake of monoclonal antibodies in tissues without target expression can be quantified using 89Zr-immuno-PET at multiple time points. These results form a crucial basis for measurement of target engagement by therapeutic antibodies in a living person. For future studies, a pilot phase, including at least three scans at one or more days after injection, is needed to assess nonspecific uptake as a function of time and to optimize study design for detection of target engagement and effectiveness against tumors.
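
As a hedged illustration of what quantifying nonspecific uptake as a function of time might involve (this is not the study's analysis, and the values are invented), one could fit a simple mono-exponential clearance curve to uptake measured in a normal tissue over several days after injection.

```python
# Illustrative sketch only: fit a mono-exponential curve to (invented)
# decay-corrected uptake values in a non-target tissue over several days.
import numpy as np
from scipy.optimize import curve_fit

days = np.array([1.0, 3.0, 5.0, 7.0])        # days after injection
uptake = np.array([4.2, 3.1, 2.4, 1.9])      # e.g. standardized uptake values (invented)

def mono_exp(t, a, k):
    return a * np.exp(-k * t)

(a, k), _ = curve_fit(mono_exp, days, uptake, p0=(5.0, 0.1))
print(f"fitted initial uptake {a:.2f}, clearance rate {k:.3f}/day, "
      f"half-life {np.log(2) / k:.1f} days")
```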

The study has important implications for patients, as Jauw points out: "Knowledge of antibody distribution to normal tissues and tumors can be used to increase our understanding of which drugs will be effective and which drugs are likely to cause toxicity." In the future, this clinical tool could also be used in the selection of monoclonal antibodies during drug development, as well as the selection of patients who could benefit from a specific treatment.

Credit: 
Society of Nuclear Medicine and Molecular Imaging

High-tech method for uniquely targeted gene therapy developed

Neuroscientists at Lund University in Sweden have developed a new technology that engineers the shell of a virus to deliver gene therapy to the exact cell type in the body that needs to be treated. The researchers believe that the new technology can be likened to dramatically accelerating evolution from millions of years to weeks.

Several of the new revolutionary treatments that have been used clinically in recent years to treat complex diseases - such as spinal muscular atrophy and enzyme deficiency - are based on gene therapy.

With gene therapy, the genetic material is controlled or altered using biological drugs. Examples of this are the gene scissors CRISPR/Cas9 and the so-called CAR-T cells that are used to treat various forms of cancer. This type of treatment is often engineered by growing viruses in the laboratory. The viruses are altered so that they are harmless and can deliver new genetic material to the body's cells, replacing damaged genes. The virus's own genome, which is required for it to spread, has been completely removed.

In the last five years, neuroscientist Tomas Björklund and his research group have developed a process that tailors these virus shells, or virus capsids, so that they can reach precisely the cell type in the body that needs to be treated, for example nerve cells. The process combines powerful computer simulations and modeling with the latest gene technology and sequencing technology.

"Thanks to this technology, we can study millions of new virus variants in cell culture and animal models simultaneously. From this, we can subsequently create a computer simulation that constructs the most suitable virus shell for the chosen application- in this case, the dopamine-producing nerve cells for the treatment of Parkinson's disease", says Tomas Björklund, senior lecturer in translational neuroscience at Lund University.

"You can view this as dramatically speeding up evolution from millions of years to weeks. The reason we can do this is that we study each "generation" of the virus in parallel with all the others in the same nerve cells. Unlike evolution, where only the best suited live on to the next generation, we can also learn what makes the virus work less well through this process. This is crucial when building computer models that interpret all the information", he continues.

With the new method, researchers have been able to significantly reduce the need for laboratory animals, as millions of variants of the same drug are studied in the same individual. They have also been able to move important parts of the study from animals to cell culture of human stem cells.

"We believe that the new synthetic virus we succeeded in creating would be very well suited for gene therapy for Parkinson's disease, for example, and we have high hopes that these virus vectors will be able to be put into clinical use. Together with researchers at Harvard University, we have established a new biotechnology company in Boston, Dyno Therapeutics, to further develop the virus engineering technology, using artificial intelligence, for future treatments", concludes Tomas Björklund.

Credit: 
Lund University

Neural network for elderly care could save millions

If healthcare providers could accurately predict how their services would be used, they could save large sums of money by not having to allocate funds unnecessarily. Deep learning artificial intelligence models can be good at predicting the future given previous behaviour, and researchers based in Finland have developed one that can predict when and why elderly people will use healthcare services.

Researchers at the Finnish Centre for Artificial Intelligence (FCAI), Aalto University, the University of Helsinki, and the Finnish Institute for Health and Welfare (THL) developed a so-called risk adjustment model to predict how often elderly people seek treatment in a healthcare centre or hospital. The results suggest that the new model is more accurate than traditional regression models commonly used for this task, and can reliably predict how the situation changes over the years.

Risk-adjustment models make use of data from previous years, and are used to allocate healthcare funds in a fair and effective way. These models are already used in countries like Germany, the Netherlands, and the US. However, this is the first proof-of-concept that deep neural networks have the potential to significantly improve the accuracy of such models.

'Without a risk adjustment model, healthcare providers whose patients are ill more often than average people would be treated unfairly,' says Pekka Marttinen, Assistant Professor at Aalto University and FCAI. Elderly people are a good example of such a patient group. The goal of the model is to take these differences between patient groups into account when making funding decisions.

According to Yogesh Kumar, the main author of the research article and a doctoral candidate at Aalto University and FCAI, the results show that deep learning may help design more accurate and reliable risk adjustment models. 'Having an accurate model has the potential to save several millions of dollars,' Kumar points out.

The researchers trained the model by using data from the Register of Primary Health Care Visits of THL. The data consists of out-patient visit information for every Finnish citizen aged 65 or above. The data has been pseudonymized, which means that individual persons cannot be identified. This was the first time researchers used this database for training a deep machine learning model.

The results show that training a deep model does not necessarily require an enormous dataset in order to produce reliable results. Instead, the new model worked better than simpler, count-based models even when it made use of only one tenth of all available data. In other words, it provides accurate predictions even with a relatively small dataset, which is a remarkable finding, as acquiring large amounts of medical data is always difficult.
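
The modelling comparison can be pictured with a minimal sketch: a count-based baseline (here a Poisson regression) against a small neural network, both predicting next-year visit counts from prior-year usage. The data are simulated, and neither the features nor the architecture reflect the FCAI/THL model.

```python
# Minimal sketch of the modelling idea, not the published model or data:
# compare a count-based baseline with a small neural network on simulated
# healthcare-visit counts.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
X = rng.poisson(lam=[3.0, 1.0, 0.5], size=(n, 3)).astype(float)  # prior visits by type
rate = np.exp(0.3 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2])           # non-linear interaction
y = rng.poisson(rate)                                            # next-year visit counts

X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]

baseline = PoissonRegressor(max_iter=300).fit(X_train, y_train)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000,
                   random_state=0).fit(X_train, y_train)

print("count-based MAE:", round(mean_absolute_error(y_test, baseline.predict(X_test)), 2))
print("neural net MAE :", round(mean_absolute_error(y_test, net.predict(X_test)), 2))
```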

'Our goal is not to put the model developed in this research into practice as such but to integrate features of deep learning models into existing models, combining the best of both. In the future, the goal is to make use of these models to support decision-making and allocate funds in a more reasonable way,' explains Marttinen.

The implications of this research are not limited to predicting how often elderly people visit a healthcare centre or hospital. Instead, according to Kumar, the researchers' work can easily be extended in many ways, for example, by focusing only on patient groups diagnosed with diseases that require highly expensive treatments or healthcare centers in specific locations across the country.

Credit: 
Aalto University

Finding a killer electron hot spot in Earth's Van Allen radiation belts

image: Multi-point satellite observations by JAXA/Arase and NASA/Van Allen Probes
Electrons detected at Van Allen Probes position (left) drift to the Arase position (right)

Image: 
ERG Science Team

A collaboration between researchers in Japan, the USA, and Russia has found a hot spot in Earth's radiation belt where killer electrons, which can cause serious anomalies in satellites, form. The finding, published in the journal Geophysical Research Letters, could help scientists more accurately forecast when these killer (relativistic) electrons will form.

Professor Yoshizumi Miyoshi of the Institute for Space-Earth Environmental Research at Nagoya University and colleagues compared data from two satellites situated on opposite sides of the Earth: the Arase satellite, developed by the Japanese Aerospace Exploration Agency (JAXA), and NASA's Van Allen Probes. Both satellites gather data from the Van Allen radiation belts, zones of energetic particles originating largely from the solar wind. Energetic particles in the belts are trapped by Earth's magnetic field.

Scientists have known that electrons in Van Allen radiation belts that interact with ultralow frequency plasma waves accelerate to nearly the speed of light. However, it has not been clear when or where these killer electrons start to accelerate.

To gain more insight into the electrons, Professor Miyoshi and his colleagues analyzed data generated on March 30, 2017, by the Arase satellite and the Van Allen Probes. On one side of the Earth, a Van Allen Probe identified characteristic signs of an interaction between ultralow frequency waves and energetic electrons. On the opposite side, at the same point in time, the Arase satellite identified high-energy electron signatures, but no ultralow frequency waves.

The measurements indicate that the interaction region between electrons and waves is limited, but that the killer electrons then continue to travel on an eastward path around the Earth's magnetosphere.

"An important topic in space weather science is understanding the dynamics of killer electrons in the Van Allen radiation belt," says Miyoshi. "The results of this study will improve the modelling and lead to more accurate forecasting of killer electrons in Van Allen radiation belts."

Credit: 
Nagoya University

Simultaneous emission of orthogonal handedness in circular polarization

image: a, Schematic diagrams of the fabrication process of the circular polarization-emitting device (i, 1st rubbing of AL22636 coated on CuPc; ii, spin coating and drying of the F8BT layer; iii, rubbing the F8BT (2nd rubbing) in a different direction from the 1st rubbing; iv, coating optical adhesive (NOA) on the rubbed F8BT; v, thermally annealing the sample at the liquid crystalline temperature of the F8BT; vi, cooling the sample and peeling off the NOA; vii, sequential TPBi/LiF/Al deposition in vacuum. An AFM image and the corresponding Fourier-transformed image show the 2nd-rubbed surface of the F8BT; the scale bar represents 5 μm and the arrows indicate the rubbing directions). b, Schematic diagram of the simultaneous emission with orthogonal handedness in circular polarization from a single emitting layer. The multi-directionally rubbed AL22636 surface and the uni-directionally rubbed F8BT surface produce reverse twisted structures. c, Microscopic textures and d, PL textures under LH (top image) and RH (bottom image) circular polarizers. e, CPEL spectra for the 1st (top spectra) and 2nd (bottom spectra) quadrants of the sample in c. Spectra measured without a circular polarizer, and with LH and RH circular polarizers, are presented by black (IT), red (IL), and blue (IR) solid lines, respectively.

Image: 
by Kyungmin Baek, Dong-Myung Lee, Yu-Jin Lee, Hyunchul Choi, Jeongdae Seo, Inbyeong Kang, Chang-Jae Yu, and Jae-Hoon Kim

Control of the polarization of light is a key feature for displays, optical data storage, optical quantum information, and chirality sensing. In particular, the direct emission of circularly polarized (CP) light has attracted great interest because of the enhanced performance of displays such as organic light-emitting diodes (OLEDs) and light sources for characterizing the secondary structure of proteins. To actually produce the CP light, the luminescent layer should contain chiral characteristics, which can be achieved, for example, by decorating the luminophores with chiral materials or doping chiral molecules into achiral materials. However, such chirality of the luminescent layer makes it possible to generate only one kind of CP light in an entire device since it is difficult to control the chiral sense spatially.

In a new paper published in Light: Science & Applications, scientists from the Department of Electronic Engineering at Hanyang University, Republic of Korea, demonstrated a device that simultaneously emits circular polarization of orthogonal handedness from an achiral luminophore with a liquid crystalline (LC) phase. By rubbing the alignment of the luminophores at its upper and lower surfaces in different directions, the luminescent layer is continuously twisted, and light passing through it emerges as right-handed (RH) or left-handed (LH) CP light without any chiral component. More interestingly, the twist sense is determined by the rubbing directions at the upper and lower surfaces. As a result, by generating multiple alignment directions on the lower surface of the achiral luminophore and a unidirectional alignment on its upper surface, a light-emitting device with orthogonal handedness in circular polarization was implemented with a single achiral luminophore. This experimental demonstration highlights the feasibility of light sources with multiple polarizations, including orthogonal CP states, paving the way towards novel applications in biosensors as well as optical devices such as OLEDs.
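
The degree of circular polarization in such emission is commonly summarized by the dissymmetry factor g = 2(I_L - I_R)/(I_L + I_R), computed from spectra measured through left- and right-handed circular polarizers (the IL and IR curves in the figure). The short sketch below uses synthetic spectra purely to show the calculation; it is not data from the paper.

```python
# Illustrative calculation of the dissymmetry factor from synthetic left- and
# right-handed emission spectra (placeholder data, not the paper's measurements).
import numpy as np

wavelength_nm = np.linspace(500, 650, 151)
peak = np.exp(-0.5 * ((wavelength_nm - 560) / 25.0) ** 2)   # synthetic emission band
I_left = 1.3 * peak      # emission through a left-handed circular polarizer
I_right = 0.7 * peak     # emission through a right-handed circular polarizer

g = 2.0 * (I_left - I_right) / (I_left + I_right + 1e-12)
print("dissymmetry factor at the emission peak:", round(g[wavelength_nm == 560][0], 2))
```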

In a conventional OLED, a circular polarizer in front of the panel is required to prevent reflection of ambient light from the metal electrode, so only half of the light extracted from the panel reaches the eye. Direct emission of CP light with the same handedness as that circular polarizer can therefore increase the efficiency of the emitted light. A highly efficient OLED is realized by directly generating a high degree of CP light, which is achieved through the twisted structure of the LC luminophore; the twist sense is governed by the different boundary conditions at its upper and lower surfaces. In addition, the degree of CP light in the twisted luminophore was calculated theoretically using Mueller matrix analysis, and the CP light-emitting mechanism was confirmed. The scientists summarize the scientific achievement of their CP light-emitting device:

"For the first time, we demonstrate direct CP light emissions by using a twisted achiral conjugate polymer without any chiral component by introducing different boundary conditions in upper and lower surfaces of the polymer. By patterning different alignment directions on its one of polymer surfaces, patterned CP light with various polarization states can be achieved through the fabricating process proposed herein. Also, twisting limitation of the polymer by surface boundary conditions was systematically analyzed based on the surface anchoring energy model and the degree of CP light was theoretically calculated based on the Mueller matrix analysis."

"The fabricating process and theoretical analysis proposed herein emphasizes the feasibility of the light source with multi-polarization, including orthogonal CP states, thereby paving the way towards novel applications in biosensors as well as optical devices such as OLEDs" the scientists forecast.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Knowledge-sharing: a how-to guide

image: Effective knowledge-sharing, especially between scientists in different fields, is crucial for interdisciplinary research

Image: 
University of Göttingen

How is knowledge exchanged and shared when interdisciplinary research teams work together? Professor Margarete Boos and Lianghao Dai from the University of Göttingen have investigated this by studying several different research projects. Their study makes concrete recommendations for how teams can best work together and achieve effective collaborations. The results have been published in the journal Nature.

"We observed two fundamental patterns of knowledge exchange and integration in interdisciplinary research teams," says Boos. "The first, which we refer to as the "theory-method interdisciplinary collaboration pattern", involves one party providing a theoretical understanding, and the other offering methods for collecting and analysing the data. The second, which we called the "technical interdisciplinary collaborative pattern", is characterised by the exchange of learning tools such as algorithms and technical know-how to solve a shared research question".

The researchers identified these patterns by conducting intensive fieldwork on three interdisciplinary collaborative projects at a German university. The studies included different methods to investigate both the cognitions and the interactions of the members of the interdisciplinary teams. "Using the cognitive mapping method, the participants were able to show how the ideas and knowledge of the team members were being shared in collaborative exchange and integrated into a common knowledge structure," says Dai.

"From this, we have developed recommendations for effective working in interdisciplinary teams. For instance, we found that interdisciplinary teams can save time and money if they agree on goals, communication rules and research tools in a kick-off meeting. They also need to discuss the understanding of their basic concepts - which is usually different! Further on in the course of the project, it is helpful to explicitly agree on the way in which research results are to be integrated."

Credit: 
University of Göttingen