Tech

Palm oil production can grow without converting rainforests and peatland

image: Bunches in an oil palm plantation in Indonesia. It takes about 38 weeks from initiation until bunches are ready for harvest.

Image: 
Hendra Sugianto/University of Nebraska-Lincoln

Lincoln, Neb., March 25, 2021 -- Palm oil, the most important source of vegetable oil in the world, is derived from the fruit of perennial palm trees, which are farmed year-round in mostly tropical areas. The palm fruit is harvested manually every 10 days to two weeks, then transported to a mill for processing, and ultimately exported and made into a dizzying array of products from food to toiletries to biodiesel.

"You probably ate palm oil for breakfast," said Patricio Grassini, an associate professor of agronomy at the University of Nebraska-Lincoln. "There is probably palm oil in your shampoo and for sure palm oil in your makeup."

Dozens of countries produce palm oil, but Indonesia produces approximately two-thirds of the world's supply, and demand for the product is ever-growing.

This is a double-edged sword for Indonesia and other palm-oil producing countries, Grassini said. Palm oil is a major export and contributes to the economic stability of countries that are major producers, as well as to the individual farmers who produce it. But to keep up with demand, rainforests and peatlands -- valuable ecosystems that contribute greatly to biodiversity -- are often converted to palm production.

A four-year research project led by Grassini and supported by a $4 million grant from the Norwegian Ministry of Foreign Affairs suggests that keeping up with demand may not necessarily mean converting more valuable, fragile ecosystems into agricultural land.

According to research published March 25 in Nature Sustainability, palm oil yields on existing farms and plantations could be greatly increased with improved management practices. Researchers from the Indonesian Oil Palm Research Institute, the Indonesian Agency for Agriculture Research and Development, and Wageningen University in the Netherlands were also part of this project.

In Indonesia, about 42% of land used for palm oil production is owned by smallholder farmers, with the rest managed by large plantations, said Juan Pablo Monzon, a UNL research assistant professor of agronomy and horticulture and first author of the published paper. "There is great potential to increase productivity of current plantations, especially in the case of smallholders' farms, where current yield is only half of what is attainable."

The research shows that palm farmers have significant opportunity to increase their production, said Grassini, one of the developers of the Global Yield Gap Atlas, a collaboration between UNL and Wageningen University in the Netherlands designed to estimate the difference between actual and potential yields for major food crops worldwide including palm oil.

"The potential impact is huge, and if we are able to realize some of that potential, that means a lot in terms of reconciling economic and environmental goals," Grassini said. "If we can produce more, we don't need to expand into new areas. But this would require the effective implementation of current Indonesia government policy and assuring that regulations are enforced so that intensification and productivity gains translate into sparing critical natural ecosystems."

The gap between current and attainable yields could be bridged by implementing good agronomic practices, Monzon said. As a result, the country could produce 68% more palm oil on the existing plantation area on mineral soils.
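
For readers who want to see the arithmetic, a minimal sketch of the yield-gap calculation is shown below. The numbers are hypothetical illustrations, not the study's data; the paper's 68% national figure comes from its full analysis of smallholder and plantation areas.

```python
# Illustrative sketch of the yield-gap arithmetic (hypothetical numbers, not the paper's data).
def yield_gap(actual, attainable):
    """Return the absolute and relative yield gap."""
    gap = attainable - actual
    return gap, gap / attainable

# Example: a smallholder farm producing half of what is attainable.
actual_yield = 3.5      # tonnes of palm oil per hectare per year (assumed)
attainable_yield = 7.0  # tonnes per hectare per year (assumed)

gap, relative_gap = yield_gap(actual_yield, attainable_yield)
print(f"Yield gap: {gap:.1f} t/ha ({relative_gap:.0%} of attainable yield)")

# Fully closing the gap on such a farm would double its output (+100%); the paper's
# national estimate of +68% reflects a mix of smallholder farms and large plantations
# that are already closer to their attainable yields.
increase = attainable_yield / actual_yield - 1
print(f"Potential production increase on this farm: {increase:.0%}")
```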

Grassini and other researchers identified key management practices that could lead to larger yields. Those practices include improved harvest methods, better weed control, improved pruning and better plant nutrition. Grassini and other researchers now are working with producers, non-government organizations, Indonesian government officials and a host of other partners to put these management techniques into practice. Already they have begun to see improvements in yields.

This is exciting from both environmental and economic standpoints, Grassini said. It also stands to have a great impact on the millions of individual farmers who draw their livelihood from small palm farms often comprising just a few acres.

"Whatever we do to help the farmers produce more palm oil on the land that they have directly impacts their income and directly impacts their families," Grassini said. "It could be the difference between sending kids to school or not."

The first phase of the research - the research that identified the yield gap - was surprising, Grassini said. Indonesia had already gone through a period of agricultural intensification that had resulted in better yields for rice and corn, and he hadn't anticipated quite so much room for improvement when it came to palm oil.

But it's the second phase of the research that really excites him. So many people from so many different backgrounds are all working together to fine-tune management strategies and put them into practice. After just 15 months, yields on test plots are already up, with potential for more growth in the future. Robust education and extension efforts will be key to fully exploit the potential for growth, Grassini said.

"I don't think you will find too many projects where people are working side-by-side on the production side, science side and environmental side," Grassini said. "All are bringing real solutions to the table and together can have a massive impact."

Credit: 
University of Nebraska-Lincoln

Distinctively Black names found long before Civil War

image: Abe Livingston, a former slave, was photographed in Texas in the 1930s

Image: 
Library of Congress

COLUMBUS, Ohio - Long before Tyrone, Jermaine and Darnell came along, there were Isaac, Abe and Prince.

A new study reveals the earliest evidence of distinctively Black first names in the United States, finding them arising in the early 1700s and then becoming increasingly common in the late 1700s and early 1800s.

The results confirm previous work that shows the use of Black names didn't start during the civil rights movement of the 1960s, as some scholars have argued, said Trevon Logan, co-author of the study and professor of economics at The Ohio State University.

"Even during slavery, Black people had names that were unlikely to be held by whites. It is not just a recent phenomenon," Logan said.

Logan conducted the study with Lisa Cook of Michigan State University and John Parman of the College of William and Mary. It was published online this month in the journal Historical Methods.

The study focuses on names for Black males, partly because earlier research suggests less distinctiveness of Black women's names historically.

This research is a follow-up to a 2014 study by the same researchers that found distinctive Black names were being used in the period following the Civil War.

It was more difficult to find records that document the names of the enslaved, Logan said. Many official records only list slaves as numbers without names.

The researchers found three sources that did contain the names of enslaved people in the United States. Two of the three sources also included the names of the buyers or sellers of the enslaved, which allowed comparisons between Black and white names. The researchers supplemented the evidence of racial name distinctiveness by analyzing white names in the 1850 Census.

The names given to Black people in the United States were distinctively African American, Logan said. None of them had roots in Africa. Many of them had Biblical origins, like Abraham and Isaac. Other Black names that appeared more frequently in one or more of the data sets included Titus and Prince.

Results showed a clear increase in the use of Black names over the period of the study. In one data set, 3.17% of enslaved males born between 1770 and 1790 held a Black name; that share increased to 4.5% of those born between 1810 and 1830.

And these names were truly distinct from those held by white people. Depending on the data source, enslaved people were four to nine times as likely as slave owners to have a Black name.

The appearance of distinctively Black names wasn't only the result of more African Americans using them, Logan said.

"Our results suggest a strong decline in the use of Black names among whites over time," he said. "The actions of both Black and white people fed into the process that resulted in distinctive Black names."

For white people born before 1770, more than 4.75% held Black names, but that declined to less than 2% for those born from 1810 to 1830.

Many of the Black names identified in this study were the same that the researchers found in the post-Civil War period, but there were some differences.

"Post-emancipation we found more Blacks being named Master and Freeman, which for obvious reasons were not found in the antebellum era," Logan said.

While this study revealed the existence and growth of Black names in the United States, it can't answer why it happened, Logan said.

"We believe these naming practices could say something about culture, about family, and about social formation among Black people of the time," Logan said.

"But we don't have any records of people talking about it at the time, so we're not sure. We know there's this pattern, but we can't say for sure what it means."

Credit: 
Ohio State University

Biocrude passes the 2,000-hour catalyst stability test

image: Wet wastes from sewage treatment and discarded food can provide the raw materials for an innovative process called hydrothermal liquefaction, which converts and concentrates carbon-containing molecules into a liquid biocrude. This biocrude then undergoes a hydrotreating process to produce bio-derived fuels for transportation.

Image: 
(Illustration by Michael Perkins | Pacific Northwest National Laboratory)

RICHLAND, WASH.--A large-scale demonstration converting biocrude to renewable diesel fuel has passed a significant test, operating for more than 2,000 hours continuously without losing effectiveness. Scientists and engineers led by the U.S. Department of Energy's Pacific Northwest National Laboratory conducted the research to show that the process is robust enough to handle many kinds of raw material without failing.

"The biocrude oil came from many different sources, including wastewater sludge from Detroit, and food waste collected from prison and an army base," said John Holladay, a PNNL scientist and co-director of the joint Bioproducts Institute, a collaboration between PNNL and Washington State University. "The research showed that essentially any biocrude, regardless of wet-waste sources, could be used in the process and the catalyst remained robust during the entire run. While this is just a first step in demonstrating robustness, it is an important step."

The milestone was first described at a virtual conference organized by NextGenRoadFuels, a European consortium funded by the EU Framework Programme for Research and Innovation. The work addresses the need to convert biocrude, a mixture of carbon-based polymers, into biofuels. In the near term, most expect that these biofuels will be further refined and then mixed with petroleum-based fuels to power vehicles.

"For the industry to consider investing in biofuel, we need these kinds of demonstrations that show durability and flexibility of the process," said Michael Thorson, a PNNL engineer and project manager.

Biocrude to biofuel, the crucial conversion

Just as crude oil from petroleum sources must be refined to be used in vehicles, biocrude needs to be refined into biofuel. This step provides the crucial "last mile" in a multi-step process that starts with renewables such as crop residues, food residues, forestry byproducts, algae, or sewage sludge. For the most recent demonstration, the biocrude came from a variety of sources including converted food waste salvaged from Joint Base Lewis-McChord, located near Tacoma, Wash., and Coyote Ridge Corrections Center, located in Connell, Wash. The initial step in the process, called hydrothermal liquefaction, is being actively pursued in a number of demonstration projects by teams of PNNL scientists and engineers.

The "last mile" demonstration project took place at the Bioproducts, Sciences, and Engineering Laboratory on the Richland, Wash. campus of Washington State University Tri-Cities. For 83 days, reactor technician Miki Santosa and supervisor Senthil Subramaniam fed a constant flow of biocrude into carefully honed and highly controlled reactor conditions. The hydrotreating process introduces hydrogen into a catalytic process that removes sulfur and nitrogen contaminants found in biocrude, producing a combustible end-product of long-chain alkanes, the desirable fuel used in vehicle engines. Chemist Marie Swita analyzed the biofuel product to ensure it met standards that would make it vehicle-ready.

Diverting carbon to new uses

"Processing food and sewage waste streams to extract useful fuel serves several purposes," said Thorson. Food waste contains carbon. When sent to a landfill, that food waste gets broken down by bacteria that emit methane gas, a potent greenhouse gas and contributor to climate change. Diverting that carbon to another use could reduce the use of petroleum-based fuels and have the added benefit of reducing methane emissions.

The purpose of this project was to show that the commercially available catalyst could stand up to the thousands of hours of continuous processing that would be necessary to make biofuels a realistic contributor to reducing the world's carbon footprint. But Thorson pointed out that it also showed that the biofuel product produced was of high quality, regardless of the source of biocrude -- an important factor for the industry, which would likely be processing biocrude from a variety of regional sources.

Indeed, knowing that transporting biocrude to a treatment facility could be costly, modelers are looking at areas where rural and urban waste could be gathered from various sources in local hubs. For example, they are assessing the resources available within a 50-mile radius of Detroit, Mich. There, the sources of potential biocrude feedstock could include food waste, sewage sludge and cooking oil waste. In areas where food waste could be collected and diverted from landfills, much as recycling is currently collected, a processing plant could be up to 10 times larger than in rural areas and provide significant progress toward cost and emission-reduction targets for biofuels.

Commercial biofuels on the horizon

Milestones such as hours of continuous operation are being closely watched by investor groups in the U.S. and in Europe, which has set aggressive goals, including becoming the first climate-neutral continent by 2050 and achieving a 55% reduction in greenhouse gas emissions by 2030. "A number of demonstration projects across Europe aim to commercialize this process in the next few years," Holladay said.

The next steps for the research team include gathering more sources of biocrude from various waste streams and analyzing the biofuel output for quality. In a new collaboration, PNNL will partner with a commercial waste management company to evaluate waste from many sources. Ultimately, the project will result in a database of findings from various manures and sludges, which could help decide how facilities can scale up economically.

"Since at least three-quarters of the input and output of this process consists of water, the ultimate success of any industrial scale-up will need to include a plan for dealing with wastewater," said Thorson. This too is an active area of research, with many viable options available in many locations for wastewater treatment facilities.

Credit: 
DOE/Pacific Northwest National Laboratory

Researchers discover new organic conductor

image: Fig.1 Crystal Structure of One-Dimensional Charge Transfer Salt with an Infinite Anion Chain (TMTTF)(NbOF4)

Image: 
NINS/IMS

Salts are far more complicated than the food seasoning - they can even act as electrical conductors, shuttling current through systems. Extremely well studied and understood, the electrical properties of salts were first theorized in 1834. Now, nearly 200 years later, researchers based in Japan have uncovered a new kind of salt.

The results were published on March 17 in Inorganic Chemistry, a journal of the American Chemical Society.

The researchers were specifically investigating how one-dimensional versions of three-dimensional substances exhibit unique physical phenomena and functionality, behavior that can be mapped out in a phase diagram.

"We were developing a new substance to deepen our understanding of the phase diagram," said paper author Toshikazu Nakamura, researcher with the Institute for Molecular Science of the National Institutes of Natural Sciences. "In this process, I found a completely new salt."

Tetrathiafulvalene is a sulfur-containing compound that acts as a skeleton for several organic conductor salts. Its molecular structure can be built upon to develop new substances, and it can be easily tweaked to adjust the structural parameters that define the phase diagram. Nakamura was building upon this compound with negatively charged ions and an atomic group derived from carbon disulfide. During this process, the one-dimensional substance transferred an electric charge and converted into an entirely new material.

Conventional organic conductors have an easily deformed lattice structure and are composed of more complicated arrangements, according to Nakamura. The new conductor's negative ions are arranged in an infinite chain structure, stabilizing the atomic arrangements of niobium, oxygen and fluorine. When cooled to 5 kelvin, about -450 degrees Fahrenheit, neighboring sites in the salt begin to couple magnetically.
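
As a quick check of the quoted temperature, the conversion can be done in one line; this is a generic unit conversion, nothing specific to the material.

```python
# Convert 5 kelvin to degrees Fahrenheit: F = K * 9/5 - 459.67
kelvin = 5.0
fahrenheit = kelvin * 9 / 5 - 459.67
print(f"{kelvin} K = {fahrenheit:.1f} F")  # about -450.7 F, i.e. roughly -450 degrees Fahrenheit
```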

"We will investigate this phenomenon in detail -- we want to understand the origin," Nakamura said.

The researchers plan to study other infinite-chain salts, with the goal of understanding the structure and applying it as the skeleton of new organic conductors that behave as one-dimensional electronic systems.

"Our ultimate goal is to understand the electronic state of these systems and what happens when we gradually increase the inter-chain interactions from one dimension to two dimensions," Nakamura said.

Credit: 
National Institutes of Natural Sciences

Gearing up nanoscale machines

image: Star-shaped molecule as a gear prototype. The top arm has a different chemical structure so that the rotation can be followed.

Image: 
Gwénaël Rapenne (NAIST and UPS)

Ikoma, Japan - Gear trains have been used for centuries to translate changes in gear rotational speed into changes in rotational force. Cars, drills, and basically anything that has spinning parts use them. Molecular-scale gears are a much more recent invention that could use light or a chemical stimulus to initiate gear rotation. Researchers at Nara Institute of Science and Technology (NAIST), Japan, in partnership with research teams at University Paul Sabatier, France, report in a new study published in Chemical Science a means to visualize snapshots of an ultrasmall gear train - an interconnected chain of gears - at work.

NAIST project leader Professor Gwénaël Rapenne has devoted his career to fabricating molecular-scale mechanical devices, such as wheels and motors. Researchers recently designed a cogwheel for a molecular gear train but until now had no means to visualize the gears in action.

"The most straightforward way to monitor the motion of molecular gears is through static scanning tunneling microscopy images. For these purposes, one of the teeth of the cogwheels must be either sterically or electrochemically distinct from the other teeth," explains Rapenne.

The researchers first created a molecular cogwheel comprising five paddles, where one paddle is a few carbon atoms longer than the other four paddles. However, as they showed last year, differences in paddle length disrupt the coordinated motion along the gear train. Thus, differences in paddle electrochemistry are a more promising design approach but synthetically more challenging.

"We used computational studies to predict whether electron-withdrawing units or metal chemistry could tailor the electronic properties of a paddle, without changing paddle size," says Rapenne. Such tailored properties are important because one can observe them as differences in contrast by using scanning tunneling microscopy, and thereby facilitate static imaging.

"Our pentaporphyrinic cogwheel prototypes contained one paddle with either a cyanophenyl substituent or a zinc - rather than nickel - metal center," explains Rapenne. "Various spectroscopy techniques confirmed the architectures of our syntheses."

How can researchers use these cogwheels? Imagine shining a highly focused beam of light, or applying a chemical stimulus, to one of the gears to initiate a rotation. By so doing, one could rotate a series of cogwheels in a coordinated manner as in a conventional gear train, but on a molecular scale -- the ultimate miniaturization of devices. "We now have the means to visualize such rotations," notes Rapenne.

By using this development to carry out single-molecule mechanics studies, Rapenne is optimistic that the broad research community will have a powerful new design for integrated nanoscale machines. "We're not there yet, but are working collaboratively to make it happen as soon as possible," he says.

Credit: 
Nara Institute of Science and Technology

How improving acoustic monitoring of bats could help protect biodiversity

image: Dead bat below a wind turbine.

Image: 
Photo: Leibniz-IZW/Christian Voigt

In order to assess the risk of bats dying at wind turbines, it is common practice to record the acoustic activity of bats within the operating range of the rotor blades. For this purpose, ultrasonic detectors are attached to the nacelles at the top of the turbine mast. In a recent analysis, a team of scientists led by the Leibniz Institute for Zoo and Wildlife Research (Leibniz-IZW) concludes that the effectiveness of this acoustic monitoring is insufficient to reliably predict mortality risk, especially for bats at large turbines. They therefore recommend installing supplementary ultrasonic detectors at other locations on the wind turbines and developing additional techniques such as radar and thermal imaging cameras for monitoring. The results of their analysis are published in the scientific journal Mammal Review.

Wind is a renewable energy source that is widely used for electricity generation. One downside of wind energy is that many bats die when they collide with the rotor blades of wind turbines. This is an urgent problem for conservation because all bat species are protected by law owing to their rarity. To find out when the operation of wind turbines poses a threat to bats and when it does not, the temperature and wind conditions at which bats are particularly active at turbines are determined. For this purpose, the echolocation calls of bats are recorded when they fly into the risk zone near the rotor blades. From this, threshold values for wind speed and temperature can be derived for bat-safe operation of wind turbines. Wind turbines then produce electricity only when no bats, or only a few, are active.

"This approach is a good starting point. Its methodological implementation is, however, often insufficient, especially for large wind turbines," summarises bat expert Dr Christian Voigt, Head of the Leibniz-IZW Department of Evolutionary Ecology, together with colleagues from the German Bat Association (Bundesverband für Fledermauskunde Deutschland), the University of Naples Federico II, the University of Bristol and the Max Planck Institute for Ornithology in a joint publication. Automated ultrasonic detectors on the nacelles of wind turbines are usually used for acoustic monitoring. These record the calls of passing bats. "Each bat species produces echolocation sounds at a pitch and volume typical for the species," explains Voigt. He and his colleagues simulated sound propagation using the example of the common noctule, with calls of a low frequency (about 20 kHz) but a high sound pressure level (110 dB), and Nathusius's pipistrelle, with calls at a higher frequency (about 40 kHz) and a lower sound pressure level (104 dB). "Our simulations show that, according to the laws of physics, the calls are attenuated with each metre of distance as they propagate through the air by 0.45 dB per metre for common noctules and by 1.13 dB per metre for Nathusius's pipistrelle" says Voigt. With the widely used detection threshold of 60 dB, ultrasonic detectors record calls of common noctules at a distance of calls up to 40 m away. For Nathusius's pipistrelle, the detection range is on average 17 m. Neither maximum distance is sufficient to completely cover the danger zone of large wind turbines. New turbines in particular have rotor blades of more than 60 m in length, which is well above the detection distance of bats by ultrasonic detectors.

The shape of the bats' sonar beam also means that echolocation calls do not spread evenly in all directions, but are projected mainly forward, in the direction of flight. If bats do not fly directly towards the microphone, the calculated detection range decreases further. In addition, ultrasonic detectors are usually mounted on the underside of the nacelles, so the microphone points downwards. Bat calls above the nacelle are therefore not registered. The focus is on the lower half of the danger zone, although bats can also be found in the upper half.

"At a wind turbine with rotor blades of 60 m length, the detectors only cover a maximum of 23 % of the risk zone for the common noctule and only a maximum of 4 % of the risk zone for Nathusius's pipistrelle, two species with a high risk of colliding with turbines. With modern wind turbines, rotor blade lengths continue to increase, so the relative coverage will be even lower in the future," says Voigt, first author of the article. As a consequence, the existing acoustic monitoring measures do not adequately reflect the collision risk. Therefore, the conditions under which wind turbines are switched off for bat protection are insufficient and many animals therefore continue to die.

In order to improve coverage of the risk zone around the rotor blades, the scientists recommend additional detectors at other locations, e.g. above the nacelle as well as on its lee side. To detect bats circling up the mast of the turbine, it may also be advisable to install ultrasonic detectors directly on the mast. This would also register animals flying at lower levels above ground or collecting insects from the mast surface. Complementary sensor technology such as radar systems or thermal imaging cameras could provide additional information.

Based on the recordings, consultants and researchers can determine the bat species and assess under which conditions (temperature, time of day, wind strength) they are most active. With this information, conditions can be described that restrict the operation of wind turbines during times of particularly high bat activity, thus reducing the risk of killing. "Through suitable monitoring schemes, the operation of wind turbines can be effectively adjusted to ensure that wind energy production does not come at the expense of biodiversity," Voigt concludes.

Credit: 
Leibniz Institute for Zoo and Wildlife Research (IZW)

The longest bottlebrush polymer ever synthesized

image: A bottlebrush polymer consists of a single main chain and numerous side chains grafted from the main chain.

Image: 
NIMS

NIMS and RIKEN have succeeded in synthesizing the longest ever bottlebrush polymer. This polymer--resembling a green foxtail--is composed of a main chain and numerous side chains grafted from it. The team also succeeded in giving various chemical properties to the ultralong bottlebrush polymer. These achievements are expected to substantially advance the current synthetic methods of bottlebrush polymers. This technique may be applicable to the development of flexible and low-friction polymeric materials.

In the development of polymeric materials, it is necessary to link molecular units with desired chemical properties, called monomers, into chains of the desired length. In this context, bottlebrush polymers are attracting attention as a new type of polymer material: they consist of a single main chain and numerous side chains, and polymers with various chemical compositions can be designed by selecting the side chains. However, conventional synthetic methods are limited to lengths on the order of several hundred nanometers, or at most about 1 μm, due to issues such as monomer reactivity and the presence of trace impurities, and no bottlebrush polymer longer than 2 μm had been synthesized before.

The research team recently succeeded in synthesizing the longest bottlebrush polymer ever by carefully designing the monomer used as the starting material and by using a single crystal of the monomer to create a polymerization environment with very few impurities. The length reached 7 μm, about 3.8 times the previous record. Furthermore, by combining two types of polymerization methods, the research team succeeded in synthesizing bottlebrush polymers with four types of side chains while maintaining the length of the main chain.
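
As a quick consistency check using only the figures quoted above (a sketch, not data from the paper):

```python
# Consistency check using only the figures quoted above.
new_length_um = 7.0        # length of the new bottlebrush polymer
improvement_factor = 3.8   # "about 3.8 times the previous record"

previous_record_um = new_length_um / improvement_factor
print(f"Implied previous record: ~{previous_record_um:.1f} um")
# ~1.8 um, consistent with the statement that no bottlebrush polymer longer
# than 2 um had been synthesized before.
```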

Use of the monomers developed in this research enables the synthesis of a variety of bottlebrush polymers with controlled length, diameter and chemical properties. Bottlebrush polymers may be used as a low-friction surface coating. Applying this polymer to the surfaces of moving machinery parts, for example, may reduce energy loss caused by friction. In future studies, we plan to develop flexible and low-friction materials taking advantage of the ultralong bottlebrush polymer.

Credit: 
National Institute for Materials Science, Japan

More protein doesn't mean more strength in resistance-trained middle-aged adults

image: Graduate student Colleen McKenna, professor Nicholas Burd and their colleagues tested the hypothesis that a high-protein diet would confer more benefits than moderate-protein intake in middle-aged adults engaged in weight training. They found no evidence to support the hypothesis.

Image: 
Photo by L. Brian Stauffer

CHAMPAIGN, Ill. -- A 10-week muscle-building and dietary program involving 50 middle-aged adults found no evidence that eating a high-protein diet increased strength or muscle mass more than consuming a moderate amount of protein while training. The intervention involved a standard strength-training protocol with sessions three times per week. None of the participants had previous weightlifting experience.

Published in the American Journal of Physiology: Endocrinology and Metabolism, the study is one of the most comprehensive investigations of the health effects of diet and resistance training in middle-aged adults, the researchers say. Participants were 40-64 years of age.

The team assessed participants' strength, lean-body mass, blood pressure, glucose tolerance and several other health measures before and after the program. They randomized participants into moderate- and high-protein diet groups. To standardize protein intake, the researchers fed each person a freshly cooked, minced beef steak and carbohydrate beverage after every training session. They also sent participants home with an isolated-protein drink to be consumed every evening throughout the 10 weeks of the study.

"The moderate-protein group consumed about 1.2 grams of protein per kilogram of body weight per day, and the high-protein group consumed roughly 1.6 grams per kilogram per day," said Colleen McKenna, a graduate student in the division of nutritional sciences and registered dietician at the University of Illinois Urbana-Champaign who led the study with U. of I. kinesiology and community health professor Nicholas Burd. The team kept calories equivalent in the meals provided to the two groups with additions of beef tallow and dextrose.

The study subjects kept food diaries and McKenna counseled them every other week about their eating habits and protein intake.

In an effort led by U. of I. food science and human nutrition professor Hannah Holscher, the team also analyzed gut microbes in fecal samples collected at the beginning of the intervention, after the first week - during which participants adjusted to the new diet but did not engage in physical training - and at the end of the 10 weeks. Previous studies have found that diet alone or endurance exercise alone can alter the composition of microbes in the digestive tract.

"The public health messaging has been that Americans need more protein in their diet, and this extra protein is supposed to help our muscles grow bigger and stronger," Burd said. "Middle age is a bit unique in that as we get older, we lose muscle and, by default, we lose strength. We want to learn how to maximize strength so that as we get older, we're better protected and can ultimately remain active in family and community life."

The American Food and Nutrition Board recommends that adults get 0.8 grams of protein per kilogram of body weight per day to avoid developing a protein deficiency. The team tried to limit protein consumption in the moderate-protein group to this Recommended Dietary Allowance (RDA), but their food diaries revealed those participants were consuming, on average, 1.1 to 1.2 grams of protein per kilogram of body weight per day. Those in the high-protein group ate about 1.6 grams of protein per kilogram per day - twice the recommended amount.
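
To put those per-kilogram figures in concrete terms, here is a short sketch of the arithmetic for a hypothetical 80 kg adult; the body weight is an assumption for illustration, not a study value.

```python
# Translate the reported intake levels into grams of protein per day for a
# hypothetical 80 kg adult (the body weight is an assumption, not a study value).
body_weight_kg = 80

intake_levels_g_per_kg = {
    "RDA": 0.8,               # recommended minimum to avoid deficiency
    "moderate-protein": 1.2,  # average intake of the moderate group
    "high-protein": 1.6,      # average intake of the high group
}

for label, g_per_kg in intake_levels_g_per_kg.items():
    print(f"{label}: {g_per_kg * body_weight_kg:.0f} g protein per day")
# RDA: 64 g, moderate: 96 g, high: 128 g -- the high-protein group ate twice the RDA.
```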

Burd and his colleagues hypothesized that getting one's protein from a high-quality source like beef and consuming significantly more protein than the RDA would aid in muscle growth and strength in middle-aged adults engaged in resistance training. But at the end of the 10 weeks, the team saw no significant differences between the groups. Their gains in strength, their body fat, lean body mass, glucose tolerance, kidney function, bone density and other "biomarkers" of health were roughly the same.

The only potentially negative change researchers recorded between the groups involved alterations to the population of microbes that inhabit the gut. After one week on the diet, those in the high-protein group saw changes in the abundance of some gut microbes that previous studies have linked to negative health outcomes. Burd and his colleagues found that their strength-training intervention reversed some of these changes, increasing beneficial microbes and reducing the abundance of potentially harmful ones.

"We found that high protein intake does not further increase gains in strength or affect body composition," Burd said. "It didn't increase lean mass more than eating a moderate amount of protein. We didn't see more fat loss, and body composition was the same between the groups. They got the gain in weight, but that weight gain was namely from lean-body-mass gain."

Burd said the finding makes him question the push to increase protein intake beyond 0.8-1.1 grams per kilogram of body weight, at least in middle-aged weightlifters consuming high-quality animal-based protein on a regular basis.

McKenna said the team's multidisciplinary approach and in-depth tracking of participants' dietary habits outside the laboratory makes it easier to understand the findings and apply them to daily life.

"We have recommendations for healthy eating and we have recommendations for how you should exercise, but very little research looks at how the two together impact our health," she said. The study team included exercise physiologists, registered dietitians and experts on gut microbiology.

"This allowed us to address every aspect of the intervention in the way it should be addressed," McKenna said. "We're honoring the complexity of human health with the complexity of our research."

Credit: 
University of Illinois at Urbana-Champaign, News Bureau

New study sheds light on how X and Y chromosomes interact

Researchers at Lund University in Sweden have investigated how the X and Y chromosomes evolve and adapt to each other within a population. The results show that breaking up coevolved sets of sex chromosomes could lead to lower survival rates among the offspring - something that could be of importance in species conservation, for example. The study is published in the journal PNAS.

The results provide new clues on how species are formed, and suggest it could be harmful to bring together individuals from different populations that have been separated for a long time. The reason is that the offspring have lower survival rates.

"This is something worth keeping in mind in conservation biology, where you want to see a population grow", says Jessica Abbott, researcher in evolutionary ecology at Lund University.

It was previously known that hybrids between different species often do better if they are female (two X chromosomes) rather than male (one X and one Y chromosome).

In the study, the researchers crossed fruit flies from five different populations from different continents in order to combine X and Y chromosomes with different origins. They then followed and studied the subsequent generations.

The results show that males with X and Y chromosomes that don't match had higher reproductive success than males with matching X and Y chromosomes. However, the higher male fertility was paired with lower survival rates among their offspring.

"We were expecting the opposite, that males with different origin X and Y chromosomes would have lower reproductive success, so that was surprising", says Jessica Abbott.

Credit: 
Lund University

New documentation: Old-growth forest carbon sinks overestimated

The claim that old-growth forests play a significant role in climate mitigation, based upon the argument that even the oldest forests keep sucking CO2 out of the atmosphere, is being refuted by researchers at the University of Copenhagen. The researchers document that this argument is based upon incorrectly analysed data and that the climate mitigation effect of old and unmanaged forests has been greatly overestimated. Nevertheless, they reassert the importance of old-growth forest for biodiversity.

Old and unmanaged forest has become the subject of much debate in recent years, both in Denmark and internationally. In Denmark, setting aside forests as unmanaged has often been argued to play a significant role for climate mitigation. The argument doesn't stand up according to researchers at the University of Copenhagen, whose documentation has just been published as a commentary in Nature.

The entire climate mitigation argument is based upon a widely cited 2008 research article which reports that old-growth forests continue to suck up and sequester large amounts of CO2 from the atmosphere, regardless of whether their trees are 200 years old or older. UCPH researchers scrutinised the article by reanalysing the data upon which it was based. They conclude that the article arrives at a highly overestimated climate effect for which the authors' data presents no evidence.

"The climate mitigation effect of unmanaged forests with trees more than 200 years old is estimated to be at least one-third too high--and is based solely upon their own data, which, incidentally, is subject to great uncertainty. Thus, the basis for the article's conclusions is very problematic," explains Professor Per Gundersen, of the University of Copenhagen's Department of Geosciences and Natural Resource Management.

An unlikely amount of nitrogen

The original research article concluded that old-growth forests more than 200 years old bind an average of 2.4 tonnes of carbon per hectare, per year, and that 1.3 tonnes of this amount is bound in forest soil. According to the UCPH researchers, this claim is particularly unrealistic. Carbon storage in soil requires the addition of a certain amount of externally sourced nitrogen.

"The large amounts of nitrogen needed for their numbers to stand up don't exist in the areas of forest which they studied. The rate is equivalent to the soil's carbon content doubling in 100 years, which is also unlikely, as it has taken 10,000 years to build up the soil's current carbon content. It simply isn't possible to bind such large quantities of carbon in soil," says Gundersen.

Trees don't grow into the sky

Unlike the authors of the 2008 article, and in line with the classical view in this area, the UCPH researchers believe that old unmanaged forests reach a saturation point after a number of years. At that point, CO2 uptake ceases. After longer periods (50-100 years in Denmark) of high CO2 sequestration, storage decreases and eventually comes to a stop. This happens when a forest reaches an equilibrium, whereby, through the respiration of trees and degradation of organic matter in the soil, it emits as much CO2 into the atmosphere as it absorbs through photosynthesis.

"As we know, trees don't just grow into the sky. Trees age. And at some point, they die. When that happens, decay begins, sending carbon back into the atmosphere as CO2. Other smaller trees will then take over, thereby leaving a fairly stable CO2 stock in the forest. As trees age, the risk of a forest being impacted by storms, fire, droughts, disease, death and other events increases more and more. This releases a significant portion of the stored carbon for a period of time, until newer trees replace the old ones," explains Gundersen.

He adds that the 2008 article does not document any mechanism which allows the forest to keep sequestering CO2.

The UCPH researchers' view is supported by observations from Suserup Forest, near Sorø, Denmark, a forest that has remained largely untouched for the past century. The oldest trees within it are 300 years old. Inventories taken in 1992, 2002 and 2012 all demonstrated that there was no significant CO2 uptake by the forest.

Old-growth forest remains vital for biodiversity

"We feel a bit like the child in the Emperor's New Clothes, because what we say is based on classic scientific knowledge, thermodynamics and common sense. Nevertheless, many have embraced an alternative view--and brought the debate to a dead end. I hope that our contribution provides an exit," says Per Gundersen.

He would like to make it clear that this should in no way be perceived as a position against protection of old-growth forest or setting aside unmanaged forest areas.

"Old-growth forest plays a key role in biodiversity. However, from a long-term climate mitigation perspective, it isn't an effective tool. Grasping the nuance is important so that debate can be based upon scientifically substantiated assertions, and so that policy is not influenced on an incorrect basis," concludes Gundersen.

Credit: 
University of Copenhagen - Faculty of Science

Automated embryo selection system might raise the likelihood of success in treating infertility

image: "Assessing the health of an early-stage embryo from its visual information is not an easy task. That's where the technologies can come to help", says Dr Vidas Raudonis

Image: 
KTU

A team of researchers at Kaunas University of Technology (KTU), Lithuania, applied artificial intelligence (AI) methods to evaluate data on human embryo development. The AI-based system photographs the embryos every five minutes, processes the data on their development and flags any anomalies observed. This increases the likelihood of choosing the most viable and healthy early-stage embryo for IVF procedures. The innovation was developed in collaboration with Esco Medical Technologies, a manufacturer of medical equipment.

Almost one in six couples face infertility; about 48.5 million couples, or 186 million individuals worldwide, are affected. Europe has one of the lowest birth rates in the world, with an average of just 1.55 children per woman.

The most effective form of assisted reproductive technology is in vitro fertilisation (IVF) - a complex series of procedures used to help with fertility. However, the success of IVF procedures is closely linked to many biological and technical issues. The fertilisation and in vitro culturing of embryos depend upon an environment that is stable and correct in terms of temperature, air quality, light, media pH and osmolality. At the end of this procedure there are several embryos, which leads to the problem of choosing the one most likely to result in a successful pregnancy and therefore to be transferred to the uterus.

"Typically, a healthy embryo progresses consistently through known development stages, has a low percentage of cell fragmentation, and there are other indicators of its healthy development. However, assessing the health of an early-stage embryo from its visual information is not an easy task. That's where the technologies can come to help", says Dr Vidas Raudonis, Professor at the Department of Automation, KTU Faculty of Electrical and Electronics Engineering.

The interdisciplinary team of KTU researchers, led by Dr Raudonis, developed an automated method for early-stage embryo evaluation. The method is based on processing the visual data collected by photographing the developing embryo every five minutes from seven different sides for up to five days. Up to 20,000 images are generated during the image-capturing process. Evaluating them all manually would be an impossible task for the embryologist in charge of the procedure.

"At this stage, the embryologist can rely on the AI algorithm developed by KTU. Our algorithm automatically analyses the photos and marks all the events and anomalies that can affect the successful further development of the embryo. It's important to note that the final decision is made by the embryologist - the AI algorithm is only a tool that allows to objectively justify the decision", Dr Raudonis explains.

He emphasises that, due to the rate of embryo development, changes that indicate developmental abnormalities may not be noticed during manual data analysis. The AI is trained to notice all the essential features of a healthy embryo in a sequence of photos. Therefore, it can replicate the work of an embryologist - or, to be more exact, perform the visual inspection of the embryo's development.

Automated embryo development assessment is a new field not only in Lithuania but also in the world. It has been actively developed in the last six years, when the technical possibilities to create more sophisticated AI methods and algorithms emerged. Strong teams of scientists from Israel, Australia, Denmark and other countries are working in this field. More and more clinics all over the world are applying AI-based solutions for assisting infertility treatment.

The technology created by the KTU team is already being used in certain medical facilities in Lithuania. The commercialisation of the solution is facilitated by Esco Medical Technologies, the leading manufacturer of high-quality equipment for IVF procedures.

"My team and I were probably the first in Lithuania to apply AI methods to process data obtained by registering the development of human embryos in incubators. Our first steps towards cooperation with business were small but focused. Now this work is turning not only into scientific products but also into technological solutions", says Dr Raudonis.

Credit: 
Kaunas University of Technology

CNIO researchers describe how embryonic stem cells are kept in optimal condition for use in regenerative medicine

image: Embryonic stem cells, in naïve state (left) and primed state (right).

Image: 
CNIO

Scientists at the Proteomics Core Unit of the Spanish National Cancer Research Centre (CNIO), headed by Javier Muñoz, have described the mechanisms, unknown to date, involved in maintaining embryonic stem cells in the best possible state for their use in regenerative medicine. Their results, published in Nature Communications, will help to find novel stem-cell therapies for brain stroke, heart disease or neurodegenerative conditions like Alzheimer's or Parkinson's disease.

Naïve pluripotent stem cells, ideal for doing research

Embryonic stem cells (ESCs) are pluripotent cells that can grow into all somatic cell types - a characteristic that is extremely useful for researchers and regenerative medicine. There are two types of pluripotency: naïve and primed. The naïve state comes before the primed one during embryonic development. Naïve ESCs have the potential to differentiate into any cell types. Thus, they are more relevant in research. However, the naïve state is unstable, because naïve ESCs are constantly receiving signals that regulate the transition to the primed state and their self-renewal. Understanding the mechanisms that regulate the pluripotent states is important because they might help achieve long-term maintenance of stable naïve pluripotent stem cells in ESC cultures.

Traditionally, maintenance of naïve ESC cultures is based on the inhibition of two of the signalling pathways that regulate cell differentiation - also known as the 2i culture method. Recently, naïve ESCs have been maintained using a totally different approach, namely the inhibition of Cdk8/19, a protein that regulates the expression of numerous genes, including the genes that help maintain the naïve state. "While the two approaches are used to culture naïve cells, little is known about the mechanisms involved," says Javier Muñoz, who led the study.

Now, using proteomics, the large-scale characterisation of proteins coded in a genome, CNIO scientists have described a large number of the molecular events that help stabilise these valuable ESC. "This is the first time proteomics has been used in this context," says Ana Martínez del Val, from the Proteomics Core Unit at CNIO, first author of the article. "We analysed the mechanisms at a number of levels. First, we conducted phosphoproteomic analyses, studying phosphorylated proteins. Phosphorylation regulates protein functions (by activating or inhibiting them). Second, we analysed the expression of these proteins. Finally, we identified changes in metabolites (reaction intermediates or end products). With our integrated approach, we got an accurate picture of the causes of the high degree of plasticity of ESC," Martínez del Val explains.

The results of the study might have implications for research on some types of cancer. We know that "the inhibition of Cdk8 leads to reduced cell proliferation in acute myeloid leukaemia by enhancing tumour suppressors", and that "Cdk8 is a colorectal cancer oncogene." "Cdk8 activity is somehow enigmatic, since its functions vary considerably with the cell environment," says Muñoz. "We have identified a number of Cdk8 targets that were unknown until now. This can help understand the function this protein regulates in other biological contexts."

Going beyond genomics with proteomics

The study by the CNIO team shows the need for a greater focus on proteomics in cancer research strategies.

Research into and treatment of disease have made huge progress in the past decades, courtesy of the techniques used in molecular biology. Two of the most frequently used techniques are genomics, the analysis of the DNA sequence - the molecule that carries all our genetic information - and transcriptomics, the study of the sets of RNA transcripts - the molecules that are translated into proteins. Proteins are macromolecules that are directly involved in chemical processes essential for life. The proteomic approach was adopted relatively recently by biomedical researchers. Proteomics has gained momentum over the past 15 years, and it has become essential for genomics and transcriptomics to come full circle. Genomics and transcriptomics study processes that take place before proteins are produced. "We use proteomics to study a number of properties of proteins that cannot be analysed by studying DNA or RNA," says Martínez del Val. This is extremely important, since "proteins are responsible for a whole range of basic life functions that take place within cells," Muñoz adds.

Credit: 
Centro Nacional de Investigaciones Oncológicas (CNIO)

Renewable energy, new perspectives for photovoltaic cells

image: Ultra-short pulse lasers used at the Physics Department of the Politecnico di Milano to study photovoltaic cells

Image: 
Politecnico di Milano

In the future, photovoltaic cells could be "worn" over clothes, placed on cars or even on beach umbrellas. These are just some of the possible developments from a study published in Nature Communications by researchers at the Physics Department of the Politecnico di Milano, working with colleagues at the University of Erlangen-Nuremberg and Imperial College London.

The research includes among its authors the Institute of Photonics and Nanotechnology (IFN-CNR) researcher Franco V. A. Camargo and Professor Giulio Cerullo. It focused on photovoltaic cells made using flexible organic technology. Today's most popular photovoltaic cells, based on silicon technology, are rigid, require a sophisticated and expensive manufacturing infrastructure, and have high disposal costs.

An alternative that could replace silicon in the future is "plastic" solar cells, in which a mixture of two organic semiconductors, one an electron donor and the other an electron acceptor, absorbs light energy and converts it into electrical energy. Using organic molecules brings several advantages, such as simpler technology, reduced production and disposal costs, mechanical flexibility, and access to the chemical diversity of organic materials. However, organic materials have more complex physics than crystalline inorganic materials (such as silicon), particularly for charge transfer processes at donor-acceptor interfaces, which cause efficiency losses.

After four years of work, the researchers succeeded in creating solar cells with new materials in which losses due to interface states are minimised. By studying these materials with ultra-short laser pulses, they identified the physical reasons behind this exceptional performance, presenting a general optimisation model valid for other material combinations.

Future photovoltaic cells made from organic technology will be a cheaper source of energy with less environmental impact. They can be incorporated into various everyday objects such as windows, cars, or even clothes and coats because of their mechanical flexibility.

The study falls within the scope of renewable energy, as one of the critical challenges for humanity's future is the development of clean and renewable sources of energy. The Earth's primary energy source is sunlight, which provides more than 100 times more energy daily than humanity needs, making photovoltaic technologies among the most promising for the future. With its sunny climate and few clouds, Italy has one of the greatest photovoltaic potentials in Europe, comparable to that of non-desert tropical countries.

Credit: 
Politecnico di Milano

New nanotransistors keep their cool at high voltages

image: The transistor developed by EPFL researchers can substantially reduce the resistance and cut the amount of heat dissipation in high-power systems.

Image: 
© EPFL

Power converters are the little-known systems that make electricity so magical. They are what allow us to plug in our computers, lamps and televisions and turn them on in a snap. Converters transform the alternating current (AC) that comes out of wall sockets into the exact level of direct current (DC) that our electronics need. But they also tend to lose, on average, up to 20% of their energy in the process.

Power converters work by using power transistors - tiny semiconductor components designed to switch on and off and withstand high voltages. Designing novel power transistors to improve the converters' efficiency is the aim of the team of EPFL engineers. With their entirely new transistor design, based on the counterintuitive application of nanoscale structures for high voltage applications, much less heat is lost during the conversion process, making the transistors especially well-suited to high-power applications like electric vehicles and solar panels. Their findings have just been published in Nature Electronics.

The heat dissipation in converters is caused, among other factors, by high electrical resistance, which is the biggest challenge in power electronic devices. "We see examples of electric power losses every day, such as when the charger of your laptop heats up," says Elison Matioli, a coauthor of the paper and head of EPFL's POWERlab.

This becomes even more of a problem in high-power applications. "The higher the nominal voltage of semiconductor components, the greater the resistance," he adds. Power losses shorten the ranges of electric vehicles, for instance, and reduce the efficiency of renewable-energy systems.
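
Why resistance translates so directly into lost range and lost efficiency comes down to the conduction-loss relation P = I² × R: at a fixed current, the power dissipated as heat grows in lockstep with resistance. A minimal sketch, with invented current and resistance values chosen only for illustration:

    # Conduction loss in a power transistor: P = I^2 * R.
    # The current and resistance values below are assumptions, not figures from the article.
    def conduction_loss_w(current_a: float, resistance_ohm: float) -> float:
        return current_a ** 2 * resistance_ohm

    current_a = 10.0          # assumed load current, amps
    r_high = 0.10             # assumed on-resistance of a conventional device, ohms
    r_half = r_high / 2       # a device with half the resistance

    print(conduction_loss_w(current_a, r_high))   # 10.0 W dissipated as heat
    print(conduction_loss_w(current_a, r_half))   # 5.0 W: halving R halves the conduction loss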

Matioli, along with his PhD student Luca Nela and their team, have developed a transistor that can substantially reduce the resistance and cut the amount of heat dissipation in high-power systems. More specifically, it has less than half as much resistance as conventional transistors, while holding voltages of over 1,000 V. The EPFL technology incorporates two key innovations.

The first involves building several conductive channels into the component so as to distribute the flow of current - much like new lanes that are added to a highway to allow traffic to flow more smoothly and prevent traffic jams. "Our multi-channel design splits up the flow of current, reducing the resistance and overheating," says Nela.
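
In numbers, the highway analogy is just resistors in parallel: N identical channels divide the total resistance by N. A minimal sketch, assuming for illustration that every channel has the same (invented) per-channel resistance:

    # N identical conductive channels act like N resistors in parallel: R_total = R / N.
    # The per-channel resistance below is an assumed value for illustration.
    def multichannel_resistance_ohm(r_channel_ohm: float, n_channels: int) -> float:
        return r_channel_ohm / n_channels

    r_single = 0.20   # assumed resistance of a single channel, ohms
    for n in (1, 2, 4):
        print(n, "channel(s) ->", multichannel_resistance_ohm(r_single, n), "ohm")
    # 1 channel -> 0.20 ohm, 2 -> 0.10 ohm, 4 -> 0.05 ohm: more "lanes", less resistance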

The second innovation involves using nanowires made of gallium nitride, a semiconducting material ideal for power applications. Nanowires are already used in low-power chips, such as those in smartphones and laptops, but not in high-voltage applications. The POWERlab demonstrated nanowires with a diameter of 15 nm and a unique funnel-like structure that enables them to sustain high electric fields and voltages of over 1,000 V without breaking down.

Thanks to the combination of these two innovations - the multi-channel design that allows more electrons to flow, and the funnel structure that lets the nanowires withstand high voltages - the transistors can provide greater conversion efficiencies in high-power systems. "The prototype we built using slanted nanowires performs twice as well as the best GaN power devices in the literature," says Matioli.

While the engineers' technology is still in the experimental phase, there shouldn't be any major obstacles to large-scale production. "Adding more channels is a fairly trivial matter, and the diameter of our nanowires is twice as big as the small transistors made by Intel," says Matioli. The team has filed several patents for their invention.

Demand for chips that can perform efficiently at high voltages is set to boom as electric vehicles become more widely adopted, since more efficient chips translate directly into longer ranges. Several major manufacturers have expressed interest in teaming up with Matioli to further develop this technology.

Credit: 
Ecole Polytechnique Fédérale de Lausanne

The case of the cloudy filters: Solving the mystery of the degrading sunlight detectors

image: Two EUV filters that were used in a space flight. The wrinkly looking filter on top is made of zirconium; the smoother bottom filter is made of aluminum. Each filter is extremely thin - a fraction of the diameter of a human hair - and about 1.4 mm wide by 4.5 mm long, roughly half the size of a very flat Tic Tac.

Image: 
Andrew Jones/LASP

More than 150 years ago, the Sun blasted Earth with a massive cloud of hot charged particles. This plasma blob generated a magnetic storm on Earth that caused sparks to leap out of telegraph equipment and even started a few fires. Now called the Carrington Event, after one of the astronomers who observed it, a magnetic storm like this could happen again anytime, only now it would affect more than telegraphs: It could damage or cause outages in wireless phone networks, GPS systems, electrical grids powering life-saving medical equipment and more.

Sun-facing satellites monitor the Sun's ultraviolet (UV) light to give us advance warning of solar storms, both big ones that could cause a Carrington-like event as well as the smaller, more common disturbances that can temporarily disrupt communications. One key piece of equipment used in these detectors is a tiny metal filter that blocks out everything except the UV signal researchers need to see.

But for decades, there has been a major problem: Over the course of just a year or two, these filters mysteriously lose their ability to transmit UV light, "clouding up" and forcing astronomers to launch expensive annual recalibration missions. These missions involve sending a freshly calibrated instrument into space to make its own independent observations of the sunlight for comparison.

A leading theory has been that the filters were developing a layer of carbon, originating from contaminants on the spacecraft, that blocked incoming UV light. Now, NIST scientists and collaborators from the Laboratory for Atmospheric and Space Physics (LASP) in Boulder, Colorado, have found the first evidence indicating that carbonization is not the problem and that the cause must be something else, perhaps another stowaway from Earth. The researchers describe their work in Solar Physics today.

"To my knowledge, it's the first quantitative, really solid argument against carbonization as the cause of the filter degradation," said NIST physicist Charles Tarrio.

What Are They Good For? Absolutely Everything

Most of the light produced by the Sun is visible and includes the rainbow of colors from red (with a wavelength of around 750 nanometers) to violet (with a wavelength of about 400 nm). But the Sun also produces light with wavelengths too long or short for the human eye to see. One of these ranges is extreme ultraviolet (EUV), extending from 100 nm down to just 10 nm.
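
For readers who think in photon energies rather than wavelengths, the standard conversion E (eV) ≈ 1239.84 / λ (nm) puts these bands side by side; the formula is textbook physics, not something stated in the article.

    # Photon energy from wavelength: E [eV] ~ 1239.84 / wavelength [nm] (standard physics).
    def photon_energy_ev(wavelength_nm: float) -> float:
        return 1239.84 / wavelength_nm

    print(photon_energy_ev(750.0))   # ~1.7 eV, red end of the visible spectrum
    print(photon_energy_ev(400.0))   # ~3.1 eV, violet end of the visible spectrum
    print(photon_energy_ev(100.0))   # ~12.4 eV, long-wavelength edge of the EUV
    print(photon_energy_ev(10.0))    # ~124 eV, short-wavelength edge of the EUV

EUV photons therefore carry roughly ten to a hundred times the energy of visible-light photons.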

Only about a tenth of a percent of sunlight is in the EUV range. That tiny EUV signal is extremely useful because it spikes in tandem with solar flares. These eruptions on the surface of the Sun can cause changes to Earth's upper atmosphere that disrupt communications or interfere with GPS readings, causing your phone to suddenly think you are 40 feet away from your true location.

Satellites that measure EUV signals help scientists monitor these solar flares. But the EUV signals also give scientists a heads-up of hours or even days before more destructive phenomena such as coronal mass ejections (CMEs), the kind of event responsible for the Carrington Event. Future CMEs could potentially overload our power lines or increase radiation exposure for airline crews and passengers traveling in certain locations.

And nowadays, the satellites do more than merely give us warnings, said LASP senior research scientist Frank Eparvier, a collaborator on the current work.

"In the past few decades we've gone from just sending out alerts that flares have happened to being able to correct for solar variability due to flares and CMEs," Eparvier said. "Knowing in real time how much the solar EUV is varying allows for the running of computer models of the atmosphere, which can then produce corrections for the GPS units to minimize the impacts of that variability."

The Mystery of the Cloudy Filters

Two metals are particularly useful for filtering out the massive amounts of visible light to let through that small but important EUV signal. Aluminum filters transmit EUV light between 17 nm and 80 nm. Zirconium filters transmit EUV light between 6 nm and 20 nm.

While these filters begin their lives transmitting a lot of EUV light in their respective ranges, the aluminum filters, in particular, quickly lose their transmission abilities. A filter might start by allowing 50% of 30-nm EUV light through to the detector. But within just a year, it only transmits 25% of this light. Within five years, that number is down to 10%.

"It's a significant issue," Tarrio said. Less light transmitted means less data available -- a little like trying to read in a dimly lit room with dark sunglasses.

Scientists have long known that carbon deposits can build up on instruments when they are subjected to UV light. Sources of carbon on satellites can be everything from fingerprints to the materials used in the construction of the spacecraft itself. In the case of the mysteriously cloudy UV filters, researchers thought carbon might have been deposited on them, absorbing EUV light that would otherwise have passed through.

However, since the 1980s, astronomers have been carefully designing spacecraft to be as carbon-free as possible. And that work has helped them with other carbonization problems. But it didn't help with the aluminum EUV filter issue. Nevertheless, the community still suspected carbonization was at least partially responsible for the degradation.

Make-Your-Own Space Weather

To test this in a controlled setting, NIST researchers and collaborators used a machine that effectively lets them create their own space weather.

The instrument is NIST's Synchrotron Ultraviolet Radiation Facility (SURF), a room-sized particle accelerator that uses powerful magnets to move electrons in a circle. The motion generates EUV light, which can be diverted via specialized mirrors to impact targets -- in this case, the aluminum and zirconium satellite filters.

Each filter was 6 millimeters by 18 mm, smaller than a postage stamp, and only 250 nm thick, about 400 times thinner than a human hair. The sample filters were actually slightly thicker than real satellite filters, with other small changes designed to prevent the SURF beam from literally burning holes into the metals. During a run, the back side of each filter was exposed to a controlled source of carbon.

To speed up the testing process, the team blasted the filters with the equivalent of five years' worth of space weather in a mere hour or two. Incidentally, getting that kind of beam power was no sweat for SURF.

"We turn SURF down to about half a percent of its normal power in order to expose the filters to a reasonable amount of light," Tarrio said. "The satellites are 92 million miles away from the Sun, and the Sun's not putting out an awful lot of EUV to begin with."

Finally, after exposure, researchers tested each filter to see how much EUV light in the correct wavelength range was able to pass through.

The team found that transmission was not significantly different after exposure versus before exposure, for either the aluminum or the zirconium. In fact, the difference in transmission was just a fraction of a percent, not nearly enough to explain the kind of clouding that happens in real space satellites.

"We were looking for a 30% decrease in transmission," Tarrio said. "And we just didn't see it."

As an extra test, the scientists gave the filters even larger doses of light -- the equivalent of 50 years' worth of ultraviolet radiation. Even that grew just 3 nm of carbon on the filters -- about 10 times less than researchers would have expected if carbon were responsible -- and it still didn't cause much of a transmission problem.
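
A hedged sketch of why 3 nm of carbon is far too little to matter: if a uniform carbon film dimmed the EUV signal according to a simple Beer-Lambert law, transmission would fall off exponentially with thickness. The attenuation length below is an assumed order-of-magnitude placeholder, not a value from the paper; only the 3 nm figure and the "10 times less" comparison come from the article.

    import math

    # Beer-Lambert-style attenuation through a carbon film (illustration only).
    # ATTENUATION_LENGTH_NM is an assumed placeholder, not a measured value.
    ATTENUATION_LENGTH_NM = 100.0

    def transmission_fraction(thickness_nm: float) -> float:
        return math.exp(-thickness_nm / ATTENUATION_LENGTH_NM)

    print(transmission_fraction(3.0))    # ~0.97: a 3 nm film barely dims the signal
    print(transmission_fraction(30.0))   # ~0.74: a film ten times thicker would be obvious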

So If It's Not Carbon ...

The real culprit hasn't yet been identified, but researchers already have a different suspect in mind: water.

Like most metals, aluminum naturally carries a thin surface layer of a material called an oxide, which forms when aluminum binds with oxygen. Everything from aluminum foil to soda cans has this oxide layer, which is chemically identical to sapphire.

In the proposed mechanism, the EUV light would pull atoms of aluminum out of the filter and deposit them on the filter's exterior, which already has that thin oxide layer. The exposed aluminum atoms would then react with water from Earth that has hitched a ride on the spacecraft, forming a much thicker oxide layer that could, in theory, be absorbing the light.
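
For concreteness, the familiar textbook reaction between bare aluminum and water, which would yield exactly this kind of extra oxide, is 2 Al + 3 H2O -> Al2O3 + 3 H2. The paper does not spell out this equation; it is shown here only as a plausible illustration of the proposed mechanism.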

Further SURF experiments scheduled for later this year should answer the question of whether the problem really is water, or something else. "This would be the first time that people have looked at the deposition of aluminum oxide in this context," Tarrio said. "We're looking into it as a serious possibility."

-- Reported and written by Jennifer Lauren Lee

Credit: 
National Institute of Standards and Technology (NIST)