Earth

Migration and molt affect how birds change their colors

image: Indigo buntings and other migratory birds molt before making the long trek south every fall.

Image: 
Eric Liffmann

In late summer and autumn, millions of birds fly above our heads, often at night, winging their way toward their wintering grounds.

Before the journey, many birds molt their bright feathers, replacing them with a more subdued palette. Watching this molt led scientists to wonder how feather color changes relate to the migrations many birds undertake twice each year. Molt matters -- not only because replacing worn feathers is necessary for flight, but because molt is the catalyst for plumage changes that affect whether birds find mates and reproduce.

"We're really blessed here, as nature lovers and birdwatchers, that we have lots of species of warblers here, which come in blues, greens, red and yellows," said Jared Wolfe, assistant professor in Michigan Technological University's College of Forest Resources and Environmental Science and one of the founders of the Biodiversity Initiative. "These brightly colored birds migrate and nest here and then leave for the winter. Everyone is so focused on the coloration, but the mechanism of the change of coloration is the process of molt, of replacing feathers."

While migration distances vary, many species fly thousands of miles each year, chasing summer as the planet tilts toward and away from winter. These lengthy journeys tend to wear out feathers. In research published in the journal Ecology and Evolution, Wolfe and co-authors analyzed the variation in distances traveled against the extent of molt in a particular species. "Birds that go farther distances replace more feathers," said Wolfe.

"Sun is the primary reason feathers degrade, and harsh environments," he said. "In northerly latitudes in the summer, it's sunny all day. As the birds move south, tracking the sun, they are maximally exposing themselves to sun all year."

Feathers must be replaced because of wear and tear -- but what's the significance of brightly colored plumage? Wouldn't black be more protective against sunburn, or white better at deflecting heat?

For birds, like many animals, an attention-getting physical appearance plays a crucial role in attracting a mate. As stylish haircuts and makeup are to humans, beautiful feathers are to birds. But a spectacular plumage is also pragmatic; it broadcasts age and health, which determine who gets to mate and who doesn't.

"Bright plumages are signals of habitat quality in the tropics," Wolfe said. "Acquiring mates is based on a signal of habitat quality from the wintering grounds. Undergoing a second molt on the wintering grounds before migrating north allows the birds to become colorful. Color is a signal to potential mates in places like the Midwest what jungle wintering habitats are like."

Experiences during the winter months affect how colorful birds become, which affects how successful they are at finding mates and breeding in North America. Scientists call these carryover effects. "It's so elegant, but we're just now starting to understand it," Wolfe said.

Growing vibrant feathers is a physically taxing activity, and the easier a bird has it during the winter, the more brightly colored their plumage during the summer. This makes quality and availability of food, places to shelter and safety from predators important components of a wintering habitat.

Like humans seeking out coveted locations to live, birds flock to the best habitats. In both cases, resources are finite. What might have been an ideal wintering ground one year might be depleted of food sources or other important attributes the next.

"The best habitats offer resource stability over time, versus poorer quality habitats which are variable month-to-month, year-to-year," he said.

But what about birds that don't migrate, preferring to spend their lives within a single home range? For them, it turns out molt is comparable to changing one's clothes on a regular basis rather than changing appearances to impress someone. Molting and breeding are constrained by multiple factors: Seasons, food abundance and size of home range play major roles in plumage and feather replacement.

"Birds here in the temperate zones are restricted in when they can breed and undergo their annual molt by winter," Wolfe said. "In the tropics, there are wet and dry seasons, but there is less constraint from a real absence of food sources. Molt is an expensive process calorically; birds need lots and lots of food while they're molting."

Wolfe and his collaborators found that adjusting the time it takes Amazonian birds to complete their annual molt affects how they go about making a living. For example, ant-following birds in Brazil eat insects that are trying to outrun army ants. One tiny species, the white-plumed antbird, opportunistically darts ahead of the ants -- not your garden variety ant but a species that can overpower and eat lizards, birds and small mammals in addition to insects -- to take advantage of a moveable feast.

"Its molt is crazy slow; it takes an entire year," Wolfe said, noting that the bird essentially lives in a constant state of molt, dropping one feather at a time.

Obligate antbirds have huge home ranges that overlap with multiple army ant colonies, which means they spend a large part of their day flying around the jungle in search of army ants. The bird's lengthy daily commute is a problem when they molt wing feathers, which creates gaps in their wings and compromises their ability to fly. How do they get around this problem? A very slow molt.

"A single feather at a time to minimize gaps thereby improving their ability to fly and maintain large home ranges," Wolfe said. "This unique adaptation has made the white-plumed antbird the slowest-molting songbird on Earth."

Despite the predilection of migrant birds to return to the same breeding territory year after year, Wolfe and collaborators note that not all birds return to the same molting grounds. This finding confounds the assumption of home field advantage, where birds benefit from completing their annual molt in a familiar location. But it appears there isn't much of a relationship between molting activity and what Wolfe calls "site fidelity."

"Until our research, it had remained a mystery whether or not migratory songbirds return to the same site to molt," Wolfe said. "This is an important question because there is growing evidence that mortalities accrued after the breeding season - during molt, migration and overwintering periods - are responsible for the continued loss of migratory songbirds. In fact, bird abundance has decreased by 29% since 1970. Understanding where and why birds molt is an important step towards protecting vulnerable populations of songbirds."

Wolfe and colleagues used 31 years of bird banding data from northern California and southern Oregon to measure the site fidelity of 16 species of songbird during molt. While the researchers did find that breeding activity strongly correlated with site fidelity, molt did not appear to influence a bird's decision to return to a particular place or not. It appears that birds, like humans, tend to splurge on fine feathers -- and then go home to show them off.

Credit: 
Michigan Technological University

Astronomers discover clues that unveil the mystery of fast radio bursts

image: The Five-hundred-meter Aperture Spherical radio Telescope (FAST) in Guizhou, China.

Image: 
(Photo Bojun Wang, Jinchen Jiang & Qisheng Cui)

Fast radio bursts, or FRBs - powerful, millisecond-duration radio waves coming from deep space outside the Milky Way Galaxy - have been among the most mysterious astronomical phenomena ever observed. Since FRBs were first discovered in 2007, astronomers from around the world have used radio telescopes to trace the bursts and look for clues on where they come from and how they're produced. 

UNLV astrophysicist Bing Zhang and international collaborators recently observed some of these mysterious sources, which led to a series of breakthrough discoveries reported in the journal Nature that may finally shed light into the physical mechanism of FRBs.

The first paper, for which Zhang is a corresponding author and leading theorist, was published in the Oct. 28 issue of Nature.

"There are two main questions regarding the origin of FRBs," said Zhang, whose team made the observation using the Five-hundred-meter Aperture Spherical Telescope (FAST) in Guizhou, China. "The first is what are the engines of FRBs and the second is what is the mechanism to produce FRBs. We found the answer to the second question in this paper."

Two competing theories have been proposed to interpret the mechanism of FRBs. One theory is that they're similar to gamma-ray bursts (GRBs), the most powerful explosions in the universe. The other theory likens them more to radio pulsars, which are spinning neutron stars that emit bright, coherent radio pulses. The GRB-like models predict a non-varying polarization angle within each burst whereas the pulsar-like models predict variations of the polarization angle.

The team used FAST to observe one repeating FRB source and discovered 11 bursts from it. Surprisingly, seven of the 11 bright bursts showed diverse polarization angle swings during each burst. The polarization angles not only varied in each burst, the variation patterns were also diverse among bursts. 
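The test between the two model families can be illustrated with a toy classifier: flag a burst as showing a polarization angle swing if its angle range exceeds a threshold. All numbers below are made up for illustration; this is not the team's analysis code.

```python
# Toy illustration of the model test: GRB-like models predict a flat
# polarization angle (PA) within a burst; pulsar-like models allow swings.
# PA samples (degrees) per burst are made-up numbers.

def pa_swings(pa_samples, threshold_deg=10.0):
    """True if the PA varies by more than the threshold within a burst."""
    return max(pa_samples) - min(pa_samples) > threshold_deg

bursts = [
    [42.0, 43.1, 41.8],        # nearly constant PA
    [10.0, 35.0, 60.0, 80.0],  # large swing
    [55.0, 30.0, 5.0],         # large swing
]

n_varying = sum(pa_swings(b) for b in bursts)
print(f"{n_varying} of {len(bursts)} bursts show PA swings")
```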

"Our observations essentially rule out the GRB-like models and offer support to the pulsar-like models," said K.-J. Lee from the Kavli Institute for Astronomy and Astrophysics, Peking University, and corresponding author of the paper.

Four other papers on FRBs were published in Nature on Nov. 4. These include multiple research articles published by the FAST team led by Zhang and collaborators from the National Astronomical Observatories of China and Peking University. Researchers affiliated with the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Survey for Transient Astronomical Radio Emission 2 (STARE2) group also partnered on the publications.

"Much like the first paper advanced our understanding of the mechanism behind FRBs, these papers solved the challenge of their mysterious origin," explained Zhang. 

Magnetars are incredibly dense, city-sized neutron stars that possess the most powerful magnetic fields in the universe. Magnetars occasionally make short X-ray or soft gamma-ray bursts through dissipation of magnetic fields, so they have been long speculated as plausible sources to power FRBs during high-energy bursts. 

The first conclusive evidence of this came on April 28, 2020, when an extremely bright radio burst was detected from a magnetar sitting right in our backyard - at a distance of about 30,000 light years from Earth in the Milky Way Galaxy. As expected, the FRB was associated with a bright X-ray burst. 

"We now know that the most magnetized objects in the universe, the so-called magnetars, can produce at least some or possibly all FRBs in the universe," said Zhang. 

The event was detected by CHIME and STARE2, two telescope arrays with many small radio telescopes that are suitable for detecting bright events from a large area of the sky. 

Zhang's team has been using FAST to observe the magnetar source for some time. Unfortunately, when the FRB occurred, FAST was not looking at the source. Nonetheless, FAST made some intriguing "non-detection" discoveries and reported them in one of the Nov. 4 Nature articles. During the FAST observational campaign, there were another 29 X-ray bursts emitted from the magnetar. However, none of these bursts were accompanied by a radio burst. 

"Our non-detections and the detections by the CHIME and STARE2 teams delineate a complete picture of FRB-magnetar associations," Zhang said. 

To put it all into perspective, Zhang also worked with Nature to publish a single-author review of the various discoveries and their implications for the field of astronomy. 

"Thanks to recent observational breakthroughs, the FRB theories can finally be reviewed critically," said Zhang. "The mechanisms of producing FRBs are greatly narrowed down. Yet, many open questions remain. This will be an exciting field in the years to come."

Credit: 
University of Nevada, Las Vegas

Ecologically friendly agriculture doesn't compromise crop yields

image: Increasing diversity in crop production benefits biodiversity without compromising crop yields, according to new research.

Image: 
Jamil Rhajiak / University of British Columbia, Communications & Marketing.

Increasing diversity in crop production benefits biodiversity without compromising crop yields, according to an international study comparing 42,000 examples of diversified and simplified agricultural practices.

Diversification includes practices such as growing multiple crops in rotation, planting flower strips, reducing tillage, adding organic amendments that enrich soil life, and establishing or restoring species-rich habitat in the landscape surrounding the crop field.

"The trend is that we're simplifying major cropping systems worldwide," says Giovanni Tamburini at the Swedish University of Agricultural Sciences and lead author of the study. "We grow monocultures on enlarged fields in homogenized landscapes. According to our study, diversification can reverse the negative impacts that we observe in simplified forms of cropping, both on the environment and on production itself."

The research, published in Science Advances, is based on 5,188 studies with 41,946 comparisons between diversified and simplified agricultural practices. Crop yield was in general maintained at the same level or even increased under diversified practices. The enhanced biodiversity benefited pollination and pest regulation by natural predation. It also improved water regulation and preserved soil fertility. Diversification, however, had variable effects on climate regulation. In some cases, it increased greenhouse gas emissions.

"By bringing together so much data, this work powerfully shows the potential for diversified farming to maintain productivity while reducing environmental harms and sustaining biodiversity and ecosystem services," says Claire Kremen at the University of British Columbia and co-author of the study.

"However, we need to tune these techniques to specific crops and regions, maximize these benefits and reduce trade-offs that otherwise occur. Much more investment is needed to support adoption of diversified farming practices, through research, management incentives and extension programs."

Increasing biodiversity is assumed to enhance yields and ecosystem services such as pollination, pest regulation by natural enemies, nutrient turnover, water quality and climate change mitigation through carbon sequestration. Although much research has explored this, the outcomes of diversification had not previously been synthesized. Moreover, the focus had mainly been on diversifying crops and other vegetation; diversification of soil organisms is seldom considered.

"An important next step is to identify which practices and conditions result in positive or negative effects on climate mitigation, and to avoid practices that have negative impacts," says Sara Hallin at the Swedish University of Agricultural Sciences and co-author of the study.

Studies that examined yield together with one or more other ecosystem services were few, but numerous enough to analyze the occurrence of win-win, trade-off and lose-lose situations. Win-win outcomes between yield and another service dominated, accounting for 63% of cases, but all other possible outcomes (i.e. trade-offs between yield and ecosystem services) were also represented.
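The win-win, trade-off and lose-lose bookkeeping described above amounts to a simple sign-based classification of paired effects. A minimal sketch, with effect values invented purely for illustration:

```python
from collections import Counter

# Sign-based outcome classification. Each comparison pairs the effect of
# diversification on yield with its effect on one other ecosystem service;
# positive = improved. All values below are invented.
def classify(yield_effect, service_effect):
    if yield_effect >= 0 and service_effect >= 0:
        return "win-win"
    if yield_effect < 0 and service_effect < 0:
        return "lose-lose"
    return "trade-off"

comparisons = [(0.3, 0.5), (0.1, -0.2), (-0.4, 0.6), (-0.1, -0.3), (0.2, 0.1)]
counts = Counter(classify(y, s) for y, s in comparisons)
print(dict(counts))
```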

Many of the tested diversification practices are in use already today, but can be more widely adopted and combined both on and off the crop field.

How can we diversify our farming systems?

There are many ways to increase diversity both on and off the crop field. Farms can add crop species to crop rotations, or grow crops together in the same field with intercropping. Flowering crops provide pollen and nectar for pollinating and predatory insects. Farms can also support below-ground biodiversity by mulching crop residues and adding manure or minimizing soil disturbance by reducing tillage.

Credit: 
University of British Columbia

Past is key to predicting future climate, scientists say

image: Past carbon dioxide concentrations (at left) compared to possible future emissions scenarios (at right): The rate of current emissions is much faster - occurring over decades - unlike geological changes, which occur over millions of years. If emissions continue unabated, carbon dioxide levels could meet or exceed values associated with past warm climates, such as the Cretaceous period (100 million years ago) or the Eocene epoch (50 million years ago), by the year 2300.

Image: 
Jessica Tierney/University of Arizona

An international team of climate scientists suggests that research centers around the world using numerical models to predict future climate change should include simulations of past climates when evaluating and reporting their models' performance.

"We urge the climate model developer community to pay attention to the past and actively involve it in predicting the future," said Jessica Tierney, the paper's lead author and an associate professor in the University of Arizona's Department of Geosciences. "If your model can simulate past climates accurately, it likely will do a much better job at getting future scenarios right."

As more and better information becomes available about climates in Earth's distant history, reaching back many millions of years before humans existed, past climates become increasingly relevant for improving our understanding of how key elements of the climate system are affected by greenhouse gas levels, according to the study's authors. Unlike historic climate records, which typically only go back a century or two - a mere blink of an eye in the planet's climate history - paleoclimates cover a vastly broader range of climatic conditions that can inform climate models in ways historic data cannot. These periods in Earth's past span a large range of temperatures, precipitation patterns and ice sheet distribution.

"Past climates should be used to evaluate and fine-tune climate models," Tierney said. "Looking to the past to inform the future could help narrow uncertainties surrounding projections of changes in temperature, ice sheets, and the water cycle."

Typically, climate scientists evaluate their models with data from historical weather records, such as satellite measurements, sea surface temperatures, wind speeds, cloud cover and other parameters. The model's algorithms are then adjusted and tuned until their predictions mesh with the observed climate records. Thus, if a computer simulation produces a historically accurate climate based on the observations made during that time, it is considered fit to predict future climate with reasonable accuracy.

"We find that many models perform very well with historic climates, but not so well with climates from the Earth's geological past," Tierney said.

One reason for the discrepancies lies in how the models compute the effects of clouds, which is one of the great challenges in climate modeling, Tierney said. Such differences cause models to diverge from each other in terms of what climate scientists refer to as climate sensitivity: a measure of how strongly the Earth's climate responds to a doubling of atmospheric carbon dioxide.

Several of the latest generation models that are being used for the next report by the Intergovernmental Panel on Climate Change, or IPCC, have a higher climate sensitivity than previous iterations, Tierney explained.

"This means that if you double carbon dioxide emissions, they produce more global warming than their previous counterparts, so the question is: How much confidence do we have in these very sensitive new models?"

In between IPCC reports, which typically are released every eight years, climate models are being updated based on the latest research data.

"Models become more complex, and in theory, they get better, but what does that mean?" Tierney said. "You want to know what happens in the future, so you want to be able to trust the model with regard to what happens in response to higher levels of carbon dioxide."

While there is no debate in the climate science community about human fossil fuel consumption pushing the Earth toward a warmer state for which there is no historical precedent, different models generate varying predictions. Some forecast an increase as large as 6 degrees Celsius by the end of the century.

Tierney said while Earth's atmosphere has experienced carbon dioxide concentrations much higher than today's level of about 400 parts per million, there is no time in the geological record that matches the speed at which humans are contributing to greenhouse gas emissions.

In the paper, the authors applied climate models to several known past climate extremes from the geological record. The most recent warm climate offering a glimpse into the future occurred about 50 million years ago during the Eocene epoch, Tierney said. Global carbon dioxide was at 1,000 parts per million at that time and there were no large ice sheets.
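For rough context, the widely used simplified expression for CO2 radiative forcing, dF = 5.35 ln(C/C0) W/m^2, together with an assumed equilibrium sensitivity of about 3 degrees Celsius per CO2 doubling, gives a back-of-the-envelope sense of what Eocene-like CO2 would imply. This is an illustration under stated assumptions, not a calculation from the paper:

```python
import math

# Assumptions (not from the paper): the simplified CO2 forcing formula
# dF = 5.35 * ln(C/C0) W/m^2, and an equilibrium climate sensitivity of
# roughly 3 degrees C per doubling of CO2.
C0 = 400.0   # approximate present-day CO2, ppm
C = 1000.0   # Eocene-like CO2, ppm

forcing = 5.35 * math.log(C / C0)   # radiative forcing, W/m^2
warming = 3.0 * math.log2(C / C0)   # equilibrium warming, degrees C

print(f"forcing ~ {forcing:.1f} W/m^2, equilibrium warming ~ {warming:.1f} C")
```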

"If we don't cut back emissions, we are headed for Eocene-like CO2 levels by 2100," Tierney said.

The authors discuss climate changes all the way to the Cretaceous period, about 90 million years ago, when dinosaurs still ruled the Earth. That period shows that the climate can get even warmer, a scenario that Tierney described as "even scarier," with carbon dioxide levels up to 2,000 parts per million and the oceans as warm as a bathtub.

"The key is CO2," Tierney said. "Whenever we see evidence of warm climate in the geologic record, CO2 is high as well."

Some models are much better than others at producing the climates seen in the geologic record, which underscores the need to test climate models against paleoclimates, the authors said. In particular, past warm climates such as the Eocene highlight the role that clouds play in contributing to warmer temperatures under increased carbon dioxide levels.

"We urge the climate community to test models on paleoclimates early on, while the models are being developed, rather than afterwards, which tends to be the current practice," Tierney said. "Seemingly small things like clouds affect the Earth's energy balance in major ways and can affect the temperatures your model produces for the year 2100."

Credit: 
University of Arizona

Surprising insights into the role of autophagy in neurons

image: The endoplasmic reticulum strongly accumulates in KO synapses. Neurotransmitter-containing synaptic vesicles are shown in blue.

Image: 
Authors: Dmytro Puchkov, Marijn Kuijpers

It appears that autophagy protects our neurons in the brain, but evidently for entirely different reasons than previously assumed, as researchers from the Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP) and Charité in Berlin have now shown. When the scientists used a genetic trick to switch off autophagy-mediated "cellular waste disposal", instead of detecting protein deposits, as expected, they found elevated levels of the endoplasmic reticulum, a system composed of membrane sacs which acts, among other functions, as a calcium store. This leads to elevated neurotransmitter release and, ultimately, to fatal neuronal hyperexcitability. These fundamentally new findings have now been published in the prestigious journal "Neuron".

Autophagy plays a key role in the maintenance of healthy cells, one example being the degradation and recycling of damaged protein molecules or entire organelles such as defective mitochondria by means of so-called autophagosomes. This cleaning mechanism is particularly important for neurons in the brain, which serve us throughout our lives, given that autophagy clears protein aggregates, such as those occurring in neurodegenerative diseases. The neuroprotective effects of autophagy have since been confirmed by numerous experiments in model organisms.

However, it is possible that the causes of this protective effect are different than previously assumed. By investigating the role of autophagy in the central nervous system of young, healthy mice, Professor Volker Haucke from the Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP) and his research group have now arrived at entirely new insights.

Using a genetic trick, the researchers first switched off an essential autophagy gene, and then used proteomics to analyze neuronal protein levels. Proteins previously hypothesized to be primarily degraded by autophagy were not enriched in the neurons at all - although this would have been expected if their degradation occurred via autophagy.

"It came as a complete surprise to us," remarked Marijn Kuijpers, lead author of the study now published in "Neuron", "but what surprised us even more was what we found in the neurons instead."

Largest intracellular calcium buffer no longer degraded

Instead of the expected autophagy substrates, the researchers discovered unusually large levels of the endoplasmic reticulum in the neuronal axons. One of the functions of these membrane sacs and tubules, which occur in all cells, is to provide a large intracellular store for calcium. The regulation of calcium is in turn fundamentally important for excitatory transmission in the central nervous system: When neurons communicate with each other, calcium channels open at synapses leading to an influx of extracellular calcium into synapses, and the release of neurotransmitters (neuronal messengers) from synaptic vesicles. Calcium can then either be pumped out of the neuron or enter the endoplasmic reticulum from where it can also be re-released, as required.

When autophagy had been switched off, the calcium store of the endoplasmic reticulum turned out to be damaged. The researchers found that the calcium buffering function of the endoplasmic reticulum no longer worked properly, resulting in elevated calcium levels in axons and at synapses. This in turn boosted the release of the excitatory transmitter glutamate, which led to permanent neuronal hyperactivity.

Too much excitatory neurotransmitter release is the problem

"Until now, it was assumed that less autophagy meant the release of fewer transmitter molecules. We have now demonstrated the exact opposite," stated postdoctoral fellow Marijn Kuijpers, commenting on the results of her study. "Too much, not too little neurotransmitter release is the problem. As a result, neurons become less plastic, and we suspect that they ultimately perish from hyperexcitability," added Charité's Professor Dietmar Schmitz, whose team contributed to the study.

Since the study was conducted with healthy neurons from young animals, it does not preclude additional functions of autophagy under pathological conditions, in Alzheimer's disease, for example. That said, the study is of tremendous importance for our fundamental understanding of the physiology of autophagy.

"All things considered, our discovery puts our understanding of autophagy in the central nervous system on a new basis," stated group leader Professor Volker Haucke. This new information would explain, for example, why learning becomes more difficult as autophagy declines during aging. "It is not possible to ramp up a synapse that is already hyperactivated; it has reached its limit and can, thus, barely be plastically reinforced - a basic requirement for learning."

Using this new understanding to explore the key question of the trigger

The key question of which regulatory mechanisms trigger autophagy in neurons remains open, however. While the availability of nutrients has a regulatory effect on other cells of the body - fasting has been shown to stimulate cellular waste disposal - no trigger for autophagy in the central nervous system is known so far.

"If we knew what produces more or less autophagy in neurons, we would be able at some point to therapeutically intervene," stressed Professor Haucke. "We are now keen to understand more about this fundamentally important problem, and our present study provides an excellent starting point for this endeavor."

Credit: 
Forschungsverbund Berlin

Large-scale cancer proteomics study profiles protein changes in response to drug treatments

image: Han Liang, Ph.D.

Image: 
MD Anderson Cancer Center

HOUSTON — Through large-scale profiling of protein changes in response to drug treatments in cancer cell lines, researchers at The University of Texas MD Anderson Cancer Center have generated a valuable resource to aid in predicting drug sensitivity, to understand therapeutic resistance mechanisms and to identify optimal combination treatment strategies.

Their findings, published today in Cancer Cell, include expression changes in more than 200 clinically relevant proteins across more than 300 cell lines after treatment with 168 different compounds, making it the largest dataset available on protein responses to drug treatments in cancer cell lines.

"We've seen a number of perturbation studies that look at gene expression changes following drug treatments or CRISPR-mediated changes, but there is a significant gap in terms of proteomic profiling," said senior author Han Liang, Ph.D., professor of Bioinformatics and Computational Biology. "We hoped to fill that gap by profiling changes in major therapeutic target proteins, which provides a lot of insight in terms of drug resistance and designing drug combinations."

Perturbation biology measures how a system, such as cancer cells, responds to various stimuli. These types of experiments have proven useful in modeling cancer behaviors and understanding responses at a system level, explained Liang. To profile protein perturbations, the researchers used a technique called reverse-phase protein array (RPPA), which enables the rapid quantitative analyses of a select group of proteins. Protein levels were measured at baseline and after treatment, often at multiple time points.
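The baseline-versus-treated comparison at the heart of such a perturbation profile boils down to fold changes per protein. A minimal sketch with hypothetical protein names and readings (not values from the MD Anderson dataset):

```python
import math

# Hypothetical RPPA readings (arbitrary units); protein names and values
# are illustrative only, not from the Cancer Cell dataset.
baseline = {"EGFR": 1.00, "AKT_pS473": 0.80, "ERK_pT202": 1.20}
treated  = {"EGFR": 0.95, "AKT_pS473": 0.20, "ERK_pT202": 2.40}

# log2 fold change: negative = suppressed after treatment, positive = activated.
log2fc = {p: math.log2(treated[p] / baseline[p]) for p in baseline}

for protein, fc in sorted(log2fc.items(), key=lambda kv: kv[1]):
    print(f"{protein}: {fc:+.2f}")
```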

The study evaluated drugs targeting a variety of signaling pathways and cellular processes across 319 commonly used, well-characterized cell lines from many cancer types, including breast, ovarian, uterine, skin, prostate and hematologic cancers.

Rather than analyzing all possible drug-cell line combinations, the researchers focused on those most likely to be relevant to the field. In total, they generated RPPA profiles of 15,492 samples, including 11,884 drug-treated samples and 3,608 control samples. The data were highly reproducible and validated by multiple independent methods.

The data obtained from these analyses provides important insight into the mechanisms of drug response or resistance, highlighting signaling pathways that are activated or suppressed following treatment with a given drug. Further, having data on both baseline and post-treatment protein levels is much more useful in modeling to predict sensitivity to additional drugs, explained Liang.

The researchers also constructed a comprehensive map of protein-drug connections to visualize responses and to better study relationships between different proteins and signaling pathways. The maps showcase which proteins change significantly in response to a given drug, which drugs yield similar responses and which proteins show similar patterns of change. Studying these complex relationships can reveal unknown connections and can point to potentially effective therapeutic combinations.

"Through this dataset, one can immediately see the consequences of a given drug, including perturbed pathways and adaptive responses, which can help to identify optimal drug combinations," said Liang. "As we continue working to expand the data, we think this will be a valuable starting place for researchers doing drug mechanism studies."

The protein response data is publicly available for researchers in a data portal, which provides various methods for visualizing and downloading the data.

Although the study includes only a subset of cancer types, the researchers hope to continue adding to the dataset in the future. In the long-term, the research team anticipates that proteomic profiling at baseline and following treatment may be a useful tool in clinical trials to better follow patient treatment responses and to optimize therapeutic strategies.

Credit: 
University of Texas M. D. Anderson Cancer Center

Archive of animal migration in the Arctic

Warmer winters, earlier springs, shrinking ice, and increased human development--the Arctic is undergoing dramatic changes that are impacting native animals. Researchers from around the world have now established an archive for data documenting movements of animals in the Arctic and Subarctic, hosted on the Movebank platform at the Max Planck Institute of Animal Behavior in Radolfzell, Germany. With the Arctic Animal Movement Archive, scientists can share their knowledge and collaborate to ask questions about how animals are responding to a changing Arctic. Three recent studies from the archive reveal large-scale patterns in the behaviors of golden eagles, bears, caribou, moose, and wolves in the region, and illustrate how the archive can be used to recognize larger ecosystem changes.

Researchers have long been observing the movements and behaviour of animals in the Arctic. However, it has been difficult to discover and access these data. To address this problem, an international team led by Sarah Davidson, data curator at the Max Planck Institute of Animal Behavior in Radolfzell, and Gil Bohrer, professor at the Ohio State University, has established a NASA-funded global data archive for studies of animal migration in the Arctic and Subarctic.

The aim of the Arctic Animal Movement Archive is to connect scientists and promote their cooperation. This is particularly important because the Arctic region extends around the world. Researchers from over 100 universities, government agencies and conservation groups across 17 countries are involved in the archive, which is hosted on Movebank, a research platform developed by Martin Wikelski, director at the Max Planck Institute. "Our goal is to use the archive to build a global community across institutions and political boundaries", says Wikelski. The archive currently contains over 200 projects with the movement data of more than 8,000 marine and terrestrial animals from 1991 to the present.

Migration patterns of golden eagles

Using the archive, researchers recently published a set of discoveries that bring together data and expertise. By comparing movements of more than 100 golden eagles from 1993 to 2017, researchers found that immature birds migrating north in the spring arrived earlier following mild winters. However, the arrival time of adults has remained rather constant, regardless of conditions at their breeding grounds, with consequences for nesting and chick survival. "Our approach revealed the importance of assessing data that span generations and decadal climate patterns, that when ignored can dramatically affect our results and consequently management strategies", says Scott LaPoint, scientist at Black Rock Forest.

A second study of more than 900 female caribou from 2000 to 2017 found that more northern herds are giving birth earlier in the spring, while the calving dates of more southern populations have not shown the same change. "The ability to look at biological processes, like birth, at such a large scale, across populations and subspecies and over millions of square kilometres, is unprecedented for a species in such a remote and harsh environment. These results reveal patterns that we would not have suspected, and point to further lines of inquiry about everything from caribou evolution to their ability to adapt to environmental changes moving forward," explains Elie Gurarie from the University of Maryland.

Distinct responses to climate change

A third analysis looking at the movement speeds of bears, caribou, moose, and wolves from 1998 to 2019 showed that species respond differently to seasonal temperatures and winter snow conditions. "How animals respond to variable weather conditions through movement will have interesting implications for species competition and predator-prey dynamics," says Peter Mahoney, who conducted this research while at the University of Washington.

In addition to the hundreds of studies already included in the archive, the resource is continually growing, as data are transmitted from animals in the field and as more researchers join. This should help to detect changes in the behaviour of animals and ultimately in the entire Arctic ecosystem. "We are also providing a much-needed baseline of past behaviors and movements", says Davidson. "This can be used to improve wildlife management, address critical research questions, and document changes in the Arctic for future generations."

Credit: 
Max-Planck-Gesellschaft

Boosting treatments for metastatic melanoma

image: Soma Sengupta, MD, PhD, and Daniel Pomeranz Krummel, PhD, pictured here in their lab in the Vontz, say they might have identified a treatment-boosting drug to enhance effectiveness of therapies for metastatic melanoma.

Image: 
UC Creative + Brand

University of Cincinnati clinician-scientist Soma Sengupta, MD, PhD, says that new findings from the team she leads with Daniel Pomeranz Krummel, PhD, may have identified a treatment-boosting drug that enhances the effectiveness of therapies for metastatic cancer and makes them less toxic, giving patients a fighting chance at survival and improved quality of life.

"Melanoma is a serious skin cancer that evolves from the pigment cells of the skin and eyes," says Sengupta, associate professor of neurology at UC, UC Health neuro-oncologist and co-director of the UC Gardner Neuroscience Institute's Brain Tumor Center. "There are millions of people in the U.S. living with this type of cancer, and the incidence is projected to increase.

"While physicians can often quickly find and treat melanoma of the skin, metastatic melanoma, or melanoma that has spread to other parts of the body, often the brain, is a lethal cancer. Often, patients who receive immunotherapy, which uses the patient's own immune system to treat the condition, do not have good responses. They also experience many uncomfortable side effects that impact their quality of life."

Sengupta says that her team, including colleagues at Emory University, Columbia University and the University of Wisconsin, recently published results in the International Journal of Radiation Oncology • Biology • Physics. The findings show that by targeting a particular neurotransmitter receptor with a new class of sedatives related to Valium and Xanax, cancer treatments such as radiation and immunotherapy could be boosted to better fight cancer with fewer toxic side effects. These studies were conducted in animal models, but the hope is to soon study outcomes in patients with metastatic cancer.

The researchers found that adding this particular study drug greatly improved the infiltration of immune cells into the tumor, enhancing the efficacy of the treatment and allowing them to combat melanoma; tumors shrank and, in some cases, completely disappeared.

"Our long-term goal is to add this new class of drugs to a patient's radiation and immunotherapy treatment," says Sengupta, who is a corresponding author on the paper. They are collaborating with investigators from UC who are helping to formulate the lead compound, including Pankaj Desai, PhD, with the James L. Winkle College of Pharmacy, as well as with Mohammad Khan, PhD, of Emory University, who will test this approach via clinical trial once Sengupta has approval from the FDA to test this drug in humans.

"We hope this will help patients avoid side effects, and that by adding this drug to the regimens, we will reduce costs, since we think the treatments will become more effective, and in turn, doses of standard treatments can be lowered. More studies are needed, but this is a promising new approach using a non-toxic drug from a class of compounds that have already been approved for anxiety, but now used for a serious condition that claims lives every day."

While preparing to report their team's findings, the researchers asked senior colleague and UC cancer researcher Peter Stambrook, PhD, to read the study and provide feedback. Sadly, Stambrook passed away from melanoma before the work was published.

"Peter died as a result of his melanoma," says Daniel Pomeranz Krummel, PhD, research associate professor of neurology and lead author on this study. "He was an outstanding scientist and was incredibly supportive of our research. We feel more driven than ever to push forward with our research and to honor Peter in this way."

Credit: 
University of Cincinnati

Researchers shrink imaging spectrometer without compromising performance

WASHINGTON -- Researchers have developed a new imaging spectrometer that is much lighter and smaller than state-of-the-art instruments while maintaining the same high level of performance. Because of its small size and modular design, the new instrument is poised to bring this advanced analytical technique to airborne vehicles and even planetary exploration missions.

Imaging spectrometers record a series of monochromatic images that are used for both spatial and spectral analysis of an area. This analytical approach is widely applied in fields such as atmospheric science, ecology, geology, agriculture and forestry. However, the large size of the instruments has prevented the technique from being used in some applications.

In the Optical Society (OSA) journal Applied Optics, researchers led by Ronald B. Lockwood from MIT Lincoln Laboratory describe their new Chrisp compact VNIR/SWIR imaging spectrometer (CCVIS). Its volume is at least 10 times smaller than that of most of today's devices. One version of the CCVIS is 8.3 cm in diameter and 7 cm long, about the size of a soda can.

The spectrometer is designed to record spectral images over wavelengths spanning 400 to 2500 nm. This range includes the visible and near infrared (VNIR) as well as the shortwave infrared (SWIR) portions of the spectrum.

"Our compact instrument facilitates the application of imaging spectroscopy for a variety of scientific and commercial problems, such as deployment on small satellites for planetary exploration or using unmanned aerial systems for agricultural purposes," said Lockwood. "We believe that our new spectrometer could also be used to study climate change, one of the most exciting applications of an imaging spectrometer."

Making a smaller spectrometer

Most of today's imaging spectrometers use an Offner-Chrisp optical configuration because it offers excellent control of optical errors called aberrations. However, this design requires a relatively large optical setup. The new CCVIS developed by the researchers performs much like the Offner-Chrisp configuration but with new optical components that create a more compact design.

To make the new CCVIS, the researchers used a catadioptric lens that combines reflective and refractive elements into one component. This created a more compact instrument while still controlling optical aberrations. The researchers also used a special flat reflection grating that is immersed in a refractive medium rather than in air. This grating takes up less space than a traditional grating while maintaining the same resolution.

Easy manufacturing

"The CCVIS uses a flat grating rather than a concave or convex grating that requires manufacturing with complex electron-beam lithography or diamond machining techniques," said Lockwood. "We developed a grayscale photolithographic microfabrication approach that uses a one-time exposure to make the grating and doesn't require labor intensive electron-beam processing."

To test their new design, the researchers demonstrated the spectrometer using a laboratory setup. Their experiments verified that the CCVIS had the expected performance over the full field of view.

"The CCVIS's compact size means that it can be made into modules that could be stacked to increase the field of view," said Lockwood. "It also makes it relatively easy to keep stable with no temperature changes so that optical alignment, and thus spectral performance, remains unchanged."

As a step toward the ultimate goal of a space-based demonstration, the researchers are seeking funding to develop a full prototype that could be thoroughly tested from an airborne vehicle.

Credit: 
Optica

The ebb and flow of brain ventricles

image: A 3D view of the brain ventricles of a study participant.

Image: 
Millward et al., MDC

It is not only the heart that has chambers - the brain does, too. Its four ventricles are connected to the spinal canal and filled with a clear liquid called cerebrospinal fluid, which removes metabolic waste from the neurons. If the brain becomes inflamed, immune cells also circulate in this fluid. This is the case in diseases like multiple sclerosis (MS), where the immune system attacks the body's own protective layer around axons (nerve fibers) in the brain and spinal cord. This triggers inflammation, which ultimately leads to the destruction of neurons.

Usually, the brain's ventricle volume remains fairly constant. However, in 2013, Dr. Sonia Waiczies and her colleagues from the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) and Charité - Universitätsmedizin Berlin made a discovery in an MS animal model: They observed that the ventricle volume changed over the course of the disease. When they used an antigen to trigger encephalitis (inflammation of the brain) in mice, MRI scans clearly showed that the ventricles expanded. "Everyone thought it was a sign of brain atrophy," Waiczies recalls.

The swelling goes back down

If the brain's ventricles become larger, it follows that the brain must become smaller. After all, the surrounding skull bone leaves it with nowhere else to go. Inflammation does indeed cause brain tissue damage, but atrophy - i.e., a massive loss of brain volume - does not always occur immediately. And if it did, the process would be irreversible. "So we conducted a series of further animal experiments, and monitored the brain volumes over two months," says the neuroimmunologist and senior author of the current study. About ten days after the encephalitis was induced, the rodents' brain ventricles were significantly enlarged. Then, a few days later, they shrank back to normal size when the symptoms remitted. Like human patients, the mice went on to develop temporary relapses - albeit with milder symptoms than at first - and the ventricles would again become enlarged.

Waiczies, who also works as an MR scientist, finds these results quite logical: "If I have an inflamed joint, for example, an edema forms and it swells up. Once the inflammation subsides, the swelling also goes down." The team is interested in the molecular mechanisms behind these changes. But first, they wanted to know whether their findings had clinical relevance.

Archival data confirms new findings

Enlarged brain ventricles in people with MS are commonly thought to be a sign of brain atrophy, and a reduction in ventricle size had never been reported in patients. So what does this observation mean for MS patients? And can the finding even be transferred from mice to humans? In the current study, the researchers tested this with the help of extensive MRI data sets from MS patients who, from 2003 to 2008, had participated in a clinical trial at the Charité testing the effects of a new MS drug. "I was involved in the immunological planning and evaluation of this study, and I knew that the generated MRI data was extensive and robust," says Waiczies.

A diagnosis of multiple sclerosis is made from MR images and by analyzing cerebrospinal fluid obtained via lumbar puncture of the spinal canal. Regular scans allow a better prognosis of disease progression. In this study, participants had received a monthly MRI scan, so countless images now had to be viewed and statistically evaluated. Lead author Dr. Jason Millward, a neuroimmunologist at the MDC and Charité and a statistics enthusiast, set to work on the new study.

"The key factor was the number of measurements taken over time, which provided us with a unique opportunity to see if the patients exhibited similar trends," explains Millward. That was indeed the case: "The majority of patients with relapsing-remitting MS exhibited fluctuations in ventricle volume - just as we observed with the mice." Interestingly, Millward also found that the patients with changes in ventricle volume seemed to be in an earlier stage of the disease.

"We are used to seeing ventricular enlargement in other neurodegenerative diseases - such as Alzheimer's or Parkinson's disease. But in those diseases, rather than being reversible, the ventricles just keep expanding," explains Professor Thoralf Niendorf of the MDC, who also works at the Experimental and Clinical Research Center (ECRC), a joint institution of the MDC and Charité. "Regular monitoring of ventricle volume in MS patients could help to distinguish temporary fluctuations from progressive brain atrophy." This would also make it possible to better tailor therapies to the individual patient.
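The distinction Niendorf describes, reversible fluctuation versus steady enlargement, can be sketched numerically. In this illustrative Python snippet (the monthly volume series are invented, not patient data, and the thresholds are arbitrary), a simple linear fit separates a fluctuating series from a progressively expanding one:

```python
import numpy as np

def monthly_trend(volumes_ml):
    """Least-squares slope (ml per scan) of ventricle volume over monthly MRIs."""
    months = np.arange(len(volumes_ml))
    slope, _ = np.polyfit(months, volumes_ml, 1)
    return float(slope)

def fluctuation(volumes_ml):
    """Residual variability (ml) of the series around its linear trend."""
    months = np.arange(len(volumes_ml))
    fit = np.polyval(np.polyfit(months, volumes_ml, 1), months)
    return float(np.std(np.asarray(volumes_ml) - fit))

# Invented 12-month examples:
relapsing   = [20, 22, 25, 22, 20, 21, 24, 26, 22, 20, 21, 20]  # ebbs and flows
progressive = [20, 21, 21, 22, 23, 23, 24, 25, 25, 26, 27, 27]  # keeps expanding

# A near-zero slope with large residual variability suggests reversible
# fluctuation; a consistently positive slope suggests progressive enlargement.
print(monthly_trend(relapsing), fluctuation(relapsing))
print(monthly_trend(progressive), fluctuation(progressive))
```

In practice such a classification would of course need validation against clinical findings, as the researchers note.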

Professor Friedemann Paul, a clinical neuroimmunologist at Charité and, together with Waiczies and Niendorf, the current study's senior author, adds: "From a clinical perspective, examining fluctuations in ventricle volumes in routine MRI patient scans could be an interesting approach to monitoring the course of the disease or of immune therapies. But this will require us to study even larger cohorts over a longer period of time. Comparing these results with clinical findings - for example, regarding cognition - is also going to be important."

The researchers now want to understand how the "ebb and flow" of brain ventricles occurs at the molecular level.

Credit: 
Max Delbrück Center for Molecular Medicine in the Helmholtz Association

Research reveals infertile spikelets contribute to yield in sorghum and related grasses

ST. LOUIS, MO, November 5, 2020 - Much of the food we eat comes from grasses such as rice, wheat, corn, sorghum, and sugarcane. These crops still resemble the wild species from which they were derived. In all grasses the structures that contain the flowers and seeds are called spikelets. In the tribe Andropogoneae, a major group of grasses that cover 17 percent of the earth's surface, the spikelets come in pairs, one of which bears a seed and one of which doesn't (although in some species it produces pollen). This structure can be seen clearly in sorghum, and in the many wild grasses that make up North American prairies and African grasslands. It's tempting to think that spikelets that don't produce seeds are useless, but the fact that they have been kept around for 15 million years implies that they have an important function.

A team of scientists at the Donald Danforth Plant Science Center, in laboratories led by Elizabeth (Toby) Kellogg, PhD, member and Robert E. King Distinguished Investigator, and Doug Allen, PhD, associate member and USDA research scientist, set out to answer two questions: Could this apparently useless floral structure capture and move photosynthetic carbon to the seed? And, ultimately, if it were removed, would there be a noticeable difference in yield?

The researchers used radioactive and stable isotopes of carbon, RNA-seq of metabolically important enzymes, and immunolocalization of Rubisco to show that the sterile spikelet collects carbon from the air and carries out photosynthesis, while the awn does not. By tracking the flow of carbon, they discovered that the infertile spikelet transfers carbon to the seed-bearing one, which appears to use it for energy and for storage in the seed. When they removed the infertile spikelet from a subset of the branches of sorghum plants, they found that seed weight (yield) was lower by ca. 9%.

"We used to think these floral structures might be vestigial, but they turned out to be quite the asset in terms of productivity," said first author, Taylor AuBuchon, senior technician in the Kellogg lab.

The findings, "Sterile spikelets contribute to yield in sorghum and related grasses," were recently published in the journal The Plant Cell. Additional comments can be found in the accompanying In Brief.

"This is a great example of how plant organs and structure can contribute to biomass and yield in ways not previously described," Allen said.

"This project was incredibly rewarding because of the strong collaboration, creativity and determination of everyone, sharing expertise, designing and conducting the experiments and analyzing the data together," Kellogg reflected.

In addition to the unique collaboration, Kellogg and Allen also attribute the success of the project to the expertise and cutting-edge technology provided by the Advanced BioImaging, Plant Growth and Proteomics and Mass Spectrometry facilities at the Danforth Center.

The next step would be to determine to what extent infertile spikelets affect yield in diverse varieties of field grown plants. Existing sorghum diversity could indicate whether the size of the infertile spikelet affects the size of the seed.

Credit: 
Donald Danforth Plant Science Center

Utilizing a 'krafty' waste product: Toward enhancing vehicle fuel economy

image: Fine carbon fiber network fabricated using optimized conditions (from left to right: electrospun fibers, thermally stabilized fibers, carbon fibers).

Image: 
Kanazawa University

Kanazawa, Japan - Given concerns over global warming, researchers are hard at work on minimizing the amount of fuel that we all use in everyday life. Reducing the weight of vehicles will lessen the amount of fuel required to power them, and put money back into your pocket.

In a study recently published in Chemical Engineering Journal, researchers from Kanazawa University have chemically modified an industrial waste product, and processed it into a possible lightweight structural material. This development may increase the fuel economy of private and commercial transportation.

The researchers started with Kraft lignin, a byproduct of a common wood pulping process. Paper mills usually burn Kraft lignin to generate power, because it's difficult to use for anything except specialized purposes. Chemically processing Kraft lignin into a more useful material would improve the environmental sustainability of paper production.

"We performed a chemical modification of Kraft lignin polymer known as acetylation," says first author László Szabó. "Optimizing the extent of acetylation was critical to our research effort."

A controlled reaction was important for making Kraft lignin compatible with another polymer, polyacrylonitrile, and thus for preparing quality carbon fibers that form an engineered composite. With too little--or too much--acetylation, the carbon fibers are of low quality.

"Our reaction was quite mild, producing only a rather benign side product--acetone--without changing the polydispersity of the Kraft lignin," explains Kenji Takahashi, co-senior author. "We thus were able to mix Kraft lignin with polyacrylonitrile to obtain a dope solution for electrospinning containing more compatible polymer segments and eventually fabricate quality carbon fibers."

The researchers' carbon fiber mats contain fine, uniform fibers, with no loss of fiber quality from the thermal treatment. In fact, using the modified polymer instead of unmodified Kraft lignin gave the fiber mat an almost threefold improvement in mechanical strength.

"Our fibers' mechanical performance is attributable to the tailored graphitic structure of the materials," explains Szabó. "This outcome is owing to the improved polymer interactions leading to a more aligned polymeric network which is then subjected to the thermal treatment."

Engineered composites are common in spacecraft, cars, plastics, concrete, and many other products and technologies. Once the researchers minimize the cost of preparing their new carbon fibers, vehicles of the future may be lighter, more durable, and more fuel-efficient. Given that every industry relies on transportation, the savings and sustainability benefits could be widespread.

Credit: 
Kanazawa University

Trehalose 6-phosphate promotes seed filling by activating auxin biosynthesis

image: Upper panel: The wrinkled seed phenotype and NMR* images of intact embryos with reduced T6P content (left) versus wild type (right). Relaxation time (T2) values are color-coded to visualize the gradient within the embryo tissues. The miniature 3D scheme indicates the virtual cross-section plane used for visualization of cotyledons.

Lower panel: A simplified model of the T6P-auxin signaling pathway regulating embryo maturation in pea; details are further explained in the manuscript.

Image: 
IPK/ Meitzel

Efficient deposition of storage compounds in seeds is a key determinant of crop yield, but the underlying regulatory network of seed filling remains undefined. For many years, researchers have been working on the role of sugars in the spatial regulation of seed growth and storage (Borisjuk et al., 2002). In addition to their role as a carbon source for starch and protein biosynthesis, sugar molecules have important signaling functions. Numerous studies on the model plant Arabidopsis have suggested that the signaling sugar trehalose 6-phosphate (T6P) forms an essential part of a signaling network regulating plant performance in general (Figueroa and Lunn, 2016). The small size of Arabidopsis seeds, however, presents practical difficulties in investigating precisely how T6P participates in the regulation of seed filling. The researchers therefore made use of the larger size of pea seeds, which allows easy preparation and compositional analysis of individual embryos. Their results have recently been published in the journal New Phytologist.

"Our work identified T6P as a key regulator of seed filling in the grain legume pea and highlighted a link between T6P and the major plant hormone auxin", says Dr. Tobias Meitzel, researcher at IPK and first author of the study. "This discovery represents a significant step forward in understanding interactions between metabolites and hormones, with T6P reporting the rising sucrose status in the maturing seed. As a result, T6P mediates the activation of auxin biosynthesis, which leads to a stimulation of embryo growth and reserve starch accumulation."

To better understand how T6P controls seed filling, the researchers engineered transgenic pea plants aimed at embryo-specific modulation of T6P levels. An impressive outcome of the targeted reduction of embryonic T6P content was a strongly wrinkled seed phenotype similar to that studied by Gregor Mendel in the mid-nineteenth century. "Nuclear magnetic resonance imaging of these embryos revealed a substantial impairment in the formation of a spatial gradient in storage product accumulation and tissue differentiation of embryos", says Dr. Ljudmilla Borisjuk, head of the research group Assimilate Allocation and NMR at IPK. "These findings explain many of the observations we already made two decades ago, when we first described the wave-like differentiation pattern of storing legume embryos."

This study is also of interest for various other areas of plant research, as the findings identify T6P as an upstream regulator of auxin biosynthesis. "This hitherto unknown interaction between T6P and auxin might play a general role in mediating the sugar-auxin link", says Dr. Tobias Meitzel. It will be of ongoing interest to determine how this relationship fits within the current understanding of the regulatory frameworks surrounding growth processes and developmental transitions in plants.

Credit: 
Leibniz Institute of Plant Genetics and Crop Plant Research

New genus of chimaerid fish classified with help from Kazan University expert

image: Canadodus suntoki, gen. et sp. nov., RBCM.EH2014.065.0001.001, left mandibular (L = 50 mm, Km = 19 mm) plate from the Sooke Formation of Vancouver Island, British Columbia. A, B, photographs of A, occlusal and B, labial views; C, D, line-drawing interpretations with terminology and measurements applied to C, occlusal and D, labial views; E, photograph.

Image: 
Royal British Columbia Museum, Kazan Federal University

A dental plate was found by Canadian national Stephen Suntok on the Pacific coast of British Columbia. Evgeny Popov, a renowned expert in chimaerids, was asked to assist in classification.

"The new species and genus is most close to the extant members of Chimaeridae - Chimaera and Hydrolagus. They are quite widely present in the oceans and comprise about 82% of the existing Holocephali fish," explains Popov.

The dental plate shows that the extinct Canadodus was close in appearance to its extant relatives, with an estimated length between 83 and 125 centimeters. Its diet most likely consisted of worms, mollusks, and crustaceans. The dental plate never left Canada - it was studied in Russia via high-definition photos, adds Popov.

As the scientists report, the find was a lucky one, because vertebrate fossils are rarely discovered on the shores of the Juan de Fuca Strait.

The research significantly contributes to the understanding of chimaerid fauna of the late Paleogene in the Pacific Ocean.

Credit: 
Kazan Federal University

Conflicts in kindergarten can reduce children's interest in reading and math

Teacher-perceived conflict predicts lower interest and pre-academic skills in math and literacy among kindergarteners, a new study from Finland shows.

Kindergarten represents a crucial context in which children develop school-related skills and patterns of engagement that form the basis for the development of later competencies important for academic success. Kindergarten achievement has been found to be highly predictive of later academic skills.

Given the long-lasting effects that kindergarten experiences have on later schooling, it is important to understand the factors associated with children's learning and motivation during this time. The quality of teacher-student interaction has been found to be important in terms of many different academic and socio-emotional outcomes. However, much of the previous work in the field has focused on children in later grades in elementary school and has been conducted in the United States. Fewer studies have been conducted in other educational contexts and in kindergarten specifically.

Researchers from the University of Jyväskylä, the University of Eastern Finland and New York University Abu Dhabi investigated bidirectional links between the quality of teacher-child relationships and children's interest and pre-academic skills in literacy and math in Finland. Participants were 461 Finnish kindergarteners (6-year-olds) and their 48 teachers. The study is part of the Teacher Stress Study, led by Professor Marja-Kristiina Lerkkanen and Associate Professor Eija Pakarinen at the University of Jyväskylä.

The results indicated that teacher-perceived conflict predicted lower interest and pre-academic skills in both literacy and math. It is possible that when children experience conflict with teachers, the negative emotions attached to these conflicts are harmful for children's engagement in learning and diminish their interest in academic tasks. It is also possible that children experiencing conflicts are missing out on time spent learning literacy and math, either because they are disengaged from instructional activities or because teachers have to spend more instructional time on behavioural management.

The findings highlight the importance of kindergarten teachers being aware of how their relationships with children can influence children's later schooling. It would be important to develop pre-service and in-service programmes and interventions to assist teachers in building supportive, low-conflict relationships with children. Teacher education programmes may also benefit from educating teachers not only about academic content and pedagogical practices but also in strategies that build supportive relationships with children.

"Compared to daycare, kindergarten introduces children to a more structured learning environment. The experiences children gain in this environment may have long-term consequences on the development of their academic motivation and competencies. Therefore, it is essential that our teachers are aware of the power their interaction with children may have, and that they are supported in finding optimal ways to interact with each child, while taking individual strengths and needs into consideration," Professor Jaana Viljaranta from the University of Eastern Finland says.

Credit: 
University of Eastern Finland