Earth

Switzerland's energy transition

image: Evangelos Panos is convinced that if Switzerland wants to achieve the zero-emissions target by 2050, great efforts will be needed.

Image: 
Paul Scherrer Institute/Mahir Dzambegovic

Can Switzerland, as planned, cut its CO2 emissions to zero by 2050? In a study, researchers at the Paul Scherrer Institute PSI have investigated what measures would be necessary to achieve this reduction and how much it might cost per person.

In August 2019, the Swiss Federal Council decided on an ambitious target to limit climate change: From the year 2050 onward, Switzerland should, on balance, emit no further greenhouse gases. With this commitment, Switzerland supports the internationally agreed goal of limiting global warming to a maximum of 1.5 °C compared with the pre-industrial era.

Now a study by the Paul Scherrer Institute, conducted within the Joint Activity "Scenarios and Modelling" of the eight Swiss Competence Centres for Energy Research (SCCER), probes what options for achieving this goal exist in the energy sector.

"The goal of achieving net zero CO2 emissions by 2050 requires drastic transformations in the provision and consumption of energy in nearly all areas," concludes Tom Kober, head of the PSI Energy Economics Group and one of the study's main authors.

In their analyses, the researchers considered energy-related CO2 emissions as well as CO2 emissions from industrial processes. Today these emissions represent around 80% of the entire Swiss greenhouse gas inventory. Not included in the study's calculations are emissions from international aviation, agriculture - with the exception of emissions from fuel combustion - land use, changes in land use, and forestry, as well as waste - except for emissions from waste incineration. Also, emissions in other countries that are associated with consumption of goods in Switzerland were not a subject of the study.

Electricity from photovoltaics must at least double every decade

The central conclusions of the study are: Between now and 2050, the installed capacity of photovoltaic systems must at least double every decade. With 26 terawatt hours of production envisioned in 2050, photovoltaic systems will be the second largest generation technology group behind hydropower (approx. 38 terawatt hours in 2050). Furthermore, power plants with cogeneration of heat and power, as well as wind power plants, hydrogen fuel cells, and electricity imports, all contribute to meeting the demand for electricity. In the main scenario for achieving the net zero emissions target, overall electricity generation from power plants and storage facilities in Switzerland will increase by around one-fifth, to 83 terawatt hours in 2050. The study assumes that Swiss nuclear power plants will be decommissioned by 2045. The private car fleet would have to be largely based on electric motors by 2050, meaning that by 2030 every third new car registered would have to be fully electric. In addition, the use of heat pumps in service and living areas would have to be significantly accelerated, so that by 2050 they could cover almost three-quarters of the demand for heating and hot water. At the same time, it would be necessary to achieve significant energy savings through accelerated renovation of residential buildings.
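The doubling arithmetic above can be sketched in a few lines. The ~3 TWh starting value for Swiss photovoltaic production is an illustrative assumption, not a figure from the study:

```python
# Illustrative back-of-the-envelope check of the "at least double every
# decade" photovoltaic trajectory. The 3 TWh starting value is an
# assumption for illustration only.
def pv_trajectory(start_twh, start_year, end_year, factor_per_decade=2.0):
    """Return {year: production in TWh}, multiplying by the given factor each decade."""
    out = {}
    production = start_twh
    for year in range(start_year, end_year + 1, 10):
        out[year] = production
        production *= factor_per_decade
    return out

trajectory = pv_trajectory(3.0, 2020, 2050)
print(trajectory)  # exact doubling gives 24 TWh by 2050, just short of
                   # the 26 TWh envisioned -- hence "at least" double
```

Exact doubling from this assumed base lands slightly below the 26 TWh the study envisions, which is consistent with the study's wording that capacity must grow by "at least" a factor of two per decade.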

If Switzerland wants to achieve the net zero emissions target, a significant increase in electricity consumption must be expected. Thus in 2050, electricity consumption might be around 20 terawatt hours above today's level. A fundamental driver of this growth is the use of electricity to power cars, buses, and trucks, either directly in battery-electric vehicles or indirectly through hydrogen or so-called e-fuels - that is, synthetic fuels, which are produced by means of electricity from hydrogen and CO2. In the stationary sectors, the proliferation of installed heat pumps will increase consumption of electricity. If the necessary efficiency gains in heating and hot water supply are achieved, however, these could compensate for the increased electricity consumption. The study results show that stationary sectors could achieve an almost constant level of electricity consumption.

Besides electrical energy, other forms of energy will play a role. For example, long-distance and freight transport as well as energy-intensive industry offer prospects for new hydrogen applications. Producing such low- or zero-emission hydrogen would require a substantial amount of sustainably generated electricity - around 9 terawatt hours in 2050.

It probably won't work without CO2 capture

"If Switzerland wants to achieve the zero emissions target by 2050, then in the future CO2 emissions will have to be reduced every year by an average of one to one and a half million tonnes compared to the previous year," says Evangelos Panos, lead author of the study. "We saw changes in CO2 emissions of this magnitude between 1950 and 1980 - albeit in the opposite direction - back then they increased massively." Though it has limitations, CO2 capture was shown to be necessary to implement the emissions reduction cost-effectively. In some subsectors, it might even be possible to reach a negative balance in terms of CO2 emissions. This would be the case, for example, if biomass is used as an energy source and the CO2 produced during energy generation is not emitted, but rather is captured and stored underground. In the event that this should not be possible in Switzerland - for example, due to rejection by the population or because of limited sites for CO2 storage - cross-national transport of captured CO2 and storage in other countries could offer an alternative. In their study, the researchers assume that for the year 2050 a total of almost 9 million tonnes of CO2 would be captured in Switzerland.
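A rough consistency check of the quoted reduction rate is to divide an assumed starting emissions level by the annual cut. The ~35 Mt figure below is an assumed round number for Switzerland's energy-related and process CO2 emissions, not a value from the study:

```python
# Rough check: how long does a constant annual cut take to reach zero?
# The 35 Mt starting level is an assumed illustrative value.
def years_to_zero(start_mt, annual_cut_mt):
    """Years needed to reach zero emissions at a constant annual cut (in Mt/year)."""
    return start_mt / annual_cut_mt

for cut in (1.0, 1.5):
    print(f"cutting {cut} Mt/year: {years_to_zero(35.0, cut):.0f} years")
```

At 1 to 1.5 Mt per year, an assumed 35 Mt baseline is eliminated in roughly 23 to 35 years, which brackets the three-decade horizon to 2050.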

"More than two-thirds of the emission reductions required for the net zero emissions target can be achieved with technologies that are already commercially available or are in the demonstration phase," Panos explains. The decarbonised energy system of the future is achievable but would require carbon-free energy sources, for example appropriately generated electricity, biofuels and e-fuels, access to the corresponding transport and distribution infrastructures, and the possibility of importing clean fuels and electricity.

Costs are hard to estimate

With regard to costs, the energy system researchers are cautious. "The costs are very difficult to estimate, because an enormous number of components play a role," Kober says. In the net zero main scenario assumed in the study, the average discounted additional costs of the climate protection scenario compared to the reference scenario with moderate climate protection (40% CO2 reduction in 2050 compared to 1990) would amount in Switzerland to around 330 CHF (at 2010 prices) per person per year for the period up to 2050. Looking at all of the scenarios examined, one can see a range of average costs between 200 and 860 CHF (at 2010 prices) per person per year, which ultimately reflects different developments in energy technologies, resource availability and market integration, in the acceptance of technologies, and in preferences regarding supply security. The trend in costs shows, above all, a long-term increase, so comparatively high costs can also be expected after 2050.

The study is based on calculations made with the Swiss TIMES Energy System Model (STEM) of PSI, which maps the entire energy system of Switzerland, including the various interactions between technologies and sectors. STEM combines a long-term time horizon with high intra-year temporal resolution and calculates, for various future framework assumptions, the cost-minimal configurations of the energy system that attain different energy and climate policy goals. The model was significantly further developed as part of this research project, especially with regard to the options for realising net zero CO2 emissions scenarios. The model is used to calculate scenarios - not to make predictions, but rather to give insights into the diverse interactions in the energy system and thus to contribute to decision-making support in politics, industry, and society. Specifically, three main scenarios were examined in this study: a reference scenario, a net zero CO2 emissions reduction scenario, and a scenario that assumes the goals of the Swiss Energy Strategy 2050 without explicitly specifying a CO2 reduction target. In addition, seven different variants of the main scenarios were analysed, such as one variant with high technological innovation potential and another variant oriented towards reducing dependence on energy imports.

Credit: 
Paul Scherrer Institute

After old age, intellectual disability is greatest risk factor for death from COVID-19

PHILADELPHIA - Intellectual disability puts individuals at higher risk of dying earlier in life than the general population, for a variety of medical and institutional reasons. A new study from Jefferson Health examined how the COVID-19 pandemic has affected this group, which makes up 1-3% of the US population. The study, published today in the New England Journal of Medicine (NEJM) Catalyst, found that intellectual disability was second only to older age as a risk factor for dying from COVID-19.

"The chances of dying from COVID-19 are higher for those with intellectual disability than they are for people with congestive heart failure, kidney disease or lung disease," says lead author Jonathan Gleason, MD, the James D. and Mary Jo Danella Chief Quality Officer for Jefferson Health. "That is a profound realization that we have not, as a healthcare community, fully appreciated until now."

The authors examined 64 million patient records from 547 healthcare organizations between January 2019 and November 2020 to understand the impact of the COVID-19 pandemic on patients with intellectual disabilities. They identified variables such as COVID-19 diagnosis, intellectual disability and other health conditions, as well as demographic factors such as age.

The results showed that those with intellectual disabilities were 2.5 times more likely to contract COVID-19, were about 2.7 times more likely to be admitted to the hospital and 5.9 times more likely to die from the infection than the general population.
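Risk multipliers like these are relative risks: the event rate in one group divided by the rate in a comparison group. A minimal sketch, using made-up placeholder counts rather than the study's data:

```python
# Minimal sketch of how relative risks like those reported (2.5x
# infection, 5.9x death) are computed from cohort counts. All counts
# below are made-up placeholders, not the study's data.
def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Risk in the exposed group divided by risk in the control group."""
    risk_exposed = exposed_events / exposed_total
    risk_control = control_events / control_total
    return risk_exposed / risk_control

# Hypothetical: 150 deaths per 10,000 patients with intellectual
# disability vs. 25 deaths per 10,000 in the comparison group.
print(relative_risk(150, 10_000, 25, 10_000))  # -> 6.0
```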

"Our failure to protect these deeply vulnerable individuals is heart-breaking," says co-author Wendy Ross, MD, a developmental and behavioral pediatrician and director for the Center for Autism and Neurodiversity at Jefferson Health. "I believe that if we can design a system that is safe and accessible for people with intellectual disabilities, it will benefit all of us."

The authors write that patients with intellectual disabilities may have less ability to comply with strategies that reduce the risk of infection, such as masking and social distancing. In addition, the researchers showed that these patients are more likely to have additional health conditions that contribute to a more severe course of COVID-19 disease. The results of the study highlight how these issues become compounded in this population.

"We need to understand more about what is happening with these patients," says Dr. Gleason. "I do believe these patients and their caregivers should be prioritized for vaccination and healthcare services. We should reflect on why we have failed this vulnerable population, and how we can better serve them during this health crisis, and into the future," Dr. Gleason says. "Even prior to the pandemic, individuals with intellectual disabilities have had poor health outcomes. We need to do much better."

The authors suggest key action steps that require a rapid response. "First, those with intellectual disabilities and their caregivers should be prioritized for vaccines by organizations that set federal guidelines, including the CDC," says Dr. Gleason. "Second, federal and state healthcare regulatory offices should measure access, quality and safety in this population in order to track our ability to improve health outcomes for these patients. Finally, the United States should redesign the care model for individuals with intellectual disabilities."

"As an organization deeply committed to advocating for the health of one of the most marginalized populations - those with intellectual disabilities (ID) - we have seen the need for people with ID to be prioritized as a high-risk group during this pandemic. It's devastating to hear that people with ID are almost six times more likely to die from COVID-19," said Alicia Bazzano, MD, PhD, MPH, Chief Health Officer of the Special Olympics. "Most health authorities do not recognize that people with ID who get COVID-19 have a much higher risk of dying. Special Olympics is grateful to the Jefferson team for shining a spotlight on these devastating numbers."

Credit: 
Thomas Jefferson University

Novel urine test developed to diagnose human kidney transplant rejection

Patients can spend up to six years waiting for a kidney transplant. Even when they do receive a transplant, up to 20 percent of patients will experience rejection. Transplant rejection occurs when a recipient's immune cells recognize the newly received kidney as a foreign organ and refuse to accept the donor's antigens. Current methods for testing for kidney rejection include invasive biopsy procedures, causing patients to stay in the hospital for multiple days. A study by investigators from Brigham and Women's Hospital and Exosome Diagnostics proposes a new, noninvasive way to test for transplant rejection using exosomes -- tiny vesicles containing mRNA -- from urine samples. Their findings are published in the Journal of the American Society of Nephrology.

"Our goal is to develop better tools to monitor patients without performing unnecessary biopsies. We try to detect rejection early, so we can treat it before scarring develops," said Jamil Azzi, MD, associate physician in the Division of Renal Transplant at the Brigham and an associate professor of Medicine at Harvard Medical School. "If rejection is not treated, it can lead to scarring and complete kidney failure. Because of these problems, recipients can face life-long challenges."

Before this study, physicians ordered biopsies or blood tests when they suspected that a transplant recipient was rejecting the donor organ. Biopsy procedures pose risks of complications, and 70-80 percent of biopsies end up being normal. Additionally, creatinine blood tests do not always yield definitive results. Because of the limitations surrounding current tests, researchers sought alternate and easier ways to assess transplant efficacy.

In this study, researchers took urine samples from 175 patients who were already undergoing kidney biopsies advised by physicians. From these samples, investigators isolated urinary exosomes from the immune cells of the newly transplanted kidneys. From these vesicles, researchers isolated protein and mRNA and identified a rejection signature -- a group of 15 genes -- that could distinguish between normal kidney function and rejection. Notably, researchers also identified five genes that could differentiate between two types of rejection: cellular rejection and antibody-mediated rejection.

"These findings demonstrate that exosomes isolated from urine samples may be a viable biomarker for kidney transplant rejection," said Azzi.

This research differs from prior attempts to characterize urinary mRNA because the clinicians isolated exosomes rather than ordinary urine cells. The exosomal vesicle protects mRNA from degrading, allowing the genes within the mRNA to be examined for a match to the rejection signature. In previous research, mRNA was isolated from cells that shed from the kidney into urine. However, without the extracellular vesicles to protect the mRNA, the mRNA decayed very quickly, making this test difficult to do in a clinical setting.

"Our paper shows that if you take urine from a patient at different points in time and measure mRNA from inside microvesicles, you get the same signature over time, allowing you to assess whether or not the transplant is being rejected," said Azzi. "Without these vesicles, you lose the genetic material after a few hours."

One limitation to this research is that these tests were done on patients undergoing a biopsy ordered by their physician, who already suspected that something was wrong. In the future, Azzi and his colleagues aim to understand whether a test such as this one can be used on kidney transplant recipients with normal kidney activity as measured in the blood, to detect hidden (subclinical) rejection. They are currently conducting a second study on patients with stable kidney function, to see whether the signature identified in the current study can detect subclinical rejection in patients without previously identified issues.

"What's most exciting about this study is being able to tell patients who participated that their effort allowed us to develop something that can help more people in the future," said Azzi. "As a physician-scientist, seeing an idea that started as a frustration in the clinic, and being able to use the lab bench to develop this idea into a clinical trial, that is very fulfilling to me."

Credit: 
Brigham and Women's Hospital

Misinformation, polarization impeding environmental protection efforts

image: UBCO researchers are concerned about how the actions of some scientists, advocacy groups and the public are eroding efforts to conserve biodiversity, including grizzly bears, wild bees and salmon.

Image: 
UBC Okanagan

A group of researchers, spanning six universities and three continents, are sounding the alarm on a topic not often discussed in the context of conservation--misinformation.

In a recent study published in FACETS, the team, including Dr. Adam Ford, Canada Research Chair in Wildlife Restoration Ecology, and Dr. Clayton Lamb, Liber Ero Fellow, both based in the Irving K. Barber Faculty of Science, explain how the actions of some scientists, advocacy groups and the public are eroding efforts to conserve biodiversity.

"Outcomes, not intentions, should be the basis for how we view success in conservation," says Dr. Ford.

"Misinformation related to vaccines, climate change, and links between smoking and cancer has made it harder for science to create better policies for people," he says. "Weaponizing information to attack other groups impedes our ability to solve problems that affect almost everyone. We wanted to know if these issues were also a problem for people working to conserve biodiversity.

"Conservation is not perfect and things can go wrong. Sometimes people mean well, and harm ensues by accident. Sometimes people's actions are much more sinister."

The study points to multiple examples of good intentions ending badly from across the globe, including the case of the Huemul deer in Patagonia National Park, Chile.

"We reviewed one case where the primary objective of a newly-established park was to protect the endangered Huemul deer. The goal was to make the landscape a little better for these deer in hopes of increasing the population," explains Dr. Lamb. "In doing so, they removed the domestic livestock from the park, and as a result, the natural predators in the system lost their usual food source and ate many of the deer, causing the population to decline further. It's a textbook case of misplaced conservation."

Dr. Lamb points to other cases including mass petitions against shark finning in Florida, although the practice was previously banned there; planting a species of milkweed in an attempt to save monarch butterflies, only to ultimately harm them; and closer to home, the sharing of misinformation in regards to the British Columbia grizzly bear hunt.

"When we see province-wide policies like banning grizzly hunting, those go against the wishes of some local communities in some parts of the province--and choosing to steamroll their perspectives is damaging relationships and alienating the partners we need on board to protect biodiversity," says Dr. Ford.

He suggests using a 'big tent' approach may help combat some of the problems.

"We need to work together on the 90 per cent of goals that we share in common, as opposed to focusing on the 10 per cent of issues where we disagree. There are many clear wins for people and wildlife waiting to be actioned right now; we need to work together to make those happen," says Dr. Ford.

Dr. Lamb says doing so is likely to improve cooperation among parties and increase the use of evidence-based approaches in conservation; ultimately suppressing the spread of misinformation and occurrences of polarization.

"Although we're seeing some misplaced efforts, we're also seeing genuine care and good community energy in many of these cases--we just need to find a way to harness this energy in the right direction."

Credit: 
University of British Columbia Okanagan campus

Protein discovery could help enable eco-friendly fungicides

image: Strawberry infected with gray mold, a fungal disease that primarily affects ripening or damaged fruit.

Image: 
Nicole Ward Gauthier/University of Kentucky

New research reveals an essential step in scientists' quest to create targeted, more eco-friendly fungicides that protect food crops.

Scientists have known for decades that biological cells manufacture tiny, round structures called extracellular vesicles. However, their pivotal roles in communication between invading microorganisms and their hosts were recognized only recently.

UC Riverside geneticist Hailing Jin and her team found plants use these vesicles to launch RNA molecules at fungal invaders, suppressing the genes that make the fungi dangerous.

"These vesicles shuttle small RNAs between cells, like tiny Trojan horses with weapons hidden inside," said Jin, a professor of genetics and the Cy Mouradick Chair in the Department of Plant Pathology and Microbiology. "They can silence pathogenic fungal gene expression."

Using extracellular vesicles and small RNAs has several advantages over conventional fungicides. They're more eco-friendly because they are similar to naturally occurring products. Eventually, they degrade and do not leave toxic residues in the soil. Also, Jin explained, this method of fighting fungi is less likely to breed drug-resistant pathogens.

A sticking point for scientists in creating these fungicides has been figuring out how to load their desired small RNAs into the vesicles.

"We've wondered how these weaponized small RNAs get into the bubbles," Jin said. "Now, we think we have an answer."

Her laboratory has identified several proteins that serve as binding agents, helping to select and load small RNAs into the vesicles. The lab's research is detailed in a new Nature Plants journal article.

The Jin laboratory has been working for several years on the development of gene-silencing RNA fungicides. Work toward this goal led to the team's landmark discovery in 2013 that gene-silencing RNA messages can be sent from the fungal pathogen to the plant host to suppress host immunity. Later, the team learned small RNAs can move both ways -- from plants into pathogenic invader cells as well.

In 2018, the team worked out that extracellular vesicles were the major delivery system for these small RNAs. They observed that Arabidopsis plants secrete extracellular vesicles into Botrytis cinerea, a fungus that causes grey mold disease and destroys millions of crops every year.

"This was the first example of a host using these vesicles to deliver small RNAs to another organism," Jin said. "Previously we saw movement of RNA, but didn't know how the small RNA are selected and transported."

Now, she and her colleagues have identified several RNA-binding proteins in Arabidopsis that bind to specific small RNA molecules and load them into extracellular vesicles. This suggests the proteins play an important role in loading and stabilizing small RNAs in the vesicles. The finding can help increase the payload of gene-silencing RNAs that make it into vesicles and enhance the efficiency of disease control.

Some scientists have taken inspiration from the RNA communication in plant vesicles to design human therapies. For example, some are attempting to load anti-cancer RNAs and drugs into extracellular vesicles in fruits or vegetables, so people can eat or drink them. Jin is hopeful that her lab's discovery can aid these efforts.

Credit: 
University of California - Riverside

Preventing injuries and improving recovery with micro-Doppler radars

Micro-Doppler radars could soon be used in clinical settings to predict injury risk and track recovery progress, according to Penn State researchers.

Being able to view subtle differences in human movement would allow health care workers to more accurately identify individuals who may be at risk for injury and to track progress precisely while individuals are recovering from an injury. In an effort to find an accurate, reliable and cost-effective way to measure these subtleties in human movement, College of Engineering and College of Medicine researchers teamed up to develop a radar in front of which athlete study-subjects could jump.

"My students and I designed and constructed the radar system to characterize the micro-Doppler features of human gait, developed and tested various classification algorithms to separate patterns from different gait types and validated our hypothesis using measured data from athletes mimicking different gait patterns," said Ram Narayanan, professor of electrical engineering in the School of Electrical Engineering and Computer Science.

The radar system relies on the Doppler effect -- a way of measuring the change in wave frequency between a target and an observer -- to provide precise information about the movements of that target, in this case, the athlete. This radar system could be a cost-effective, portable and scalable alternative to motion capture systems, which are currently the most accurate systems for capturing subtle movements. However, they are too expensive, large and time-intensive to be a viable option in most situations.
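The underlying relationship is simple: a target moving at radial velocity v shifts a radar carrier f0 by f_d = 2 * v * f0 / c. A minimal sketch, with an assumed 10 GHz carrier and gait speed that are illustrative values, not parameters of the Penn State system:

```python
# Sketch of the Doppler relationship a micro-Doppler radar exploits.
# The 10 GHz carrier and 1.4 m/s gait speed are illustrative
# assumptions, not parameters from the Penn State system.
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(velocity_ms, carrier_hz):
    """Two-way Doppler shift for a monostatic radar observing a moving target."""
    return 2.0 * velocity_ms * carrier_hz / C

print(doppler_shift_hz(1.4, 10e9))  # roughly 93 Hz for walking speed;
                                    # limb motions add their own smaller
                                    # shifts (the micro-Doppler signature)
```

Because different limbs move at different instantaneous velocities, each contributes its own time-varying shift, and it is this pattern of shifts that classification algorithms can separate into gait types.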

"The micro-Doppler radar has not been used in health care to this point and is a novel way to look at human movement," said Dr. Cayce Onks, associate professor of family and community medicine and of orthopedics and rehabilitation in the College of Medicine, and physician at Penn State Health. "Our publication is the first of its kind evaluating the accuracy and predictability of the radar."

The results were published in the journal Gait and Posture.

The study had NCAA athletes jump in front of the radar barefoot, wearing shoes, and wearing shoes with a heel lift. The radar was able to classify the jumps into each of those three categories with greater than 90% accuracy, something that existing motion-capture systems cannot accomplish, according to Onks.

"The findings of our study show that the micro-Doppler radar is able to 'see' differences in human movement that the human eye is not able to differentiate," Onks said. "This type of information has the potential to be applied to hundreds of clinical applications, including but not limited to prevention of falls and disabilities, early detection of Parkinson's, early detection of dementia, concussion diagnosis and identification of movement patterns that place individuals at risk for any number of musculoskeletal injuries, such as ankle injuries and ACL tears. Other applications may include determining readiness of an individual to return to movement following rehabilitation from an injury or surgery."

Credit: 
Penn State

'Egg carton' quantum dot array could lead to ultralow power devices


A new path toward sending and receiving information with single photons of light has been discovered by an international team of researchers led by the University of Michigan.

Their experiment demonstrated the possibility of using an effect known as nonlinearity to modify and detect extremely weak light signals, taking advantage of distinct changes to a quantum system to advance next generation computing.

Today, as silicon-electronics-based information technology becomes increasingly throttled by heating and energy consumption, nonlinear optics is under intense investigation as a potential solution. The quantum egg carton captures and releases photons, supporting "excited" quantum states while it possesses the extra energy. As the energy in the system rises, it takes a bigger jump in energy to get to that next excited state--that's the nonlinearity.

"Researchers have wondered whether detectable nonlinear effects can be sustained at extremely low power levels--down to individual photons. This would bring us to the fundamental lower limit of power consumption in information processing," said Hui Deng, professor of physics and senior author of the paper in Nature.

"We demonstrated a new type of hybrid state to bring us to that regime, linking light and matter through an array of quantum dots," she added.

The physicists and engineers used a new kind of semiconductor to create quantum dots arranged like an egg carton. Quantum dots are essentially tiny structures that can isolate and confine individual quantum particles, such as electrons and other, stranger things. These dots are the pockets in the egg carton. In this case, they confine excitons, quasi-particles made up of an electron and a "hole." A hole appears when an electron in a semiconductor is kicked into a higher energy band, leaving a positive charge behind in its usual spot. If the hole shadows the electron in its parallel energy band, the two are treated as a single entity, an exciton.

In conventional devices--with little to no nonlinearity--the excitons roam freely and scarcely meet with each other. These materials can contain many identical excitons at the same time without researchers noticing any change to the material properties.

However, if the exciton is confined to a quantum dot, it becomes impossible to put a second identical exciton into the same pocket. You'll need an exciton with a higher energy if you want to get another one in there, which means you'll need a higher-energy photon to make it. This is known as quantum blockade, and it's the cause of the nonlinearity.

But typical quantum dots are only a few atoms across--they aren't on a usable scale. As a solution, Deng's team created an array of quantum dots that contribute to the nonlinearity all at once.

The team produced this egg carton energy landscape with two flakes of semiconductor, which are considered two-dimensional materials because they are made of a single molecular layer, just a few atoms thick. 2D semiconductors have quantum properties that are very different from larger chunks. One flake was tungsten disulfide and the other was molybdenum diselenide. Laid with an angle of about 56.5 degrees between their atomic lattices, the two intertwined electronic structures created a larger electronic lattice, with pockets about 10 atoms across.

In order for the array of quantum dots inside the 2D semiconductor to be controlled as a group with light, the team built a resonator by making one mirror at the bottom, laying the semiconductor on top of it, and then depositing a second mirror on top of the semiconductor.

"You need to control the thickness very tightly so that the semiconductor is at the maximum of the optical field," said Zhang Long, a postdoctoral research fellow in the Deng lab and first author on the paper.

With the quantum egg carton embedded in the mirrored "cavity" that enabled red laser light to resonate, the team observed the formation of another quantum state, called a polariton. Polaritons are a hybrid of the excitons and the light in the cavity. This confirmed all the quantum dots interact with light in concert. In this system, Deng's team showed that putting a few excitons into the carton led to a measurable change of the polariton's energy--demonstrating nonlinearity and showing that quantum blockade was occurring.

"Engineers can use that nonlinearity to discern energy deposited into the system, potentially down to that of a single photon, which makes the system promising as an ultra-low energy switch," Deng said.

Switches are among the devices needed to achieve ultralow power computing, and they can be built into more complex gates.

"Professor Deng's research describes how polariton nonlinearities can be tailored to consume less energy," said Michael Gerhold, program manager at the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "Control of polaritons is aimed at future integrated photonics used for ultra-low energy computing and information processing that could be used for neuromorphic processing for vision systems, natural language processing or autonomous robots."

The quantum blockade also means a similar system could possibly be used for qubits, the building blocks for quantum information processing. One forward path is figuring out how to address each quantum dot in the array as an individual qubit. Another way would be to achieve polariton blockade, similar to the exciton blockade seen here. In this version, the array of excitons, resonating in time with the light wave, would be the qubit.

Used in these ways, the new 2D semiconductors have potential for bringing quantum devices up to room temperature, rather than the extreme cold of liquid nitrogen or liquid helium.

"We are coming to the end of Moore's Law," said Steve Forrest, the Peter A. Franken Distinguished University Professor of Electrical Engineering and co-author of the paper, referring to the trend of the density of transistors on a chip doubling every two years. "Two dimensional materials have many exciting electronic and optical properties that may, in fact, lead us to that land beyond silicon."

Credit: 
University of Michigan

By detecting genetic material, fast sensor has potential use as a clinical tool

In less than a second, a small sensor used in brain chemistry research can detect the key molecules that provide the genetic instructions for life, RNA and DNA, a new study from American University shows.

The AU researchers believe the sensor is a useful tool for scientists engaged in clinical research to measure DNA metabolism, and that the sensor could be a quick way for lab clinicians to distinguish 'healthy' from 'sick' samples and determine if a pathogen is fungal, bacterial, or viral, before conducting further analysis.

To explore whether the sensors could detect RNA and DNA, Alexander Zestos, assistant professor of chemistry, teamed up with John Bracht, associate professor of biology, to test a new method for detection of RNA and DNA. Both professors are part of AU's Center for Neuroscience and Behavior, which brings together researchers from a variety of fields to investigate the brain and its role in behavior.

Novel Electrode Measures RNA and DNA

The sensors, also known as carbon fiber microelectrodes, allow researchers like Zestos to conduct precise measurement of chemicals in the brain. Researchers can learn more about the brain's complex circuitry of neural pathways and neurotransmitters, chemicals in the brain that pass messages along a given pathway.

Zestos and Bracht used a typical carbon fiber microelectrode with fast scan cyclic voltammetry, the same kind of sensor used to detect dopamine in the brain. Zestos' work frequently involves using sensors to detect and measure dopamine in the brain, because the neurotransmitter figures in a wide range of activity in the nervous system, from bodily movements to emotional responses.

The researchers modified the voltage waveform applied to the sensor's electrode. They weren't sure it would work, and were surprised when the modified electrode detected the oxidative peaks of adenosine and guanosine, two of the building blocks of DNA. Detection is fast, occurring in less than a second. The research methods were verified using both animal and synthetic RNA and DNA.

A Research Tool and Pre-Diagnostic

In the near term, Bracht and Zestos envision the tool as useful in clinical research. Researchers who use the tool could gain useful information about nucleic acids and measure the relative ratios of adenosine, guanosine and cytidine, another DNA nucleobase. Around the size of a strand of human hair, the sensor is small enough to implant in cells, tissue, or in live organisms. The sensor can detect DNA or RNA in any fluid sample, including liquid droplets, saliva, blood or urine.

The sensor could also be used as a pre-diagnostic. The onset of disease or fungal infection can cause a quick rise in nucleic acids, which the sensor can measure, potentially allowing it to flag rapid infections early, the researchers said. By comparison, it can take a day or more to get results from tests for coronavirus, for example.

"Electrochemical sensors can be used for evaluating samples prior to sequence-based methods," Bracht said. "We can envision several cases where clinically it's useful to quickly measure DNA or RNA in a sample before further sequencing. For example, it might be used when there are a lot of samples to quickly check before doing more extensive testing."

One current limitation is the sensor will need to detect more than just the strands of DNA and RNA. To detect a specific virus or for genetic testing, the sensor will need to detect the gene sequence of a virus. A next step in the research will be to modify the sensor further to see if the sensor can detect a virus. The sensor potentially has a variety of applications for which further research will be needed, including within forensic science and other fields where sensors play a prominent role.

"We have also thought about whether we can measure DNA metabolism inside living brains and cells," Bracht said. "We could possibly use one electrode to measure neurotransmitters like dopamine and also measure DNA and RNA and their building blocks in real-time in a brain."

Credit: 
American University

Fermented wool is the answer

image: Prepared for analysis using an x-ray fluorescence microscope: Fibres from the historical Pazyryk carpet embedded in epoxy resin (left).The image on the right shows standard samples that the researchers fermented and dyed themselves as a comparison.

Image: 
FAU/Dr. Andreas Späth

The Pazyryk carpet is the world's oldest example of a knotted-pile carpet and is kept at the State Hermitage Museum in St. Petersburg, Russia. The carpet, made of new wool around 400 BC, is one of the most exciting examples of central Asian craftsmanship from the Iron Age. Ever since the carpet was discovered in 1947 by Russian archaeologists in a kurgan tomb in the Altai mountains, experts in traditional dyeing techniques have been puzzled by its vivid red, yellow and blue colours, which survived even though the carpet lay buried in extreme conditions for almost two and a half thousand years.

Red fibres under the microscope

Prof. Dr. Karl Meßlinger from the Institute of Physiology and Pathophysiology at FAU, and x-ray microscopy experts Dr. Andreas Späth and Prof. Dr. Rainer Fink from the Chair of Physical Chemistry II at FAU have now shed some light on this secret. Together, they came up with the idea of imaging the distribution of pigments across the cross section of individual fibres of wool using high-resolution x-ray fluorescence microscopy (μ-XRF). Dr. Späth and Prof. Fink conducted the experiments using the PHOENIX x-ray microscope at the Paul Scherrer Institute in Villigen, Switzerland. With a spatial resolution of three to five micrometres, the microscope provides sufficient detail combined with high sensitivity for characteristic chemical elements.

The study focused mainly on red wool fibres, as the pigment Turkey red was used almost exclusively for centuries in Central Asia and the Far East to create a characteristic shade of red. Turkey red is a metal-organic complex of alizarin, which is derived from the roots of the rose madder, and aluminium. 'μ-XRF imaging shows the characteristic distribution of the aluminium along the cross section of fermented wool fibres,' explains Dr. Andreas Späth. 'We found the same pattern in fibres from the Pazyryk carpet.' This is by far the earliest example of the fermentation technique and provides an insight into the already highly developed techniques used by textile craftsmen and women in the Iron Age. The results also show the high potential of x-ray microscopy for analysing samples of textiles from archaeological sites. Up to now, research in this field has used scanning electron microscopy (SEM).

Fermented wool does not fade

Prof. Dr. Karl Meßlinger received a sample of some knots from the Pazyryk carpet 30 years ago, in 1991, for analysis with a scanning electron microscope. Together with Dr. Manfred Bieber, an expert in oriental textile dyeing techniques, he had previously discovered that SEM imaging can identify wool fibres that have been treated with a special dyeing technique based on prior fermentation of the wool. The fermentation process increases the diffusion of the pigments towards the centre of the wool fibres, resulting in significantly more brilliant and permanent colours. Fermented wool can be identified by SEM imaging by means of the characteristic raised position of the outermost layers of the cuticle. 'Traditional Anatolian textile craftspeople are familiar with a less costly yet reliable technique,' says Meßlinger. 'They spread the dyed wool out on a field for several weeks in direct sunlight, then put it in a barn as bedding for their animals before rinsing it out in a stream or river. Only fermented wool retains its colour without any significant bleaching.'

Prof. Meßlinger and Dr. Bieber were able to trace the origins of this traditional dyeing technique back to the 17th century. However, the more the treated textile is used or the more it is exposed to the elements, the less remains of the cuticle layers. Most of the cuticle layers of the world-famous Pazyryk carpet were also missing. The researchers succeeded in proving the effect of fermentation by comparing the fluorescent images with those of samples of wool they fermented and dyed themselves.

Credit: 
Friedrich-Alexander-Universität Erlangen-Nürnberg

Artificial intelligence reveals current drugs that may help combat Alzheimer's disease

BOSTON - New treatments for Alzheimer's disease are desperately needed, but numerous clinical trials of investigational drugs have failed to generate promising options. Now a team at Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) has developed an artificial intelligence-based method to screen currently available medications as possible treatments for Alzheimer's disease. The method could represent a rapid and inexpensive way to repurpose existing therapies into new treatments for this progressive, debilitating neurodegenerative condition. Importantly, it could also help reveal new, unexplored targets for therapy by pointing to mechanisms of drug action.

"Repurposing FDA-approved drugs for Alzheimer's disease is an attractive idea that can help accelerate the arrival of effective treatment--but unfortunately, even for previously approved drugs, clinical trials require substantial resources, making it impossible to evaluate every drug in patients with Alzheimer's disease," explains Artem Sokolov, PhD, director of Informatics and Modeling at the Laboratory of Systems Pharmacology at HMS. "We therefore built a framework for prioritizing drugs, helping clinical studies to focus on the most promising ones."

In an article published in Nature Communications, Sokolov and his colleagues describe their framework, called DRIAD (Drug Repurposing In Alzheimer's Disease), which relies on machine learning--a branch of artificial intelligence in which systems are "trained" on vast amounts of data, "learn" to identify telltale patterns and augment researchers' and clinicians' decision-making.

DRIAD works by measuring what happens to human brain neural cells when treated with a drug. The method then determines whether the changes induced by a drug correlate with molecular markers of disease severity.

The approach also allowed the researchers to identify drugs that had protective as well as damaging effects on brain cells.

"We also approximate the directionality of such correlations, helping to identify and filter out neurotoxic drugs that accelerate neuronal death instead of preventing it," says co-first author Steve Rodriguez, PhD, an investigator in the Department of Neurology at MGH and an instructor at HMS.

DRIAD also allows researchers to examine which proteins are targeted by the most promising drugs and if there are common trends among the targets, an approach designed by Clemens Hug, PhD, a research associate in the Laboratory of Systems Pharmacology and a co-first author.

The team applied the screening method to 80 FDA-approved and clinically tested drugs for a wide range of conditions. The analysis yielded a ranked list of candidates, with several anti-inflammatory drugs used to treat rheumatoid arthritis and blood cancers emerging as top contenders. These drugs belong to a class of medications known as Janus kinase inhibitors. The drugs work by blocking the action of inflammation-fueling Janus kinase proteins, suspected to play a role in Alzheimer's disease and known for their role in autoimmune conditions. The team's analyses also pointed to other potential treatment targets for further investigation.

"We are excited to share these results with the academic and pharmaceutical research communities. Our hope is that further validation by other researchers will refine the prioritization of these drugs for clinical investigation," says Mark Albers, MD, PhD, the Frank Wilkins Jr. and Family Endowed Scholar and associate director of the Massachusetts Center for Alzheimer Therapeutic Science at MGH and a faculty member of the Laboratory of Systems Pharmacology at HMS. One of these drugs, baricitinib, will be investigated by Albers in a clinical trial for patients with subjective cognitive complaints, mild cognitive impairment, and Alzheimer's disease that will be launching soon at MGH in Boston and at Holy Cross Health in Fort Lauderdale, Florida. "In addition, independent validation of the nominated drug targets could provide new insights into the mechanisms behind Alzheimer's disease and lead to novel therapies," says Albers.

Credit: 
Massachusetts General Hospital

Original error

There is no stronger risk factor for cancer than age. At the time of diagnosis, the median age of patients across all cancers is 66. That moment, however, is the culmination of years of clandestine tumor growth, and the answer to an important question has thus far remained elusive: When does a cancer first arise?

At least in some cases, the original cancer-causing mutation could have appeared as long as 40 years ago, according to a new study by researchers at Harvard Medical School and the Dana-Farber Cancer Institute.

Reconstructing the lineage history of cancer cells in two individuals with a rare blood cancer, the team calculated when the genetic mutation that gave rise to the disease first appeared. In a 63-year-old patient, it occurred at around age 19; in a 34-year-old patient, at around age 9.

The findings, published in the March 4 issue of Cell Stem Cell, add to a growing body of evidence that cancers slowly develop over long periods of time before manifesting as a distinct disease. The results also present insights that could inform new approaches for early detection, prevention, or intervention.

"For both of these patients, it was almost like they had a childhood disease that just took decades and decades to manifest, which was extremely surprising," said co-corresponding study author Sahand Hormoz, HMS assistant professor of systems biology at Dana-Farber.

"I think our study compels us to ask, when does cancer begin, and when does being healthy stop?" Hormoz said. "It increasingly appears that it's a continuum with no clear boundary, which then raises another question: When should we be looking for cancer?"

In their study, Hormoz and colleagues focused on myeloproliferative neoplasms (MPNs), a rare type of blood cancer involving the aberrant overproduction of blood cells. The majority of MPNs are linked to a specific mutation in the gene JAK2. When the mutation occurs in bone marrow stem cells, the body's blood cell production factories, it can erroneously activate JAK2 and trigger overproduction.

To pinpoint the origins of an individual's cancer, the team collected bone marrow stem cells from two patients with MPN driven by the JAK2 mutation. The researchers isolated a number of stem cells that contained the mutation, as well as normal stem cells, from each patient, and then sequenced the entire genome of each individual cell.

Over time and by chance, the genomes of cells randomly acquire so-called somatic mutations--nonheritable, spontaneous changes that are largely harmless. Two cells that recently divided from the same mother cell will have very similar somatic mutation fingerprints. But two distantly related cells that shared a common ancestor many generations ago will have fewer mutations in common because they had the time to accumulate mutations separately.

Cell of origin

Analyzing these fingerprints, Hormoz and colleagues created a phylogenetic tree, which maps the relationships and common ancestors between cells, for the patients' stem cells--a process similar to studies of the relationships between chimpanzees and humans, for example.

"We can reconstruct the evolutionary history of these cancer cells, going back to that cell of origin, the common ancestor in which the first mutation occurred," Hormoz said.

Combined with calculations of the rate at which mutations accumulate, the team could estimate when the JAK2 mutation first occurred. In the patient who was first diagnosed with MPN at age 63, the team found that the mutation arose around 44 years prior, at the age of 19. In the patient diagnosed at age 34, it arose at age 9.
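The dating logic amounts to a molecular clock, which can be illustrated with a toy calculation (the mutation counts and the constant-rate assumption below are illustrative placeholders, not the study's actual figures or method):

```python
# Toy molecular-clock estimate: if somatic mutations accumulate at a
# roughly constant rate, the number of mutations private to the mutant
# clone, divided by that rate, approximates the years elapsed since the
# clone's founding cell arose. Numbers are illustrative, not study data.

def years_since_origin(private_mutations: int, mutations_per_year: float) -> float:
    return private_mutations / mutations_per_year

age_at_diagnosis = 63
elapsed = years_since_origin(44, 1.0)  # e.g. ~44 private mutations at ~1/year
age_at_origin = age_at_diagnosis - elapsed
print(int(age_at_origin))  # 19
```

In practice the rate itself must be estimated from the data, and uncertainty in both the rate and the mutation counts propagates into the estimated age of origin.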

By looking at the relationships between cells, the researchers could also estimate the number of cells that carried the mutation over time, allowing them to reconstruct the history of disease progression.

"Initially, there's one cell that has the mutation. And for the next 10 years there's only something like 100 cancer cells," Hormoz said. "But over time, the number grows exponentially and becomes thousands and thousands. We've had the notion that cancer takes a very long time to become an overt disease, but no one has shown this so explicitly until now."

The team found that the JAK2 mutation conferred a certain fitness advantage that helped cancerous cells outcompete normal bone marrow stem cells over long periods of time. The magnitude of this selective advantage is one possible explanation for some individuals' faster disease progression, such as the patient who was diagnosed with MPN at age 34.

In additional experiments, the team carried out single-cell gene expression analyses in thousands of bone marrow stem cells from seven different MPN patients. These analyses revealed that the JAK2 mutation can push stem cells to preferentially produce certain blood cell types, insights that may help scientists better understand the differences between various MPN types.

Together, the results of the study offer insights that could motivate new diagnostics, such as technologies to identify the presence of rare cancer-causing mutations currently difficult to detect, according to the authors.

"To me, the most exciting thing is thinking about at what point can we detect these cancers," Hormoz said. "If patients are walking into the clinic 40 years after their mutation first developed, could we have caught it earlier? And could we prevent the development of cancer before a patient ever knows they have it, which would be the ultimate dream?"

The researchers are now further refining their approach to studying the history of cancers, with the aim of helping clinical decision-making in the future.

While their approach is generalizable to other types of cancer, Hormoz notes that MPN is driven by a single mutation in a very slow growing type of stem cell. Other cancers may be driven by multiple mutations, or in faster-growing cell types, and further studies are needed to better understand the differences in evolutionary history between cancers.

The team's current efforts include developing early detection technologies, reconstructing the histories of greater numbers of cancer cells, and investigating why some patients' mutations never progress into full-blown cancer, but others do.

"Even if we can detect cancer-causing mutations early, the challenge is to predict which patients are at risk of developing the disease, and which are not," Hormoz said. "Looking into the past can tell us something about the future, and I think historical analyses such as the ones we conducted can give us new insights into how we could be diagnosing and intervening."

Credit: 
Harvard Medical School

Latinos, Blacks less swayed by college-bound friends

ITHACA, N.Y. - Close friends are important drivers of adolescent behavior, including college attendance, according to Steven Alvarado, assistant professor of sociology in the College of Arts and Sciences.

In new research published March 4 in American Educational Research Journal, Alvarado reports that having college-bound friends increases the likelihood that a student will enroll in college. However, the effect is diminished for Black and Latino students compared with white and Asian students, especially for males and especially for selective and highly selective colleges, a pattern Alvarado attributes to structural and cultural processes.

"Black and Latino students certainly reap some benefits from having college-bound friends in high school," Alvarado said, "but the benefits are not as widespread for these students as they are for white and Asian students when it comes to college enrollment."

Black and Latino students demonstrate a clear and persistent disparity in their college enrollment rates relative to their white and Asian counterparts, Alvarado wrote, citing a 2020 study by the U.S. Department of Education National Center for Education Statistics. In 2018, the college enrollment rate was 59% for Asian young adults and 42% for white young adults; the rates were 36% for Latino young adults and 37% for Black young adults.

Alvarado said he has long been fascinated by the idea that placing disadvantaged and minority students in higher socioeconomic and achievement settings can help them excel in school. This study is part of a series of research papers that test whether having friends who plan to go to college is associated with college-going behavior.

"Friends may directly encourage and motivate one another to study hard, focus and remain on a college path throughout high school," Alvarado wrote. "Friends also provide companionship and camaraderie that may ease the oftentimes isolating academic path to college during adolescence."

However, Alvarado said, the key question is: Do friends influence college-going behavior, or are college-going students already internally motivated to go, regardless of their friends' input?

To quantify how much friendship influences college enrollment for specific groups, Alvarado analyzed data from the U.S. Department of Education's High School Longitudinal Study of 2009, a nationally representative survey of approximately 24,000 students who were followed and surveyed through college.

This survey asked students in 11th grade: "How many of your close friends plan to attend a four-year college?"

Alvarado found that for all students combined, having college-bound friends increased the probability of enrolling in any college by 6 percentage points. Yet Black and Latino students benefited less than white and Asian students, and the gap was starker for male students than for female students and widened as the colleges became more selective.

Alvarado theorized in the paper that structural and cultural forces mitigate the influence of friendship for Black and Latino males.

Alvarado wrote that Black and Latino students often internalize negative stereotypes from the wider society about their educational ability. They may also reject an educational system that often marginalizes them. In these ways, both the supply of and the demand for college-bound friends may be diminished by structural discrimination that Black and Latino students deal with every day.

The college-leaning influence of friends may also be tempered among Black and Latino students who grow up in communities that value family above the individual - a dynamic active in Latino immigrant communities and Black communities, as well, Alvarado wrote.

Among the potentially effective strategies for improving college enrollment rates for Black and Latino students is for schools to think of ways to better incorporate Black and Latino families in the college-going process, he said.

"Friendships," Alvarado said, "perhaps when combined with a culturally sensitive approach to college-going, may be one essential piece of the puzzle that is necessary to ameliorate racial and ethnic disparities in college enrollment."

Credit: 
Cornell University

An unstable working life affects the future mental health of young people

image: Spain has been among the European countries with the lowest employment rates, which are accentuated in the young active population.

Image: 
Adapted from The Noun Project

A new study reveals that a precarious, unstable initiation by young people to working life is associated with poorer future mental health. The study was conducted by researchers from the Center for Research in Occupational Health (CISAL, a joint group of UPF and the Hospital del Mar Medical Research Institute) in Barcelona, Spain. Amaya Ayala-Garcia, Laura Serra and Mònica Ubalde-López are the authors of the study, which has been published in the journal BMJ Open.

Since the 1990s, Spain has been among the European countries with the lowest employment rates, which are accentuated in the young active population. Moreover, in 2017, Spain had the highest proportion of temporary contracts and one of the highest rates of precariousness. Previous studies show that unemployment, temporary employment and job insecurity are related to a higher incidence of mental disorders.

This study evaluates the relationship between the various possible pathways at the start of working life with future absenteeism due to mental disorders in a sample of salaried workers. The cohort study is based on employees aged between 18 and 28 years, resident in Catalonia, who presented at least one episode of absenteeism as a result of mental disorders between 2012 and 2014.

"It is a novel approach that evaluates how transitions between types of contracts, situations of employment/unemployment and periods without social security coverage can affect the evolution of mental health in the younger working population entering the labour market. It also investigates the possible effect of the public or private ownership of the companies in which the subjects have begun their working life", says Mònica Ubalde-López, study coordinator, currently a researcher at the Barcelona Institute for Global Health (ISGlobal), a centre driven by the "la Caixa" Foundation.

Amaya Ayala-Garcia, first author of the article, says that "to evaluate job stability, we apply a statistical technique that has allowed us to take an initial photo of the previous 10 years of working life, in which we identified four different patterns of participation in the labour market". These four patterns are: a stable, permanent job, increasing stability (a decrease in the number of transitions between temporary contracts and lack of social security coverage towards permanent contracts), unstable employment with varying types of contracts, and finally, a fourth pattern characterized by later entry to the job market. "To approach the severity of mental disorders, we measured the cumulative days of absenteeism due to mental disorders over three years. Thus, we detected a more or less favourable evolution over time", she adds.

The authors of the article noted that people with a more stable working life, for example increasing job stability, tended to have a more favourable future of accumulation of days absent due to mental disorders (fewer days accumulated) than people who had a more unstable working life. They also noted that having worked in large companies at the start of their working life was associated with better mental health later on.

"Job insecurity among young people can be seen in rates of temporary employment and unemployment, which have also greatly increased with covid. Our results show that a precarious labour market may be shaping the future mental health of the young working population. Therefore, future public health policies should address this problem in order to prevent long-term absenteeism", they conclude.

Credit: 
Universitat Pompeu Fabra - Barcelona

More than 80 percent of all infant deaths in Zambian cohort experienced delays in receiving care

(Boston)--Children in Zambia under age 5 die at a rate nearly six to more than 10 times higher than in the U.S.: an estimated 40-75 per 1,000, compared with 6.98 per 1,000. Identifying why these children are dying is the mission of Rotem Lapidot, MD, assistant professor of pediatrics at Boston University School of Medicine (BUSM).

"Significantly, over 80 percent of all community infant deaths involved some form of delay. While it is impossible to know what would have occurred in the absence of such delays, the majority of infant deaths in Lusaka, the capital of Zambia, are from causes for which effective treatments currently exist," explained Lapidot, the corresponding author on the study that appeared online in the journal Pediatrics. A significant number of infants die in the community and are referred to as "brought in dead" (BID). There are limited data around the problem of infant community deaths and identifying the circumstances surrounding them is critical to reduce infant mortality rates.

In an effort to better identify common patterns of health-seeking behaviors that contributed to these deaths, researchers from BUSM and Boston University School of Public Health (BUSPH) analyzed free-text narratives from verbal autopsies from 230 families of BID infants younger than 6 months of age. They found almost 83 percent of infants had one or more delays in care, the most common being the family's delay in deciding to seek care (54.8 percent), even as severe symptoms were frequently described. Almost 28 percent of infants died en route to a healthcare facility. Delays in receiving adequate care, including infants dying while waiting in line at a clinic or during referral from a clinic to a hospital, occurred in almost 25 percent of infants, and a third of the infants had been evaluated by a clinician in the days prior to their death.

According to the researchers, delays in care were the rule rather than the exception in this population of infants. "In many cases infants are dying because they do not receive existing treatments at all or receive them only after the illness has become unsalvageable. If our goal is to reduce child mortality, these findings have profound implications," adds Lapidot, who also is a pediatric infectious diseases specialist at Boston Medical Center.

The researchers said it is important to emphasize that delays in seeking care are likely complex and multifactorial, and do not necessarily imply negligence by the child's caregivers. They believe logistical barriers may be insurmountable, particularly in deeply impoverished, under-resourced communities such as those typified by the urban poor in Lusaka. "However, our current analysis suggests that there are relatively simple interventions that are low-tech and could be achieved at low cost to avoid such delays and save many infants lives," said Lapidot.

"By analyzing open-ended narratives from the verbal autopsies, we were able to explore the context surrounding infant deaths beyond what is written on a death certificate. We could gain a deeper understanding of the circumstances and social factors that led to the infant death, in the caregivers' own words. This type of data is often not reported in the scientific literature, but these voices and stories of infant death in underserved communities should be elevated and urgently listened to," added former BUSPH research fellow Anna Larson, MPH.

The data used in this study was collected as part of the larger Zambia Pertussis RSV Infant Mortality Estimation (ZPRIME) study. "In global health, we are often very focused on introducing new interventions, drugs, vaccines or technologies as strategies to reduce childhood mortality. What this study reminds us is that sometimes very simple interventions have the potential to save lives," said principal investigator of that study Christopher Gill, MD, associate professor of global health at BUSPH.

Credit: 
Boston University School of Medicine

'Falling insect' season length impacts river ecosystems

image: When there is a pulsed (intensive) supply of terrestrial invertebrates, competition between the salmon is reduced, resulting in hardly any size variation between individual fish. In the pulsed groups, the majority of salmon ate terrestrial invertebrates, reducing the predation pressure on benthic invertebrates. This had a domino effect on the numbers of benthic invertebrates and the breakdown rate of leaf detritus. Solid arrows show predator/prey relationships and the dotted arrows indicate the strength of intraspecific competition. The thickness of the arrow corresponds to the scale of the effect.

Image: 
Takuya Sato

Insects that fall from the surrounding forest provide seasonal food for fish in streams. Researchers at Kobe University and The University of Tokyo have shown that the lengthening of this period has a profound effect on food webs and ecosystem functions present in streams.

These research results provide proof that changes in forest seasonality also affect the ecosystems of nearby rivers. This finding highlights the importance of predicting the effects of climate change on ecosystems.

The research group consisted of Associate Professor SATO Takuya and post-graduate student UEDA Rui of Kobe University's Graduate School of Science, and Associate Professor TAKIMOTO Gaku of The University of Tokyo's Graduate School of Agricultural and Life Sciences.

The results were published in the Journal of Animal Ecology on March 4, 2021.

Main Points

If terrestrial insects are only present in rivers for short, intensive periods, this reduces the competition for food between salmon. As a result, all of the salmon experience equal growth and little size variation is found between individuals. However, if the same amount of terrestrial insects is available over an extended period, a hierarchy forms in which large salmon monopolize the supply of terrestrial insects. As a result, only the large fish grow bigger and greater variation is found in the size of individual fish.

Whether terrestrial insects are available for an intensive or prolonged period of time has a different effect on benthic invertebrate (*1) numbers and the leaf breakdown rate (*2). This result was only observed in the experimental groups where the majority of benthic invertebrates present were easy for the fish to consume.

This study has illuminated that the length of seasonal cycles has a domino effect on ecosystems. Climate change is causing significant alterations in the seasonal cycles of living things. The results of this research indicate the importance of understanding and predicting the response of ecosystems to climate change.

Research Background

Cold, clear flowing streams are home to many salmonid species including red-spotted masu salmon, cherry salmon and Japanese char (hereafter referred to as 'stream fish'). These stream fish prefer to eat the terrestrial invertebrates that fall into the river from the surrounding forests. When there are many of these land-dwelling insects in the water, the stream fish tend not to eat the benthic invertebrates that reside in the river, such as amphipods and the young of aquatic insects. This results in a sustained large population of benthic invertebrates, which eat the leaves that fall into the water. Consequently, this high population in turn accelerates the speed at which leaves in the river are decomposed (leaf breakdown rate = stream ecosystem functionality). Thus the presence of terrestrial invertebrates changes the diets of fish, which has a big impact (a positive indirect effect (*3)) on river food webs and ecosystem functions (Figure 1).

The amount of terrestrial invertebrates that end up in rivers increases as trees grow new leaves in spring, reaches a peak in early summer, and then decreases as the trees lose their leaves in fall. This seasonal pattern is common to streams located in cool temperate to temperate zones. However, the period between the growth of new leaves and defoliation is short in forests at high latitudes and elevations, but long in forests at low latitudes and elevations. Therefore, even though there may be a similar total number of terrestrial invertebrates in rivers over the course of a year, it is likely that they are present in the water for intensive periods at high latitudes/elevations and prolonged periods at low latitudes/elevations.

Research Aims

What kind of effect does the length of the terrestrial invertebrate supply period have on the food webs of streams and stream fish?

This research study investigated the impacts that different supply periods had on red-spotted masu salmon (Oncorhynchus masou ishikawae), as well as the impact on food webs and ecosystems in streams.

Research Methodology and Findings

Outdoor experiments were conducted in large pools that mimic river ecosystems at Kyoto University's Wakayama Research Forest Station. The experiments ran from August until November 2016, and exactly the same total amount and type of terrestrial invertebrate (mealworms) was supplied to each treatment over the 90-day period. In the pulsed groups, concentrated amounts of mealworms were supplied every day during the middle 30 days of the experiment (i.e., from day 30 to day 60), whereas the prolonged groups received a steady daily supply, one third of the pulsed groups' daily ration, spread across the entire 90 days (Photos B and C). Control groups that were given no terrestrial invertebrates were also set up. The following aspects were investigated: the salmonids' stomach contents and body size, the number of benthic invertebrates, and the leaf breakdown rate.
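The two feeding schedules can be sketched numerically. This is an illustrative reconstruction, not the study's actual quantities: the total supply figure is hypothetical, and only the timing structure (same total, a 30-day pulse vs. an even 90-day spread) follows the description above.

```python
# Illustrative sketch of the two feeding schedules in the 90-day experiment.
# The total mealworm supply is identical; only its timing differs.
TOTAL_DAYS = 90
TOTAL_SUPPLY = 900.0  # arbitrary units; hypothetical, not the real quantity

# Pulsed: the entire supply delivered on days 30-59 (the middle 30 days).
pulsed = [TOTAL_SUPPLY / 30 if 30 <= day < 60 else 0.0
          for day in range(TOTAL_DAYS)]

# Prolonged: the same total spread evenly over all 90 days,
# i.e. one third of the pulsed group's daily ration.
prolonged = [TOTAL_SUPPLY / TOTAL_DAYS] * TOTAL_DAYS

assert abs(sum(pulsed) - sum(prolonged)) < 1e-9   # identical totals
print(max(pulsed) / max(prolonged))               # pulsed daily ration is 3x larger
```

The threefold difference in peak daily ration is what makes the pulsed supply hard for large fish to monopolize: more food arrives at once than a dominant individual can consume.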

In the pulsed groups, it was difficult for the bigger fish to monopolize the mealworm supply because a large amount was given each time, so smaller fish were also able to eat mealworms (Figure 2A). After the experiment, it was found that there was little difference in size between fish in the pulsed groups (Figure 3), indicating that these conditions resulted in a community where it was difficult for individual fish to dominate the food supply. Conversely, in the prolonged groups, it was easy for larger fish to monopolize food flowing downstream, meaning that the out-competed smaller fish hardly ate any mealworms (Figure 2A). Post-experiment, a large variation in the size of fish was found in the prolonged groups (Figure 3), revealing that these conditions had resulted in a community where large fish could easily monopolize the food supply. Furthermore, individuals that had reached a mature size were found among the dominant fish in the prolonged groups, which is also indicative of an impact on population growth.

In the pulsed group where both large and small fish could eat mealworms, there was a significant decrease in the amount of benthic invertebrates eaten by all fish compared with the control group (Figure 2B). On the other hand, small fish had a tendency to frequently consume benthic invertebrates in the prolonged groups where large fish monopolized the mealworm supply (Figure 2B). Consequently, there was no significant decrease in the amount of consumed benthic invertebrates in the prolonged groups compared with the control.

Benthic invertebrate populations were at their highest in the pulsed groups where all salmon consumed fewer benthic invertebrates, resulting in the quickest breakdown of fallen leaves. On the other hand, in the prolonged groups where smaller fish ate many benthic invertebrates, the numbers of these insects and the leaf breakdown rate did not reach the levels seen in the pulsed groups. In other words, the presence of terrestrial invertebrates changed the feeding habits of the fish, which had a positive indirect effect on benthic invertebrates and the leaf breakdown rate, and this impact was greater in the pulsed groups than in the prolonged groups. Significant contrast was observed in the strength of this indirect effect between pulsed and prolonged groups when a large percentage of the benthic invertebrates consisted of midges, which are easy for salmon to consume. However, the effect was not observed when isopods, which are rarely found in the stomach contents of salmon, made up a large percentage of the benthic invertebrates.

The main cause behind this last finding is believed to be that it is difficult for the fishes' dietary habits to influence benthic invertebrate numbers and the leaf breakdown rate in such circumstances. If the majority of benthic invertebrates present are difficult for the fish to eat, then their diet is unlikely to change from terrestrial to benthic invertebrates.

Further Research

This research provides initial proof that the length of the period where forest-dwelling insects are present in rivers has an extensive impact on salmon growth rate and size distribution, stream food webs and ecosystem functions. In addition, the effect on stream ecosystems is more pronounced when there is a high population of benthic invertebrate species that are easy for salmon to consume. These results show the vital importance of studying organisms' seasonality, which connects ecosystems such as those of forests and rivers, in order to understand food web structures and ecosystem functions.

Based on these research results, we can see how worldwide climate change is impacting the seasonality of organisms living in specific ecosystems and that these changes in turn are likely to have a significant ripple effect on the surrounding ecosystems. Investigating these aspects, and being able to understand and predict the domino effect that climate change has on ecosystem behavior are important issues in the study of macrobiology.

At present, the researchers have set up observation sites all across Japan, from Hokkaido in the north to Kyushu in the south. They are conducting longitudinal observations on the seasonal rise and decline of forest and river-dwelling insects in collaboration with local researchers. Through a combination of wide-ranging, longitudinal species observations and outdoor experiments like the ones in this study, they hope to deepen our understanding of how climate change impacts ecosystems' seasonal aspects, with a view to being able to predict these effects.

Credit: 
Kobe University