Tech

From publication bias to lost in information

The availability of clinical trial records has increased markedly in recent years. Numerous documents from various sources are often available for a single clinical trial, sometimes with overlapping but often incomplete information.

Identifying and processing this information demands considerable resources and poses numerous challenges. In the current issue of BMJ Evidence-Based Medicine, two researchers from the German Institute for Quality and Efficiency in Health Care (IQWiG), Beate Wieseler from the Drug Assessment Department and Natalie McGauran from the Communications Unit, illustrate this using the example of information retrieval for a health technology assessment report.

In their article "From publication bias to lost in information: Why we need a central public portal for clinical trial data", they call for the establishment of a central public portal for clinical trial records in order to simplify and standardize the extremely time-consuming search for these records. The primary goal is the complete and timely availability of so-called "clinical study reports", i.e., of those records that comprehensively describe clinical trials and their results. The establishment of such a portal would also support a further aim of the growing data transparency movement, namely, to improve patient care.

The main target group of the portal would be researchers conducting evidence syntheses as a basis for informed decision-making in health care, such as the development of guidelines or health policy directives.

Credit: 
Institute for Quality and Efficiency in Health Care

Pre-existing flu immunity impacts antibody quality following infection and vaccination

New research by scientists at the University of Chicago suggests a person's antibody response to influenza viruses is dramatically shaped by their pre-existing immunity, and that the quality of this response differs in individuals who are vaccinated or naturally infected. Their results highlight the importance of receiving the annual flu vaccine to induce the most protective immune response.

The researchers found that most of the initial antibodies stimulated after both influenza infections and influenza vaccinations came from old B cells -- a type of white blood cell that secretes antibodies -- indicating the immune system's memory plays a major role in how the body responds early on to a viral infection. These antibodies displayed higher reactivity toward strains of influenza that circulated during an individual's childhood compared to more recent strains.

The study, published December 10, 2020 in the journal Science Translational Medicine, provides those working on a universal influenza vaccine further understanding of how pre-existing immunity affects the development and performance of neutralizing and non-neutralizing antibodies following infection and vaccination. Any effective universal influenza vaccine will depend on scientists identifying 'conserved' parts of the influenza virus that do not mutate over time and that antibodies can target to prevent infection.

"Most interestingly, we found that people who were actively sick with influenza had old antibodies that predominantly targeted parts of the virus that don't change -- but those antibodies specifically targeted non-neutralizing sites," said Haley Dugan, co-first author of the study and a PhD candidate in immunology. "When we tested these same antibodies in mice, they weren't able to protect them from being infected with influenza."

In contrast, the researchers found that influenza vaccinations boost antibodies that tended to target conserved yet neutralizing regions of the virus, which suggests vaccinations can draw upon pre-existing immunity to prompt more protective responses. Vaccinated individuals also generated many antibodies that targeted new and mutated regions on the virus, suggesting these vaccine-induced antibodies are more adaptable.

Immune system memory ensures a rapid and specific response to previously encountered pathogens. Vaccinations work by exposing the immune system to a small amount of virus, which causes B cells to develop a biological memory to the virus. If the body encounters the same virus later, the immune system is alerted to attack and eliminate the virus.

But in order to be protected, the viral proteins of the infecting strain must typically match those of the strain used in the vaccine. The memory B cells are like keys that fit and bind to the locks -- the viral proteins. These memory B cells can survive for decades, providing long-lasting protection from future infections. But if the virus mutates and is significantly different, the memory B cells can no longer recognize the viral proteins, potentially leading to infection.

For this reason, the human body is pitted in an evolutionary arms race with the flu. Because influenza viruses rapidly evolve and mutate each season, our immune system has trouble recognizing the viral surface proteins on new influenza strains. As a result, our bodies often rely on old antibodies to fight new influenza strains; this is possible because some parts of the influenza virus that are critical to its structure or function do not change, remaining familiar to our immune system.

Researchers now understand that specific structural and functional parts of the influenza virus that do not change are better for antibodies to target than others. Antibodies that bind to one of these neutralizing sites are able to prevent infection, while antibodies that target non-neutralizing sites often cannot. Scientists believe a person's age, history of exposure to the influenza virus and type of exposure -- either through infection or vaccination -- all shape whether their immune system antibodies target neutralizing or non-neutralizing sites on a virus.

In the UChicago study, scientists sought to address a major knowledge gap: Which conserved viral sites are preferentially targeted following natural infection versus vaccination in people, and how does pre-existing immunity play a role in shaping the landscape of neutralizing and non-neutralizing antibodies?

"For people who have caught the flu, their pre-existing immunity may make them susceptible to infection or increase the severity of their influenza symptoms if their antibodies are targeting 'bad' or non-neutralizing viral sites," said co-first author and Immunology postdoctoral fellow Jenna Guthmiller, PhD.

By contrast, vaccination largely induces neutralizing and protective antibodies, old and new, highlighting the importance of receiving the seasonal influenza vaccine.

"This study provides a major framework for understanding how pre-existing immunity shapes protective antibody responses to influenza in humans," said Patrick Wilson, PhD, a professor of immunology and lead author of the study. "We need more studies to determine whether the targeting of specific neutralizing and non-neutralizing viral sites directly impacts a person's likelihood of becoming ill."

The researchers are now examining how early exposure to the influenza virus in children shapes their immune response later in life as a follow-up to this work.

Credit: 
University of Chicago Medical Center

Oregon researchers find that like adults, children by age 3 prefer seeing fractal patterns

image: A fractal-inspired carpet designed by Richard Taylor and his University of Oregon colleagues is underfoot during a reception in a Chicago building. A new study by UO psychologists working with Taylor has found that a visual preference for fractal patterns, seen in nature, has developed in children by the age of 3. Taylor's team has used fractal patterns in solar panels and window blinds to heighten people's aesthetic appreciation.

Image: 
Photo by Richard Taylor

EUGENE, Ore. -- Dec. 11, 2020 -- By the time children are 3 years old they already have an adult-like preference for visual fractal patterns commonly seen in nature, according to University of Oregon researchers.

That discovery emerged among children who've been raised in a world of Euclidean geometry, such as houses with rooms constructed with straight lines in a simple non-repeating manner, said the study's lead author Kelly E. Robles, a doctoral student in the UO's Department of Psychology.

"Unlike early humans who lived outside on savannahs, modern-day humans spend the majority of their early lives inside these manmade structures," Robles said. "So, since children are not heavily exposed to these natural low-to-moderate complexity fractal patterns, this preference must come from something earlier in development or perhaps are innate."

The study was published online Nov. 25 in the Nature journal Humanities and Social Sciences Communications. In it, researchers explored how individual differences in processing styles may account for trends in fractal fluency. Previous research had suggested that a preference for fractal patterns may develop as a result of environmental and developmental factors acquired across a person's lifespan.

In the UO study, researchers exposed participants -- 82 adults, ages 18-33, and 96 children, ages 3-10 -- to images of fractal patterns, exact and statistical, ranging in complexity on computer screens.

Exact fractals are highly ordered such that the same basic pattern repeats exactly at every scale and may possess spatial symmetry such as that seen in snowflakes. Statistical fractals, in contrast, repeat in a similar but not exact fashion across scale and do not possess spatial symmetry as seen in coastlines, clouds, mountains, rivers and trees. Both forms appear in art across many cultures.
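To make the distinction concrete, the short sketch below (an illustration in Python, not the stimuli used in the study) builds a one-dimensional exact fractal, in which the same rule repeats at every scale, and a statistical one, in which the repetition holds only on average. The recursion depth and roughness value are arbitrary choices.

```python
# Illustrative sketch: exact vs. statistical (random) self-similarity in one dimension.
import random

def exact_fractal(levels):
    """Cantor-style exact fractal: the identical removal rule is applied at every scale."""
    segments = [(0.0, 1.0)]
    for _ in range(levels):
        refined = []
        for a, b in segments:
            third = (b - a) / 3.0
            refined.append((a, a + third))      # keep the left third
            refined.append((b - third, b))      # keep the right third, drop the middle
        segments = refined
    return segments

def statistical_fractal(levels, roughness=0.5, seed=42):
    """Random midpoint displacement: self-similar only in a statistical sense."""
    random.seed(seed)
    heights = [0.0, 0.0]
    scale = 1.0
    for _ in range(levels):
        refined = []
        for left, right in zip(heights[:-1], heights[1:]):
            midpoint = (left + right) / 2.0 + random.uniform(-scale, scale)
            refined.extend([left, midpoint])
        refined.append(heights[-1])
        heights = refined
        scale *= roughness    # smaller roughness -> smoother, less complex profile
    return heights

print(len(exact_fractal(5)), "exactly self-similar segments")
print(len(statistical_fractal(5)), "points on a statistically self-similar profile")
```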

When viewing the fractal patterns, Robles said, subjects chose favorites between different pairs of images that differed in complexity. When looking at exact fractal patterns, selections involved different pairs of snowflake-like or tree-branch-like images. For the statistical fractals, selections involved choosing between pairs of cloud-like images.

"Since people prefer a balance of simplicity and complexity, we were looking to confirm that people preferred low-to-moderate complexity in statistically repeating patterns, and that the presence of order in exact repeating patterns allowed for a tolerance of and preference for more complex patterns," she said.

Although there were some differences in the preferences of adults and children, the overall trend was similar. Exact patterns with greater complexity were more preferred, while preference for statistical patterns peaked at low-moderate complexity and then decreased with additional complexity.

In subsequent steps with the participants, the UO team was able to rule out the possibility that age-related perceptual strategies or biases may have driven different preferences for statistical and exact patterns.

"We found that people prefer the most common natural pattern, the statistical fractal patterns of low-moderate complexity, and that this preference does not stem from or vary across decades of exposure to nature or to individual differences in how we process images," Robles said. "Our preferences for fractals are set before our third birthdays, suggesting that our visual system is tuned to better process these patterns that are highly prevalent in nature."

The aesthetic experience of viewing nature's fractals holds huge potential benefits, ranging from stress reduction to recovery from mental fatigue, said co-author Richard Taylor, professor and head of the UO's Department of Physics.

"Nature provides these benefits for free, but we increasingly find ourselves surrounded by urban landscapes devoid of fractals," he said. "This study shows that incorporating fractals into urban environments can begin providing benefits from a very early age."

In his own research, Taylor is using fractal-inspired designs in an effort to create implants to treat macular degeneration. He and co-author Margaret Sereno, professor of psychology and director of the Integrative Perception Lab, also have published on the positive aesthetic benefits of installing fractal solar panels and window blinds.

Fractal carpets, recently installed in the UO's Phil and Penny Knight Campus for Accelerating Scientific Impact, are seen in the new facility's virtual grand opening tour. Sereno and Taylor also are collaborating on future applications with Ihab Elzeyadi, a professor in the UO's Department of Architecture.

Credit: 
University of Oregon

New online COVID-19 mortality risk calculator could help determine who should get vaccines first

A new online calculator for estimating individual and community-level risk of dying from COVID-19 has been developed by researchers at the Johns Hopkins Bloomberg School of Public Health. The researchers who developed the calculator expect it to be useful to public health authorities for assessing mortality risks in different communities, and for prioritizing certain groups for vaccination as COVID-19 vaccines become available.

The algorithm underlying the calculator uses information from existing large studies to estimate the risk of COVID-19 mortality for individuals based on age, gender, sociodemographic factors and a variety of different health conditions. The risk estimates apply to individuals in the general population who are currently uninfected, and capture factors associated with both risk of future infection and complications after infection.

"Our calculator represents a more quantitative approach and should complement other proposed qualitative guidelines, such as those by the National Academy of Sciences and Medicine, for determining individual and community risks and allocating vaccines," says study senior author Nilanjan Chatterjee, PhD, Bloomberg Distinguished Professor in the departments of Biostatistics and Epidemiology at the Bloomberg School.

The new risk calculator is presented in a paper that appears in the journal Nature Medicine.

The researchers also collaborated with PolicyMap, Inc. to develop interactive maps for viewing the numbers and proportions of individuals at various levels of risk across U.S. cities, counties and states. These maps will allow local policymakers to plan for vaccination, shielding high-risk individuals, and other targeted intervention efforts.

COVID-19, the pandemic infectious disease that has swept the world over the past ten months, afflicting nearly 70 million people and killing more than 1.5 million worldwide, can affect different people in starkly different ways. Children and young adults may suffer very mild disease or no symptoms at all, whereas the elderly have infection mortality rates of at least several percent. There are also clear ethnic and racial differences--Black and Latinx patients in the U.S., for example, have died of COVID-19 infections at much higher rates than white patients--as well as differences linked to preexisting medical conditions such as diabetes.

"Although we have long known about factors associated with greater mortality, there has been limited effort to incorporate these factors into prevention strategies and forecasting models," Chatterjee says.

He and his team developed their risk model using several COVID-19-related datasets, including from a large U.K.-based study and state-level death rates published by the Centers for Disease Control and Prevention, and then validated the model for predicting community-level mortality rates using recent deaths across U.S. cities and counties.

The calculator based on the model is available online for public health officials and interested individuals alike. It enables a user to determine individual risk based on factors such as age, sex, race/ethnicity, and medical history and can be used to define risk for a group, such as for a particular community, corporation, or university, based on the mix of relevant factors that define the group.

In their paper, Chatterjee and colleagues used their calculator to describe the risk distribution for the whole U.S. population, showing, for example, that only about four percent of the population at high risk--defined as five times greater risk than the U.S. average--is expected to contribute close to 50 percent of the total deaths. The researchers also showed that population-level risk varies considerably from city to city and county to county. "For example, the percentage of the adult population exceeding the fivefold risk threshold varies from 0.4 percent in Layton, Utah, to 10.7 percent in Detroit, Michigan," Chatterjee says.

The calculator allows users to calculate mortality risk of individuals by combining information on individual-level factors with community-level pandemic dynamics, as available from a large variety of forecasting models. Thus, when a big wave of infections hits a population, the risk estimates for individuals will rise in that community. Currently, the tool is updated on a weekly basis to incorporate information on state-level pandemic dynamics.
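As a hedged illustration of how such a tool can combine the two ingredients (this is not the published algorithm; the log hazard ratios and field names below are invented for the example), an individual's relative risk from a handful of factors can be scaled by a community's projected death rate:

```python
# Illustrative only: combine invented individual-level hazard ratios with an assumed
# community-level mortality forecast to get an absolute risk estimate.
import math

LOG_HAZARD_RATIO = {        # hypothetical values, not those of the Nature Medicine model
    "age_70_plus": 2.2,
    "male": 0.3,
    "diabetes": 0.5,
    "obesity": 0.4,
}

def relative_risk(factors):
    """Multiplicative risk relative to a reference individual with none of the factors."""
    return math.exp(sum(LOG_HAZARD_RATIO[f] for f in factors))

def absolute_risk(factors, community_deaths_per_100k, mean_relative_risk):
    """Scale the community's projected death rate by the individual's relative risk."""
    baseline = community_deaths_per_100k / mean_relative_risk
    return relative_risk(factors) * baseline

profile = ["age_70_plus", "diabetes"]
print(f"relative risk vs. reference: {relative_risk(profile):.1f}x")
print(f"projected deaths per 100,000 for this profile: "
      f"{absolute_risk(profile, community_deaths_per_100k=20, mean_relative_risk=3.0):.0f}")
```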

Chatterjee and his colleagues expect that their calculator will be useful in setting priorities for allocating early COVID-19 vaccines and other scarce preventive resources such as N95 masks. Proposed guidelines from the U.S. National Academies of Sciences, Engineering, and Medicine put frontline medical workers in the top-priority category to maximize societal benefits and minimize the chance that they will infect others, but most other priority categories are based broadly on estimated risks for infection and disease severity and, for example, give higher priority to the elderly and to people with conditions such as diabetes.

"People may understand broadly that with a preexisting condition such as obesity or diabetes, for example, they are at higher risk, but with our calculator they should be able to understand their risk in a way that takes multiple factors into account," Chatterjee says.

Credit: 
Johns Hopkins Bloomberg School of Public Health

High-tech fixes for the food system could have unintended consequences

image: A farmer in Beora, a small community in Rupandehi District of Nepal.

Image: 
Neil Palmer / International Center for Tropical Agriculture

Protein derived from organic waste to feed livestock could decrease demand for soybean meal. This could lead to less deforestation caused by soy farming. But decreased production of soybean, which is also used to produce oil for food products, could increase demand for palm oil. This could clear more forests for oil palm plantations.

This is just one example of how innovations to fix our food systems could backfire. In a new analysis in The Lancet Planetary Health, a team of scientists builds on recent research that discusses how new technology is needed to improve human health and the wellbeing of the planet.

The authors say that the urgency to meet the United Nations' Sustainable Development Goals (called SDGs; there are 17) must be tempered by the understanding that there are no quick fixes to ending poverty, eliminating hunger and conserving biological diversity.

"The food system is in the mess it is right now because we introduce technologies and approaches to managing it without fully understanding all the indirect impacts the intervention can have," said Andy Jarvis, a co-author and the associate director of the Alliance of Bioversity International and CIAT.

Symptoms of our ailing food system include unsustainable farming practices, habitat destruction, biodiversity loss and the waste or loss of about 30 percent of all food produced. Some 2 billion people are unhealthy because of their diets and some 8 million people died in 2019 due to dietary risk factors.

In addition to tapping organic waste to produce microbial protein (called "circular feed"), the authors looked at the trade-offs of three other food-system-remedying technologies on the horizon:

Using cereals to replenish nitrogen in soils (called "nitrogen fixation") could decrease the overuse of chemical fertilizers and its unsustainable impacts on the environment such as water pollution. But this could reduce prices for already over-consumed foods, potentially leading to further increases in non-communicable diseases (NCDs) like diabetes.

Personalized nutrition technologies could substantially reduce NCDs by tailoring diets to people's genetic profiles and metabolism. But this could lead to a rapidly unsustainable increase in demand for healthy foods (see: Mexico's avocado sector). The cost of personalized nutrition could also be out of the economic reach of many. And, were it to become widespread, personalized nutrition would generate high volumes of sensitive personal data.

Automation and robotics could increase the reach of precision agriculture. This could reduce food prices, stabilize food supply and reduce overuse of fertilizers and water, which would benefit the environment. But this could reduce the need for unskilled labor, further threaten the precarious livelihoods of smallholder farmers, and drive more migration to haphazardly growing megacities.

"Exciting new technologies are needed for transitioning towards a sustainable food system," said Ana Maria Loboguerrero, a co-author and the Alliance's research director for climate action. "But we must be aware that "win-win" technological solutions do not always exist, with losers and winners and trade-offs and synergies across different SDGs."

Helping the SDGs

The study was led by Mario Herrero, the chief research scientist at CSIRO, Australia's national research agency. The authors calculated the potential direct effects of different technologies on the food system (including digital agriculture, gene technology and resource efficiency) and their indirect effects on the SDGs.

The analysis showed most technologies will have neutral or varying degrees of positive impacts across most of the SDGs. But in the case of decent work and economic growth for all (SDG 8), reduced inequality (SDG 10) and peace, justice and strong institutions (SDG 16), the results will be mixed.

Some of the SDGs, which were created in 2015 to expand upon 2000's Millennium Development Goals, are not trending in the right direction. Hunger was already increasing before the COVID-19 pandemic made undernourishment worse. Rapid action is necessary and the temptation to adopt quick-fix actions with unknown negative impacts may be greater now than ever.

The authors conclude, "[C]hange and innovation come with trade-offs, but we now have methods, the science, the targets, and the socioeconomic mechanisms in place to ensure that the trade-offs of our actions do not become insurmountable. Now is the time to put our arsenal of sociotechnical innovation and immense human ingenuity to use to secure the future of our planet and the next generations."

Credit: 
The Alliance of Bioversity International and the International Center for Tropical Agriculture

Sea star listed as critically endangered following research by Oregon State University

image: Sunflower sea star

Image: 
Janna Nichols

CORVALLIS, Ore. - The iconic sunflower sea star has been listed as critically endangered by the International Union for Conservation of Nature following a groundbreaking population study led by Oregon State University and The Nature Conservancy.

"These sea stars used to be easy to find and were a hit with students and divers because they are unforgettable - they can be as big as a trash bin lid with 20 slimy arms covered in suction cups," said OSU's Sarah Gravem, a research associate in the College of Science and the lead author on the study. "Unfortunately, your chances of finding one now are next to nothing in most of the contiguous United States - this listing is one step above extinction - and I don't think they're coming back without help like captive rearing and reintroduction and reducing direct harvest and accidental harvest."

More than 60 institutions joined Oregon State and The Nature Conservancy in the population study on the sunflower sea star, known scientifically as Pycnopodia helianthoides, which plays an important role in maintaining kelp forests, and thus sustaining marine life, along the West Coast from Alaska to Baja California.

Populations of the sunflower sea star suffered dramatic crashes because of a marine wildlife epidemic event, referred to as sea star wasting syndrome, that began in 2013.

Scientists used more than 61,000 population surveys from 31 datasets to calculate a 90.6% decline in the sunflower sea stars and estimated that as many as 5.75 billion animals died from the disease, whose cause has not been determined.

Moreover, the research produced no indications of population recovery in any region in the five to seven years since the outbreak.

Sunflower sea stars are now nearly absent in Mexico as well as the contiguous United States, the scientists say. No stars have been seen in Mexico since 2016, none in California since 2018, and only a handful in Oregon and Washington since 2018.

Sunflower sea stars are a key predator of purple sea urchins and the sea star decline has helped fuel an explosion in the urchin population in many regions. An overabundance of urchins is linked to a decline in kelp forests already facing pressure from marine heat wave events, making the future uncertain for ecosystems that provide habitat for thousands of marine animals and help support coastal economies.

"Because most people aren't out in the ocean every day, we don't realize how much it's being changed and impacted by humans," said study co-author Sara Hamilton, a Ph.D. candidate in the OSU College of Science. "We need to think creatively about how to keep our ocean healthy. While drawing down carbon emissions is the most pressing need, rebuilding key predator populations, like the sunflower sea star, can be an important piece of the puzzle too."

The IUCN Red List of Threatened Species is an important resource for guiding conservation action and policy decisions; it assesses the risk of extinction a species faces should no conservation action be taken. Species are assigned to one of eight categories of threat based on criteria linked to population trend, population size and structure, and geographic range.

Species listed as critically endangered, endangered or vulnerable are collectively described as threatened.

Credit: 
Oregon State University

Researchers find a better way to design metal alloys

image: Researchers have found a new way to predict the properties of metal alloys based on reactions at the boundaries between the crystalline grains of the primary metal. In this image, the colored dots indicate the likelihood that atoms will collect along these boundaries rather than penetrating through.

Image: 
Image courtesy of the researchers

Advanced metal alloys are essential in key parts of modern life, from cars to satellites, from construction materials to electronics. But creating new alloys for specific uses, with optimized strength, hardness, corrosion resistance, conductivity, and so on, has been limited by researchers' fuzzy understanding of what happens at the boundaries between the tiny crystalline grains that make up most metals.

When two metals are mixed together, the atoms of the secondary metal might collect along these grain boundaries, or they might spread out through the lattice of atoms within the grains. The material's overall properties are determined largely by the behavior of these atoms, but until now there has been no systematic way to predict what they will do.

Researchers at MIT have now found a way, using a combination of computer simulations and a machine-learning process, to produce the kinds of detailed predictions of these properties that could guide the development of new alloys for a wide variety of applications. The findings are described today in the journal Nature Communications, in a paper by graduate student Malik Wagih, postdoc Peter Larsen, and professor of materials science and engineering Christopher Schuh.

Schuh explains that understanding the atomic-level behavior of polycrystalline metals, which account for the vast majority of metals we use, is a daunting challenge. Whereas the atoms in a single crystal are arranged in an orderly pattern, so that the relationship between adjacent atoms is simple and predictable, that's not the case with the multiple tiny crystals in most metal objects. "You have crystals smashed together at what we call grain boundaries. And in a conventional structural material, there are millions and millions of such boundaries," he says.

These boundaries help to determine the material's properties. "You can think of them as the glue holding the crystals together," he says. "But they are disordered, the atoms are jumbled up. They don't match either of the crystals they're joining." That means they offer billions of possible atomic arrangements, he says, compared to just a few in a crystal. Creating new alloys involves "trying to design those regions inside a metal, and it's literally billions of times more complicated than designing in a crystal."

Schuh draws an analogy to people in a neighborhood. "It's kind of like being in a suburb, where you may have 12 neighbors around you. In most metals, you look around, you see 12 people and they're all at the same distance away from you. It's totally homogenous. Whereas in a grain boundary, you still have something like 12 neighbors, but they're all at different distances and they're all different-size houses in different directions."

Traditionally, he says, those designing new alloys simply skip over the problem, or just look at the average properties of the grain boundaries as though they were all the same, even though they know that's not the case.

Instead, the team decided to approach the problem rigorously by examining the actual distribution of configurations and interactions for a large number of representative cases, and then using a machine-learning algorithm to extrapolate from these specific cases and provide predicted values for a whole range of possible alloy variations.

In some cases, the clustering of atoms along the grain boundaries is a desired property that can enhance a metal's hardness and resistance to corrosion, but it can also sometimes lead to embrittlement. Depending on the intended use of an alloy, engineers will try to optimize the combination of properties. For this study, the team examined over 200 different combinations of a base metal and an alloying metal, based on combinations that had been described on a basic level in the literature. The researchers then systematically simulated some of these compounds to study their grain boundary configurations. These were used to generate predictions using machine learning, which were in turn validated with more focused simulations. The machine-learning predictions closely matched the detailed measurements.
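The workflow can be pictured with a schematic sketch like the one below, in which invented descriptors and synthetic targets stand in for the simulated grain-boundary segregation energies; it is not the authors' code, and the feature names are assumptions.

```python
# Schematic only: fit a regression model to synthetic "segregation energies" for
# solute/host pairs and score it on held-out pairs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical descriptors per alloy pair: size mismatch, cohesive-energy difference,
# electronegativity difference (illustrative choices, not the paper's descriptors).
n_pairs = 200
X = rng.normal(size=(n_pairs, 3))
# Synthetic target with noise, standing in for simulation output.
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 0] * X[:, 2] + rng.normal(0, 0.05, n_pairs)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# In the study, predictions for new alloy pairs were validated with further targeted
# simulations; here we simply score the held-out split.
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```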

As a result, the researchers were able to show that many alloy combinations that had been ruled out as unviable in fact turn out to be feasible, Wagih says. The new database compiled from this study, which has been made available in the public domain, could help anyone now working on designing new alloys, he says.

The team is forging ahead with the analysis. "In our ideal world, what we would do is take every metal in the periodic table, and then we would add every other element in the periodic table to it," Schuh says. "So you take the periodic table and you cross it with itself, and you would check every possible combination." For most of those combinations, basic data are not yet available, but as more and more simulations are done and data collected, this can be integrated into the new system, he says.

Credit: 
Massachusetts Institute of Technology

Scientists publish open resource to help design 'greener' energy systems

Researchers have created a database of measurements from existing global power grid systems that will help develop new power systems capable of meeting changing demands, such as the move towards renewable energy sources.

The study, published in Nature Communications, is the first step towards a more collaborative approach to energy research. It is hoped the publicly available data can be used worldwide to design and test new energy concepts in response to current and future challenges.

For the study, the researchers collected power grid data from 17 locations across three continents and covering 12 synchronous areas - regions containing different power plants and consumers that are connected and operate under the same frequency.

The research team, including scientists from Forschungszentrum Jülich, Queen Mary University of London, Karlsruhe Institute of Technology (KIT), Technical University Dresden and Istanbul University, were particularly interested in understanding changes in frequency, which highlight the balance between energy supply and demand.

Using a novel measurement device designed at KIT, the scientists were able to precisely capture differences in frequency between various synchronous areas by simply connecting it to a wall socket.

The researchers used their experimental recordings to test theoretical predictions on how the size of a synchronous area can influence its stability. They found that smaller areas tended to be much more volatile than larger areas in their frequency fluctuations.

Dr Benjamin Schäfer, a Marie Curie Research Fellow at the School of Mathematical Sciences at Queen Mary and lead author of the study, said: "The power grid in all European countries operates at a frequency of approximately 50 Hz and it is almost constant throughout a single synchronous area. If consumption rises, the frequency slightly drops, while a persistent burst of wind might increase the frequency as additional wind power generation is fed into the grid. The fluctuations of the frequency around the reference value tell us a lot about how a specific synchronous area is operated, including when trading takes place, how large the grid is, how much control is enforced and more. In this study we confirm that size has an impact on the stability of frequencies, highlighting the need to consider size in the design and control of electricity grids, including microgrids."
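A toy version of that kind of analysis, run on synthetic traces rather than the published recordings, is sketched below; the AR(1) model and noise levels are invented purely to illustrate a "small" area fluctuating more widely around 50 Hz than a "large" one.

```python
# Illustrative only: compare the spread of frequency deviations around 50 Hz for two
# synthetic synchronous areas with assumed noise levels.
import numpy as np

rng = np.random.default_rng(1)

def synthetic_frequency(noise_std, n=3600, phi=0.99):
    """Mean-reverting AR(1) trace around 50 Hz, one sample per second (toy model)."""
    deviation = np.zeros(n)
    shocks = rng.normal(0.0, noise_std, n)
    for t in range(1, n):
        deviation[t] = phi * deviation[t - 1] + shocks[t]
    return 50.0 + deviation

traces = {
    "small area": synthetic_frequency(noise_std=0.004),   # assumed: more volatile
    "large area": synthetic_frequency(noise_std=0.001),
}
for name, trace in traces.items():
    print(f"{name}: std of deviation from 50 Hz ~ {(trace - 50.0).std() * 1000:.1f} mHz")
```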

By simultaneously measuring frequencies at several locations within a synchronous area, the researchers also observed that while the frequencies were identical everywhere on longer time scales of minutes or more, substantial differences between locations appeared on shorter time scales of seconds.

"We would assume that within a synchronous area, as the name suggests, the frequency would be identical everywhere. However when we conducted simultaneous measurements at several locations within the Continential European synchrous area we observed significant differences between frequencies on a timescale of seconds. The further away two locations are, the longer it takes for them to fully synchronize. In our article, we quantify this effect for the first time and measure how long this time-to-bulk is," added Dr Schäfer.

The introduction of renewable energy sources to mitigate climate change is rapidly changing energy systems worldwide, and in particular, electricity grids. "Whilst new policies, technologies and business models are being implemented globally to meet these new requirements, it is also important for us to learn from the systems that have been implemented so far," said Professor Christian Beck, Professor of Applied Mathematics at Queen Mary.

"We believe this openly published data and their detailed statistical analysis provide a great source of information for those working on the control and design of power grids worldwide, providing empirical predictions for the future and helping us to better understand the complex dynamics of sustainable energy systems."

Credit: 
Queen Mary University of London

Researchers control multiple wavelengths of light from a single source

image: Photoluminescence change of dual-color-emissive carbon dots (CDs) depending on their concentration. Blue- and red-emissions show different contributions with different interparticle distances.

Image: 
Professor Do Hyun Kim, KAIST

KAIST researchers have synthesized a collection of nanoparticles, known as carbon dots, capable of emitting multiple wavelengths of light from a single particle. Additionally, the team discovered that the dispersion of the carbon dots, or the interparticle distance between each dot, influences the properties of the light the carbon dots emit. The discovery will allow researchers to understand how to control these carbon dots and create new, environmentally responsible displays, lighting, and sensing technology.

Research into nanoparticles capable of emitting light, such as quantum dots, has been an active area of interest for the last decade and a half. These particles, or phosphors, are nanoparticles made out of various materials that are capable of emitting light at specific wavelengths by leveraging quantum mechanical properties of the materials. This provides new ways to develop lighting and display solutions as well as more precise detection and sensing in instruments.

As technology becomes smaller and more sophisticated, the use of fluorescent nanoparticles has seen a dramatic increase in many applications due to the purity of the colors emitted by the dots as well as their tunability to meet desired optical properties.

Carbon dots, a type of fluorescent nanoparticle, have seen an increase in interest from researchers as candidates to replace non-carbon dots, the construction of which requires heavy metals that are toxic to the environment. Since they are made up mostly of carbon, their low toxicity is an extremely attractive quality when coupled with the tunability of their inherent optical properties.

Another striking feature of carbon dots is their capability to emit multiple wavelengths of light from a single nanoparticle. This multi-wavelength emission can be stimulated under a single excitation source, enabling the simple and robust generation of white light from a single particle by emitting multiple wavelengths simultaneously.

Carbon dots also exhibit a concentration-dependent photoluminescence. In other words, the distance between individual carbon dots affects the light that the carbon dots subsequently emit under an excitation source. These combined properties make carbon dots a unique source that will result in extremely accurate detection and sensing.

This concentration-dependency, however, had not been fully understood. In order to fully utilize the capabilities of carbon dots, the mechanisms that govern the seemingly variable optical properties must first be uncovered. It was previously theorized that the concentration-dependency of carbon dots was due to a hydrogen bonding effect.

Now, a KAIST research team led by Professor Do Hyun Kim of the Department of Chemical and Biomolecular Engineering has posited and demonstrated that the dual-color emissiveness is instead due to the interparticle distances between the carbon dots. This study was made available online in June 2020 ahead of final publication in Issue 36 of Physical Chemistry Chemical Physics on September 28, 2020.

First author of the paper, PhD candidate Hyo Jeong Yoo, along with Professor Kim and researcher Byeong Eun Kwak, examined how the relative light intensity of the red and blue colors changed when varying the interparticle distances, or concentration, of the carbon dots. They found that as the concentration was adjusted, the light emitted from the carbon dots would transform. By varying the concentration, the team was able to control the relative intensity of the colors, as well as emit them simultaneously to generate a white light from a single source (See Figure).
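As a rough back-of-the-envelope aid (not a calculation from the paper), the mean spacing between dispersed dots shrinks as the cube root of their number density, which is the "interparticle distance" being tuned when the concentration is changed; the concentrations below are assumed values for illustration.

```python
# Illustrative estimate: center-to-center spacing of a uniform dispersion, ~ n^(-1/3).
AVOGADRO = 6.022e23

def mean_spacing_nm(concentration_mol_per_L):
    number_density_per_m3 = concentration_mol_per_L * 1000 * AVOGADRO
    return (1.0 / number_density_per_m3) ** (1.0 / 3.0) * 1e9

for c in (1e-6, 1e-5, 1e-4):   # assumed carbon dot concentrations, mol/L
    print(f"{c:.0e} mol/L -> mean spacing ~ {mean_spacing_nm(c):.0f} nm")
```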

"The concentration-dependence of the photoluminescence of carbon dots on the change of the emissive origins for different interparticle distances has been overlooked in previous research. With the analysis of the dual-color-emission phenomenon of carbon dots, we believe that this result may provide a new perspective to investigate their photoluminescence mechanism," Yoo explained.

The newly analyzed ability to control the photoluminescence of carbon dots will likely be heavily utilized in the continued development of solid-state lighting applications and sensing.

Credit: 
The Korea Advanced Institute of Science and Technology (KAIST)

Pizza can help address the dark matter mystery?

image: (Left to right) single large cavity, single small cavity, multiple small cavities, multiple-cell cavity (pizza cavity), and multiple-cell cavity with a gap

Image: 
IBS

Despite its vanishingly tiny mass, the existence of the axion, once proven, may point to new physics beyond the Standard Model. Born to explain a fundamental symmetry problem in the strong nuclear force associated with the matter-antimatter imbalance in our Universe, this hypothetical particle also makes an attractive dark matter candidate. Though axions would exist in vast enough numbers to be able to account for the "missing" mass from the Universe, the search for this dark matter has been quite challenging so far.

Scientists believe that when an axion interacts with a magnetic field, its energy would be converted into a photon. The resulting photon is expected to lie somewhere in the microwave-frequency range. Hoping to hit the right match for the axion, experimentalists use a microwave detector called a cavity haloscope, in which a cylindrical resonator is placed in a solenoid and the magnetic field filling the cavity enhances the signal. The haloscope also allows scientists to continually adjust the resonant frequency of the cavity. However, the most sensitive axion-search experiment, the Axion Dark Matter eXperiment (ADMX) at the University of Washington, has been searching low-frequency regions below 1 GHz, as scanning higher-frequency regions requires a smaller cavity radius, resulting in significant volume loss and hence less signal (Figure 1).
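A rough illustration of that trade-off (with assumed dimensions, not CAPP's or ADMX's actual geometry): the TM010 resonant frequency of a cylindrical cavity scales inversely with its radius, so pushing a single conventional cavity to higher frequency sharply reduces its volume.

```python
# Illustrative only: TM010 frequency and volume of a cylindrical cavity for two
# assumed radii; the cavity length is also an assumed value.
import math

C = 299_792_458.0   # speed of light, m/s
X01 = 2.405         # first zero of the Bessel function J0 (TM010 mode)

def tm010_frequency_ghz(radius_m):
    return X01 * C / (2 * math.pi * radius_m) / 1e9

def volume_litres(radius_m, length_m):
    return math.pi * radius_m**2 * length_m * 1000

length = 0.2                      # assumed cavity length, m
for radius_cm in (12.0, 4.0):     # assumed "large" vs. "small" cavity radii
    r = radius_cm / 100
    print(f"R = {radius_cm:4.1f} cm -> f ~ {tm010_frequency_ghz(r):.2f} GHz, "
          f"V ~ {volume_litres(r, length):.1f} L")
# Subdividing one large bore into cells (the 'pizza' idea) raises each cell's resonant
# frequency while the detector keeps nearly the full magnet volume.
```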

A research team led by Dr. YOUN SungWoo at the Center for Axion and Precision Physics Research (CAPP) within the Institute for Basic Science (IBS) in South Korea has developed a novel multiple-cell cavity design, dubbed the "pizza cavity". Just as a pizza is cut into several slices, multiple partitions vertically divide the cavity volume into identical pieces (cells). With almost no volume lost, this multiple-cell haloscope makes meaningful scans of the high-frequency region possible (Figure 1). There have been attempts to bundle smaller cavities together and combine their individual signals, with all the cavities tuned to the same frequency, but the complicated setup and non-trivial frequency-matching mechanism have been bottlenecks (Figure 1). "The pizza cavity haloscope features a simpler detector setup and a unique phase-matching mechanism as well as a larger detection volume compared to the conventional multi-cavity design," notes Dr. YOUN SungWoo, the corresponding author of the study.

The researchers proved that the multiple-cell cavity was able to detect high-frequency signals with improved efficiency and reliability. In an experiment using a 9 T superconducting magnet at a temperature of 2 kelvin (-271 °C), the team quickly scanned a frequency range of more than 200 MHz above 3 GHz, a region 4 to 5 times higher in frequency than that covered by ADMX, with higher sensitivity to theoretical models than previous results from other experiments. This new cavity design also enabled the researchers to explore a given frequency range four times faster than a conventional experiment could. "Getting things done four times faster," Dr. Youn jokingly adds. "Using this multiple-cell cavity design, our Ph.D. students should be able to graduate faster than those in other labs."

What makes this multiple-cell design simple to operate is the gap between partitions in the middle. Having all of the cells spatially connected, a single antenna picks up the signal from the entire volume. "As a pizza saver keeps pizza slices intact with its original toppings, the gap in between helps the cells to be up to the job," says Dr. Youn. The single antenna also allows researchers to assess whether the axion-induced electromagnetic fields are evenly distributed throughout the cavity, which is found to be critical to achieve the maximum effective volume. "Still, the inaccuracy and misalignment in cavity construction could hamper the sensitivity. For that, this multiple-cell design enables to relieve it by adjusting the size of the gap in the middle, leaving no volume to go to waste," explains Dr. Youn.

The two-year extensive efforts of the research team resulted in an optimal design for the long-sought search for axion dark matter in high-frequency regions. The team is looking into incorporating several multiple-cell cavities into the existing systems at CAPP to extend the axion search band to higher-frequency regions than currently explored.

Credit: 
Institute for Basic Science

Researchers find why 'lab-made' proteins have unusually high temperature stability

image: Structure of the de novo protein with most of the core filled with valine residues (green). Ten hydrophobic residues were mutated to smaller valine residues. This de novo protein still shows high thermal stability above 100 ºC.

Image: 
NINS/IMS

Bioengineers have found why proteins that are designed from scratch tend to be more tolerant to high temperatures than proteins found in nature.

Natural proteins with high 'thermostability' are prized for their wide range of applications, from baking and paper-making to chemical production. Efforts to enhance protein thermostability--and to discover the principles behind this--are among the hottest topics in biotech.

The latest discoveries, described in the Proceedings of the National Academy of Sciences on November 23, 2020, open up the possibility of lab-made proteins with even better industrial applicability.

Researchers in the relatively young field of protein design have attempted to come up with new types of proteins for myriad medical, pharmaceutical and industrial applications. Until recently, protein engineers have focused on manipulating existing natural proteins. However, these natural proteins are difficult to alter without also distorting the general functioning of the protein--much like adding a fifth wheel to a car.

To avoid this, some protein engineers have begun to build novel proteins entirely from scratch, or what is called de novo protein design.

However, this quest has its own set of issues. For example, building proteins from scratch is much harder computationally, and requires a complete understanding of the principles of protein folding--the multiple levels of how a protein literally folds itself into a particular structure.

In biology, structure determines function, much like how a key fits into a keyhole or a cog into a sprocket. The shape of a biological entity is what allows it to do its job within an organism. And upon their production by cells, proteins just fall into their shape, simply as a result of physical laws.

But the principles that govern the interaction of these physical laws during the folding process are frustratingly complex--hence the computational difficulty. They are also still largely unknown. This is why a great deal of effort in protein engineering in recent years has focused on attempting to discover these protein design principles that emerge from physical laws.

And one of the mysteries facing protein designers has been the high thermostability of these 'lab-made' proteins.

"For some reason, de novo proteins have repeatedly shown increased tolerance in the face of quite high temperatures compared to natural proteins," said Nobuyasu Koga, associate professor at Institute for Molecular Science, and an author of the study. "Where others would 'denature', the lab-made proteins are still working just fine well above 100 ºC."

The design principles that have been discovered so far emphasize the importance of the backbone structure of proteins--the chain of nitrogen, carbon, oxygen and hydrogen atoms.

On the other hand, these principles have also held that the tight packing of the fatty, hydrophobic (water-resistant) core of naturally occurring proteins--or rather the molecular interactions that allow them to sit together as snugly as pieces of a jigsaw puzzle--is the dominant force that drives protein folding. Just as how oil and water don't mix, the fattier part of the protein when surrounded by water will naturally pull itself together without any need for an external 'push'.

"Indeed, according to our design principles, protein cores were engineered specifically to be as tightly packed and as fatty as possible," Nobuyasu Koga said. "So the question was: Which is more important for high thermostability, backbone structure or the fat and tight core packing?"

So the researchers took the de novo proteins they had designed that had shown the highest thermal stability and mutated ten amino acids involved in the hydrophobic core packing to smaller valine residues. Even so, the proteins still folded and showed little reduction in overall thermal stability, suggesting that it is the backbone structure, not the hydrophobic core packing, that contributes the most to high thermostability. "It is surprising that the protein can fold with high thermal stability even with the loose core packing," said Naohiro Kobayashi, coauthor and a senior research fellow at RIKEN.

"Hydrophobic tight core packing may not even be very important for designed proteins," added Rie Koga, coauthor and a researcher at Exploratory Research Center on Life and Living Systems (ExCELLS). "We can create an exceptionally stable protein even if the core packing is not so optimized."

The next step for the researchers is to further develop rational principles for protein design, especially with respect to the extent to which substructures of the backbone, particularly the loops within it, can be altered without endangering its folding ability and high thermostability.

Credit: 
National Institutes of Natural Sciences

Artificial intelligence helps scientists develop new general models in ecology

In ecology, millions of species interact in billions of different ways with one another and with their environment. Ecosystems often seem chaotic, or at least overwhelming for someone trying to understand them and make predictions for the future.

Artificial intelligence and machine learning are able to detect patterns and predict outcomes in ways that often resemble human reasoning. They pave the way to increasingly powerful cooperation between humans and computers.

Within AI, evolutionary computation methods replicate in some sense the processes of evolution of species in the natural world. A particular method called symbolic regression allows the evolution of human-interpretable formulas that explain natural laws.

"We used symbolic regression to demonstrate that computers are able to derive formulas that represent the way ecosystems or species behave in space and time. These formulas are also easy to understand. They pave the way for general rules in ecology, something that most methods in AI cannot do," says Pedro Cardoso, curator at the Finnish Museum of Natural History, University of Helsinki.

With the help of the symbolic regression method, an interdisciplinary team from Finland, Portugal, and France was able to explain why some species exist in some regions and not in others, and why some regions have more species than others.

The researchers were able, for example, to find a new general model that explains why some islands have more species than others. Oceanic islands have a natural life-cycle, emerging from volcanoes and eventually submerging with erosion after millions of years. With no human input, the algorithm was able to find that the number of species on an island increases with island age and peaks at intermediate ages, when erosion is still low.
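A small illustration of how such a formula can be evolved from data follows (this is not the authors' pipeline; the synthetic data-generating constants are invented, and the gplearn library is just one available symbolic-regression tool):

```python
# Illustrative only: evolve a free-form formula for island species richness from
# synthetic data in which richness peaks at intermediate island age.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n_islands = 300
age = rng.uniform(0.1, 10.0, n_islands)       # island age, arbitrary units
log_area = rng.uniform(0.0, 4.0, n_islands)   # log island area, arbitrary units

# Hump-shaped synthetic richness: rises with age, declines as erosion takes over.
richness = 20 * age * np.exp(-0.4 * age) + 5 * log_area + rng.normal(0, 1, n_islands)

X = np.column_stack([age, log_area])
est = SymbolicRegressor(population_size=2000, generations=20,
                        function_set=("add", "sub", "mul", "div", "log"),
                        parsimony_coefficient=0.01, random_state=0)
est.fit(X, richness)
print(est._program)   # a human-readable formula in terms of X0 (age) and X1 (log area)
```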

"The explanation was known, a couple of formulas already existed, but we were able to find new ones that outperform the existing ones under certain circumstances," says Vasco Branco, PhD student working on the automation of extinction risk assessments at the University of Helsinki.

The research proposes explainable artificial intelligence as a field worth exploring, one that promotes cooperation between humans and machines in ways that are only now beginning to be tapped.

"Evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool in the arsenal of a discipline as complex as ecology," says Luis Correia, computer science professor at the University of Lisbon.

Credit: 
University of Helsinki

Novel cathode design significantly improves performance of next-generation battery

image: An all-in-one solution for the design strategy of macroporous host with double-end binding sites.

Image: 
HKUST

A team led by Prof. ZHAO Tianshou, Cheong Ying Chan Professor of Engineering and Environment, Chair Professor of Mechanical and Aerospace Engineering, and Director of the HKUST Energy Institute, has proposed a novel cathode design concept for the lithium-sulfur (Li-S) battery that substantially improves the performance of this kind of promising next-generation battery.

Li-S batteries are regarded as attractive alternatives to lithium-ion (Li-ion) batteries that are commonly used in smartphones, electric vehicles, and drones. They are known for their high energy density while their major component, sulfur, is abundant, light, cheap, and environmentally benign.

Li-S batteries can potentially offer an energy density of over 500 Wh/kg, significantly better than Li-ion batteries that reach their limit at 300 Wh/kg. The higher energy density means that the approximate 400km driving range of an electric vehicle powered by Li-ion batteries can be substantially extended to 600-800km if powered by Li-S batteries.
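A quick back-of-the-envelope check of those figures, assuming (simplistically) that driving range scales linearly with pack-level specific energy:

```python
# Illustrative arithmetic only; real ranges depend on pack design, mass and efficiency.
li_ion_wh_per_kg = 300
li_s_wh_per_kg = 500
li_ion_range_km = 400

scaled_range_km = li_ion_range_km * li_s_wh_per_kg / li_ion_wh_per_kg
print(f"~{scaled_range_km:.0f} km")   # ~667 km, within the 600-800 km band cited above
```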

While exciting results on Li-S batteries have been achieved by researchers worldwide, there is still a big gap between lab research and commercialization of the technology on an industrial scale. One key issue is the polysulfide shuttle effect of Li-S batteries that causes progressive leakage of active material from the cathode and lithium corrosion, resulting in a short life cycle for the battery. Other challenges include reducing the amount of electrolyte in the battery while maintaining stable battery performance.

To address these issues, Prof. Zhao's team collaborated with international researchers to propose a cathode design concept that could achieve good Li-S battery performance.

The highly oriented macroporous host can uniformly accommodate the sulfur, while abundant active sites embedded inside the host tightly absorb the polysulfide, eliminating the shuttle effect and lithium metal corrosion. By putting forward a design principle for the sulfur cathode in Li-S batteries, the joint team increased the batteries' energy density and took a big step towards their industrialization.

"We are still in the middle of basic research in this field," Prof. Zhao said. "However, our novel electrode design concept and the associated breakthrough in performance represent a big step towards the practical use of a next-generation battery that is even more powerful and longer-lasting than today's lithium-ion batteries."

Credit: 
Hong Kong University of Science and Technology

Trapping nanoparticles with optical tweezers

Optical tweezers are a rapidly growing technology, and have opened up a wide variety of research applications in recent years. The devices operate by trapping particles at the focal points of tightly focused laser beams, allowing researchers to manipulate the objects without any physical contact. So far, optical tweezers have been used to confine objects just micrometres across - yet there is now a growing desire amongst researchers to extend the technology to nanometre-scale particles. In new research published in EPJ E, Janine Emile and Olivier Emile at the University of Rennes, France, demonstrate a novel tweezer design, which enabled them to trap fluorescent particles just 200 nanometres across for the first time.

If made available for widespread use, nanoscale optical traps could be used for experimental procedures requiring extreme degrees of precision - including direct measurements of nanoscale forces, alterations of cell membranes, and manipulations of viruses and DNA strands. Emile and Emile's design was based around 'Arago spots': bright points of light which form in the centres of circular shadows as light diffracts around the objects creating them. In addition, they relied on the principle of 'total internal reflection', where light rays hitting a glass-liquid interface beyond a critical angle are perfectly reflected.

In the experiment, the duo fired a perfectly aligned laser beam onto the interface between a glass plate and a liquid containing suspended fluorescent nanoparticles, with an opaque circular disk partially blocking the beam's path. The resulting Arago spot was then totally reflected at the interface, creating an exponentially fading wave which ran out from the spot in all directions. Finally, suspended nanoparticles could be positioned inside this doughnut-shaped wave and excited by a separate laser to emit light themselves. The resulting forces imparted by these light waves caused the particles to become tightly confined at the Arago spot. With further improvements to this setup, nanoscale optical tweezers could soon open new opportunities for research, in areas ranging from medicine to quantum computing.
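As a back-of-the-envelope companion to that description (the wavelength and refractive indices are assumed illustrative values, not figures reported by Emile and Emile), the critical angle for total internal reflection at a glass-liquid interface and the decay length of the resulting evanescent field can be estimated as follows:

```python
# Illustrative only: critical angle and evanescent-field decay length for assumed values.
import math

n_glass = 1.52         # assumed refractive index of the glass plate
n_liquid = 1.33        # assumed refractive index of the particle suspension (water-like)
wavelength_nm = 532.0  # assumed vacuum wavelength of the trapping laser

# Total internal reflection occurs for incidence angles beyond the critical angle.
theta_c = math.degrees(math.asin(n_liquid / n_glass))

def evanescent_decay_length_nm(theta_deg):
    """1/e intensity decay length of the evanescent field beyond the interface."""
    theta = math.radians(theta_deg)
    return wavelength_nm / (4 * math.pi *
                            math.sqrt((n_glass * math.sin(theta))**2 - n_liquid**2))

print(f"critical angle ~ {theta_c:.1f} degrees")
print(f"decay length at 65 degrees incidence ~ {evanescent_decay_length_nm(65):.0f} nm")
```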

Credit: 
Springer

The Protein Society announces 2022 appointment of Protein Science Editor-in-chief

image: Congratulations, Dr. John Kuriyan

Image: 
The Protein Society

CANYON COUNTRY, CA - The Protein Society is thrilled to announce the appointment of John Kuriyan, Ph.D., University of California, Berkeley as Editor-in-Chief of Protein Science, effective January 1, 2022. He will succeed outgoing editor Dr. Brian Matthews, who has served in this role since 2005 and provided continuous outstanding service to our society and the broader community.

Protein Science, the flagship journal of The Protein Society, serves as an international forum for publishing original reports on all scientific aspects of protein molecules. The journal publishes papers by leading scientists from all over the world that report on advances in the understanding of proteins in the broadest sense. Protein Science aims to unify this field by cutting across established disciplinary lines and focusing on "protein-centered" science.

Amy E. Keating, Professor of Biology and Biological Engineering at the Massachusetts Institute of Technology, and Protein Society President stated:

"To identify an appropriate successor to Brian Matthews, we conducted an extensive search and considered many excellent candidates. I couldn't be more excited to have Dr. Kuriyan coming on board as our next Editor-in-Chief. His deep appreciation of proteins and their fascinating properties, along with his thoughtful insights and innumerable scientific contributions, make him an ideal next leader for Protein Science. Dr. Kuriyan's recognition of the fast pace of change in science and in scientific publishing will also benefit the society as we work to support our members and the global protein science community."

Dr. Kuriyan earned his Ph.D. in 1986 from the Massachusetts Institute of Technology. He was a post-doctoral fellow with Professors Martin Karplus (Harvard) and Gregory A. Petsko (MIT). From 1987 to 2001, he was on the faculty of The Rockefeller University, New York, where he was promoted to full Professor in 1993. He is currently Professor of Molecular and Cell Biology and Professor of Chemistry at the University of California, Berkeley, a position he has held since 2001. Dr. Kuriyan is also an investigator of the Howard Hughes Medical Institute, a member of both the US National Academies of Sciences and Medicine, and a long-time member of the Protein Society.

Dr. Kuriyan's major scientific contributions have been in understanding the regulation of eukaryotic cell signaling proteins, particularly protein kinases such as the Src-family kinases, Abelson tyrosine kinase (Abl), the epidermal growth factor receptor (EGFR) and Ca2+/calmodulin-dependent kinase II (CaMKII). His laboratory uses structural analysis and computer simulations, as well as biochemical, biophysical, and cell biological analyses, to elucidate mechanisms. His lab has also made fundamental contributions to understanding the structural basis for high-speed DNA replication.

Among many awards, Dr. Kuriyan has received The Protein Society DuPont-Merck Award (1997) and the highest honor of our society, the Stein & Moore Award (2017). He served as Program Committee Chair for the 31st Annual Symposium of the Protein Society in Montreal, Canada.

The Protein Society is indebted to Dr. Matthews for his exceptional service to the journal and looks forward to working alongside Dr. Kuriyan to continue the journal's success.

Credit: 
The Protein Society