Tech

New algorithms give digital images more realistic color

video: Researchers developed algorithms that correlate digital signals with colors in a standard CIE color space. The video shows how various colors are created in the CIE 1931 chromaticity diagram by mixing three colors of light.

Image: 
Min Qiu's PAINT research group, Westlake University

WASHINGTON -- If you've ever tried to capture a sunset with your smartphone, you know that the colors don't always match what you see in real life. Researchers are coming closer to solving this problem with a new set of algorithms that make it possible to record and display color in digital images in a much more realistic fashion.

"When we see a beautiful scene, we want to record it and share it with others," said Min Qiu, leader of the Laboratory of Photonics and Instrumentation for Nano Technology (PAINT) at Westlake University in China. "But we don't want to see a digital photo or video with the wrong colors. Our new algorithms can help digital camera and electronic display developers better adapt their devices to our eyes."

In Optica, The Optical Society's (OSA) journal for high impact research, Qiu and colleagues describe a new approach for digitizing color. It can be applied to cameras and displays -- including ones used for computers, televisions and mobile devices -- and used to fine-tune the color of LED lighting.

"Our new approach can improve today's commercially available displays or enhance the sense of reality for new technologies such as near-eye displays for virtual reality and augmented reality glasses," said Jiyong Wang, a member of the PAINT research team. "It can also be used to produce LED lighting for hospitals, tunnels, submarines and airplanes that precisely mimics natural sunlight. This can help regulate circadian rhythm in people who lack sun exposure, for example."

Mixing digital color

Digital colors such as the ones on a television or smartphone screen are typically created by combining red, green and blue (RGB), with each color assigned a value. For example, an RGB value of (255, 0, 0) represents pure red. The RGB value reflects a relative mixing ratio of three primary lights produced by an electronic device. However, not all devices produce these primary lights in the same way, which means that identical RGB coordinates can look like different colors on different devices.

There are also other ways, or color spaces, used to define colors such as hue, saturation, value (HSV) or cyan, magenta, yellow and black (CMYK). To make it possible to compare colors in different color spaces, the International Commission on Illumination (CIE) issued standards for defining colors visible to humans based on the optical responses of our eyes. Applying these standards requires scientists and engineers to convert digital, computer-based color spaces such as RGB to CIE-based color spaces when designing and calibrating their electronic devices.
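
The kind of conversion the CIE standards require can be illustrated with a minimal sketch that converts an 8-bit sRGB triple to CIE 1931 XYZ tristimulus values using the standard sRGB (D65) matrix from IEC 61966-2-1; the function names are our own, and this is one common RGB space, not the paper's method:

```python
def srgb_channel_to_linear(c8):
    """Undo sRGB gamma for one 8-bit channel (0-255)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Convert an 8-bit sRGB triple to CIE 1931 XYZ (D65 white point)."""
    rl, gl, bl = (srgb_channel_to_linear(v) for v in (r, g, b))
    # Standard sRGB-to-XYZ matrix (IEC 61966-2-1)
    X = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    Y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    Z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return X, Y, Z
```

Pure red, (255, 0, 0), maps to the XYZ of the sRGB red primary itself; a device with a different red primary would map the same RGB triple to a different XYZ, which is exactly the device dependence described above.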

In the new work, the researchers developed algorithms that directly correlate digital signals with the colors in a standard CIE color space, making color space conversions unnecessary. Colors, as defined by the CIE standards, are created through additive color mixing. This process involves calculating the CIE values for the primary lights driven by digital signals and then mixing those together to create the color. To encode colors based on the CIE standards, the algorithms convert the digital pulsed signals for each primary color into unique coordinates for the CIE color space. To decode the colors, another algorithm extracts the digital signals from an expected color in the CIE color space.
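
The paper's algorithms are not reproduced here, but the additive-mixing step they build on can be sketched: the tristimulus values of a mixed color are the drive-weighted sums of the primaries' tristimulus values. The primary XYZ values below are illustrative placeholders, not measurements of any real device:

```python
# Illustrative XYZ tristimulus values of three primaries at full drive.
PRIMARIES = {
    "R": (0.4124, 0.2126, 0.0193),
    "G": (0.3576, 0.7152, 0.1192),
    "B": (0.1805, 0.0722, 0.9505),
}

def mix(drive):
    """Additively mix primaries; `drive` maps a primary name to a 0..1
    level (e.g., the duty cycle of a pulsed digital signal)."""
    return tuple(
        sum(drive[p] * PRIMARIES[p][i] for p in PRIMARIES) for i in range(3)
    )
```

Driving all three primaries fully yields the device's white point in XYZ; decoding, in this simplified picture, amounts to inverting the 3x3 linear system to recover the drive levels that produce a target CIE color.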

"Our new method maps the digital signals directly to a CIE color space," said Wang. "Because such color space isn't device dependent, the same values should be perceived as the same color even if different devices are used. Our algorithms also allow other important properties of color such as brightness and chromaticity to be treated independently and precisely."

Creating precise colors

The researchers tested their new algorithms with lighting, display and sensing applications that involved LEDs and lasers. Their results agreed very well with their expectations and calculations. For example, they showed that chromaticity, which is a measure of colorfulness independent of brightness, could be controlled with a deviation of just ~0.0001 for LEDs and 0.001 for lasers. These values are so small that most people would not be able to perceive any differences in color.
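
A chromaticity deviation of ~0.0001 refers to distance in the CIE 1931 (x, y) plane, where chromaticity is obtained by normalizing the tristimulus values by their sum. A minimal sketch (the helper names are our own):

```python
import math

def xy_chromaticity(X, Y, Z):
    """CIE 1931 (x, y) chromaticity: tristimulus values normalized by their sum."""
    s = X + Y + Z
    return X / s, Y / s

def chromaticity_deviation(xy1, xy2):
    """Euclidean distance between two (x, y) chromaticity coordinates."""
    return math.hypot(xy1[0] - xy2[0], xy1[1] - xy2[1])
```

For example, the D65 white point (X, Y, Z) ≈ (0.9505, 1.0, 1.089) normalizes to (x, y) ≈ (0.3127, 0.3290); a deviation of 0.0001 is a shift in this plane far below the just-noticeable difference for human observers.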

The researchers say that the method is ready to be applied to LED lights and commercially available displays. However, achieving the ultimate goal of reproducing exactly what we see with our eyes will require solving additional scientific and technical problems. For example, to record a scene as we see it, color sensors in a digital camera would need to respond to light in the same way as the photoreceptors in our eyes.

To further build on their work, the researchers are using state-of-art nanotechnologies to enhance the sensitivity of color sensors. This could be applied for artificial vision technologies to help people who have color blindness, for example.

Credit: 
Optica

New study shows glo has similar impact on indicators of potential harm as quitting smoking

image: New study shows glo has similar impact on indicators of potential harm as quitting smoking.

Image: 
BAT

Evidence shows significant reduction in indicators of potential harm over 6 months for smokers switching to exclusive use of glo compared with continuing to smoke cigarettes

Gold-standard indicator supports scientific substantiation of glo's potential as a reduced risk product*

First ever long-term study showing sustained reduction in exposure to certain toxicants and indicators of potential harm in smokers switching completely to glo

Supports BAT's delivery of A Better Tomorrow™ by reducing the health impact of its global business by encouraging smokers who would otherwise continue to smoke to switch completely to reduced risk alternatives*

LONDON 1st July 2021: New research published today in the journal Internal and Emergency Medicine provides the first real-world evidence that people switching from cigarettes to exclusive use of glo, BAT's flagship Tobacco Heating Product (THP), can significantly reduce their exposure to certain toxicants and indicators of potential harm related to several smoking-related diseases compared with continuing to smoke.

The results, recorded at the 6-month point of a 12-month study, showed that switching completely to glo resulted in statistically significant changes across a range of "biomarkers of exposure" (BoE)**, and indicators of potential harm, known as "biomarkers of potential harm" (BoPH)**, compared with continuing to smoke.

For most biomarkers measured, the reductions seen in people using glo were similar to those in participants who stopped smoking completely.

Based on the toxicants measured, glo users showed a:

Significant reduction in a biomarker for lung cancer risk

Significant reduction in white blood cell count, an inflammatory marker indicative of cardiovascular disease risk (CVD) and other smoking-related diseases

Improvement in HDL cholesterol associated with reduced risk of CVD

Improvements in two key indicators of lung health

Improvement in a key indicator of oxidative stress, a process implicated in several smoking-related diseases, such as CVD and hypertension

Dr David O'Reilly, BAT's Director of Scientific Research, said: "These are exciting results as they allow us to understand the potential for reduction of risk that switching completely to glo can deliver. The study shows that smokers switching to glo can reduce their exposure to certain toxicants, which reduces their risk of developing certain smoking-related diseases.* To have shown a significant reduction in measures of BoPH, some comparable to quitting completely, is very encouraging and provides further scientific substantiation of the harm reduction potential of glo and how it supports our ambition to build A Better Tomorrow™ by reducing the health impact of our business."

About the study

Participants in this year-long randomised controlled study were UK-based smokers aged 23 to 55 in good general health who either did or did not want to quit. The smokers who did not intend to quit were randomised either to continue smoking cigarettes or to switch to using only glo, while smokers who indicated they wanted to quit smoking received nicotine replacement therapy and access to a cessation counsellor. A group of "never smokers" was also included as a control group and continued not to use any tobacco or nicotine products.

This study was designed to explore the risk reduction potential of glo when used in a real-world setting rather than in a controlled setting. The only intervention was a monthly clinic visit where samples of blood, urine and other measurements were taken. These samples were tested for "biomarkers of exposure" (to selected cigarette smoke toxicants) and "biomarkers of potential harm". In addition, to ensure compliance, the glo and cessation groups were tested for the biomarker CEVal, which indicates whether a person has recently smoked cigarettes.

Further results from the completed study are due by the end of 2021 and will determine whether the reductions in exposure to toxicants and in biomarkers of potential harm are maintained over the full duration of the study.

Credit: 
R&D at British American Tobacco

94% of patients with cancer respond well to COVID-19 vaccines

SAN ANTONIO (June 30, 2021) — In a U.S. and Swiss study, nearly all patients with cancer developed a good immune response to the COVID-19 mRNA vaccines three to four weeks after receiving their second dose, but the fact that a small group of the patients exhibited no response raised questions about how their protection against the virus will be addressed moving forward.

Among the 131 patients studied, 94% developed antibodies to the coronavirus. Seven high-risk patients did not. “We could not find any antibodies against the virus in those patients,” said Dimpy P. Shah, MD, PhD, of the Mays Cancer Center, home to UT Health San Antonio MD Anderson. “That has implications for the future. Should we provide a third dose of vaccine after cancer therapy has completed in certain high-risk patients?”

Dr. Shah is corresponding author of the study, published in the high-impact journal Cancer Cell. Coauthors are from the Mays Cancer Center and the University of Geneva.

“With other vaccines and infections, patients with cancer have been shown not to develop as robust an immune response as the general population,” said study senior coauthor Ruben Mesa, MD, FACP, executive director of the Mays Cancer Center. “It made sense, therefore, to hypothesize that certain high-risk groups of patients do not have antibody response to COVID-19 vaccine.”

“Patients with hematological malignancies, such as myeloma and Hodgkin lymphoma, were less likely to respond to vaccination than those with solid tumors,” said Pankil K. Shah, MD, PhD, of the Mays Cancer Center, who served as co-lead author of the study with Alfredo Addeo, MD, senior oncologist at the Geneva University Hospital.

Among the high-risk groups, patients receiving a therapy called Rituximab within six months of vaccination developed no antibodies. Rituximab is a monoclonal antibody used in the treatment of hematological cancers and autoimmune diseases.

Patients on cytotoxic chemotherapy developed an antibody response, but it was muted compared with the general population. “How that relates to protection against COVID-19, we don’t know yet,” Dr. Dimpy Shah said.

The Delta variant and other mutants of the COVID-19 virus were not examined in the study. The team also did not analyze the response of infection-fighting T cells and B cells in the patients with cancer.

The median age of patients in the study was 63. Most of the patients (106) had solid cancers as opposed to hematological malignancies (25). The study population was 80% non-Hispanic white, 18% Hispanic and 2% Black.

“We recommend that future studies be done in Black, Asian and Hispanic patients, as well, to see if there are any differences in vaccination immune response,” Dr. Mesa said.

In countries where there is lack of vaccination, there is talk that one dose might confer adequate protection, but this may not be true in the case of patients with cancer, Dr. Dimpy Shah said.

“We observed a significant difference in response when two doses were given,” Dr. Shah said. “At least for patients with cancer, two doses are very important for robust antibody response.”

Dr. Pankil Shah said the study is unique because, unlike a few studies conducted in the past that evaluated immune response on the day of the second dose or within seven days of it, this study waited three to four weeks to obtain results.

Patients with high-risk cancers, especially those receiving anti-CD20 antibodies, should continue to take precautions even after being vaccinated, the study implies. “They still need to have that awareness that they could potentially be at risk because their body has not responded to vaccination,” Dr. Pankil Shah said.

Credit: 
University of Texas Health Science Center at San Antonio

Underwater seismometer can hear how fast a glacier moves

image: Key advantages of deploying an ocean-bottom seismometer near the calving front of a tidewater glacier. Subglacial and ocean seismo-acoustic signals can be detected, while the impact of surface seismic sources is minimised (Evgeny A. Podolskiy, Yoshio Murai, Naoya Kanna, Shin Sugiyama. Nature Communications. June 24, 2021).

Image: 
Evgeny A. Podolskiy, Yoshio Murai, Naoya Kanna, Shin Sugiyama. Nature Communications. June 24, 2021

Scientists show that an ocean-bottom seismometer deployed close to the calving front of a glacier in Greenland can detect continuous seismic radiation from glacier sliding, reminiscent of a slow earthquake.

Basal slip of marine-terminating glaciers controls how fast they discharge ice into the ocean. However, directly observing such basal motion and determining what controls it is challenging: the calving front is among the most difficult environments to access, and it is seismically noisy -- especially on the glacier surface -- due to heavily crevassed ice and harsh weather conditions.

A team of scientists from Hokkaido University, led by Assistant Professor Evgeny A. Podolskiy of the Arctic Research Center, has used ocean-bottom and surface seismometers to detect previously unknown persistent coastal shaking generated by the sliding of a glacier. Their findings were published in the journal Nature Communications.

Sensors to measure glacial motion can potentially be placed on top of, within, or below the glacier; however, each approach has its own drawbacks. For example, the surface of glaciers is 'noisy' due to wind and tide-modulated crevassing, which can overwhelm all other signals, while the interior, although quieter, is the hardest area to access. Moreover, all of these locations are plagued by common issues such as station drift, melt-out, loss of level, cold temperatures, and potential instrument destruction by iceberg calving.

In the current study, the scientists used an ocean-bottom seismometer (OBS) that was deployed near the calving front of Bowdoin Glacier (Kangerluarsuup Sermia) to listen to icequakes caused by glacial basal motion. By doing so, they insulated the sensor from the near-surface seismic noise, and also circumvented all the issues that accompany the deployment of sensors on the glacier itself and nearby. The data they collected from the OBS was correlated with data from seismic and ice-speed measurements at the ice surface.

The analysis of the data revealed that there is a continuous seismic tremor generated by the glacier. In particular, the broad-band seismic signal (3.5 Hz to 14.0 Hz) detected by the OBS correlated well with the movement of the glacier. The scientists were able to identify signals that were not associated with glacial basal dynamics. Data from the OBS were necessary to establish a correlation between tremors detected by the surface stations and GPS-recorded displacement of the glacier. In the process, they demonstrated that continuous seismic data that was historically considered 'noise' contains signals that can be used to study glacier dynamics.
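
The published analysis is more involved, but the core idea of relating a seismic amplitude record to ice speed can be sketched with a plain Pearson correlation; the series names in the comment are placeholders, not the authors' variables:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# e.g., hourly tremor amplitude in the 3.5-14.0 Hz band vs. GPS-derived
# ice speed over the same hours:
# r = pearson(tremor_amplitude, ice_speed)
```

A coefficient near +1 would indicate that the tremor amplitude rises and falls with glacier speed, which is the kind of agreement the study reports between the OBS signal and glacier movement.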

The scientists also suggested that glacier slip is similar to slow earthquakes. The characteristics of the Bowdoin Glacier tremor are reminiscent of those of tectonic tremors observed in Japan and Canada. Moreover, the presence of the tremor is in line with recent theoretical models and cold-laboratory experiments.

The scientists have presented a novel method to collect continuous glacioseismic information about glacier motion in an extremely noisy and harsh polar environment using ocean-bottom seismology. "Future research in this area could focus on replicating and expanding upon the findings of this study at other glaciers," says Evgeny A. Podolskiy. "The experimental support for the relationship between glacier tremors and tectonic tremors suggests that a long-term multidisciplinary approach would be beneficial in fully understanding this phenomenon."

Credit: 
Hokkaido University

Benefits of acute aerobic exercise on cognitive function: Why do 50% of studies find no connection?

image: The x-axis shows the participants' baseline cognitive performance (response accuracy on the pre-test) and the y-axis shows improvement in cognitive performance (i.e. pre-post changes in response accuracy). Red and blue lines show the exercise condition and non-exercise control condition, respectively. It was understood that the benefit of aerobic exercise on cognitive performance (i.e., the difference between the red and blue bands) was greater in those who had lower pre-test scores.

Image: 
Ishihara et al., Neuroscience and Biobehavioral Reviews, 2021

Over the past 20 years, many studies have investigated the effects of acute aerobic exercise on cognitive performance. In recent years, meta-analyses*1 of data from these previous studies have demonstrated that a single bout of moderate aerobic exercise temporarily improves cognitive performance. However, close examination of the individual research studies on this topic revealed that approximately 50% of them found no beneficial link between acute aerobic exercise and cognitive function.

An international research collaboration, including Associate Professor KAMIJO Keita (Faculty of Liberal Arts and Sciences, Chukyo University) and Assistant Professor ISHIHARA Toru (Graduate School of Human Development and Environment, Kobe University), conducted an IPD meta-analysis*2 with the aim of resolving these discrepancies. They conducted this analysis from the perspectives of 'What kind of people is this effective for?' and 'Which cognitive functions does it benefit?'

Their results illuminated the following main points regarding the benefits of acute aerobic exercise on cognitive function: 1. The benefits were greater in those who originally had lower cognitive performance (i.e., those with lower scores on the pre-test) (Figure 1). 2. Acute aerobic exercise did not have greater beneficial effects specifically for the prefrontal-dependent aspect of cognition, but rather more generalized benefits across different types of cognitive performance.

These results were previously published in the online version of Neuroscience and Biobehavioral Reviews on June 18, 2021.

Main points

Meta-analysis studies conducted in recent years have shown that a single bout of moderate aerobic exercise (acute aerobic exercise) temporarily improves cognitive performance. Furthermore, these analyses have demonstrated that this kind of exercise is disproportionately beneficial to cognitive functions that rely on the prefrontal cortex*3 and associated networks.

However, upon close examination, around half of these previous studies did not find any beneficial effects of acute aerobic exercise.

The current research group conducted an IPD meta-analysis with the aim of resolving these discrepancies between the results of previous studies, by focusing on what kind of people benefitted and what kind of cognitive functions were affected.

They revealed that acute aerobic exercise has a greater beneficial effect in people with lower cognitive performance.

These findings show that acute aerobic exercise does not have greater beneficial effects specifically on the prefrontal-dependent aspect of cognition but rather more general benefits across different aspects of cognitive performance.

Research Significance

Many of the cognitive tests used in these previous studies, which assessed the prefrontal-dependent aspect of cognition, were highly difficult. In light of the present results, acute aerobic exercise may have appeared to benefit the prefrontal-dependent aspect of cognition more simply because the tests were difficult -- in other words, because participants scored low on the pre-test. Many studies did not take individual differences in cognitive function into account or adjust test difficulty accordingly, and this is thought to explain the discrepancies between their results. That is, the benefits of acute aerobic exercise can be detected if cognitive tests are appropriately selected and controlled by the researchers.

This IPD meta-analysis revealed that taking into account individual differences in cognitive performance and test difficulty can contribute towards a reduction in discrepancies between research studies on this topic. In addition, most studies so far have focused on the prefrontal-dependent aspect of cognition, however, conducting studies focusing on other types of cognitive function as well will contribute towards the development of this research area.

Credit: 
Kobe University

Striking a balance: Trade-offs shape flower diversity

image: Flower generalization has often been viewed as a suboptimal solution to managing the needs of different visitors. Researchers from the University of Tsukuba have developed a framework to examine flower-animal interactions and how different types of visitor-mediated trade-offs affect flower evolution. They found that mitigating trade-offs can lead to novel combinations of traits that enhance floral diversity. These findings could explain the discrepancy between observed flower visitors and those predicted based on a flower's traits.

Image: 
University of Tsukuba

Ibaraki, Japan - Flowers come in a multitude of shapes and colors. Now, an international research team led by a researcher from Japan has proposed the novel hypothesis that trade-offs caused by different visitors may play an important role in shaping this floral diversity.

In a study published last month, the team explored how the close associations between flowers and the animals that visit them influence flower evolution.

Visitors to flowers may be beneficial, like pollinators, or detrimental, like pollen thieves. All of these visitors interact with flowers in different ways and exert different selection pressures on flower traits such as color and scent. For example, a scent that attracts one pollinator may deter other potential pollinators. In this case, the flower would be expected to cater to the best pollinator.

"On the basis of this theory, you'd expect that flowers would mostly be visited by one particular group of pollinators," says lead author of the study, Professor Kazuharu Ohashi. "But flowers often host many different visitors at the same time and flowers appear to meet the needs of multiple visitors. The question we wanted to answer is how this happens in nature."

Balancing the demands of multiple visitors involves trade-offs. For example, diurnal bees and nocturnal moths can both pollinate goat willow but prefer different smells. A floral scent adapted to only one of these animals would mean missed opportunities for pollination by the other. To see how these types of visitor-mediated trade-offs affect the evolution of flowers, the researchers developed a conceptual framework to examine the different types of trade-offs and how flowers might adapt. They then looked at previous studies of flower-animal interactions to see whether the research supported the proposed framework.

What they found was a variety of strategies for mitigating trade-offs. In the case of goat willow, flowers produce different scents during the day and night, and therefore attract both types of pollinator. Another example is floral color change as a strategy to attract both bees and flies. Retaining old flowers could attract opportunistic foragers like flies, while repelling smart foragers like bees. The color change in flowers as they age could reduce this trade-off by allowing bees to select young, rewarding flowers. Many other strategies were noted, all of which involved acquiring novel combinations of traits to attract, or exclude, different visitors.

"Most flowers are ecologically generalized and the assumption to date has been that this is a suboptimal solution," explains Professor Ohashi. "But our findings suggest that interactions with multiple animals can actually be optimized by minimizing trade-offs in various ways, and such evolutionary processes may have enriched the diversity of flowers."

The discrepancy between observed flower visitors and those predicted on the basis of a flower's traits has long been a topic of debate. Taking visitor-mediated trade-offs into account in future studies of flower evolution may help settle that argument.

Credit: 
University of Tsukuba

Manufacturing the core engine of cell division

image: Scheme of the reconstituted kinetochore binding the centromere (yellow) of the chromosome (blue) on one side and a microtubule (green) on the other side.

Image: 
MPI of Molecular Physiology

A wonder of nature

As a human cell begins division, its 23 pairs of chromosomes duplicate into identical copies that remain joined at a region called the centromere. Here lies the kinetochore, a complicated assembly of proteins that binds to thread-like structures, the microtubules. As mitosis progresses, the kinetochore gives the green light for the microtubules to pull the DNA copies apart, towards the newly forming cells. "The kinetochore is a beautiful, flawless machine: You almost never lose a chromosome in a normal cell!", says Musacchio. "We already know the proteins that constitute it, yet important questions about how the kinetochore works are still open: How does it rebuild itself during chromosome replication? How does it bind to the microtubules? And how does it control them?"

A life's endeavour

Musacchio's quest for answers started more than 20 years ago and has been guided by a simple motto: "Before we understand how things go wrong, we had better understand why and how things work". He therefore embarked on the mission of rebuilding the kinetochore in vitro. In 2016 he was able to synthesize a partial kinetochore made of 21 proteins. In the new publication, Musacchio, graduate student Kai Walstein, and their colleagues at MPI Dortmund have fully reconstructed the system: all subunits, from the ones that bind the centromere to the ones that bind the microtubules, are now present in the right numbers and stoichiometry. The scientists proved that the new system functions properly by successfully substituting parts of the original kinetochore in the cell with artificial ones. "This is a real milestone in the reconstruction of an object that has existed, unaltered, in all eukaryotic cells for more than one billion years!", says Musacchio. This breakthrough paves the way towards the making of synthetic chromosomes carrying functions that can be replicated in organisms. "The potential for biotech applications could be huge", he says.

In the protein factory

MPI scientists had to overcome a major hurdle to rebuild the kinetochore, namely to fully reconstruct the highly flexible Centromeric Protein C (CENP-C). This is an essential protein that bridges the centromeric region to the outer proteins of the kinetochore. Researchers rebuilt CENP-C by "gluing" together the two ends of it.

A highly organised laboratory, similar to a factory, is fundamental for the reconstitution of complex protein assemblies. For each protein of the kinetochore, MPI scientists built a production pipeline to isolate the genes, express them in insect cells, and collect the products. "When we put them together in vitro, these proteins click in to form the kinetochore, just like LEGO pieces following the instructions", he says. Unlike the famous toys, though, each kinetochore protein has a different interface and interaction with its neighbouring proteins. The group will now step up to the next level of complexity: investigating how the kinetochore functions and interacts in the presence of microtubules and supplied energy (in the form of ATP). The project has recently been awarded an ERC Synergy Grant and will be carried out by an international team comprising Musacchio's group and researchers from Cambridge, UK, and Barcelona, Spain.

Credit: 
Max Planck Institute of Molecular Physiology

Reducing plastic waste will require fundamental change in culture

Plastic waste is considered one of the biggest environmental problems of our time. IASS researchers surveyed consumers in Germany about their use of plastic packaging. Their research reveals that fundamental changes in infrastructures and lifestyles, as well as cultural and economic transformation processes, are needed to make zero-waste shopping the norm.

96 percent of the German population consider it important to reduce packaging waste. Nevertheless, the private end consumption of packaging in Germany has increased continuously since 2009. At 3.2 million tons in 2018, the amount of plastic packaging waste generated by end consumers in Germany has more than doubled since 1997. At 228 kilograms per capita, packaging consumption in Germany was significantly higher than the European average of 174 kilograms per capita.

"Recycling only treats the symptoms of the plastic crisis and does not address the root cause, waste generation itself. We wanted to learn more about the barriers that prevent individuals in Germany from reducing their everyday consumption of plastic packaging for food and beverages. For our research project, a total of 40 participants contributed to discussions in four focus groups," explains Jasmin Wiefek, lead author of the study.

In their analysis of the discussions, the researchers identified twelve barriers to reducing plastic packaging consumption:

Habits: The focus group participants mainly shop at supermarkets or discounters rather than markets or zero-waste shops. The discussion also revealed that most participants do not take their own bags or containers when they go grocery shopping. Processed and packaged foods are popular.

Lack of knowledge: The researchers observed that participants were often uncertain which types of packaging are more sustainable than others.

Hygiene: Discussions revealed that participants held reservations about the hygienic properties of freely accessible displays of unpacked goods, the use of self-brought packaging and long-term reusable packaging options in general.

Material properties: Participants often preferred plastic packaging due to its material properties (e.g., lightweight, shatterproof, tear-resistant).

Priorities: Several participants described how their efforts to use less plastic packaging clashed with other priorities in their daily lives. One example given was that parents do not want to pack heavy backpacks for their children and accordingly prefer to use plastic instead of glass bottles.

Price: In general, groceries packaged in plastic are more affordable than plastic-free groceries.

Availability: By default, most groceries offered in supermarkets and discounters are only available in plastic packaging and so participants feel that they have little choice.

Diffusion of responsibility: According to the participants, both individuals and industry have a responsibility to solve the "plastic problem": on the one hand, industry is responsible for the fact that so many products are packaged in plastic and must therefore offer solutions; on the other hand, consumers should shop more consciously and avoid products in plastic packaging.

Reachability & infrastructure: Participants noted that places such as zero-waste shops or weekly markets were difficult to reach and required more time and effort to access than local supermarkets or discounters.

Time and time structures: Time is another crucial barrier to plastic-free shopping. Due to the travel distances involved, accessing zero-waste shops and markets would take up more time for most people. Participants pointed out that shopping would also take longer if they filled their own containers with food, and that the containers would have to be cleaned afterwards. They also noted that preparing unprocessed foodstuffs takes more time.

Convenience: Participants reported that they find it inconvenient to take their own containers to shops as it requires that they either carry the containers to work and back again or go out twice.

Consumer culture: The participants stated that they did not attach much importance to the availability of a 'wide range of products' when shopping. However, many stressed the importance of reliably finding specific products in shops. This translates into an indirect demand for a wide range of products, which is difficult for zero-waste/low-plastic retailers to implement. Discussions in the focus groups also showed that our culture of spontaneous and on-the-go consumption makes it difficult to reduce packaging. Many participants were unaware that non-regional and non-seasonal foods, which we consume as a matter of course every day, must be packaged to maintain their freshness during long distance transport.

"Our results show that at present a lot of effort and knowledge is required for consumers to avoid plastic packaging. If we want to make low-waste goods and goods without single-use plastic packaging the cheapest and most convenient option, we will need to change the relevant infrastructures, economic incentives, and political framework conditions," explains project leader and co-author Katharina Beyerl. The goal of reducing the use of plastic packaging will not be achieved by merely asking consumers to shop exclusively in zero-waste stores. Instead, it requires fundamental changes in societal structures and lifestyles as well as a cultural shift.

Credit: 
Research Institute for Sustainability (RIFS) – Helmholtz Centre Potsdam

Researchers look to human 'social sensors' to better predict elections and other trends

Election outcomes are notoriously difficult to predict. In 2016, for example, most polls suggested that Hillary Clinton would win the presidency, but Donald Trump defeated her. Researchers cite multiple explanations for the unreliability of election forecasts -- some voters are difficult to reach, and some may wish to remain hidden. Among those who do respond to surveys, some may change their minds after being polled, while others may be embarrassed or afraid to report their true intentions.

In a new perspective piece for Nature, Santa Fe Institute researchers Mirta Galesic, Jonas Dalege, Henrik Olsson, Daniel Stein, Tamara van der Does, and their collaborators propose a surprising way to get around these shortcomings in survey design -- not just in the world of politics, but in other types of research as well. While it's widely assumed that cognitive bias clouds our assessment of the people around us, their research and that of others suggests that in fact, our estimations of what our friends and family believe are often accurate.

"We realized that if we ask a national sample of people about who their friends are going to vote for, we get more accurate predictions than if we ask them who they're going to vote for," says Galesic, who is the corresponding author. "We found that people are actually pretty good at estimating the beliefs of people around them."

That means researchers can gather highly accurate information about social trends and groups by asking about a person's social circle rather than interrogating their own individual beliefs. That's because as highly social creatures, we have become very good at sizing up those around us -- what researchers call "social sensing."
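The idea can be sketched in a few lines: instead of averaging respondents' own stated intentions, a social-circle poll averages each respondent's report of how their friends and family will vote. The sketch below contrasts the two estimators with made-up numbers; it is an illustration of the general idea, not the authors' actual model.

```python
# Illustrative sketch (not the authors' model): a classic "own intention"
# poll vs. a social-circle poll, where each respondent reports the share
# of their social circle supporting candidate A. All numbers are invented.

def own_intention_estimate(intentions):
    """Classic poll: fraction of respondents who say they back candidate A."""
    return sum(intentions) / len(intentions)

def social_circle_estimate(circle_shares):
    """Social-circle poll: average of each respondent's reported share of
    friends and family backing candidate A."""
    return sum(circle_shares) / len(circle_shares)

# Hypothetical survey of five respondents.
intentions = [1, 0, 1, 0, 0]               # 1 = says they will vote for A
circle_shares = [0.6, 0.4, 0.7, 0.5, 0.3]  # reported share of circle backing A

print(f"own-intention estimate: {own_intention_estimate(intentions):.2f}")
print(f"social-circle estimate: {social_circle_estimate(circle_shares):.2f}")
```

Because each respondent effectively reports on many people, the social-circle estimate draws on a much larger implicit sample than the head count of respondents.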

When people are selected to represent a particular group, their perceptions, combined with new computational models of human social dynamics, can be used to identify emerging trends and better predict political and health-related developments in particular, the team writes. This approach, combining elements of psychology and sociology, can even be harnessed to devise interventions that "could steer social systems in different directions" after a major event, such as a natural disaster or a mass shooting, they suggest.

"I really hope human social sensing will be included in the standard social science toolbox, because I think it can be a very useful strategy for predicting and modeling societal trends", Galesic says.

Credit: 
Santa Fe Institute

Sweat-proof 'smart skin' takes reliable vitals, even during workouts and spicy meals

image: Engineers have developed a sweat-proof "electronic skin" -- a conformable, sensor-embedded sticky patch that reliably monitors a person's health, even when a wearer is perspiring.

Image: 
Courtesy of Jeehwan Kim, Hanwool Yeon, et al

MIT engineers and researchers in South Korea have developed a sweat-proof "electronic skin" -- a conformable, sensor-embedded sticky patch that monitors a person's health without malfunctioning or peeling away, even when a wearer is perspiring.

The patch is patterned with artificial sweat ducts, similar to pores in human skin, that the researchers etched through the material's ultrathin layers. The pores perforate the patch in a kirigami-like pattern, similar to that of the Japanese paper-cutting art. The design ensures that sweat can escape through the patch, preventing skin irritation and damage to embedded sensors.

The kirigami design also helps the patch conform to human skin as it stretches and bends. This flexibility, paired with the material's ability to withstand sweat, enables it to monitor a person's health over long periods of time, which has not been possible with previous "e-skin" designs. The results, published today in Science Advances, are a step toward long-lasting smart skins that may track daily vitals or the progression of skin cancer and other conditions.

"With this conformable, breathable skin patch, there won't be any sweat accumulation, wrong information, or detachment from the skin," says Jeehwan Kim, associate professor of mechanical engineering at MIT. "We can provide wearable sensors that can do constant long-term monitoring."

Kim's co-authors include lead author and MIT postdoc Hanwool Yeon, and researchers in MIT's departments of Mechanical Engineering and Materials Science and Engineering, and the Research Laboratory of Electronics, along with collaborators from cosmetics conglomerate Amorepacific and other institutions across South Korea.

A sweaty hurdle

Kim's group specializes in fabricating flexible semiconductor films. The researchers have pioneered a technique called remote epitaxy, which involves growing ultrathin, high-quality semiconductor films on wafers at high temperature and selectively peeling away the films, which they can then combine and stack to form sensors far thinner and more flexible than conventional wafer-based designs.

Recently, their work drew the attention of the cosmetics company Amorepacific, which was interested in developing thin wearable tape to continuously monitor changes in skin. The company struck up a collaboration with Kim to fashion the group's flexible semiconducting films into something that could be worn over long periods of time.

But the team soon came up against a barrier that other e-skin designs have yet to clear: sweat. Most experimental designs embed sensors in sticky, polymer-based materials that are not very breathable. Other designs, made from woven nanofibers, can let air through, but not sweat. If an e-skin were to work over the long term, Kim realized it would have to be permeable to not just vapor but also sweat.

"Sweat can accumulate between the e-skin and your skin, which could cause skin damage and sensor malfunctioning," Kim says. "So we tried to address these two problems at the same time, by allowing sweat to permeate through electronic skin."

Making the cut

For design inspiration, the researchers looked to human sweat pores. They found that the diameter of the average pore measures about 100 microns, and that pores are randomly distributed throughout skin. They ran some initial simulations to see how they might overlay and arrange artificial pores, in a way that would not block actual pores in human skin.

"Our simple idea is, if we provide artificial sweat ducts in electronic skin and make highly-permeable paths for the sweat, we may achieve long-term monitorability," Yeon explains.

They started with a periodic pattern of holes, each about the size of an actual sweat pore. They found that if pores were spaced close together, at a distance smaller than an average pore's diameter, the pattern as a whole would efficiently permeate sweat. But they also found that if this simple hole pattern were etched through a thin film, the film was not very stretchable, and it broke easily when applied to skin.

The researchers found they could increase the strength and flexibility of the hole pattern by cutting thin channels between each hole, creating a pattern of repeating dumbbells rather than simple holes. This pattern relaxed strain rather than concentrating it in one place and, when etched into a material, created a stretchable, kirigami-like effect.

"If you wrap a piece of paper over a ball, it's not conformable," Kim says. "But if you cut a kirigami pattern in the paper, it could conform. So we thought, why not connect the holes with a cut, to have kirigami-like conformability on the skin? At the same time we can permeate sweat."

Following this rationale, the team fabricated an electronic skin from multiple functional layers, each of which they etched with dumbbell-patterned pores. The skin's layers comprise an ultrathin semiconductor-patterned array of sensors to monitor temperature, hydration, ultraviolet exposure, and mechanical strain. This sensor array is sandwiched between two thin protective films, all of which overlays a sticky polymer adhesive.

"The e-skin is like human skin -- very stretchable and soft, and sweat can permeate through it," Yeon says.

The researchers tested the e-skin by sticking it to a volunteer's wrist and forehead. The volunteer wore the tape continuously over a week. Throughout this period, the new e-skin reliably measured his temperature, hydration levels, UV exposure, and pulse, even during sweat-inducing activities, such as running on a treadmill for 30 minutes and consuming a spicy meal.

The team's design also conformed to skin, sticking to the volunteer's forehead even as he was asked to frown repeatedly while sweating profusely. By comparison, other e-skin designs that lacked sweat permeability easily detached from the skin under the same conditions.

Kim plans to improve the design's strength and durability. While the tape is both permeable to sweat and highly conformable, thanks to its kirigami patterning, it's this same patterning, paired with the tape's ultrathin form, that makes it quite fragile to friction. As a result, volunteers had to wear a casing around the tape to protect it during activities such as showering.

"Because the e-skin is very soft, it can be physically damaged," Yeon says. "We aim to improve the resilience of electronic skin."

Credit: 
Massachusetts Institute of Technology

New microchip sensor measures stress hormones from drop of blood

image: A depiction of stress molecules in blood electronically being detected inside nano-wells.

Image: 
Ella Marushchenko

New Brunswick, N.J. (June 30, 2021) - A Rutgers-led team of researchers has developed a microchip that can measure stress hormones in real time from a drop of blood.

The study appears in the journal Science Advances.

Cortisol and other stress hormones regulate many aspects of our physical and mental health, including sleep quality. High levels of cortisol can result in poor sleep, which increases stress that can contribute to panic attacks, heart attacks and other ailments.

Currently, measuring cortisol takes costly and cumbersome laboratory setups, so the Rutgers-led team looked for a way to monitor its natural fluctuations in daily life and provide patients with feedback that allows them to receive the right treatment at the right time.

The researchers used the same technologies used to fabricate computer chips to build sensors thinner than a human hair that can detect biomolecules at low levels. They validated the miniaturized device's performance on 65 blood samples from patients with rheumatoid arthritis.

"The use of nanosensors allowed us to detect cortisol molecules directly without the need for any other molecules or particles to act as labels," said lead author Reza Mahmoodi, a postdoctoral scholar in the Department of Electrical and Computer Engineering at Rutgers University-New Brunswick.

With technologies like the team's new microchip, patients can monitor their hormone levels and better manage chronic inflammation, stress and other conditions at a lower cost, said senior author Mehdi Javanmard, an associate professor in Rutgers' Department of Electrical and Computer Engineering.

"Our new sensor produces an accurate and reliable response that allows a continuous readout of cortisol levels for real-time analysis," he added. "It has great potential to be adapted to non-invasive cortisol measurement in other fluids such as saliva and urine. The fact that molecular labels are not required eliminates the need for large bulky instruments like optical microscopes and plate readers, making the readout instrumentation something you can measure ultimately in a small pocket-sized box or even fit onto a wristband one day."

Credit: 
Rutgers University

Midlife change in wealth, later risk of cardiovascular events

What The Study Did: Researchers investigated the association between a midlife change in wealth and the risk of cardiovascular event after age 65.

Authors: Muthiah Vaduganathan, M.D., M.P.H., of Brigham and Women's Hospital Heart & Vascular Center and Harvard Medical School in Boston, is the corresponding author.

To access the embargoed study: Visit our For The Media website at this link https://media.jamanetwork.com/ 

(doi:10.1001/jamacardio.2021.2056)

Editor's Note: The article includes conflict of interest disclosures. Please see the articles for additional information, including other authors, author contributions and affiliations, conflicts of interest and financial disclosures, and funding and support.

Credit: 
JAMA Network

A white dwarf living on the edge

image: This illustration highlights a newfound small white dwarf, discovered by ZTF, that is 4,300 kilometers across, or roughly the size of Earth's moon, which is 3,500 kilometers across. The two bodies are shown next to each other for size comparison. The hot, young white dwarf is also the most massive white dwarf known, weighing 1.35 times as much as our sun.

Image: 
Giuseppe Parisi

Maunakea and Haleakala, Hawai'i - Astronomers have discovered the smallest and most massive white dwarf ever seen. The smoldering cinder, which formed when two less massive white dwarfs merged, is heavy, "packing a mass greater than that of our Sun into a body about the size of our Moon," says Ilaria Caiazzo, the Sherman Fairchild Postdoctoral Scholar Research Associate in Theoretical Astrophysics at Caltech and lead author of the new study appearing in the July 1 issue of the journal Nature. "It may seem counterintuitive, but smaller white dwarfs happen to be more massive. This is due to the fact that white dwarfs lack the nuclear burning that keeps up normal stars against their own self-gravity, and their size is instead regulated by quantum mechanics."
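Caiazzo's point can be made concrete with the standard textbook scaling for a nonrelativistic, electron-degenerate star: R ∝ M^(-1/3), so radius falls as mass grows (and falls even faster near the ~1.4-solar-mass Chandrasekhar limit). The sketch below illustrates only this scaling; the 0.6-solar-mass reference value is a typical white dwarf mass, not a figure from the study.

```python
# For a nonrelativistic, electron-degenerate white dwarf, hydrostatic
# equilibrium gives the textbook scaling R ∝ M^(-1/3): more massive
# white dwarfs are smaller. Near the Chandrasekhar limit (~1.4 solar
# masses) the radius shrinks even faster, so this is only a sketch.

def relative_radius(mass_ratio):
    """Radius relative to a reference white dwarf, given the mass ratio."""
    return mass_ratio ** (-1.0 / 3.0)

# A 1.35-solar-mass white dwarf vs. a typical 0.6-solar-mass one:
ratio = relative_radius(1.35 / 0.6)
print(f"radius shrinks to {ratio:.2f}x the reference radius")
```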

The discovery was made by the Zwicky Transient Facility, or ZTF, which operates at Caltech's Palomar Observatory; two Hawai'i telescopes - W. M. Keck Observatory on Maunakea, Hawai'i Island and University of Hawai'i Institute for Astronomy's Pan-STARRS (Panoramic Survey Telescope and Rapid Response System) on Haleakala, Maui - helped characterize the dead star, along with the 200-inch Hale Telescope at Palomar, the European Gaia space observatory, and NASA's Neil Gehrels Swift Observatory.

White dwarfs are the collapsed remnants of stars that were once about eight times the mass of our Sun or lighter. Our Sun, for example, after it first puffs up into a red giant in about 5 billion years, will ultimately slough off its outer layers and shrink down into a compact white dwarf. About 97 percent of all stars become white dwarfs.

While our Sun is alone in space without a stellar partner, many stars orbit around each other in pairs. The stars grow old together, and if they are both less than eight solar masses, they will both evolve into white dwarfs.

The new discovery provides an example of what can happen after this phase. The pair of white dwarfs, which spiral around each other, lose energy in the form of gravitational waves and ultimately merge. If the dead stars are massive enough, they explode in what is called a type Ia supernova. But if they are below a certain mass threshold, they combine together into a new white dwarf that is heavier than either progenitor star. This process of merging boosts the magnetic field of that star and speeds up its rotation compared to that of the progenitors.

Astronomers say that the newfound tiny white dwarf, named ZTF J1901+1458, took the latter route of evolution; its progenitors merged and produced a white dwarf 1.35 times the mass of our Sun. The white dwarf has an extreme magnetic field almost 1 billion times stronger than our Sun's and whips around on its axis at a frenzied pace of one revolution every seven minutes (the zippiest white dwarf known, called EPIC 228939929, rotates every 5.3 minutes).
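For a sense of how frenzied that rotation is, a back-of-the-envelope calculation (using the roughly 4,300-kilometer diameter quoted for the object) gives the equatorial speed implied by a seven-minute spin:

```python
import math

# Back-of-the-envelope check (not a figure from the article): the
# equatorial speed implied by a ~4,300 km diameter and a 7-minute
# rotation period, i.e. circumference divided by period.
diameter_km = 4300.0
period_s = 7 * 60

equatorial_speed = math.pi * diameter_km / period_s  # km/s
print(f"equatorial speed ~ {equatorial_speed:.0f} km/s")
```

That works out to roughly 32 km/s at the equator, thousands of times faster than the Sun's equatorial rotation.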

"We caught this very interesting object that wasn't quite massive enough to explode," says Caiazzo. "We are truly probing how massive a white dwarf can be."

What's more, Caiazzo and her collaborators think that the merged white dwarf may be massive enough to evolve into a neutron-rich dead star, or neutron star, which typically forms when a star much more massive than our Sun explodes in a supernova.

"This is highly speculative, but it's possible that the white dwarf is massive enough to further collapse into a neutron star," says Caiazzo. "It is so massive and dense that, in its core, electrons are being captured by protons in nuclei to form neutrons. Because the pressure from electrons pushes against the force of gravity, keeping the star intact, the core collapses when a large enough number of electrons are removed."

If this neutron star formation hypothesis is correct, it may mean that a significant portion of other neutron stars take shape in this way. The newfound object's close proximity (about 130 light-years away) and its young age (about 100 million years old or less) indicate that similar objects may occur more commonly in our galaxy.

MAGNETIC AND FAST

The white dwarf was first spotted by Caiazzo's colleague Kevin Burdge, a postdoctoral scholar at Caltech, after searching through all-sky images captured by ZTF. This particular white dwarf, when analyzed in combination with data from Gaia, stood out for being very massive and having a rapid rotation.

"No one has systematically been able to explore short-timescale astronomical phenomena on this kind of scale until now. The results of these efforts are stunning," says Burdge, who, in 2019, led the team that discovered a pair of white dwarfs zipping around each other every seven minutes.

The team then analyzed the spectrum of the star using Keck Observatory's Low Resolution Imaging Spectrometer (LRIS), and that is when Caiazzo was struck by the signatures of a very powerful magnetic field and realized that she and her team had found something "very special," as she says. The strength of the magnetic field together with the seven-minute rotational speed of the object indicated that it was the result of two smaller white dwarfs coalescing into one.

Data from Swift, which observes ultraviolet light, helped nail down the size and mass of the white dwarf. With a diameter of 2,670 miles, ZTF J1901+1458 secures the title for the smallest known white dwarf, edging out previous record holders, RE J0317-853 and WD 1832+089, which each have diameters of about 3,100 miles.

In the future, Caiazzo hopes to use ZTF to find more white dwarfs like this one, and, in general, to study the population as a whole. "There are so many questions to address, such as what is the rate of white dwarf mergers in the galaxy, and is it enough to explain the number of type Ia supernovae? How is a magnetic field generated in these powerful events, and why is there such diversity in magnetic field strengths among white dwarfs? Finding a large population of white dwarfs born from mergers will help us answer all these questions and more."

Credit: 
W. M. Keck Observatory

Thermal imaging offers early alert for chronic wound care

image: Thermal images of a venous leg ulcer showing healthy healing progress over three weeks.

Image: 
RMIT University

New research shows thermal imaging techniques can predict whether a wound needs extra management, offering an early alert system to improve chronic wound care.

It is estimated that 1-2% of the population in developed countries will experience a chronic wound during their lifetime - in the US, chronic wounds affect about 6.5 million patients, with more than US$25 billion spent by the healthcare system each year on treating related complications.

The Australian study shows textural analysis of thermal images of venous leg ulcers (VLUs) can detect whether a wound needs extra management as early as week two for clients receiving treatment at home.

The clinical study by RMIT University and Bolton Clarke, published in the Nature journal Scientific Reports, is the first to investigate textural analysis on VLUs using thermal images that do not require physical contact with the wound.

Researchers found the method, which provides information on spatial heat distribution in a wound, could accurately predict whether VLUs would heal in 12 weeks by the second week after baseline assessment.

This is because wounds change significantly over the healing trajectory, with higher temperatures signalling potential inflammation or infection while lower temperatures can indicate a slower healing rate due to decreased oxygen in the region.
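As a rough illustration of what textural analysis of spatial heat distribution involves, the sketch below computes one common texture measure, gray-level co-occurrence matrix (GLCM) contrast, on a tiny quantized grid standing in for a thermal image. This is a generic example of the technique, not the specific method the researchers developed.

```python
# Minimal illustration of GLCM contrast, a standard texture measure,
# on a tiny quantized "thermal image" (integer temperature levels).
# Generic sketch only -- not the study's actual analysis pipeline.

def glcm_contrast(image, levels):
    """Contrast of the GLCM built from horizontally adjacent pixel pairs."""
    glcm = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
            pairs += 1
    # Contrast weights each co-occurrence by the squared level difference,
    # so uniform regions score low and patchy regions score high.
    return sum(glcm[i][j] * (i - j) ** 2
               for i in range(levels) for j in range(levels)) / pairs

uniform = [[1, 1, 1, 1]] * 4   # spatially even heat distribution
patchy  = [[0, 3, 0, 3]] * 4   # alternating hot and cool spots

print(glcm_contrast(uniform, 4))  # 0.0 -- no texture
print(glcm_contrast(patchy, 4))   # 9.0 -- strong local temperature contrast
```

A wound whose heat map shifts from patchy toward uniform over successive visits would, under a measure like this, show falling contrast, one plausible numerical signature of a normal healing trajectory.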

Bolton Clarke Research Institute Senior Research Fellow Dr Rajna Ogrin said the current gold standard for predicting healing of VLUs - conventional digital planimetry - requires physical contact.

"A non-contact method like thermal imaging would be ideal to use when managing wounds in the home setting to minimise physical contact and therefore reduce infection risk," Ogrin said.

After showing that traditional thermal imaging methods do not give reliable results, the research team developed a new method for the analysis and used this in the clinical trial.

The new study, which involved 60 participants with VLUs, found thermal imaging offers an improvement on the current guidance of using digital imagery or planimetry wound tracings to detect healing wounds by week four.

"The significance of this work is that there is now a method for detecting wounds that do not heal in the normal trajectory by week two using a non-contact, quick, objective and simple method," Ogrin said.

RMIT University Professor Dinesh Kumar said regular wound photography could not easily be used for accurate measurement of changes in wound size and other physiological parameters over time in the home care environment.

"This is because there are large variations between images due to changes in the lighting conditions, image quality and differences in camera angle across specific points in time," said Kumar, who leads the Biosignals for Affordable Healthcare group in RMIT's School of Engineering.

"Textural analysis of thermal images is resilient to these variations and is a time-efficient and cost-effective method to identify delayed healing of VLUs and improve patient outcomes."

'Thermal imaging potential and limitations to predict healing of venous leg ulcers', with RMIT co-authors Dr Mahta Monshipouri, Dr Behzad Aliahmad and Associate Professor Barbara Polus, is published in Scientific Reports (DOI: 10.1038/s41598-021-92828-2).

Credit: 
RMIT University

Study associates organic food intake in childhood with better cognitive development

A study analysing the association between a wide variety of prenatal and childhood exposures and neuropsychological development in school-age children has found that organic food intake is associated with better scores on tests of fluid intelligence (ability to solve novel reasoning problems) and working memory (ability of the brain to retain new information while it is needed in the short term). The study, published in Environmental Pollution, was conceived and designed by researchers at the Barcelona Institute for Global Health (ISGlobal)--a centre supported by the "la Caixa" Foundation--and the Pere Virgili Health Research Institute (IISPV-CERCA).

The explanation for this association may be that "healthy diets, including organic diets, are richer than fast food diets in nutrients necessary for the brain, such as fatty acids, vitamins and antioxidants, which together may enhance cognitive function in childhood," commented lead author Jordi Júlvez, a researcher at IISPV-CERCA who works closely with ISGlobal.

The study also found that fast food intake, house crowding and environmental tobacco smoke during childhood were associated with lower fluid intelligence scores. In addition, exposure to fine particulate matter (PM2.5) indoors was associated with lower working memory scores.

The study, titled "Early life multiple exposures and child cognitive function: A multi-centric birth cohort study in six European countries", used data on 1,298 children aged 6-11 years from six European country-specific birth cohorts (United Kingdom, France, Spain, Greece, Lithuania and Norway). The researchers looked at 87 environmental factors the children were exposed to in utero (air pollution, traffic, noise, various chemicals and lifestyle factors) and another 122 factors they were exposed to during childhood.

A Pioneering Study

The aim of the study was to analyse the influence of these exposures on the development and maturation of the human brain, since during childhood the brain is not yet fully developed for efficient defence against environmental chemicals and is particularly sensitive to toxicity, even at low levels that do not necessarily pose a risk to a healthy mature brain.

The originality of the study lies in its use of an exposome approach, i.e. the fact that it takes into account the totality of exposures rather than focusing on a single one. This approach aims to achieve a better understanding of the complexity of multiple environmental exposures and their simultaneous effect on children's neurodevelopment.

Another strength of the study, which analyses cohorts from six European countries, is its diversity, although this factor also poses the additional challenge of cultural differences, which can influence exposure levels and cognitive outcomes.

Notable Associations

The study found that the main determinants of fluid intelligence and working memory in children are organic diet, fast food diet, crowdedness of the family home, indoor air pollution and tobacco smoke. To date, there has been little research on the relationship between type of diet and cognitive function, but fast food intake has been associated with poorer academic achievement, and some studies have also reported positive associations between organic diets and executive function scores. "In our study," explained Júlvez, "we found better scores in fluid intelligence and working memory with higher organic food intake and lower fast food intake."

In contrast, exposure to tobacco smoke and indoor PM2.5 during childhood may negatively affect cognitive function by enhancing pro-inflammatory reactions in the brain. Still, according to Júlvez, it is worth bearing in mind that "the number of people living together in a home is often an indicator of the family's economic status, and that contexts of poverty favour less healthy lifestyles, which in turn may affect children's cognitive test scores".

Some Surprising Findings

The study also found some unexpected associations, which could be explained by confounding and reverse causality. For example, a positive association was found between childhood exposure to perfluorooctane sulfonic acid (PFOS) and cognitive function, even though PFOS is considered an endocrine disruptor that may alter thyroid function and negatively influence cognitive development.

The study forms part of the large European project Human Early-Life Exposome (HELIX), as does another recent paper that used the same exposome and the same participants but looked at symptoms of attention deficit hyperactivity disorder (ADHD) and childhood behavioural problems. "We observed that several prenatal environmental pollutants (indoor air pollution and tobacco smoke) and lifestyle habits during childhood (diet, sleep and family social capital) were associated with behavioural problems in children," explained Martine Vrijheid, last author of the study and head of ISGlobal's Childhood and Environment programme.

"One of the strengths of this study on cognition and the earlier study on behavioural problems is that we systematically analysed a much wider range of exposure biomarkers in blood and urine to determine the internal levels in the model and that we analysed both prenatal and childhood exposure variables," concluded Vrijheid.

Tests used to quantify cognitive function:

Raven's Coloured Progressive Matrices (fluid intelligence)

Attention Network Test (attention)

N-Back (working memory)

Cohorts used in the study:

Born in Bradford (BiB), United Kingdom

Étude des déterminants pré- et postnatals du développement et de la santé de l'enfant (EDEN), France

Infancia y Medio Ambiente (INMA), Spain

Kaunas Cohort (KANC), Lithuania

Norwegian Mother, Father and Child Cohort Study (MoBa), Norway

Mother-Child Cohort in Crete (Rhea), Greece

Credit: 
Barcelona Institute for Global Health (ISGlobal)