
Physicist generalizes the measurement postulate in quantum mechanics

image: A full measurement, with a detector placed immediately after each of the three slits.

Image: 
©Science China Press

The measurement postulate is crucial to quantum mechanics. If we measure a quantum system, we can only obtain one of the eigenvalues of the measured observable, such as position or energy, each with a certain probability. Immediately after the measurement, the system collapses into the corresponding eigenstate, a process known as state collapse. It has been argued that the no-cloning theorem is actually a result of the measurement postulate: cloning is possible in classical physics precisely because a classical system can be fully measured, so that its state can be determined and then prepared again [1].

To explain measurement in quantum mechanics clearly, it helps to use the following example. Suppose a photon passes through three identical slits, and we place an ideal, nondemolition detector after each of the slits. According to the measurement postulate, one of the detectors will detect the photon, and as a result the whole wavefunction will collapse into that slit.

What happens if we place only a single detector, after the upper slit? It is natural to expect that this detector will register the photon with probability one third, collapsing the whole wavefunction into slit 1, as shown in Fig. 2. But what happens if the detector at the upper slit does not register the photon? This is a partial measurement. Such a situation was encountered in the duality quantum computing formalism, where the linear combination of unitaries (LCU) technique was proposed to perform quantum computing [2].

In Ref. [3], using the LCU formalism [2,4,5], Long proposed that when a partial wave is measured, one of two things will surely happen: (1) collapse-in: the system collapses into one of the eigenstates, with the corresponding eigenvalue obtained with some probability, and the whole wavefunction changes instantly to that eigenstate; (2) collapse-out: the measured part of the wavefunction disappears, and its probability shifts to the unmeasured part. As shown in Fig. 2, the detector measures the photon with probability 1/3, and the whole photon wavefunction collapses into the upper slit. As shown in Fig. 3 for collapse-out, the measured part in the upper slit disappears, and the unmeasured part, namely the wavefunction in the middle and lower slits, increases.
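To make the bookkeeping concrete, here is a minimal sketch of the three-slit example in bra-ket notation, assuming equal amplitudes across the slits (the normalization is illustrative, not a detail taken from Ref. [3]). The initial state is

    \[ |\psi\rangle = \tfrac{1}{\sqrt{3}}\bigl(|1\rangle + |2\rangle + |3\rangle\bigr). \]

Collapse-in (probability 1/3): the detector at the upper slit fires and \( |\psi\rangle \rightarrow |1\rangle \). Collapse-out (probability 2/3): the detector stays silent, the slit-1 component vanishes, and the remaining amplitudes grow upon renormalization,

    \[ |\psi\rangle \rightarrow \tfrac{1}{\sqrt{2}}\bigl(|2\rangle + |3\rangle\bigr), \]

so the probability carried by the middle and lower slits rises from 1/3 each to 1/2 each.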

In reality, partial measurement is more common than full measurement. It should be noted that the collapse-in and collapse-out of a partial measurement happen randomly not only in space, but also over time. For instance, the detection of a photon by a detector can be naturally understood in terms of this partial measurement postulate. When the wavefunction of a photon reaches a detector, it is not measured in full all at once; that is, it is not a full measurement. Its front part arrives at the detector first, hitting some area of the detector. It either collapses in at some point of the intersecting area in the detector, or collapses out, with the corresponding probability shifted to the rest of the wavefunction. This process continues until the photon is detected. If the photon has not been detected by the time the last part of the wavefunction reaches the detector, the amplitude of this remaining wavefunction has grown to 1, so the photon is detected with certainty at the final step.

This explanation is given from the view that the Wavefunction IS just the quantum system Entity itself, the WISE interpretation [2,6]. In the WISE interpretation, the wavefunction is not merely something related to the quantum system; the wavefunction IS the quantum system. The WISE interpretation is supported by the encounter delayed-choice experiment [6], which was reported in various media a few years ago [7].

Credit: 
Science China Press

ALPALGA: The search for mountain snow microalgae

image: Sampling snow covered with "glacier blood."

Image: 
© Jean-Gabriel VALAY/JARDIN DU LAUTARET/UGA/CNRS

In a white ocean, well above sea level, algae thrive. Normally invisible to the naked eye, they are often spotted by hikers trekking through the mountains in late spring as strikingly coloured stretches of snow, in shades of ochre, orange and red. Known as "glacier blood", this colouring is the result of the occasional rapid multiplication (or bloom) of the microalgae that inhabit the snow.

But apart from this impressive phenomenon, the life and organisation of mountain microalgae communities remain largely unknown. It is this still largely unexplored ecosystem, now threatened by global warming, that needs to be studied. The ALPALGA* consortium aims to meet this challenge by organising and pooling research efforts on snow microalgae, and it has already received support from the Agence nationale de la recherche and the Kilian Jornet Foundation.

In an initial study involving three consortium laboratories**, researchers established the first map of snow microalgae distribution along an elevation gradient. As with vegetation, the different species of algae live at different elevations on the mountains. The genus Sanguina, for example, which gives the snow its characteristic red colour, has only been found at altitudes of 2000 metres and above. In contrast, the green microalga Symbiochloris lives only at altitudes below 1500 metres.

These results, obtained by collecting DNA from five Alpine sites, form the foundation on which ALPALGA will build its work. The scientists will try to answer fundamental questions such as: what species of microalgae inhabit the snow? How can these organisms withstand such extreme temperature and sunlight conditions? Does global warming favour blooms? What effect do blooms have on snowmelt? The aim is to study the process of transformation of this ecosystem in order to promote and protect it.

Credit: 
CNRS

3D printed micro-optics for quantum technology

image: a, μ-PL spectra of the same QD underneath a Weierstrass SIL (left) and a TIR-SIL (right) and without a lens. Emission characteristics were identified prior to the intensity enhancement evaluation via power-dependent measurements. The insets depict an SEM angular view picture (45° tilt) of the printed lenses. b, (left) Schematic of the fiber chuck design. A TIR-SIL with an NA of 0.001 is printed deterministically aligned on the QD position. After the characterization of the printed lens, the big tube-like chuck is fabricated, being aligned on this lens. On the fiber tip, another lens is printed for coupling the modified emission into the fiber core. The modified fiber is then inserted into the chuck. Epoxy is used to fix the fiber position. Excitation and collection of the QD are carried out via the same fiber. (right) Microscope picture of a fiber inside a fiber chuck. The fiber is stopped via the step indicated by the dashed white lines and is ready for being fixed with epoxy glue. c, Unfiltered PL signal of the standalone QD device (left) and spectrum filtered with a band-pass filter that is designed for 885 nm ± 12.5 nm (right). Tilting the filter shifts the wavelength window down to lower wavelengths.

Image: 
by Marc Sartison, Ksenia Weber, Simon Thiele, Lucas Bremer, Sarah Fischbach, Thomas Herzog, Sascha Kolatschek, Michael Jetter, Stephan Reitzenstein, Alois Herkommer, Peter Michler, Simone Luca Portalupi, and Harald Giessen

Quantum computing and quantum communication are believed to be the future of information technology. In order to achieve the challenging, long-standing goal of making secure, widespread quantum communication networks a reality, high-brightness single-photon sources are indispensable. Single-photon emission from semiconductor quantum dots (QDs) has been shown to be a pure and efficient non-classical light source with a high degree of indistinguishability. However, total internal reflection (TIR) caused by the high semiconductor-to-air refractive index contrast severely limits the single-photon extraction efficiency. Another crucial step in the development of practical quantum networks is the implementation of quantum repeater protocols, which enable long-distance quantum communication via optical fibre channels. These protocols rely on highly indistinguishable, entangled photons, which in turn require the use of single-mode fibres. Thus, an efficient on-chip single-mode fibre-coupled quantum light source is a key element in the realisation of a QD-based real-world quantum communication network.
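For a sense of scale, a textbook escape-cone estimate (not a number taken from the paper) shows why this index contrast is so punishing. Light generated inside a semiconductor of refractive index n can escape through a planar top facet only within the critical angle, giving roughly

    \[ \theta_c = \arcsin\!\left(\tfrac{1}{n}\right), \qquad \eta_{\mathrm{extraction}} \approx \tfrac{1 - \cos\theta_c}{2} \approx \tfrac{1}{4n^2}, \]

so for a GaAs-like index of n ≈ 3.5 only about 2% of the photons escape through the top surface. Micro-optics such as solid immersion lenses enlarge this effective escape cone, which is the motivation for the work described next.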

In a new paper published in Light: Science & Applications, a team of scientists led by Professor Harald Giessen and Professor Peter Michler from the 4th Physics Institute and the Institut für Halbleiteroptik und Funktionelle Grenzflächen, University of Stuttgart, Germany, together with co-workers, has worked on enhancing the extraction efficiency of semiconductor QDs by optimising micrometre-sized solid-immersion lens (SIL) designs. Two state-of-the-art technologies, low-temperature deterministic lithography and femtosecond 3D direct laser writing, are used in combination to deterministically fabricate micro-lenses on pre-selected QDs. Because of the high flexibility of 3D direct laser writing, various SIL designs, including hemispherical SILs (h-SILs), Weierstrass SILs (W-SILs), and total internal reflection SILs (TIR-SILs), can be produced and compared with respect to single-photon extraction enhancement. The experimentally obtained values are compared with analytical calculations, and the role of misalignment between SIL and QD as an error source is discussed in detail.

Furthermore, they highlight the implementation of an integrated single-mode fibre-coupled single-photon source based on 3D printed micro-optics. A 3D printed fibre chuck is used to precisely position an optical single-mode fibre onto a QD with a micro-lens printed on top. This fibre is equipped with another specifically designed 3D printed in-coupling lens to efficiently guide light from the TIR-SIL into the fibre core.

The main results presented in this paper are two-fold:

A reproducible method to enhance the collection efficiency of single QDs based on 3D printed micro-lenses is presented. For all lens geometries, an increase in the collection efficiency was confirmed. The simplest geometry, namely the h-SIL, resulted in an intensity enhancement of approximately 2.1. A further increase of up to approximately 3.9 in collection efficiency is promised by the hyperhemispherical Weierstrass geometry. The highest values were achieved for the total internal reflection geometries, which reliably provide a PL intensity ratio between 6 and 10.

A standalone fibre-coupled quantum dot device was realised. The fibre in-coupling approach, that is, the use of a QD provided with a TIR-SIL and a fibre with an additional focusing lens, was validated employing a setup capable of precisely aligning the fibre with respect to the emitter. An efficiency of up to 26 ± 5% was demonstrated, opening the route to a stable, stand-alone, fibre-coupled device.

In the future, this technology can be combined with QD single-photon sources based on circular Bragg gratings, as well as with NV centres, defect-based emitters and a variety of other quantum emitters. In addition, a highly efficient combination with single-photon detectors should be feasible.

Credit: 
Light Publishing Center, Changchun Institute of Optics, Fine Mechanics And Physics, CAS

Protect the sea, neglect the people? Social impact of marine conservation schemes revealed

image: A boat at sunset in Cambodia.

Image: 
Marco J Haenssgen

As G7 governments renew commitments to protecting marine spaces and biodiversity, global conservation initiatives such as 30x30 are feared to pay too little attention to the livelihood impacts on communities

Close-up inspection of an upcoming marine conservation area in Cambodia shows mixed livelihood consequences, ranging from improving relationships with the state to increased anxiety and social division

In the long term and on a regional scale in Southeast Asia, communities exposed to marine conservation are poorer and experience higher child mortality

Researchers warn that the rapid global expansion of nominal marine protected area (MPA) coverage can undermine community livelihoods if it proceeds with a sole focus on marine resource conservation, a disregard of local social contexts, and a lack of livelihood adaptation support for affected communities.

Researchers urge that marine conservation interventions should follow social impact assessments and go hand-in-hand with livelihood support schemes for affected communities

Governments and international organisations are expanding targets to conserve marine spaces to stem the worrying depletion of biodiversity and fish stocks around the globe. A new study now demonstrates the wide range of unintended impacts that such conservation efforts have on affected communities.

Published today in the leading international development journal World Development, the research presents a ground-breaking case study of the Cambodian Koh Sdach Archipelago combined with a cross-country statistical analysis of the impacts of marine conservation across Southeast Asian communities:

The detailed and up-close analysis demonstrates a mix of positive and negative impacts:

On the positive side, communities can find economic relief from the slowing deterioration of fish stocks and, in the case of Cambodia as a post-conflict country, even experience improving relationships with the state.

Negative consequences included social division, heightened livelihood anxiety, and a false sense of economic security.

On balance and at the regional scale of Southeast Asia, long-term community exposure to marine protection schemes was linked to decreasing wealth and increasing child mortality.

Current research on marine protected areas (MPAs) has focused on pragmatic questions of effectiveness in protecting marine resources, good governance, and compliance. There has been little work to date on the "human dimension" of marine conservation and its unintended socio-economic consequences.

To evaluate the impact of marine conservation schemes on affected communities, the researchers adapted techniques from international development research. They developed a conceptual framework and combined a qualitative case study of marine protection in Cambodia with secondary household survey data from across Southeast Asia to tackle this research gap.

The case study revealed that although there had been some positive outcomes from the creation of the Koh Sdach community fishery organisation, such as slowing the deterioration of marine resources and encouraging diversification towards tourism, there were also negative social consequences - some groups remained involuntarily excluded, others evaded participation and adherence to its rules, and yet others struggled with the consequences of rule enforcement, including threats of personal harm.

The quantitative analysis found that MPAs across Cambodia, the Philippines and Timor-Leste tended to emerge initially in socio-economically relatively well-off communities, but that prolonged exposure was associated with a slower pace of development in terms of household wealth, education and child mortality.

The authors therefore argue that the target-driven expansion of marine protected areas - now targeting up to 50% of global marine areas - neglects the social realities and livelihoods of affected communities.

The researchers recommend that the environmental objectives of marine protection be supplemented by social impact analyses and livelihood support schemes that help alleviate the disruptions of community lives.

Lead author Dr Haenssgen comments, "The importance of ecological impacts of marine conservation is beyond dispute, but we also need to ensure that such interventions are socially sustainable.

"What makes our study special in this respect is our use of cutting-edge social research approaches, both conceptual and methodological, which help unravel the social dimensions of marine protection on the micro and macro levels."

The study was carried out by a team of University of Warwick and Southeast Asian researchers directed by Dr Marco J Haenssgen, Assistant Professor in Global Sustainable Development. It is part of a broader research project to understand and inform marine conservation, entitled "Protected Areas and People" and funded by the UK's Global Challenges Research Fund.

Project leader and co-author Dr Jessica Savage (Global Sustainable Development, University of Warwick) observes, "Realism is essential in our design of future conservation targets. In order to achieve sustainable development, we need to not only design achievable goals, but also goals that are inherently sustainable."

Credit: 
University of Warwick

You're more likely to fight misinformation if you think others are being duped

People in both the United States and China who think others are being duped by online misinformation about COVID-19 are also more likely to support corporate and political efforts to address that misinformation, according to a new study. The study suggests negative emotions may also play a role in the U.S. - but not in China.

"A lot of misinformation has been shared online over the course of the COVID-19 pandemic, and we had a range of questions about how people are responding to this misinformation," says Yang Cheng, co-lead author of the study and an assistant professor of communication at North Carolina State University.

"How do different emotions influence the way we feel about the impact of misinformation on other people? How does the way we perceive misinformation affecting other people influence which actions we think should be taken to address misinformation? Do emotions affect support for these actions? And do these things vary between China and the U.S.?"

To address these questions, researchers conducted two surveys: a survey of 1,793 adults in the U.S., and a survey of 504 adults in China. The surveys asked questions aimed at understanding what participants think of online misinformation concerning COVID-19; how that misinformation makes them feel; how they think misinformation affects other people; their willingness to fact-check online statements and report misinformation; and what they think social media companies and the government should do to address misinformation.

For the most part, study participants in the U.S. and China responded in similar ways. For example, study participants in both the U.S. and China who said online misinformation caused anger and/or anxiety were also more likely to think that other people are influenced by misinformation. And participants in both nations who felt others were being influenced by online misinformation were more likely to support a range of corrective and restrictive actions.

Corrective actions in this study refer to an individual's willingness to fact-check online information, to report misinformation to social media platforms, and to file a complaint with the platform against the person who posted the misinformation. Restrictive actions refer to a range of actions that social media companies or policymakers can take. These range from deleting the accounts of social media users who share misinformation to enacting laws that bar the sharing of misinformation on social media.

The study also found that study participants in the U.S. and China who reported that misinformation makes them angry were also more likely to engage in corrective actions.

However, participants in the U.S. and China differed in terms of emotion and restrictive actions. People in the U.S. who reported that misinformation causes feelings of anxiety or anger were also more likely to support restrictive actions by lawmakers or social media companies. There was no relationship between negative emotions in Chinese study participants and support for restrictive actions.

"Overall, the findings suggest that one way to engage the public in the fight against misinformation is to highlight the ways that misinformation can harm or otherwise influence other people," Cheng says.

Credit: 
North Carolina State University

Underwater ancient cypress forest offers clues to the past

image: Marine geologist and paleoclimatologist Kristine DeLong's research reveals new information about the ancient trees that she exhumed from about 8 miles offshore in 60 feet of water.

Image: 
LSU

When saber-toothed tigers, woolly mammoths and giant sloths roamed North America during the last Ice Age, about 18,000 to 80,000 years ago, the Gulf Coast's climate was only slightly cooler than it is today, more like the present-day climate of regions to the north such as Missouri and North Carolina. As sea level dropped and exposed more land on the continental shelf, bald cypress trees became established in swamps in what is now the northern Gulf of Mexico.

An event suddenly killed and buried the bald cypress forests along the Gulf Coast, and the buried swamp trees were preserved by sediment for thousands of years. About 18,000 years ago, sea level began to rise, and as the ocean waters moved inland, the buried trees remained protected in their former swamp sediments. In 2004, Hurricane Ivan cut a path across the region and exposed a preserved ancient bald cypress forest.

"It smells like freshly cut cypress," said marine geologist and paleoclimatologist Kristine DeLong about the ancient trees that she exhumed from about 8 miles offshore in 60 feet of water.

It's a scent that is familiar to DeLong. Her grandfather logged cypress trees in Florida. Bald cypress lumber was highly prized in the 1800s because it doesn't easily decompose and is resistant to water rot and insects. Now, it is no longer logged and is protected on public lands.

"We were surprised to find this cypress wood intact, because wood normally decomposes in the ocean from shipworms and bacteria," she said.

In 2013, DeLong and her research team scuba dived at the site, recovered 23 cypress specimens and analyzed the wood in her lab at LSU, where she is an associate professor in the Department of Geography & Anthropology, and at the University of Idaho. Radiocarbon dating showed the wood samples were too old for that method, so her team used other techniques to date the forest. They found the forest dates from the early part of the last ice age, between 42,000 and 74,000 years ago.

"The region experienced climate change but it was getting colder. It wasn't a gradual drop in climate-- rather these quick pulses with drops in sea level. It was definitely a chaotic time, but the land and the forests were resilient to these changes," she said.

In 2015 and 2016, DeLong's team collected 18 sediment cores, which are long tubes of compacted sand and dirt, from around the site of the underwater ancient cypress forest. They found sand and seashells in the top layers of the sediment cores but also dark, organic peat that looks like potting soil with roots and leaves towards the bottom of the cores.

"As a marine geologist, we don't see this type of sediment," she said. "What was interesting was finding seeds from St. John's wort, button bush and rose mallow, which are native plants we can find on land today, but we found them preserved in the ocean."

She is collaborating on this project with terrestrial tree and plant experts, who are similarly puzzled by these specimens from the ocean.

Swamp waters naturally have low oxygen, which is believed to have kept these specimens from decomposing. The researchers have a few hypotheses about what may have happened to the cypress forests. One idea is that sea level rose suddenly and floodplain sediments buried the cypress forest. Another idea is that a melting ice sheet sent a sudden influx of water down the Mississippi River and other nearby rivers, pushing sediment that buried the coastal forests.

Regardless of how this occurred, DeLong and colleagues believe that it occurred throughout the region and that there may be other underwater ancient cypress forests along the Gulf Coast. This research was recently published in the journal BOREAS.

Credit: 
Louisiana State University

Study finds age doesn't affect perception of 'speech-to-song illusion'

audio: Does this spoken phrase begin to sound songlike? Researchers from the University of Kansas have published a study in PLOS ONE examining whether the speech-to-song illusion happens in adults who are 55 or older as powerfully as it does in younger people.

Image: 
KU

LAWRENCE -- A strange thing sometimes happens when we listen to a spoken phrase again and again: It begins to sound like a song.

This phenomenon, called the "speech-to-song illusion," can offer a window into how the mind operates and give insight into conditions that affect people's ability to communicate, like aphasia and the decline in word recall that comes with aging.

Now, researchers from the University of Kansas have published a study in PLOS ONE examining if the speech-to-song illusion happens in adults who are 55 or older as powerfully as it does with younger people.

The KU team recruited 199 participants electronically on Amazon's Mechanical Turk (MTurk), a website used to conduct research in the field of psychology. The subjects listened to a sound file that exemplified the speech-to-song illusion, then completed surveys relating to three different studies.

"In the first study, we just played them the canonical stimulus made by the researcher that discovered this illusion -- if that can't create the illusion, then nothing can," said co-author Michael Vitevitch, professor of psychology at KU. "Then we simply asked people, 'Did you experience the illusion or not?' There was no difference in the age of the number of people that said yes or no."

While the researchers hypothesized fewer older people would perceive the illusion than younger people, the study showed no difference due to age.

While older and younger people perceived the speech-to-song illusion at the same rates, in the second study investigators sought to discover if older people experienced it less powerfully.

"We thought maybe 'yes or no' was too coarse of a measurement, so let's try to use a five-point rating scale," Vitevitch said. "Maybe older adults would rate it as being a little bit more speech-like and younger adults will rate it as being more song-like and you'll see it on this five-point scale, maybe. But there was no difference in the numbers with the younger and older adults."

In the third study, Vitevitch wanted to see if older adults perhaps experience the illusion more slowly than younger people.

"We thought maybe it's not the strength of the illusion that's different but maybe it's when the illusion occurred," he said. "So, we did a final study and asked people to click a button on the screen when their perception shifted from speech to song -- we thought maybe older adults would need a few more repetitions for it to switch over. But we got the same number for both younger adults and older."

Vitevitch's co-authors were KU undergraduate researchers Hollie Mullin, Evan Norkey and Anisha Kodwani, as well as Nichol Castro of the University at Buffalo.

According to Vitevitch, the findings might translate to good news for older adults.

"We have this common misconception that everything goes downhill cognitively as we age," said the KU researcher. "That's not the case. There are some things that do get worse with age, but there are some things that actually get better with age, and some things that stay consistent with age --in the case of this illusion, you're going to get equally suckered whether you're an older adult or a younger adult."

In another aspect of the research, the investigators found people with musical training experienced the speech-to-song illusion at similar rates as people with no background in music.

"There's a debate about whether musicians or musically trained people experienced the illusion more or less or sooner or more strongly," Vitevitch said. "We looked at it and there was really no difference there either. Musicians and non-musically trained people experience this at about the same rates and have the same sort of experience. The amount of musical training didn't matter. It was just amazingly consistent however we looked at it."

Not everybody experiences the speech-to-song illusion. The study found about 73% of participants heard spoken words become song-like after several repetitions, but the ability to perceive it didn't correlate with age or musical training.

Credit: 
University of Kansas

Linked faults under Salt Lake City may elevate risk of building damage

image: Sensors for an active seismic source experiment are contained in a firehose as the researchers drive through downtown Salt Lake City, Utah.

Image: 
Lee Liberty

A complex zone of folding and faulting that links two faults underneath downtown Salt Lake City could deform the ground during a large earthquake, according to a new study.

The findings, published in the open-access journal The Seismic Record, suggest that earthquakes magnitude 5.0 and larger could cause ground displacement and liquefaction in Salt Lake City that increase the risk of earthquake-related building damage.

As part of the Wasatch Fault Zone, the region has a complex seismic history, with at least 24 large earthquakes occurring in the urbanized parts of the zone over the past 7000 years. Along with previous excavation, borehole and other geophysical studies, the new research also supports the possibility of through-going ruptures across multiple faults in the Wasatch Fault Zone.

Previous excavations in downtown Salt Lake City showed signs of ground deformation, but questions remain about whether "the deformation was related to shaking-induced liquefaction from a more distant earthquake, or was it related to faults that lie beneath downtown," said Lee Liberty, a geoscientist at Boise State University.

The new study by Liberty and colleagues indicates that the deformation may be the result of active faults beneath the city's downtown, and that "ground displacement from underlying faults presents a new hazard that should be addressed," Liberty said.

To learn more about what lies beneath downtown Salt Lake City, the researchers conducted active source experiments along busy urban streets in the area between North Temple and 800 South Streets. During the experiments, a 200-kilogram weight - the seismic source - was dropped at regular intervals to create seismic waves that could be detected by sensors embedded in a fire hose dragged behind the weight.

"We operated during normal business traffic, but we avoided peak traffic times and major commuted roads," said Liberty. "Our seismic source is pretty powerful, with respect to our offsets and target depth. We relied on off-duty police officers to control traffic and to safely operate."

Seismic reflection data collected by the experiments helped Liberty and colleagues identify faulted and folded layers of the ancient Lake Bonneville lakebed that were between 13,000 and 30,000 years old. This folded and faulted zone links the East Bench and Warm Springs faults below the city, the researchers concluded.

Further imaging from seismic wave data provided a better picture of the water saturation and sediment properties below the city, bolstering the case that local liquefaction could occur in a large earthquake. In liquefaction, strong shaking reduces the strength of water-saturated and loose soils and turns them into a substance more jelly-like than solid.

Previous trenching uncovered evidence of at least one earthquake event downtown in the past 10,000 years. The researchers hope to follow up their study with more work to better define the earthquake history of individual fault strands in Salt Lake City.

Credit: 
Seismological Society of America

New algorithm for modern quilting

image: Each of the blocks in this quilt was designed using an algorithm-based tool developed by Stanford researchers.

Image: 
Mackenzie Leake

Stanford University computer science graduate student Mackenzie Leake has been quilting since age 10, but she never imagined the craft would be the focus of her doctoral dissertation. Included in that work is new prototype software that can facilitate pattern-making for a form of quilting called foundation paper piecing, which involves using a backing made of foundation paper to lay out and sew a quilted design.

Developing a foundation paper piece quilt pattern - which looks similar to a paint-by-numbers outline - is often non-intuitive. There are few formal guidelines for patterning, and those that do exist are insufficient to assure a successful result.

"Quilting has this rich tradition and people make these very personal, cherished heirlooms but paper piece quilting often requires that people work from patterns that other people designed," said Leake, who is a member of the lab of Maneesh Agrawala, the Forest Baskett Professor of Computer Science and director of the Brown Institute for Media Innovation at Stanford. "So, we wanted to produce a digital tool that lets people design the patterns that they want to design without having to think through all of the geometry, ordering and constraints."

A paper describing this work has been published and will be presented at the computer graphics conference SIGGRAPH 2021 in August.

Respecting the craft

In describing the allure of paper piece quilts, Leake cites the modern aesthetic and high level of control and precision. The seams of the quilt are sewn through the paper pattern and, as the seaming process proceeds, the individual pieces of fabric are flipped over to form the final design. All of this "sew and flip" action means the pattern must be produced in a careful order.

Poorly executed patterns can lead to loose pieces, holes, misplaced seams and designs that are simply impossible to complete. When quilters create their own paper piecing designs, figuring out the order of the seams can take considerable time - and still lead to unsatisfactory results.

"The biggest challenge that we're tackling is letting people focus on the creative part and offload the mental energy of figuring out whether they can use this technique or not," said Leake, who is lead author of the SIGGRAPH paper. "It's important to me that we're really aware and respectful of the way that people like to create and that we aren't over-automating that process."

This isn't Leake's first foray into computer-aided quilting. She previously designed a tool for improvisational quilting, which she presented at the human-computer interaction conference CHI in May.

Quilting theory

Developing the algorithm at the heart of this latest quilting software required a substantial theoretical foundation. With few existing guidelines to go on, the researchers had to first gain a more formal understanding of what makes a quilt paper piece-able, and then represent that mathematically.

They eventually found what they needed in a particular graph structure, called a hypergraph. While so-called "simple" graphs can only connect data points by lines, a hypergraph can accommodate overlapping relationships between many data points. (A Venn diagram is a type of hypergraph.) The researchers found that a pattern will be paper piece-able if it can be depicted by a hypergraph whose edges can be removed one at a time in a specific order - which would correspond to how the seams are sewn in the pattern.
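As an illustration of that idea (a sketch under stated assumptions, not the Stanford tool's actual algorithm), the check can be viewed as repeatedly "peeling" removable hyperedges until none remain. The removability rule below is a stand-in; the paper's real condition encodes the sew-and-flip constraints.

    # Illustrative sketch only: greedy "peeling" of a hypergraph. The removability
    # test is a user-supplied predicate; the toy rule below is an assumption for
    # demonstration, not the condition used in the SIGGRAPH paper.
    from typing import Callable, FrozenSet, List, Optional, Set

    Edge = FrozenSet[str]  # a hyperedge groups the fabric regions joined by one seam

    def peeling_order(edges: Set[Edge],
                      removable: Callable[[Edge, Set[Edge]], bool]) -> Optional[List[Edge]]:
        """Remove edges one at a time; return a valid order, or None if we get stuck."""
        remaining = set(edges)
        order: List[Edge] = []
        while remaining:
            candidates = [e for e in remaining if removable(e, remaining)]
            if not candidates:
                return None          # nothing can be peeled next: not piece-able this way
            edge = candidates[0]     # a real tool might backtrack over alternatives here
            remaining.remove(edge)
            order.append(edge)
        return order

    def toy_removable(edge: Edge, remaining: Set[Edge]) -> bool:
        # Toy rule: an edge may be peeled if it shares at most one region with the rest.
        others: Set[str] = set().union(*(remaining - {edge})) if len(remaining) > 1 else set()
        return len(edge & others) <= 1

    design = {frozenset({"A", "B"}), frozenset({"B", "C"}), frozenset({"C", "D"})}
    print(peeling_order(design, toy_removable))  # a removal order tied to the seam order, or None

A check of this flavour, run over candidate hypergraphs for a sketched design, is what decides whether any valid paper piecing pattern exists.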

The prototype software allows users to sketch out a design and the underlying hypergraph-based algorithm determines what paper foundation patterns could make it possible - if any. Many designs result in multiple pattern options and users can adjust their sketch until they get a pattern they like. The researchers hope to make a version of their software publicly available this summer.

"I didn't expect to be writing my computer science dissertation on quilting when I started," said Leake. "But I found this really rich space of problems involving design and computation and traditional crafts, so there have been lots of different pieces we've been able to pull off and examine in that space."

Credit: 
Stanford University

MDI Biological Laboratory scientist identifies signaling underlying regeneration

image: A team of scientists led by James Godwin, Ph.D., of the MDI Biological Laboratory in Bar Harbor, Maine, has come a step closer to unraveling the mystery of why salamanders can regenerate while adult mammals cannot with the discovery of differences in molecular signaling that promote regeneration in the axolotl, a highly regenerative salamander, while blocking it in the adult mouse. Godwin is pictured here with a tank containing an axolotl.

Image: 
MDI Biological Laboratory

BAR HARBOR, MAINE — Many salamanders can readily regenerate a lost limb, but adult mammals, including humans, cannot. Why this is the case is a scientific mystery that has fascinated observers of the natural world for thousands of years.

Now, a team of scientists led by James Godwin, Ph.D., of the MDI Biological Laboratory in Bar Harbor, Maine, has come a step closer to unraveling that mystery with the discovery of differences in molecular signaling that promote regeneration in the axolotl, a highly regenerative salamander, while blocking it in the adult mouse, which is a mammal with limited regenerative ability.

“Scientists at the MDI Biological Laboratory have been relying on comparative biology to gain insights into human health since its founding in 1898,” said Hermann Haller, M.D., the institution’s president. “The discoveries enabled by James Godwin’s comparative studies in the axolotl and mouse are proof that the idea of learning from nature is as valid today as it was more than one hundred and twenty years ago.”

Instead of regenerating lost or injured body parts, mammals typically form a scar at the site of an injury. Because the scar creates a physical barrier to regeneration, research in regenerative medicine at the MDI Biological Laboratory has focused on understanding why the axolotl doesn’t form a scar – or, why it doesn’t respond to injury in the same way that the mouse and other mammals do.

“Our research shows that humans have untapped potential for regeneration,” Godwin said. “If we can solve the problem of scar formation, we may be able to unlock our latent regenerative potential. Axolotls don’t scar, which is what allows regeneration to take place. But once a scar has formed, it’s game over in terms of regeneration. If we could prevent scarring in humans, we could enhance quality of life for so many people.”

The axolotl as a model for regeneration

The axolotl, a Mexican salamander that is now all but extinct in the wild, is a favorite model in regenerative medicine research because of its one-of-a-kind status as nature’s champion of regeneration. While most salamanders have some regenerative capacity, the axolotl can regenerate almost any body part, including brain, heart, jaws, limbs, lungs, ovaries, spinal cord, skin, tail and more.

Since mammalian embryos and juveniles have the ability to regenerate – for instance, human infants can regenerate heart tissue and children can regenerate fingertips – it’s likely that adult mammals retain the genetic code for regeneration, raising the prospect that pharmaceutical therapies could be developed to encourage humans to regenerate tissues and organs lost to disease or injury instead of forming a scar.

In his recent research, Godwin compared immune cells called macrophages in the axolotl to those in the mouse with the goal of identifying the quality in axolotl macrophages that promotes regeneration. The research builds on earlier studies in which Godwin found that macrophages are critical to regeneration: when they are depleted, the axolotl forms a scar instead of regenerating, just like mammals.

The recent research found that although macrophage signaling in the axolotl and in the mouse was similar when the organisms were exposed to pathogens such as bacteria, fungi and viruses, exposure to injury was a different story: macrophage signaling in the axolotl promoted the growth of new tissue, while that in the mouse promoted scarring.

The paper on the research, entitled “Distinct TLR Signaling in the Salamander Response to Tissue Damage,” was recently published in the journal Developmental Dynamics. In addition to Godwin, authors include Nadia Rosenthal, Ph.D., of The Jackson Laboratory; Ryan Dubuque and Katya E. Chan of the Australian Regenerative Medicine Institute (ARMI); and Sergej Nowoshilow, Ph.D., of the Research Institute of Molecular Pathology in Vienna, Austria.

Godwin, who holds a joint appointment with The Jackson Laboratory, was formerly associated with ARMI and Rosenthal is ARMI’s founding director. The MDI Biological Laboratory and ARMI have a partnership agreement to promote research and education on regeneration and the development of new therapies to improve human health.

Specifically, the paper reported that the signaling response of a class of proteins called toll-like receptors (TLRs), which allow macrophages to recognize a threat such as an infection or a tissue injury and induce a pro-inflammatory response, was “unexpectedly divergent” in response to injury in the axolotl and the mouse. The finding offers an intriguing window into the mechanisms governing regeneration in the axolotl.

Being able to ‘pull the levers of regeneration’

The discovery of an alternative signaling pathway that is compatible with regeneration could ultimately lead to regenerative medicine therapies for humans. Though regrowing a human limb may not be realistic in the short term, significant opportunities exist for therapies that improve clinical outcomes in diseases in which scarring plays a major role in the pathology, including heart, kidney, liver and lung disease.

“We are getting closer to understanding how axolotl macrophages are primed for regeneration, which will bring us closer to being able to pull the levers of regeneration in humans,” Godwin said. “For instance, I envision being able to use a topical hydrogel at the site of a wound that is laced with a modulator that changes the behavior of human macrophages to be more like those of the axolotl.”

Godwin, who is an immunologist, chose to examine the function of the immune system in regeneration because of its role in preparing the wound for repairs as the equivalent of a first responder at the site of an injury. His recent research opens the door to further mapping of critical nodes in TLR signaling pathways that regulate the unique immune environment enabling axolotl regeneration and scar-free repair.

Credit: 
MDI Biological Laboratory

Solar energy-driven sustainable process for synthesis of ethylene glycol from methanol

image: Direct photocatalytic coupling of methanol to ethylene glycol (EG) is highly attractive. The first metal oxide photocatalyst, a tantalum-based semiconductor, is reported for preferential activation of the C-H bond within methanol to form hydroxymethyl radicals (*CH2OH) and subsequent C-C coupling to EG. The nitrogen-doped tantalum oxide (N-Ta2O5) photocatalyst is an environmentally friendly and highly stable candidate for photocatalytic coupling of methanol to EG.

Image: 
Chinese Journal of Catalysis

The photochemistry of the future could power human industry without smoke and bring a brighter civilization based on the utilization of solar energy instead of fossil energy. Photochemistry has been used to control many reaction processes, especially challenging reactions involving selective C-H activation and C-C coupling in chemical synthesis. It is of great interest that a "dream catalytic reaction", the direct coupling of methanol to ethylene glycol (2CH3OH → HOCH2CH2OH + H2, denoted MTEG), could be achieved through solar energy-driven C-H activation and C-C coupling, a reaction that has not yet been achieved through thermocatalysis.

Ethylene glycol (EG) is an important monomer for the manufacture of polymers (e.g., poly(ethylene terephthalate), PET), and can also be used as an antifreeze and fuel additive. The annual production of EG is more than 25 million tons, primarily produced industrially from petroleum-derived ethylene. Methanol is a clean platform chemical that is not only traditionally produced from natural gas and coal, but has also been directly synthesized from biomass and CO2. Thus, the solar energy-driven MTEG route provides a highly attractive alternative process for the sustainable synthesis of EG and H2 directly from methanol.

Although direct photocatalytic coupling of methanol to EG is highly attractive, the reported photocatalysts for this reaction are all metal sulfide semiconductors, which may suffer from photocorrosion and have low stability. Thus, the development of non-sulfide photocatalysts for efficient photocatalytic coupling of methanol to EG and H2 with high stability is urgent but extremely challenging.

Recently, a research team led by Prof. Ye Wang from Xiamen University and Yanshan University, China, reported the first metal oxide photocatalyst, a tantalum-based semiconductor, for preferential activation of the C-H bond within methanol to form hydroxymethyl radicals (*CH2OH) and subsequent C-C coupling to EG. Compared with other metal oxide photocatalysts, such as TiO2, ZnO, WO3 and Nb2O5, tantalum oxide (Ta2O5) is unique in that it can realize the selective photocatalytic coupling of methanol to EG. The co-catalyst-free nitrogen-doped tantalum oxide (2%N-Ta2O5) shows an EG formation rate as high as 4.0 mmol/g/h, about 9 times higher than that of Ta2O5, with a selectivity higher than 70%. The high charge-separation ability of nitrogen-doped tantalum oxide plays a key role in its high activity for EG production. The catalyst also shows excellent stability for longer than 160 h, which has not been achieved with the previously reported metal sulfide photocatalysts. The tantalum-based photocatalyst is thus an environmentally friendly and highly stable candidate for photocatalytic coupling of methanol to EG. The results were published in the Chinese Journal of Catalysis.
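For readers who want the chemistry spelled out, a plausible radical bookkeeping consistent with the description above (a sketch, not the mechanism exactly as reported in the paper) is

    \[ \mathrm{CH_3OH} + h^+ \rightarrow {}^{\bullet}\mathrm{CH_2OH} + \mathrm{H^+}, \qquad 2\,{}^{\bullet}\mathrm{CH_2OH} \rightarrow \mathrm{HOCH_2CH_2OH}, \qquad 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2}, \]

where h+ and e- are the photogenerated hole and electron in the semiconductor. Summing the steps recovers the overall MTEG reaction, 2CH3OH → HOCH2CH2OH + H2.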

Credit: 
Dalian Institute of Chemical Physics, Chinese Academy of Sciences

Ancient volcanic eruption destroyed the ozone layer

A catastrophic drop in atmospheric ozone levels around the tropics is likely to have contributed to a bottleneck in the human population around 60,000 to 100,000 years ago, an international research team has suggested. The ozone loss, triggered by the eruption of the Toba supervolcano located in present-day Indonesia, might solve an evolutionary puzzle that scientists have been debating for decades.

"Toba has long been posited as a cause of the bottleneck, but initial investigations into the climate variables of temperature and precipitation provided no concrete evidence of a devastating effect on humankind," says Sergey Osipov at the Max Planck Institute for Chemistry, who worked on the project with KAUST's Georgiy Stenchikov and colleagues from King Saud University, NASA and the Max Planck Institute for Chemistry.

"We point out that, in the tropics, near-surface ultraviolet (UV) radiation is the driving evolutionary factor. Climate becomes more relevant in the more volatile regions away from the tropics," says Stenchikov.

Large volcanic eruptions emit gases and ash that create a sunlight-attenuating aerosol layer in the stratosphere, causing cooling at the Earth's surface. This "volcanic winter" has multiple knock-on effects, such as cooler oceans, prolonged El Niño events, crop failures and disease.

"The ozone layer prevents high levels of harmful UV radiation reaching the surface," says Osipov. "To generate ozone from oxygen in the atmosphere, photons are needed to break the O2 bond. When a volcano releases vast amounts of sulfur dioxide (SO2), the resulting volcanic plume absorbs UV radiation but blocks sunlight. This limits ozone formation, creating an ozone hole and heightening the chances of UV stress."

The team examined UV radiation levels after the Toba eruption using the ModelE climate model developed by NASA GISS (Goddard Institute for Space Studies). They simulated the possible after-effects of different sizes of eruptions. Running such a model is computationally intensive, and Osipov is grateful for the use of KAUST's supercomputer, Shaheen II, and associated expertise.

Their model suggests that the Toba SO2 cloud depleted global ozone levels by as much as 50 percent. Furthermore, they found that the effects on ozone are significant, even under relatively small eruption scenarios. The resulting health hazards from higher UV radiation at the surface would have significantly affected human survival rates.

"The UV stress effects could be similar to the aftermath of a nuclear war," says Osipov. "For example, crop yields and marine productivity would drop due to UV sterilization effects. Going outside without UV protection would cause eye damage and sunburn in less than 15 minutes. Over time, skin cancers and general DNA damage would have led to population decline."

Credit: 
King Abdullah University of Science & Technology (KAUST)

Researchers Fine-Tune Control Over AI Image Generation

image: The new AI method enables the system to create and retain a background image, while adding new figures. In addition, the method allows AI to move or alter elements in the image while keeping them identifiably the same. For example, it can show the same skiers in multiple poses.

Image: 
Tianfu Wu, NC State University

Researchers from North Carolina State University have developed a new state-of-the-art method for controlling how artificial intelligence (AI) systems create images. The work has applications for fields from autonomous robotics to AI training.

At issue is a type of AI task called conditional image generation, in which AI systems create images that meet a specific set of conditions. For example, a system could be trained to create original images of cats or dogs, depending on which animal the user requested. More recent techniques have built on this to incorporate conditions regarding an image layout. This allows users to specify which types of objects they want to appear in particular places on the screen. For example, the sky might go in one box, a tree might be in another box, a stream might be in a separate box, and so on.
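To make layout conditioning concrete, here is a hypothetical sketch of how such conditions might be written down; the class names and the simple summary function are illustrative assumptions, not the NC State system's actual interface.

    # Hypothetical example of a layout condition for conditional image generation.
    # Names and structure are assumptions for illustration, not the actual API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class LayoutBox:
        label: str     # object category to place ("sky", "tree", "skier", ...)
        x: float       # top-left corner and size, as fractions of the image dimensions
        y: float
        width: float
        height: float

    def describe_layout(boxes: List[LayoutBox]) -> str:
        """Summarize the layout condition a conditional generator would consume."""
        return "; ".join(
            f"{b.label} at ({b.x:.2f}, {b.y:.2f}), size {b.width:.2f} x {b.height:.2f}"
            for b in boxes
        )

    # Start from a mountain scene, then add skiers while keeping the background.
    scene = [
        LayoutBox("sky", 0.0, 0.0, 1.0, 0.4),
        LayoutBox("mountain", 0.0, 0.3, 1.0, 0.6),
    ]
    scene.append(LayoutBox("skier", 0.45, 0.65, 0.10, 0.20))
    print(describe_layout(scene))

Retaining the scene and appending new boxes mirrors the retain-and-add workflow described below, where a generated mountain scene is kept and skiers are added to it.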

The new work builds on those techniques to give users more control over the resulting images, and to retain certain characteristics across a series of images.

"Our approach is highly reconfigurable," says Tianfu Wu, co-author of a paper on the work and an assistant professor of computer engineering at NC State. "Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it. For example, users could have the AI create a mountain scene. The users could then have the system add skiers to that scene."

In addition, the new approach allows users to have the AI manipulate specific elements so that they are identifiably the same, but have moved or changed in some way. For example, the AI might create a series of images showing skiers turn toward the viewer as they move across the landscape.

"One application for this would be to help autonomous robots 'imagine' what the end result might look like before they begin a given task," Wu says. "You could also use the system to generate images for AI training. So, instead of compiling images from external sources, you could use this system to create images for training other AI systems."

The researchers tested their new approach using the COCO-Stuff dataset and the Visual Genome dataset. Based on standard measures of image quality, the new approach outperformed the previous state-of-the-art image creation techniques.

"Our next step is to see if we can extend this work to video and three-dimensional images," Wu says.

Training for the new approach requires a fair amount of computational power; the researchers used a 4-GPU workstation. However, deploying the system is less computationally expensive.

"We found that one GPU gives you almost real-time speed," Wu says.

"In addition to our paper, we've made our source code for this approach available on GitHub. That said, we're always open to collaborating with industry partners."

Credit: 
North Carolina State University

Study: Parler provided echo chamber for vaccine misinformation, conspiracy theories

LAWRENCE -- In the early days of COVID-19 vaccine development, a new social media platform provided a place for like-minded people to discuss vaccines, share misinformation and speculate about the motivations for its development. A new study from the University of Kansas shows people flocked to Parler to discuss the vaccines in an echo chamber-type environment, and those conversations can shed light about how to communicate about vaccine efficacy during health crises.

In the run-up to the 2020 election, then-President Donald Trump claimed a COVID-19 vaccine could be ready before people went to the polls. About that time, millions were flocking to Parler, a new social media platform that promised a free-speech environment via unregulated posts. A trio of KU doctoral students in journalism & mass communications analyzed a sample of 400 posts about the vaccines made on the platform between November 2020 and January 2021. Results showed people followed the messages of conservative political leaders, shared misinformation and reinforced messages casting doubt on vaccine efficacy.

"Around October last year, we were hearing a lot of buzz around a new social media platform, Parler, not just in the political field, but in the health field as well," said Annalise Baines, the study's lead author. "We wanted to understand more about what was being said about COVID-19 vaccines specifically, as we noticed a shift in the conversation from developing vaccines to distrusting science around the efficacy of vaccines."

The study, co-written by Baines, Muhammad Ittefaq and Mauryne Abwao, was published in the journal Vaccines.

A thematic analysis of the posts, known as "parleys," showed users discussing the vaccines in five distinct themes:

Reasons to refuse the vaccine

Side effects

Population control through the vaccines

Children getting vaccinated without parental consent

Comparison of other health issues with COVID-19

Previous research has extensively examined communication via social media, but Parler, a relatively new platform that has been the subject of controversy and was deplatformed for several weeks following the Jan. 6 insurrection at the U.S. Capitol, has not been studied widely. The authors, who study environmental and health communication and new social and digital media, analyzed how people discussed vaccines, an important public health issue, among like-minded individuals. While the research didn't compare users' political leanings, the platform was widely popular with conservative users and was touted as an alternative to sites such as Twitter or Facebook, which its users accused of anti-conservative bias.

"If you live in a bubble in which you only hear from people who share the same views as you and information that supports that, that tends to reinforce what you already believe in. It's also about people you trust. We didn't have demographic information on users, but we did find echo chambers existed there, and people even used the hashtag #echo," Ittefaq said.

Among the five key themes, users frequently used hashtags to further spread their content, such as #nocovidvaccine, #novaccine, #wedonotconsent, #vaxaware and #wakeupworld. Users also shared polls showing people in the U.S. and Europe didn't want a vaccine, or shared dubiously sourced news stories about nurses who suffered from Bell's palsy after getting the dose. This finding also appeared in the second most frequent theme, regarding side effects. Users shared posts about people who died after receiving the vaccine, suffered cognitive effects or were hospitalized.

"Some of the reasons for vaccine hesitancy are deemed legit, and the major concerns may have been a result of people being skeptical of the side effects," Abwao said. "Currently, we have experienced cases where some vaccines have been discontinued; however, this should not deter people from getting vaccinated."

One of the most popular conspiracy theories shared was that vaccines were being developed as a means for government or a new world order to control the population, according to the KU researchers. Frequent conspiracies involved the use of microchips delivered via the vaccine or an enzyme that would control the population, the study found. People often included links, videos and images with such parleys, though when sources were included, they were from unverified sites or contained videos purported to have been leaked from the government. In other themes, users shared posts claiming schools would vaccinate children without parents' consent or cast doubt on the pandemic's severity, comparing it to other health issues such as the flu or citing abortion statistics to claim it was not as deadly as commonly reported. Common hashtags in those themes included #scamdemic, #plandemic, #idonotconsent, #covidhoax and #nocovidvaccine.

The findings illustrate several key points about health communications and social media use, the authors said. People listen to and act on the messages of elected officials, such as Trump calling the virus a hoax or U.S. Sen. Ted Cruz endorsing Parler as a place to share opinions on current issues. The views people share can spread misinformation, but they can also inform policymakers and public health officials seeking to counter anti-vaccination rhetoric, the KU researchers said. On Parler, anyone can post claims online without going through verification steps and share that information with others. This can be dangerous, especially for those who are more vulnerable and may not be able to identify misinformation, Baines said.

Public health officials have a difficult job: they are trained to do science and share findings, not to combat misinformation, Ittefaq said. But if they can convey credible information through the stories of individuals and trusted experts, they can help get reliable guidance on health crises to the public. Failing to do so can have negative ramifications in future public health crises, he added. The analysis shows people listen to those similar to themselves; if public health officials can get valid information to people who will pass it along to their peers, they will have greater success in situations such as the COVID-19 pandemic, in which people continue to seek reliable information about the vaccines.

Credit: 
University of Kansas

UB researchers look to improve the WIC shopping experience

BUFFALO, N.Y. -- For many people, the need to go grocery shopping is met with a sigh, or an "ugh." It's generally not considered to be an enjoyable experience.

For moms who shop using WIC benefits, it can be a downright awful experience, one often made worse by difficulty finding eligible products and a lengthy checkout process. Add kids in tow and it's enough for many moms to forgo re-enrolling in the Special Supplemental Nutrition Program for Women, Infants and Children, commonly known as WIC.

But researchers at the University at Buffalo are working on ways to improve the WIC shopping experience so that customers stay in the program. Moreover, they're working with a Western New York-based supermarket chain on a pilot project aimed at making it easier for WIC customers to find and use eligible products.

The team, which includes a researcher from North Carolina State University, recently published a study in the Journal of Hunger & Environmental Nutrition that is among the first to examine both barriers to and possible strategies for WIC shopping.

WIC provides supplemental foods and nutrition education to low-income pregnant and postpartum women, infants and children up to age 5 in households with incomes below state-defined thresholds.

While WIC has been proven to improve children's health, participation in the program isn't great, with some reports indicating that as few as 73% of infants, 38% of children and 67% of pregnant and postpartum women eligible for WIC actually participate.

"The restrictions on the foods that families can buy with WIC make the shopping experience difficult, but there are some things that stores can do to make it easier, such as using product placement and signage. Better staff training helps a lot, too," says Lucia Leone, PhD, the study's lead author.

"Poor shopping experiences can lead people to drop off WIC or not re-enroll because they feel like the time and frustration isn't worth it. This leaves them without a benefit that we know improves children's health," adds Leone, an assistant professor of community health and health behavior in UB's School of Public Health and Health Professions.

WIC barriers

Researchers identified several key barriers to shopping with WIC. Beyond the restrictions on eligible foods, a common challenge is identifying the correct product size (11 oz. vs. 14 oz., for example) and type (low sodium vs. regular). New York State now has an app to help identify which items are eligible, but it doesn't tell shoppers where to find them in a specific store.

Until recently, further confusion often ensued during checkout, which was challenging when the cashier wasn't well trained, a common occurrence given the frequent turnover among grocery store staff. The checkout experience has improved, Leone notes, with the implementation of electronic benefits transfer (EBT) for WIC, which occurred after this study took place. The WIC EBT system replaced the paper vouchers shoppers used previously.

Then there's the obstacle of product availability, an issue that was exacerbated over the past year due to the COVID-19 pandemic, during which retailers struggled to keep certain products in stock due to high demand.

WIC doesn't allow replacement products, so if a shopper can't find a 12 oz. box or larger of a plain cereal during her first shopping trip, she will then have to make a special trip, possibly visiting multiple stores.

In addition, WIC benefits can't be redeemed online. "This meant that parents had to choose between bringing small children to the store during a pandemic or not fulfilling their benefits," Leone says.

"Aside from the shopping issues, some families struggle to use all of the products if they are not items their family eats regularly," she adds.

There's also the issue of stigma, as many WIC shoppers express worry about how other customers perceive them.

Strategies for an improved experience

For this study, researchers in 2015 conducted eight focus groups involving 63 women in Erie and Niagara counties in Western New York. Participants described some of the challenges associated with WIC, and also talked about what worked well.

One such strategy is "shelf-talkers," or special signs that denote WIC eligible products, making them easier to find. The signs help reduce search time and alleviate confusion at checkout. New York State regulations, however, don't allow most stores to use shelf-talkers. Nor can retailers offer WIC-only sections.

Of course, well-trained grocery store staff, especially cashiers and store managers, also improve the shopping trip by cutting down on time spent at checkout.

Additional strategies mentioned included having a WIC product guide available in store and allowing WIC shoppers to use self-checkout.

Partnering with Tops to improve WIC shopping experience

There is currently no research available on the role of retailer interventions to improve WIC redemption and/or retention rates.

That's why Leone and her team are piloting a project with Tops Markets on Niagara Street in Buffalo.

"The goal of this project is to make it easier to use WIC products by sharing recipes made with mostly WIC products," Leone says. "More importantly, all those items will be 'bundled' together in the store so that families can quickly go in and find all of the WIC items they need for the recipe in one place rather than searching around the store."

Customers won't have to purchase all the products, but, Leone notes, they tend to purchase bundled items because of perceived convenience. Non-WIC families can take advantage of these kid-friendly meal deals, too. Some items in the recipe bundle will also be on sale.

The bundled items are not sold in a package together, but are instead tied together by a recipe and located together in the store to make shopping for WIC products more convenient.

Credit: 
University at Buffalo